How To Upgrade The Kubernetes Version Of Your AKS Default Node Pool With Terraform

Dennis Riemenschneider
3 min read · Jan 13, 2021

Have you ever noticed that when you upgrade your Azure Kubernetes Service cluster with Terraform, the default node pool is not upgraded?

Introduction

The documentation of the AzureRM Kubernetes Cluster resource describes the two relevant arguments, which set the Kubernetes version for the cluster API and for the agents (the default node pool), as follows:

kubernetes_version - (Optional) Version of Kubernetes specified when creating the AKS managed cluster. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade).

orchestrator_version - (Optional) Version of Kubernetes used for the Agents. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade).

Issue

When you set the “kubernetes_version” and run terraform apply, the cluster API is updated. Terraform detects this change.

But when you set the “orchestrator_version” of the default node pool and run terraform apply, nothing happens. Neither Terraform nor the provider detects a change.
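To illustrate, here is a minimal, hypothetical cluster definition; values such as vm_size and node_count are placeholders:

```hcl
resource "azurerm_kubernetes_cluster" "kubernetes_cluster" {
  # ... name, location, resource_group_name, dns_prefix, identity, etc.

  # Detected by terraform apply: upgrades the cluster API.
  kubernetes_version = "1.19.3"

  default_node_pool {
    name       = "default"
    vm_size    = "Standard_D2s_v3" # placeholder
    node_count = 3                 # placeholder

    # NOT detected: changing this value produces no diff,
    # so the default node pool stays on its old version.
    orchestrator_version = "1.19.3"
  }
}
```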

Solution

A proven workaround is to call the Azure REST API directly.

I implemented this with a Terraform null_resource and the local-exec provisioner. The null_resource runs a shell script, which authenticates against the Azure REST API and updates the Kubernetes agent pool resource.
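The pattern can be sketched as follows. This is a minimal sketch, not the module's actual source: the script name upgrade_node_pool.sh and the variable names are illustrative.

```hcl
resource "null_resource" "default_node_pool_upgrader" {
  # Changing the desired version changes this trigger, so Terraform
  # detects a diff and re-runs the provisioner.
  triggers = {
    orchestrator_version = var.kubernetes_node_version
  }

  provisioner "local-exec" {
    command = "${path.module}/upgrade_node_pool.sh"

    environment = {
      RESOURCE_GROUP_NAME     = var.resource_group_name
      KUBERNETES_CLUSTER_NAME = var.kubernetes_cluster_name
      DEFAULT_POOL_NAME       = var.default_pool_name
      KUBERNETES_NODE_VERSION = var.kubernetes_node_version
    }
  }
}
```

Roughly speaking, the script uses the ARM_* credentials to obtain a token and then updates the orchestratorVersion of the agent pool through the Azure management API.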

Prerequisites

Export your Azure service principal credentials so the shell script can authenticate against the Azure REST API:

$ export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
$ export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
$ export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
$ export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"

Example Implementation

resource "azurerm_kubernetes_cluster" "kubernetes_cluster" {
  # your cluster definition
  # ...
}

module "aks_default_node_pool_upgrader" {
  source  = "300481/aks-default-node-pool-upgrader/azurerm"
  version = "0.0.5"

  kubernetes_node_version = "1.19.3"
  resource_group_name     = "The_RG_of_your_cluster"
  kubernetes_cluster_name = azurerm_kubernetes_cluster.kubernetes_cluster.name
  default_pool_name       = "default"
}

Run Terraform as a container

Depending on the Linux distribution you are using and its SELinux settings, you may have to adjust the volume mount of your working directory. The user and group of the running container may also need adjusting, and the Podman and Docker commands may differ slightly.

For me, running Terraform in a container is the best solution available. You never need to install Terraform, and you don’t need tools like tfswitch (Terraform Switcher). It keeps all your systems (dev workstation, CI/CD server, etc.) clean and requires only a container engine.

For this Terraform module I suggest using my Terraform container image ghcr.io/300481/terraform. This image already includes the needed tools jq and curl.

I’m running terraform-docs, helm, kubectl, and nearly all my tools as containers. It gives you a lot of freedom and well-defined versioning of your tools, and it lets you run your favorite tools from cloud shells such as Google Cloud Shell. Isn’t that awesome?

# with docker
alias terraform='docker run --env-file <(env | grep -v PATH) -it --rm -v $PWD:$PWD:rw,Z -w $PWD ghcr.io/300481/terraform:0.14.5'
# with podman
alias terraform='podman run --env-file <(env | grep -v PATH) -it --rm -v $PWD:$PWD:rw,Z -w $PWD ghcr.io/300481/terraform:0.14.5'

Thanks for reading!
