terraform-provider-azurerm: Modifying a node pool is not possible when the cluster runs a deprecated k8s version
Terraform (and AzureRM Provider) Version
Terraform v0.12.26
- provider.azuread v0.11.0
- provider.azurerm v2.18.0
- provider.helm v1.2.4
- provider.kubernetes v1.12.0
- provider.random v2.3.0
Affected Resource(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
No special configuration is necessary, but you must have an AKS cluster running a version of k8s that is no longer supported.
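For illustration, a minimal sketch of such a cluster (the cluster, resource group, and node pool names match the error output below; the location, node count, and VM size are placeholders):

```hcl
# Minimal illustrative configuration. The key detail is kubernetes_version,
# pinned to a release that AKS no longer supports (1.16.8 here).
resource "azurerm_kubernetes_cluster" "example" {
  name                = "aks-staging"
  location            = "eastus"
  resource_group_name = "rg-staging"
  dns_prefix          = "aks-staging"
  kubernetes_version  = "1.16.8"

  default_node_pool {
    name       = "default"
    node_count = 2                 # placeholder
    vm_size    = "Standard_DS2_v2" # placeholder
  }

  identity {
    type = "SystemAssigned"
  }
}
```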
Debug Output
I have debug output but haven’t removed any sensitive data from it. If it’s necessary to share, I will do so.
Expected Behavior
Terraform should be able to modify a node pool even when the cluster is running a k8s version that is no longer supported.
Actual Behavior
An error is displayed:
The Kubernetes/Orchestrator Version "1.16.8" is not available for Node Pool "default".
Please confirm that this version is supported by the Kubernetes Cluster "aks-staging" (Resource Group "rg-staging") - which may need to be upgraded first.
The Kubernetes Cluster is running version "1.16.8".
The supported Orchestrator Versions for this Node Pool/supported by this Kubernetes Cluster are:
- 1.15.12
- 1.15.11
Steps to Reproduce
- Have an AKS cluster deployed on a k8s version that is no longer supported. I was using 1.16.8.
- Modify your Terraform configuration to make a change to the node pool, e.g. the node count (see the sketch after these steps).
- Run terraform apply.
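For example, changing only the node count inside the existing default_node_pool block is enough to trigger the error on the next apply (values are illustrative):

```hcl
  # Inside the existing azurerm_kubernetes_cluster resource:
  default_node_pool {
    name       = "default"
    node_count = 3                 # changed from 2; this edit alone triggers the error
    vm_size    = "Standard_DS2_v2" # unchanged placeholder
  }
```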
Important Factoids
This error appears to have been introduced with the June AKS updates.
One thing in particular to call out is updating Node Pools, where a Kubernetes Cluster/Control Plane must now be updated before the Node Pools can be updated.
I hope this isn’t meant to imply that you must be on a supported k8s version in order to make modifications through Terraform. I can still modify my cluster on its unsupported version through the Portal UI. This seems like a fairly common scenario: a small tweak needs to be made, but upgrading isn’t possible because of the impact a k8s upgrade has on the cluster and its deployed services.
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 23
- Comments: 17 (2 by maintainers)
Reopening, since this is a provider bug that needs to be fixed - we should look to add the current Kubernetes version used on the control plane/node pools to the validation list, which I believe should fix this issue.
Checked out for today, sorry, but I’ll come back to it either Sunday or early next week. Thank you for your patience!