terraform-provider-azurerm: AKS error when trying to upgrade cluster
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave “+1” or “me too” comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform (and AzureRM Provider) Version
Provider version 1.24.
Affected Resource(s)
azurerm_kubernetes_cluster
Description
Trying to upgrade an AKS cluster from 1.11.5 -> 1.11.9. Receiving the following error:
containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="LinkedInvalidPropertyId" Message="Property id '' at path 'properties.addonProfiles.omsagent.config.logAnalyticsWorkspaceResourceID' is invalid. Expect fully qualified resource Id that start with '/subscriptions/{subscriptionId}' or '/providers/{resourceProviderNamespace}/'."
Unsure if this happens on new cluster creation; I have only tried the upgrade.
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 6
- Comments: 16 (4 by maintainers)
Commits related to this issue
- Make kubernetes log analytics workspace optional This fixes #3239 where a kubernetes cluster that had first the OMS agent profile addon enabled and then disabled not usable anymore by the terraform p... — committed to maxlegault/terraform-provider-azurerm by maxlegault 5 years ago
- Make kubernetes log analytics workspace optional (#4513) This fixes #3239 — committed to hashicorp/terraform-provider-azurerm by maxlegault 5 years ago
I can confirm that I was able to upgrade 1.11.5 -> 1.11.9 using the Azure Portal. It seems I should have been able to do the same using Terraform.
Hello. We have the same problem: we cannot update labels on an existing Kubernetes Service when Log Analytics was enabled and then disabled. In that scenario there is an empty omsagent section; this section does not exist on a Kubernetes Service that was never linked to Log Analytics.
az aks show --resource-group <rg> --name <cluster>
{ "addonProfiles": { "omsagent": { "config": {}, "enabled": false } } }
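For contrast, on a cluster with monitoring enabled the config block carries a fully qualified workspace resource ID. A sketch of what that shape looks like (the path segments in braces are placeholders, not values from this issue):

```json
{
  "addonProfiles": {
    "omsagent": {
      "enabled": true,
      "config": {
        "logAnalyticsWorkspaceResourceID": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}"
      }
    }
  }
}
```

This is presumably why the empty `config {}` above trips validation: the API expects any `logAnalyticsWorkspaceResourceID` it receives to start with `/subscriptions/` or `/providers/`, and an empty string satisfies neither.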
Here is the TF template:
variable "subscription-id" {}
variable "client-id" {}
variable "client-secret" {}
variable "tenant-id" {}
variable "name" {}
variable "location" {}
variable "resource-group" {}
variable "dns-name-prefix" {}
variable "kubernetes-version" {}
variable "node-count" {}
variable "node-osdisk-size" {}
variable "node-vm-size" {}
variable "service-principal" {}
variable "service-principal-secret" {}

variable "tags" {
  type = "map"

  default = {
    PII          = "No"
    CustomerInfo = "No"
    CustomerData = "No"
    ModuleConfig = "Yes"
  }
}

provider "azurerm" {
  subscription_id = "${var.subscription-id}"
  client_id       = "${var.client-id}"
  client_secret   = "${var.client-secret}"
  tenant_id       = "${var.tenant-id}"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "${var.name}"
  location            = "${var.location}"
  resource_group_name = "${var.resource-group}"
  dns_prefix          = "${var.dns-name-prefix}"
  kubernetes_version  = "${var.kubernetes-version}"

  agent_pool_profile {
    name            = "default"
    count           = "${var.node-count}"
    vm_size         = "${var.node-vm-size}"
    os_disk_size_gb = "${var.node-osdisk-size}"
  }

  service_principal {
    client_id     = "${var.service-principal}"
    client_secret = "${var.service-principal-secret}"
  }

  lifecycle {
    ignore_changes = [
      "kubernetes_version",
      "agent_pool_profile.0.count",
      "agent_pool_profile.0.vm_size",
      "agent_pool_profile.0.name",
      "linux_profile",
      "service_principal",
    ]
  }

  tags = "${var.tags}"
}
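One workaround worth trying while the linked fix (#4513, which makes the Log Analytics workspace optional) lands: declare the addon state explicitly in the configuration instead of omitting it, so the provider manages the omsagent profile rather than echoing the empty one back to the API. This is an untested sketch using the provider 1.x `addon_profile` syntax, and it may not help on 1.24 if `oms_agent` there still requires a `log_analytics_workspace_id`:

```hcl
resource "azurerm_kubernetes_cluster" "aks" {
  # ... existing arguments as in the template above ...

  # Explicitly mark the monitoring addon as disabled.
  addon_profile {
    oms_agent {
      enabled = false
    }
  }
}
```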