terraform-provider-azurerm: microsoft_defender for `azure_kubernetes_service` cannot be disabled

Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave “+1” or “me too” comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

1.1.9

AzureRM Provider Version

3.14.0

Affected Resource(s)/Data Source(s)

azurerm_kubernetes_cluster

Terraform Configuration Files

NA

Debug Output/Panic Output

NA

Expected Behaviour

The AzureRM provider should not create a Log Analytics workspace in the default resource group.

If an azurerm_kubernetes_cluster is created without specifying a microsoft_defender block, it should not enable any security profile / Azure Defender.

az aks show -g resource_group_name -n aks_cluster_name should return the following:

  "securityProfile": {
    "azureDefender": null
  },

Actual Behaviour

Even when no microsoft_defender block is present on the azurerm_kubernetes_cluster resource, the provider creates a Log Analytics workspace in the default resource group and assigns it to the AKS cluster.

We previously used azurerm 3.0.0, which did not create any Log Analytics workspace.

Steps to Reproduce

Create an azurerm_kubernetes_cluster resource with azurerm version 3.14.0 without defining a microsoft_defender block (a minimal example is sketched below).
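
For illustration only (the original report did not include a configuration), a cluster definition of the kind described here might look roughly like this; all names, locations, and sizes are placeholders:

  # Minimal AKS cluster with no microsoft_defender block declared.
  resource "azurerm_resource_group" "example" {
    name     = "example-rg"
    location = "centralus"
  }

  resource "azurerm_kubernetes_cluster" "example" {
    name                = "example-aks"
    location            = azurerm_resource_group.example.location
    resource_group_name = azurerm_resource_group.example.name
    dns_prefix          = "exampleaks"

    default_node_pool {
      name       = "default"
      node_count = 1
      vm_size    = "Standard_D2_v2"
    }

    identity {
      type = "SystemAssigned"
    }

    # Note: no microsoft_defender block here, so no Defender / Log Analytics
    # resources are expected to be attached to the cluster.
  }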

terraform apply

Querying the cluster then shows a Log Analytics workspace connected to AKS: az aks show -g resource_group_name -n aks_cluster_name contains the following block.

  "securityProfile": {
    "azureDefender": {
      "enabled": true,
      "logAnalyticsWorkspaceResourceId": "/subscriptions/34es54w-f86e-443b-9735-17679033a4e6/resourcegroups/DefaultResourceGroup-CUS/providers/Microsoft.OperationalInsights/workspaces/DefaultWorkspace-4gsr4s44-f86e-443b-9735-17679033a4e6-CUS"
    }
  },

Important Factoids

No response

References

https://github.com/hashicorp/terraform-provider-azurerm/pull/16218
https://github.com/hashicorp/terraform-provider-azurerm/issues/15063

About this issue

  • State: open
  • Created 2 years ago
  • Reactions: 5
  • Comments: 16 (3 by maintainers)

Most upvoted comments

We managed to solve this on our end (and keep Defender for Cloud) by registering the Defender for Cloud feature, which seems to be different from the auto provisioning.

If you run either of these in the subscription where the AKS cluster is:

az feature registration show --provider-namespace "Microsoft.ContainerService" --name "AKS-AzureDefender"

OR

az feature show --namespace "Microsoft.ContainerService" --name "AKS-AzureDefender"

you will see state: NotRegistered.

To register it, run az feature register --namespace "Microsoft.ContainerService" --name "AKS-AzureDefender"

Be aware it might take a while. This command is useful if you want to check the current state:

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-AzureDefender')].{Name:name,State:properties.state}"
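
As an aside not from the original comment, but part of the standard Azure preview-feature workflow: once the feature reaches the Registered state, the resource provider usually has to be re-registered for the change to propagate, e.g.:

az provider register --namespace "Microsoft.ContainerService"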

I am not sure what the deal with this is and why the feature has to be registered (as the Terraform provider error indicates); maybe it is because the feature was in preview until recently? We’ve been trying to upgrade the cluster all week and Defender has been getting in the way, which is the exact opposite of what a security feature should be doing!

I hope this helps you guys!

I have the sneaking suspicion that this StatusCode=400 is the result of something like the following scenario (which is entirely hypothetical based on black box debugging):

  1. terraform apply creates a cluster
  2. Defender auto-provisions itself into the cluster after a few minutes.
  3. Subsequent terraform apply commands detect this and try to remove Defender, since it doesn’t match the terraform state.
  4. Something in the backend doesn’t like this and maybe returns a 403 - since whatever permissions you are giving terraform do not trump whatever permissions the automatic provisioner has.
  5. Some other code path - left over from when Defender was in “Preview” mode - assumes all 4xx errors are due to Defender not being enabled and wrongly returns the aforementioned error message.
  6. Much sadness