terraform-provider-azurerm: azurerm_monitor_diagnostic_setting: Error: Provider produced inconsistent final plan
Terraform (and AzureRM Provider) Version
Terraform v0.12.23
- provider.azurerm v1.44.0
Affected Resource(s)
- azurerm_monitor_diagnostic_setting
Terraform Configuration
provider "azurerm" {
  version         = "~> 1.44"
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}
resource "azurerm_resource_group" "main" {
  count    = var.rg_count
  name     = "${var.prefix}-${element(var.instance, count.index)}-dc-${var.environment}"
  location = element(var.location, count.index)
  tags     = local.tags
}
resource "azurerm_storage_account" "main" {
  count                     = var.storage_account_enabled ? var.rg_count : 0
  name                      = "${var.prefix}main${var.environment}${element(var.instance_short, count.index)}"
  resource_group_name       = element(azurerm_resource_group.main.*.name, count.index)
  location                  = element(azurerm_resource_group.main.*.location, count.index)
  tags                      = local.tags
  account_replication_type  = "LRS"
  account_tier              = "Standard"
  account_kind              = "StorageV2"
  // Force Cool, or do we want to create a policy?
  access_tier               = "Cool"
  enable_https_traffic_only = true
}
// Create storage account firewall rules
resource "azurerm_storage_account_network_rules" "main" {
  count                = var.storage_account_enabled ? var.rg_count : 0
  resource_group_name  = element(azurerm_resource_group.main.*.name, count.index)
  storage_account_name = element(azurerm_storage_account.main.*.name, count.index)
  default_action       = "Deny"
  ip_rules             = var.client_ip
  bypass               = ["AzureServices"]
}
data "azurerm_monitor_diagnostic_categories" "storagemain_log_cats" {
  count       = var.storage_account_enabled ? var.rg_count : 0
  resource_id = element(azurerm_storage_account.main.*.id, count.index)
}
resource "azurerm_monitor_diagnostic_setting" "storagemain_logs" {
  count              = var.storage_account_enabled ? var.rg_count : 0
  name               = "storagemain-logs-${var.environment}"
  target_resource_id = element(azurerm_storage_account.main.*.id, count.index)
  storage_account_id = element(azurerm_storage_account.diagnostics.*.id, count.index)

  // This will cycle through all possible values, and log them all
  dynamic "log" {
    for_each = toset(flatten(data.azurerm_monitor_diagnostic_categories.storagemain_log_cats.*.logs))
    content {
      category = log.value
      retention_policy {
        enabled = true
        days    = 30
      }
    }
  }

  metric {
    category = "AllMetrics"
    retention_policy {
      enabled = true
      days    = 30
    }
  }
}
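One observation about the configuration above (my note, not a claim from the issue): the dynamic block splats and flattens `logs` from every instance of the data source, so each diagnostic setting iterates the combined categories of all storage accounts. A sketch that scopes each setting to its own instance's categories would index by `count.index` instead. Note this does not avoid the "inconsistent final plan" error, since the categories remain unknown until the storage account exists:

```hcl
// Sketch only: iterate the matching data source instance rather than
// flattening the splat across all instances.
dynamic "log" {
  for_each = toset(element(data.azurerm_monitor_diagnostic_categories.storagemain_log_cats.*.logs, count.index))
  content {
    category = log.value
    retention_policy {
      enabled = true
      days    = 30
    }
  }
}
```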
Debug Output
Error: Provider produced inconsistent final plan
When expanding the plan for
azurerm_monitor_diagnostic_setting.storagemain_logs[0] to include new values
learned so far during apply, provider "registry.terraform.io/-/azurerm"
produced an invalid new value for .log: planned set element
cty.ObjectVal(map[string]cty.Value{"category":cty.UnknownVal(cty.String),
"enabled":cty.True,
"retention_policy":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"days":cty.UnknownVal(cty.Number),
"enabled":cty.UnknownVal(cty.Bool)})})}) does not correlate with any element
in actual.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
Expected Behavior
Diagnostic logs are enabled on the storage resource.
Actual Behavior
Storage is created, but no diagnostic logs are enabled.
Steps to Reproduce
- terraform apply
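As noted in the comments, a second run of apply succeeds once the storage account (and therefore its set of diagnostic categories) exists, so a two-pass apply works around the error. The targeted first pass below is an assumption on my part, not a workaround reported in the issue:

```shell
# First apply fails with "Provider produced inconsistent final plan",
# but still creates the storage account; the second apply then succeeds.
terraform apply
terraform apply

# Alternative sketch (assumption): create the storage accounts first with
# a targeted apply, so the diagnostic categories are known when the full
# plan is built.
terraform apply -target=azurerm_storage_account.main
terraform apply
```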
Important Factoids
Very similar configuration for a key vault works fine:
data "azurerm_monitor_diagnostic_categories" "kv_log_cats" {
  count       = var.kv_enabled ? 1 : 0
  resource_id = element(azurerm_key_vault.main.*.id, count.index)
}

resource "azurerm_monitor_diagnostic_setting" "kv_logs" {
  count              = var.kv_enabled && var.storage_account_enabled ? 1 : 0
  name               = "kv-logs-${var.environment}"
  target_resource_id = element(azurerm_key_vault.main.*.id, count.index)
  storage_account_id = element(azurerm_storage_account.diagnostics.*.id, count.index)

  // This will cycle through all possible values, and log them all
  dynamic "log" {
    for_each = toset(flatten(data.azurerm_monitor_diagnostic_categories.kv_log_cats.*.logs))
    content {
      category = log.value
      retention_policy {
        enabled = true
        days    = 30
      }
    }
  }

  metric {
    category = "AllMetrics"
    retention_policy {
      enabled = true
      days    = 30
    }
  }
}
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 18
- Comments: 18
Issue still present in Terraform v0.14.9, azurerm v2.50.0.
You are correct that a second run of apply is successful, and that is what I have been doing as part of my process.
It is unfortunate, as each resource type has a different combination of log and metric categories, so using a method like this to dynamically determine and apply them all was great for many reasons. Having to explicitly define the log and metric entries means spending time discovering what the options are for each resource type, and repeating those resource blocks countless times.
Would it be possible, with some other kind of code change, for Terraform to discover and apply all log and metric options via another option on the azurerm_monitor_diagnostic_setting resource? Something like an "apply_all_log" flag, or akin to that? I am not familiar with what can and can't be done in the various stages, so maybe this is not possible to do, but I figured I'd ask the question.
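For context, a sketch of the explicit alternative the comment above describes: hard-coding one `log` block per category instead of iterating the data source. The category name "StorageRead" and the `example` resource name are illustrative assumptions, not taken from the issue:

```hcl
// Sketch: explicit per-category blocks, avoiding the dynamic lookup
// entirely. Every category must be discovered and listed by hand.
resource "azurerm_monitor_diagnostic_setting" "example" {
  name               = "storagemain-logs-example"
  target_resource_id = azurerm_storage_account.main[0].id
  storage_account_id = azurerm_storage_account.diagnostics[0].id

  log {
    category = "StorageRead" // illustrative category name
    retention_policy {
      enabled = true
      days    = 30
    }
  }

  metric {
    category = "AllMetrics"
    retention_policy {
      enabled = true
      days    = 30
    }
  }
}
```

The trade-off is plan-time certainty (no unknown set elements) at the cost of maintaining category lists per resource type by hand.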
hashicorp/terraform#28340 is now fixed in Terraform v0.15.2, so upgrading to that version should address this issue.
@csdaraujo hashicorp/terraform#25600 is reopened, let’s wait for the fix in core.