terraform-provider-confluent: confluent_kafka_topic - Cannot import topic resource in a dedicated cluster
Hello,
I am having an issue importing topics when the cluster type is Dedicated: after import, the config attribute in the state file is an empty block.
Terraform version: 1.3.6
Confluent Cloud provider version: 1.26.0
Example:
terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = "1.26.0"
    }
  }
}

provider "confluent" {
  cloud_api_key    = "<api_key>"
  cloud_api_secret = "<api_secret>"
}

resource "confluent_kafka_cluster" "cluster" {
  display_name = "<cluster>"
  region       = "eastus2"
  availability = "SINGLE_ZONE"
  cloud        = "AZURE"

  environment {
    id = "<env_id>"
  }

  dedicated {
    cku = 1
  }
}
resource "confluent_kafka_topic" "topic" {
  kafka_cluster {
    id = confluent_kafka_cluster.cluster.id
  }

  topic_name       = "topic"
  partitions_count = 6
  rest_endpoint    = confluent_kafka_cluster.cluster.rest_endpoint

  config = {
    "cleanup.policy"                          = "delete"
    "confluent.key.schema.validation"         = "false"
    "confluent.key.subject.name.strategy"     = "io.confluent.kafka.serializers.subject.TopicNameStrategy"
    "confluent.value.schema.validation"       = "false"
    "confluent.value.subject.name.strategy"   = "io.confluent.kafka.serializers.subject.TopicNameStrategy"
    "delete.retention.ms"                     = "86400000"
    "max.compaction.lag.ms"                   = "9223372036854775807"
    "max.message.bytes"                       = "2097164"
    "message.timestamp.difference.max.ms"     = "9223372036854775807"
    "message.timestamp.type"                  = "CreateTime"
    "min.compaction.lag.ms"                   = "0"
    "min.insync.replicas"                     = "2"
    "retention.bytes"                         = "-1"
    "retention.ms"                            = "604800000"
    "segment.bytes"                           = "104857600"
    "segment.ms"                              = "604800000"
  }

  credentials {
    key    = "<cluster_api_key>"
    secret = "<cluster_api_secret>"
  }

  lifecycle {
    prevent_destroy = true
  }
}
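For reference, this is roughly how I run the import (following the import steps in the provider docs, if I am reading them correctly; all values below are placeholders):

```shell
# Credentials and REST endpoint of the target cluster,
# exported as the IMPORT_* environment variables the provider reads
export IMPORT_KAFKA_API_KEY="<cluster_api_key>"
export IMPORT_KAFKA_API_SECRET="<cluster_api_secret>"
export IMPORT_KAFKA_REST_ENDPOINT="<cluster_rest_endpoint>"

# The import ID is "<cluster_id>/<topic_name>"
terraform import confluent_kafka_topic.topic "<cluster_id>/topic"
```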
State file after import:
{
  "version": 4,
  "terraform_version": "1.3.6",
  "serial": 6,
  "lineage": "082cc1c7-2166-44ae-945f-95ea099240ab",
  "outputs": {},
  "resources": [
    {
      "mode": "managed",
      "type": "confluent_kafka_topic",
      "name": "topic",
      "provider": "provider[\"registry.terraform.io/confluentinc/confluent\"]",
      "instances": [
        {
          "schema_version": 2,
          "attributes": {
            "config": {},
            "credentials": [
              {
                "key": "<cluster_api_key>",
                "secret": "<cluster_api_secret>"
              }
            ],
            "id": "<cluster_id>/<topic_name>",
            "kafka_cluster": [
              {
                "id": "<cluster_id>"
              }
            ],
            "partitions_count": 6,
            "rest_endpoint": "<cluster_rest_endpoint>",
            "topic_name": "topic"
          },
          "sensitive_attributes": [],
          "private": "<redacted>"
        }
      ]
    }
  ],
  "check_results": null
}
Importing topics on a Basic cluster works, and the config block with the default properties is imported successfully.
Thanks for your help.
About this issue
- State: closed
- Created a year ago
- Reactions: 1
- Comments: 19
That’s exactly right! I’m going to close this issue then but feel free to open a new one if you have any other questions 👍
btw I’d also recommend using Option #2 to avoid having credentials / kafka_cluster blocks in the topic definition.
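(For anyone landing here: if I understand Option #2 correctly, it means supplying the Kafka cluster credentials once at the provider level, so individual topic resources no longer need credentials, kafka_cluster, or rest_endpoint blocks. A rough sketch, assuming the provider-level attributes documented for managing a single cluster; all values are placeholders:)

```hcl
provider "confluent" {
  kafka_id            = "<cluster_id>"            # e.g. lkc-abc123
  kafka_rest_endpoint = "<cluster_rest_endpoint>"
  kafka_api_key       = "<cluster_api_key>"
  kafka_api_secret    = "<cluster_api_secret>"
}

resource "confluent_kafka_topic" "topic" {
  topic_name       = "topic"
  partitions_count = 6
}
```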
Hello,
Just checking in to see if there is any update with this issue?
Thanks