terraform-provider-confluent: Bug: KsqlAdmin role for ksqldb doesn't work

Steps to reproduce

  1. Use this example to create a ksqlDB cluster: https://github.com/confluentinc/terraform-provider-confluent/blob/master/examples/configurations/ksql-acls/main.tf
  2. Connect to ksqlDB with the “app-manager-kafka-api-key” key/secret pair (see the connection sketch after this list): $CONFLUENT_HOME/bin/ksql -u $KSQL_API_KEY -p $KSQL_API_SECRET https://pksqlc-<ID>.eu-central-1.aws.confluent.cloud:443
  3. Check the streams: list streams;
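
For reference, a minimal connection sketch; the key/secret values and the pksqlc-<ID> endpoint are placeholders for the values from your own configuration:

  export KSQL_API_KEY=<app-manager-kafka-api-key>
  export KSQL_API_SECRET=<app-manager-kafka-api-secret>
  $CONFLUENT_HOME/bin/ksql -u $KSQL_API_KEY -p $KSQL_API_SECRET https://pksqlc-<ID>.eu-central-1.aws.confluent.cloud:443
  ksql> list streams;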

Expected result

ksql> list streams;

 Stream Name         | Kafka Topic                 | Key Format | Value Format | Windowed
------------------------------------------------------------------------------------------
 KSQL_PROCESSING_LOG | pksqlc-<ID>-processing-log | KAFKA      | JSON         | false
------------------------------------------------------------------------------------------

Actual result

ksql> list streams;
You are forbidden from using this cluster.

If you check the GUI, you will see that this role is mapped to the ksqlDB cluster ID (screenshot: Screenshot 2023-02-10 at 10 51 13).

In the documentation, however, the role is bound to the cluster name.

When you add this role by hand in the GUI, everything works.

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Comments: 16 (3 by maintainers)

Most upvoted comments

@Asvor @S1M0NM could you try again with image

It seems to work now (after we released a backend fix).

If I understand you correctly, it’s probably because you’re using the wrong API key. The app-manager-kafka-api-key belongs to the app-manager service account.

The KsqlAdmin role, however, is assigned to the app-ksql service account, which in turn gets the app-ksqldb-api-key created. This key can then be used for the KSQL cluster.
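
For illustration, a minimal sketch of that setup, assuming resource names roughly as in the linked example (app-ksql, main for the ksqlDB cluster, standard for the Kafka cluster, staging for the environment); the shipped configuration may differ in details:

  # Service account that owns the ksqlDB cluster and its key
  resource "confluent_service_account" "app-ksql" {
    display_name = "app-ksql"
    description  = "Service account for the ksqlDB cluster"
  }

  # KsqlAdmin is bound to app-ksql, not to app-manager
  resource "confluent_role_binding" "app-ksql-ksql-admin" {
    principal   = "User:${confluent_service_account.app-ksql.id}"
    role_name   = "KsqlAdmin"
    crn_pattern = confluent_ksql_cluster.main.resource_name # the exact pattern is what this issue is about
  }

  # The key to use when connecting to ksqlDB (not app-manager-kafka-api-key)
  resource "confluent_api_key" "app-ksqldb-api-key" {
    display_name = "app-ksqldb-api-key"
    owner {
      id          = confluent_service_account.app-ksql.id
      api_version = confluent_service_account.app-ksql.api_version
      kind        = confluent_service_account.app-ksql.kind
    }
    managed_resource {
      id          = confluent_ksql_cluster.main.id
      api_version = confluent_ksql_cluster.main.api_version
      kind        = confluent_ksql_cluster.main.kind
      environment {
        id = confluent_environment.staging.id
      }
    }
  }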

update: we updated the ksql-rbac example to replace the CloudClusterAdmin role with the ResourceOwner and KsqlAdmin roles in 1.33.0.
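
A sketch of what that replacement looks like, assuming the same resource names as above and a topic-wildcard crn_pattern; the released ksql-rbac example is the authoritative version:

  # Replaces the broad CloudClusterAdmin binding from the old example
  resource "confluent_role_binding" "app-ksql-all-topic-resource-owner" {
    principal   = "User:${confluent_service_account.app-ksql.id}"
    role_name   = "ResourceOwner"
    crn_pattern = "${confluent_kafka_cluster.standard.rbac_crn}/kafka=${confluent_kafka_cluster.standard.id}/topic=*"
  }

  # ...plus the KsqlAdmin binding shown in the previous comment.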

update: @Asvor @S1M0NM we’re working on a fix.

@S1M0NM did you also try the full crn pattern? crn_pattern = "${confluent_kafka_cluster.standard.rbac_crn}/ksql=${confluent_ksql_cluster.main.resource_name}"

I tried it just now because I wasn't sure, and it also doesn't work:

Terraform will perform the following actions:

  # confluent_role_binding.app-ksql-ksql-admin will be created
  + resource "confluent_role_binding" "app-ksql-ksql-admin" {
      + crn_pattern = "crn://confluent.cloud/organization=<org-id>/environment=env-<id>/cloud-cluster=lkc-<id>/ksql=crn://confluent.cloud/organization=<org-id>/environment=env-<id>/cloud-cluster=lkc-<id>/ksql=ksql_cluster_0"
      + id          = (known after apply)
      + principal   = "User:sa-<id>"
      + role_name   = "KsqlAdmin"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

The resulting crn_pattern doesn't look right (resource_name already expands to the ksqlDB cluster's full CRN, so the cloud-cluster prefix ends up duplicated), and the apply also fails:

https://api.confluent.cloud/iam/v2/role-bindings giving up after 5 attempt(s)
│
│   with confluent_role_binding.app-ksql-ksql-admin,
│   on main.tf line 364, in resource "confluent_role_binding" "app-ksql-ksql-admin":
│  364: resource "confluent_role_binding" "app-ksql-ksql-admin" {
│

Hi! Yes, I was confused about the key name in the ticket. But I checked this behaviour on our dev cluster with the right key/secret pair and had the same problem with the KsqlAdmin role. (I had to grant CloudClusterAdmin and it worked, but that is a big security issue.) Could you please check this behaviour?