quarkus: Quarkus 1.7.0 + Camel-Quarkus Kafka: Failed to configure SaslClientAuthenticator
Describe the bug
After an update from Quarkus 1.5.2 to Quarkus 1.6.0+ (currently tested with 1.7.0, same behavior), we now receive an error when we try to secure the Kafka connection in the app while also using the camel-quarkus kafka component. We are not using Kafka Streams, and we are not using Kerberos, only the plain SASL/SCRAM configuration values shown below.
Error
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
Caused by: org.apache.kafka.common.KafkaException: Principal could not be determined from Subject, this may be a transient failure due to Kerberos re-login
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.firstPrincipal(SaslClientAuthenticator.java:579)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.<init>(SaslClientAuthenticator.java:171)
at org.apache.kafka.common.network.SaslChannelBuilder.buildClientAuthenticator(SaslChannelBuilder.java:274)
at org.apache.kafka.common.network.SaslChannelBuilder.lambda$buildChannel$1(SaslChannelBuilder.java:216)
at org.apache.kafka.common.network.KafkaChannel.<init>(KafkaChannel.java:143)
at org.apache.kafka.common.network.SaslChannelBuilder.buildChannel(SaslChannelBuilder.java:224)
at org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:338)
at org.apache.kafka.common.network.Selector.registerChannel(Selector.java:329)
at org.apache.kafka.common.network.Selector.connect(Selector.java:256)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:957)
at org.apache.kafka.clients.NetworkClient.access$600(NetworkClient.java:73)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1128)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1016)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:547)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:324)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
at java.base/java.lang.Thread.run(Thread.java:834)
Application.properties
camel.component.kafka.worker-pool-core-size=20
camel.component.kafka.worker-pool-max-size=200
camel.component.kafka.enable-idempotence=true
camel.component.kafka.max-in-flight-request=1
camel.component.kafka.retries=1
camel.component.kafka.request-required-acks=all
camel.component.kafka.max-request-size=52428800
camel.component.kafka.reconnect-backoff-max-ms=1000
camel.component.kafka.sasl-jaas-config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="admin" password="admin123" \
user_admin="admin123";
camel.component.kafka.security-protocol=SASL_PLAINTEXT
camel.component.kafka.sasl-mechanism=SCRAM-SHA-512
kafka.security-protocol=SASL_PLAINTEXT
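For comparison, the same security settings expressed directly as plain Kafka producer properties would look roughly like the sketch below; the bootstrap address and serializers are assumptions, not taken from the project. When a client-level sasl.jaas.config like this is actually picked up, the SCRAM login module supplies the principal, so no external JAAS file should be needed:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class PlainClientSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Same security settings as the camel.component.kafka.* options above
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"admin\" password=\"admin123\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // When sasl.jaas.config reaches the client, the SCRAM login provides the
            // principal and the SaslClientAuthenticator error should not occur.
        }
    }
}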
Jaas-Config for Command
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
serviceName="kafka"
principal="admin"
username="admin"
password="admin123"
user_admin="admin123";
};
Jaas-Config for Kafka
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin123"
user_admin="admin123";
};
Client {};
Run Gradle Task
gradle -Djava.security.auth.login.config=$HOME/tgit/regressions/kafka/kafka_client_jaas.conf producer:quarkusd
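Since Gradle may run the application in a forked JVM, it can be worth checking that the -Djava.security.auth.login.config value actually reaches the application process. A minimal diagnostic sketch (the class name and log statement are my own, not from the original report):

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

import io.quarkus.runtime.StartupEvent;

@ApplicationScoped
public class JaasConfigCheck {

    // Logs whether the -Djava.security.auth.login.config value is visible in this JVM.
    void onStart(@Observes StartupEvent event) {
        System.out.println("java.security.auth.login.config = "
                + System.getProperty("java.security.auth.login.config"));
    }
}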
Docker-Compose Kafka
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: wurstmeister/kafka
    hostname: kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_PORT: 9094
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT, OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:9094, OUTSIDE://kafka:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9094, OUTSIDE://kafka:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-256, SCRAM-SHA-512
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SCRAM-SHA-512
      #KAFKA_SSL_KEYSTORE_LOCATION: /etc/kafka/kafka.keystore.jks
      #KAFKA_SSL_KEYSTORE_PASSWORD: test123
      #KAFKA_SSL_KEY_PASSWORD: test123
      #KAFKA_SSL_TRUSTSTORE_LOCATION: /etc/kafka/kafka.truststore.jks
      #KAFKA_SSL_TRUSTSTORE_PASSWORD: test123
      #KAFKA_SSL_ENABLED_PROTOCOLS: TLSv1.2
      #KAFKA_SSL_KEYSTORE_TYPE: JKS
      #KAFKA_SSL_TRUSTSTORE_TYPE: JKS
      #KAFKA_SSL_CLIENT_AUTH: none
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "my-topic:1:1"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./kafka_server_jaas.conf:/etc/kafka/kafka_server_jaas.conf
      - ./kafka.keystore.jks:/etc/kafka/kafka.keystore.jks
      - ./kafka.truststore.jks:/etc/kafka/kafka.truststore.jks
    networks:
      - default
  kafdrop:
    image: obsidiandynamics/kafdrop
    depends_on:
      - kafka
    restart: "no"
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: "kafka:9092"
      JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
      #KAFKA_PROPERTIES: ${KAFKA_DROP_PROPERTIES_BASE64}
      #KAFKA_TRUSTSTORE: ${KAFKA_DROP_TRUSTSTORE_BASE64}
      #KAFKA_KEYSTORE: ${KAFKA_DROP_KEYSTORE_BASE64}
  proxy:
    image: defreitas/dns-proxy-server
    hostname: proxy
    ports:
      - "5380:5380"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/resolv.conf:/etc/resolv.conf
Kafka with Compose
docker-compose down && docker-compose up -d
docker exec -i kafka_kafka_1 kafka-configs.sh --zookeeper kafka_zookeeper_1:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin123],SCRAM-SHA-512=[password=admin123]' --entity-type users --entity-name admin
Expected behavior
The producer application can build up the Camel route which uses the Camel-Quarkus Kafka consumer.
Actual behavior
The producer application cannot create the route because the Kafka client tries to determine a principal for the SCRAM mechanism and fails.
To Reproduce
Steps to reproduce the behavior:
- Create a Quarkus project with Camel-Quarkus Kafka, with the Wurstmeister Kafka image up and running with the security config below.
- Use the given config parameters to set up the minimal configuration.
- Run the given Gradle command with the path to the jaas.conf to work around the missing login.
- Try to create a Camel route that talks to Kafka (a minimal route sketch follows below).
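As referenced in the last step, a minimal sketch of such a route; the timer endpoint, topic name, and broker address are placeholders, not taken from the original project:

import org.apache.camel.builder.RouteBuilder;

public class KafkaRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Minimal producer route: send a message to Kafka every 5 seconds.
        // The Kafka component picks up the camel.component.kafka.* options above.
        from("timer:tick?period=5000")
                .setBody(constant("hello"))
                .to("kafka:my-topic?brokers=kafka:9092");
    }
}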
Configuration
Same application.properties as listed above.
Environment (please complete the following information):
- Output of uname -a or ver: Linux bfr-pc 5.7.9-1-MANJARO #1 SMP PREEMPT Thu Jul 16 08:20:05 UTC 2020 x86_64 GNU/Linux
- Output of java -version: openjdk version "11.0.7" 2020-04-14, OpenJDK Runtime Environment (build 11.0.7+10), OpenJDK 64-Bit Server VM (build 11.0.7+10, mixed mode)
- GraalVM version (if different from Java): 20.1
- Quarkus version or git rev: 1.7.0
- Build tool (i.e. output of mvnw --version or gradlew --version): Gradle 6.5.1
  Build time: 2020-07-15 13:08:58 UTC
  Revision: <unknown>
  Kotlin: 1.3.72
  Groovy: 2.5.11
  Ant: Apache Ant(TM) version 1.10.7 compiled on September 1 2019
  JVM: 11.0.7 (Oracle Corporation 11.0.7+10)
  OS: Linux 5.7.9-1-MANJARO amd64
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 24 (12 by maintainers)
Yeah, I’m currently checking again in the reproducer; I had some new errors because of some tests I did a few days ago.