strimzi-kafka-operator: [Kafka - SSL handshake failed error on 9093 port from producer shell script call within kafka broker POD] ...

Hi Team,

I am testing an authentication use case over the TLS port 9093 with all the required certificates. However, I am receiving an SSL handshake failure. Following are the steps I followed; I need technical help identifying the cause.

```shell
# 1) extract the cluster CA certificate
kubectl get secret -n kafka-operator1 my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

# 2) extract the cluster CA key
kubectl get secret -n kafka-operator1 my-cluster-cluster-ca -o jsonpath='{.data.ca\.key}' | base64 -d > ca.key

# 3) - 5) extract the per-broker certificates
kubectl get secret -n kafka-operator1 my-cluster-kafka-brokers -o jsonpath='{.data.my-cluster-kafka-0\.crt}' | base64 -d > ca_k0.crt
kubectl get secret -n kafka-operator1 my-cluster-kafka-brokers -o jsonpath='{.data.my-cluster-kafka-1\.crt}' | base64 -d > ca_k1.crt
kubectl get secret -n kafka-operator1 my-cluster-kafka-brokers -o jsonpath='{.data.my-cluster-kafka-2\.crt}' | base64 -d > ca_k2.crt

# 6) - 9) import everything into a PKCS12 truststore
keytool -keystore client.truststore.p12 -storepass 123456 -noprompt -alias my-cluster-kafka-0 -import -file ca_k0.crt
keytool -keystore client.truststore.p12 -storepass 123456 -noprompt -alias my-cluster-kafka-1 -import -file ca_k1.crt
keytool -keystore client.truststore.p12 -storepass 123456 -noprompt -alias my-cluster-kafka-2 -import -file ca_k2.crt
keytool -keystore client.truststore.p12 -storepass 123456 -noprompt -alias ca -import -file ca.crt
```

All of the above commands ultimately do one thing: create client.truststore.p12, which I place under the /tmp/ folder before calling producer.sh as below.

```shell
[kafka@my-cluster-kafka-0 kafka]$ ./bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap.kafka-operator1.svc.cluster.local:9093 --topic happy-topic \
  --producer-property security.protocol=SSL \
  --producer-property ssl.truststore.type=PKCS12 \
  --producer-property ssl.truststore.password=123456 \
  --producer-property ssl.truststore.location=/tmp/prod/client.truststore.p12
```

```
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
[2020-05-15 16:23:36,698] ERROR [Producer clientId=console-producer] Connection to node -1 (my-cluster-kafka-bootstrap.kafka-operator1.svc.cluster.local/10.12.4.238:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2020-05-15 16:23:36,996] ERROR [Producer clientId=console-producer] Connection to node -1 (my-cluster-kafka-bootstrap.kafka-operator1.svc.cluster.local/10.12.4.238:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2020-05-15 16:23:37,626] ERROR [Producer clientId=console-producer] Connection to node -1 (my-cluster-kafka-bootstrap.kafka-operator1.svc.cluster.local/10.12.4.238:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
```
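One generic way to see why a handshake fails is to inspect what the listener actually presents. A debugging sketch (the address is the one from the log above; any TLS-capable host with connectivity to it can run this):

```shell
# Print the certificate the broker presents on 9093: subject, issuer and
# validity dates. Compare these against what was imported into the truststore.
openssl s_client -connect my-cluster-kafka-bootstrap.kafka-operator1.svc.cluster.local:9093 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```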

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 27 (13 by maintainers)

Most upvoted comments

Q1:

You have 3 listeners configured in your broker:

     listeners:
        plain: {}
        tls:
          authentication:
            type: tls
        external:
          type: loadbalancer
          tls:  true
          authentication: 
            type: tls          

The plain and tls listeners are supposed to be used from inside the Kubernetes cluster, and you can use the bootstrap URL my-cluster-kafka-bootstrap.kafka-operator1.svc:9093 (for tls) or my-cluster-kafka-bootstrap.kafka-operator1.svc:9092 (for plain - without TLS encryption). So when you connect from inside Kubernetes, you should use one of these.

The external listener is designed to be used from outside of the Kubernetes cluster. And to connect, you have to (in your case since you use type: loadbalancer) use the load balancer address (in your case IP, in other cases it might be DNS name).

So you should pick the right address depending where the client is running. You can use the external listener also from inside, but:

  • You should use the loadbalancer address
  • It will probably be slower and sometimes more expensive because the traffic flows through the load balancers. So in general, you should prefer the internal tls listener instead. The SAN list in the certificates also corresponds to this.
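Concretely, with the cluster name and namespace from the post, the bootstrap address per listener would look roughly like this (port 9094 for the external listener is Strimzi's default; the loadbalancer address itself is cluster-specific):

```
# internal, from inside Kubernetes
plain: my-cluster-kafka-bootstrap.kafka-operator1.svc:9092
tls:   my-cluster-kafka-bootstrap.kafka-operator1.svc:9093
# external, from outside: the loadbalancer IP / DNS name, port 9094
```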

Q3

I’m sure the logs on the brokers or clients will show the username somewhere. But to be honest, I’m not sure off the top of my head where to look. You might also need to increase the log level (https://strimzi.io/docs/latest/full.html#con-kafka-logging-deployment-configuration-kafka) - I do not think it is printed by default.

TBH, I don’t know. If you connect

This part enables the TLS client authentication:

          authentication:
            type: tls

So if you want to start with server auth only (which basically means regular TLS encryption), you have to remove it and do steps 1 and 9 from your original post. The client should be configured correctly for that.
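In the map-style syntax shown above, server-auth-only would mean dropping the authentication block from the tls listener, roughly:

```
     listeners:
        plain: {}
        tls: {}
```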

For the client auth as I said in my first answer … you will need to create the user and the keystore as I described there.
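For reference, a minimal KafkaUser for TLS client authentication would look roughly like this (the name my-user is an assumption; the User Operator then generates a secret of the same name carrying the client certificate to put into the keystore):

```
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
```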

@pascals-ager What exactly are you looking for? I added some more security oriented examples the other day: https://github.com/strimzi/strimzi-kafka-operator/tree/master/examples/security

Ya, we have figured it out. The important point to remember here is that we can’t create a consumer group using the shell script, so when we tried to consume the topic using console-consumer.sh, it defaulted to an existing consumer group which our topic and user don’t have access to.

Hence we need to do something like this.

```shell
./bin/kafka-console-consumer.sh --bootstrap-server kafka.dns.acl.com:9094 --topic acl-topic \
  --consumer-property security.protocol=SSL \
  --consumer-property ssl.truststore.type=PKCS12 \
  --consumer-property ssl.keystore.type=PKCS12 \
  --consumer-property ssl.truststore.password=123456 \
  --consumer-property ssl.keystore.password=123456 \
  --consumer-property group.id=acl-user-consumer-grp \
  --consumer-property ssl.truststore.location=/tmp/prod/client.truststore.p12 \
  --consumer-property ssl.keystore.location=/tmp/prod/consumer.keystore.p12 --from-beginning
```

The property `--consumer-property group.id=acl-user-consumer-grp` creates the consumer group; the group also has to be added to the KafkaUser CR under resource: type: group, as you mentioned.
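The corresponding piece of the KafkaUser CR would look roughly like this (the ACL operations shown are an assumption; the topic and group names are the ones from the command above):

```
spec:
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: acl-topic
        operation: Read
      - resource:
          type: group
          name: acl-user-consumer-grp
        operation: Read
```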

Run `./bin/kafka-consumer-groups.sh --bootstrap-server kafka.dns.acl.com:9094 --command-config /tmp/prod/consumer.properties --describe --all-groups` to list the consumer groups we have created and the topics connected to them.
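The /tmp/prod/consumer.properties file referenced by --command-config would carry the same SSL settings as the flags above, e.g. (a sketch using the same paths and password):

```
security.protocol=SSL
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/prod/client.truststore.p12
ssl.truststore.password=123456
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/prod/consumer.keystore.p12
ssl.keystore.password=123456
```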

Hope this helps.

This is resolved, thanks for all the support; it helped me gain a lot of practical knowledge.