strimzi-kafka-operator: [Bug]: Password in secret keeps updating every few minutes
Bug Description
When working with a topic, the password in the Secret created for a KafkaUser keeps changing every few minutes.
Steps to reproduce
Create a KafkaUser manifest:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  namespace: kafka
  labels:
    strimzi.io/cluster: my-kafka-cluster
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: "my-topic"
          patternType: literal
        operations:
          - All
        host: "*"
      - resource:
          type: group
          name: "my-topic-consumer"
          patternType: literal
        operations:
          - All
        host: "*"
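For a SCRAM-SHA-512 user, the User Operator generates a Secret with the same name as the KafkaUser, and it is the password entry in that Secret that keeps changing. A rough sketch of what it looks like (values are illustrative placeholders, not taken from the actual cluster):

apiVersion: v1
kind: Secret
metadata:
  name: my-user
  namespace: kafka
type: Opaque
data:
  # both values are base64-encoded and managed by the User Operator
  password: <base64-encoded password>               # this is the value that keeps rotating
  sasl.jaas.config: <base64-encoded JAAS config>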
If I add the password block below under spec.authentication, the password no longer updates on its own.

password:
  valueFrom:
    secretKeyRef:
      name: my-user
      key: password
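For context, a sketch of where that block sits in the KafkaUser spec (it points at the password key of the existing my-user Secret, exactly as in the snippet above):

spec:
  authentication:
    type: scram-sha-512
    # Pin the credential to an existing value so the User Operator
    # reuses it instead of generating a new password on reconciliation.
    password:
      valueFrom:
        secretKeyRef:
          name: my-user
          key: password
  authorization:
    type: simple
    # ACLs unchanged from the manifest above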
Expected behavior
The password in the Secret should stay the same between reconciliations. Instead, I’m seeing it updated every few minutes.
I also see a number of errors in the user-operator logs:
2023-05-23 18:44:16 ERROR InformerUtils:29 - Caught exception in the Secret informer which is started
io.fabric8.kubernetes.client.WatcherException: Could not process websocket message
at io.fabric8.kubernetes.client.dsl.internal.WatcherWebSocketListener.onError(WatcherWebSocketListener.java:51) ~[io.fabric8.kubernetes-client-6.6.1.jar:?]
at io.fabric8.kubernetes.client.jdkhttp.JdkWebSocketImpl$ListenerAdapter.onError(JdkWebSocketImpl.java:85) ~[io.fabric8.kubernetes-httpclient-jdk-6.6.1.jar:?]
at jdk.internal.net.http.websocket.WebSocketImpl$ReceiveTask.processError(WebSocketImpl.java:508) ~[java.net.http:?]
at jdk.internal.net.http.websocket.WebSocketImpl$ReceiveTask.run(WebSocketImpl.java:462) ~[java.net.http:?]
at jdk.internal.net.http.common.SequentialScheduler$CompleteRestartableTask.run(SequentialScheduler.java:149) ~[java.net.http:?]
at jdk.internal.net.http.common.SequentialScheduler$SchedulableTask.run(SequentialScheduler.java:230) ~[java.net.http:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
at java.lang.Thread.run(Thread.java:833) ~[?:?]
Caused by: java.net.SocketException: Connection reset
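To see which reconciliation path keeps rewriting the Secret, one option (a sketch reusing the inline logging structure already present in the Kafka CR below) would be to raise the user-operator log level temporarily:

entityOperator:
  userOperator:
    logging:
      type: inline
      loggers:
        rootLogger.level: "DEBUG"   # temporarily raised from "WARN" while troubleshooting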
Strimzi version
strimzi-kafka-operator-0.35.0
Kubernetes version
Kubernetes 1.25
Installation method
helm strimzi-kafka-operator-0.35.0
Infrastructure
AKS
Configuration files and logs
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-kafka-cluster
  namespace: kafka
spec:
  kafka:
    version: 3.4.0
    replicas: 3
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: "WARN"
    resources:
      requests:
        cpu: "2"
      limits:
        memory: 16Gi
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    jvmOptions:
      -Xms: 12g
      -Xmx: 12g
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
        authentication:
          type: scram-sha-512
      - name: external
        port: 9094
        type: ingress
        tls: true
        authentication:
          type: scram-sha-512
        configuration:
          class: kafka-internal
          bootstrap:
            host: kafka-crs-staging.myhost.com
          brokers:
            - broker: 0
              host: kafka-crs-staging-0.myhost.com
            - broker: 1
              host: kafka-crs-staging-1.myhost.com
            - broker: 2
              host: kafka-crs-staging-2.myhost.com
          brokerCertChainAndKey:
            secretName: ingress-gateway-tlscerts
            certificate: tls.crt
            key: tls.key
    authorization:
      type: simple
      superUsers:
        - my-super-user
    config:
      auto.create.topics.enable: "false"
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.4"
    storage:
      type: persistent-claim
      size: 100Gi
      class: managed-premium-retain
    rack:
      topologyKey: topology.kubernetes.io/zone
  zookeeper:
    replicas: 3
    logging:
      type: inline
      loggers:
        zookeeper.root.logger: "WARN"
    resources:
      requests:
        memory: 1Gi
        cpu: "2"
      limits:
        memory: 2Gi
    jvmOptions:
      -Xms: 756m
      -Xmx: 756m
    storage:
      type: persistent-claim
      size: 10Gi
      class: managed-premium-retain
  entityOperator:
    template:
      pod:
        tolerations:
          - effect: NoSchedule
            key: app
            operator: Equal
            value: redis
    topicOperator:
      watchedNamespace: kafka
      reconciliationIntervalSeconds: 60
      logging:
        type: inline
        loggers:
          rootLogger.level: "WARN"
      resources:
        requests:
          memory: 512Mi
          cpu: "1"
        limits:
          memory: 512Mi
          cpu: "1"
    userOperator:
      watchedNamespace: kafka
      reconciliationIntervalSeconds: 60
      logging:
        type: inline
        loggers:
          rootLogger.level: "WARN"
      resources:
        requests:
          memory: 512Mi
          cpu: "1"
        limits:
          memory: 512Mi
          cpu: "1"
  kafkaExporter:
    groupRegex: ".*"
    topicRegex: ".*"
    resources:
      requests:
        cpu: 200m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
    logging: warn
    enableSaramaLogging: true
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
  cruiseControl: {}
Additional context
No response
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 16 (7 by maintainers)
Have you tried to restart the user operator?