dapr: dapr-operator not starting and TLS issues when running with ArgoCD
In what area(s)?
/area runtime
/area operator
/area placement
/area docs
/area test-and-release
What version of Dapr?
1.8.4
Expected Behavior
After deploying the CRDs and Helm Chart, I expect the operator and ancillary services to stay running.
Actual Behavior
Sometimes all services come up fine; other times the pods of the various services are killed due to readiness probe failures.
Steps to Reproduce the Problem
I have not found a way to consistently reproduce this. Last week it happened once with the sidecar-injector and was resolved simply by restarting it; this week, however, it is affecting the sidecar-injector, placement server, and operator.
values.yml
global:
  ha:
    enabled: true
kustomization.yml (helm install via kustomize and argocd)
namespace: dapr-system
commonLabels:
  app.kubernetes.io/name: dapr-operator
  app.kubernetes.io/instance: dapr-operator #! Must match the releaseName of the helm-chart, else selectors are broken
resources:
# The apply will fail first time because the CRDs are not there
- https://raw.githubusercontent.com/dapr/dapr/v1.8.4/charts/dapr/crds/components.yaml
- https://raw.githubusercontent.com/dapr/dapr/v1.8.4/charts/dapr/crds/configuration.yaml
- https://raw.githubusercontent.com/dapr/dapr/v1.8.4/charts/dapr/crds/resiliency.yaml
- https://raw.githubusercontent.com/dapr/dapr/v1.8.4/charts/dapr/crds/subscription.yaml
# Unfortunately this needs to stay in the overlay, as the values cannot be overridden if it is in the base.
# You can override using kustomize patches (after helm template rendering), but this can complicate
# things when you have to replace large portions such as command/args: you then lose the effect of
# some of the Helm values and have to fill in the gaps manually.
helmCharts:
- name: dapr
  repo: https://dapr.github.io/helm-charts/
  version: v1.8.4
  releaseName: dapr-operator #! see the app.kubernetes.io/instance label
  valuesFile: values.yml
  namespace: dapr-system # can't seem to override the namespace via the ArgoCD Application, and the Helm Chart needs this to populate DNS entries
 
working operator logs
time="2022-08-23T08:27:36.322486177Z" level=info msg="log level set to: debug" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T08:27:36.322917161Z" level=info msg="metrics server started on :9090/" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.metrics type=log ver=1.8.4
time="2022-08-23T08:27:36.323613815Z" level=info msg="starting Dapr Operator -- version 1.8.4 -- commit 18575823c74318c811d6cd6f57ffac76d5debe93" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator type=log ver=1.8.4
I0823 08:27:37.374562       1 request.go:665] Waited for 1.0317917s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/wgpolicyk8s.io/v1alpha1?timeout=32s
time="2022-08-23T08:27:37.481170494Z" level=info msg="Dapr Operator is starting" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T08:27:37.485102159Z" level=debug msg="observed component to be synced, demo/pubsub" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator type=log ver=1.8.4
I0823 08:27:37.582939       1 leaderelection.go:248] attempting to acquire leader lease dapr-system/operator.dapr.io...
time="2022-08-23T08:27:37.68394938Z" level=info msg="getting tls certificates" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T08:27:37.684072222Z" level=info msg="tls certificates loaded successfully" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T08:27:37.684104051Z" level=info msg="starting gRPC server on port %d6500" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator.api type=log ver=1.8.4
time="2022-08-23T08:27:37.68419769Z" level=info msg="starting watch for certs on filesystem: /var/run/dapr/credentials" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T08:27:37.684615396Z" level=info msg="Dapr Operator started" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T08:27:37.684708418Z" level=info msg="Healthz server is listening on :8080" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T08:27:38.637137082Z" level=info msg="starting webhooks" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator type=log ver=1.8.4
I0823 08:27:38.638127       1 leaderelection.go:248] attempting to acquire leader lease dapr-system/webhooks.dapr.io...
time="2022-08-23T08:27:38.654085142Z" level=info msg="Conversion webhook for \"subscriptions.dapr.io\" is up to date" instance=dapr-operator-6977dccbff-pmr27 scope=dapr.operator type=log ver=1.8.4
failed operator logs
Readiness probe failed: Get "http://10.220.44.111:8080/healthz": dial tcp 10.220.44.111:8080: connect: connection refused
Note: log level is set to `info`; I'll replace this with debug logs.
time="2022-08-23T07:47:47.865564116Z" level=info msg="log level set to: info" instance=dapr-operator-58d55569db-hd8cb scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T07:47:47.865702251Z" level=info msg="metrics server started on :9090/" instance=dapr-operator-58d55569db-hd8cb scope=dapr.metrics type=log ver=1.8.4
time="2022-08-23T07:47:47.865813574Z" level=info msg="starting Dapr Operator -- version 1.8.4 -- commit 18575823c74318c811d6cd6f57ffac76d5debe93" instance=dapr-operator-58d55569db-hd8cb scope=dapr.operator type=log ver=1.8.4
I0823 07:47:48.916596       1 request.go:665] Waited for 1.039026329s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/vpcresources.k8s.aws/v1beta1?timeout=32s
time="2022-08-23T07:47:49.019022457Z" level=info msg="Dapr Operator is starting" instance=dapr-operator-58d55569db-hd8cb scope=dapr.operator type=log ver=1.8.4
I0823 07:47:49.119323       1 leaderelection.go:248] attempting to acquire leader lease dapr-system/operator.dapr.io...
time="2022-08-23T07:47:49.220302887Z" level=info msg="getting tls certificates" instance=dapr-operator-58d55569db-hd8cb scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T07:47:49.220356142Z" level=info msg="tls certificate not found; waiting for disk changes. err=open /var/run/dapr/credentials/ca.crt: no such file or directory" instance=dapr-operator-58d55569db-hd8cb scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T07:47:49.220369295Z" level=info msg="starting watch for certs on filesystem: /var/run/dapr/credentials" instance=dapr-operator-58d55569db-hd8cb scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T07:47:50.171631573Z" level=info msg="starting webhooks" instance=dapr-operator-58d55569db-hd8cb scope=dapr.operator type=log ver=1.8.4
I0823 07:47:50.172259       1 leaderelection.go:248] attempting to acquire leader lease dapr-system/webhooks.dapr.io...
time="2022-08-23T07:47:50.184331339Z" level=info msg="Conversion webhook for \"subscriptions.dapr.io\" is up to date" instance=dapr-operator-58d55569db-hd8cb scope=dapr.operator type=log ver=1.8.4
time="2022-08-23T07:48:02.732291725Z" level=info msg="Received signal \"terminated\"; beginning shutdown" instance=dapr-operator-58d55569db-hd8cb scope=dapr.signals type=log ver=1.8.4
working placement server logs
Note: log level is set to `info`; I'll replace this with debug logs.
time="2022-08-23T03:35:15.201648263Z" level=info msg="starting Dapr Placement Service -- version 1.8.4 -- commit 18575823c74318c811d6cd6f57ffac76d5debe93" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T03:35:15.201753237Z" level=info msg="log level set to: info" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T03:35:15.207711991Z" level=info msg="metrics server started on :9090/" instance=dapr-placement-server-1 scope=dapr.metrics type=log ver=1.8.4
time="2022-08-23T03:35:15.261617221Z" level=info msg="Raft server is starting on dapr-placement-server-1.dapr-placement-server.dapr-system.svc.cluster.local:8201..." instance=dapr-placement-server-1 scope=dapr.placement.raft type=log ver=1.8.4
time="2022-08-23T03:35:15.261670151Z" level=info msg="mTLS enabled, getting tls certificates" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T03:35:15.261854035Z" level=info msg="tls certificate not found; waiting for disk changes" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T03:35:15.261957808Z" level=info msg="starting watch for certs on filesystem: /var/run/dapr/credentials" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T03:35:22.292051128Z" level=info msg="tls certificates loaded successfully" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T03:35:22.292176255Z" level=info msg="placement service started on port 50005" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T03:35:22.293548962Z" level=info msg="Healthz server is listening on :8080" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:00:32.664276116Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:00:32.672705915Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:00:32.736339337Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:00:32.736378794Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:00:32.736411383Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:00:34.478336703Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:00:34.486107263Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:14:43.60190176Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:14:43.601962169Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:14:43.602130626Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:23:43.148821472Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:23:43.15314433Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:34:38.308256382Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:34:38.308304248Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:34:38.308342204Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:38:51.773317589Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:38:52.143010602Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:39:03.190696222Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:39:03.190738714Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:39:03.190768212Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:39:04.277996726Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:39:04.283482548Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:51:15.187828854Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:51:15.188039428Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:51:15.188075919Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:51:34.150229828Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:51:34.164321214Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:51:35.39357015Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:51:35.393625349Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:51:35.39363869Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:51:37.011910777Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:51:37.024435131Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:52:25.916553553Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:52:25.916594257Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:52:25.916622743Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:52:42.698829391Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T04:52:42.713754545Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:10:52.362210092Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:10:52.362243559Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:10:52.362269832Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:24:32.180824113Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:24:32.186064118Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:35:42.853253457Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:35:42.853288382Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:35:42.853314947Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:36:26.223682478Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:36:26.228824593Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:41:36.452025091Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:41:36.452079488Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:41:36.452115103Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:41:37.583912915Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:41:37.588575422Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:47:38.375286238Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:47:38.37535024Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T05:47:38.375427481Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T06:05:34.227847412Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T06:05:34.232880506Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T06:16:54.797562541Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T06:16:54.797614157Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T06:16:54.797669137Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T06:52:47.614205109Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T06:52:47.619081353Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:03:42.964180209Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:03:42.964219877Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:03:42.96426149Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:16:07.87873758Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:16:07.884591951Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:27:23.310389181Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:27:23.310439586Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:27:23.310481686Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:33:30.639839451Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:33:30.644880077Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:45:00.762638269Z" level=info msg="shutting down leader loop" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:45:00.762692338Z" level=info msg="Waiting until all connections are drained." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:45:00.762843667Z" level=info msg="cluster leadership lost" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:45:12.699276746Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T07:45:12.705050352Z" level=info msg="leader is established." instance=dapr-placement-server-1 scope=dapr.placement type=log ver=1.8.4
failed placement server logs
time="2022-08-23T08:30:38.555979613Z" level=info msg="starting Dapr Placement Service -- version 1.8.4 -- commit 18575823c74318c811d6cd6f57ffac76d5debe93" instance=dapr-placement-server-2 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T08:30:38.556065151Z" level=info msg="log level set to: debug" instance=dapr-placement-server-2 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T08:30:38.556186223Z" level=info msg="metrics server started on :9090/" instance=dapr-placement-server-2 scope=dapr.metrics type=log ver=1.8.4
time="2022-08-23T08:30:38.563909593Z" level=debug msg="initial configuration%!(EXTRA []interface {}=[index 1 servers [%+v [{Voter dapr-placement-server-0 dapr-placement-server-0.dapr-placement-server.dapr-system.svc.cluster.local:8201} {Voter dapr-placement-server-1 dapr-placement-server-1.dapr-placement-server.dapr-system.svc.cluster.local:8201} {Voter dapr-placement-server-2 dapr-placement-server-2.dapr-placement-server.dapr-system.svc.cluster.local:8201}]]])" instance=dapr-placement-server-2 scope=dapr.placement.raft type=log ver=1.8.4
time="2022-08-23T08:30:38.563964358Z" level=info msg="Raft server is starting on dapr-placement-server-2.dapr-placement-server.dapr-system.svc.cluster.local:8201..." instance=dapr-placement-server-2 scope=dapr.placement.raft type=log ver=1.8.4
time="2022-08-23T08:30:38.563985798Z" level=info msg="mTLS enabled, getting tls certificates" instance=dapr-placement-server-2 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T08:30:38.564087792Z" level=debug msg="entering follower state%!(EXTRA []interface {}=[follower Node at 10.220.42.48:8201 [Follower] leader ])" instance=dapr-placement-server-2 scope=dapr.placement.raft type=log ver=1.8.4
time="2022-08-23T08:30:38.564144351Z" level=info msg="tls certificate not found; waiting for disk changes" instance=dapr-placement-server-2 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T08:30:38.56415761Z" level=info msg="starting watch for certs on filesystem: /var/run/dapr/credentials" instance=dapr-placement-server-2 scope=dapr.placement type=log ver=1.8.4
time="2022-08-23T08:30:39.048100352Z" level=debug msg="accepted connection%!(EXTRA []interface {}=[local-address 10.220.42.48:8201 remote-address 10.220.44.70:49228])" instance=dapr-placement-server-2 scope=dapr.placement.raft type=log ver=1.8.4
time="2022-08-23T08:30:39.04829131Z" level=debug msg="lost leadership because received a requestVote with a newer term%!(EXTRA []interface {}=[])" instance=dapr-placement-server-2 scope=dapr.placement.raft type=log ver=1.8.4
time="2022-08-23T08:30:39.16017251Z" level=debug msg="accepted connection%!(EXTRA []interface {}=[local-address 10.220.42.48:8201 remote-address 10.220.44.70:49232])" instance=dapr-placement-server-2 scope=dapr.placement.raft type=log ver=1.8.4
sentry logs (all are up)
time="2022-08-23T08:27:36.035702913Z" level=info msg="starting sentry certificate authority -- version 1.8.4 -- commit 18575823c74318c811d6cd6f57ffac76d5debe93" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:27:36.035747903Z" level=info msg="log level set to: debug" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:27:36.037743232Z" level=info msg="metrics server started on :9090/" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.metrics type=log ver=1.8.4
time="2022-08-23T08:27:36.060157002Z" level=info msg="configuration: [port]: 50001, [ca store]: default, [allowed clock skew]: 15m0s, [workload cert ttl]: 24h0m0s" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry.config type=log ver=1.8.4
time="2022-08-23T08:27:36.060189819Z" level=info msg="starting watch on filesystem directory: /var/run/dapr/credentials" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:27:36.060206817Z" level=info msg="certificate authority loaded" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:27:36.060671917Z" level=info msg="Healthz server is listening on :8080" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:27:37.067598229Z" level=info msg="trust root bundle loaded. issuer cert expiry: 2023-08-23 08:27:30 +0000 UTC" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:27:37.067844436Z" level=info msg="validator created" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:27:37.067877086Z" level=info msg="sentry certificate authority is running, protecting ya'll" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:27:37.067946401Z" level=debug msg="starting root certificate expiration watcher" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:29:04.445050572Z" level=debug msg="received issuer credentials changed signal" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:29:05.445164969Z" level=debug msg="received issuer credentials changed signal" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:29:06.445397707Z" level=warning msg="issuer credentials changed; reloading" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:29:06.445430523Z" level=info msg="sentry certificate authority is restarting" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:29:06.445452968Z" level=info msg="sentry certificate authority is shutting down" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:29:06.445597034Z" level=debug msg="terminating root certificate expiration watcher" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:29:06.646102805Z" level=info msg="certificate authority loaded" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:29:06.661664853Z" level=info msg="root and issuer certs not found: generating self signed CA" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry.ca type=log ver=1.8.4
time="2022-08-23T08:29:06.675734816Z" level=info msg="self signed certs generated and persisted successfully" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry.ca type=log ver=1.8.4
time="2022-08-23T08:29:06.675799829Z" level=info msg="trust root bundle loaded. issuer cert expiry: 2023-08-23 08:29:06 +0000 UTC" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:29:06.676069607Z" level=info msg="validator created" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:29:06.676107696Z" level=info msg="sentry certificate authority is running, protecting ya'll" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
time="2022-08-23T08:29:06.67617864Z" level=debug msg="starting root certificate expiration watcher" instance=dapr-sentry-574d5c7dd6-hh7wj scope=dapr.sentry type=log ver=1.8.4
Release Note
RELEASE NOTE:
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 26 (15 by maintainers)
For reference, I did make some changes in my Argo Application, and it is now working properly. Here's my full config:
Basically, `RespectIgnoreDifferences=true` fixed the problem.
Sometimes we use "Hard Refresh" on the application, which makes Argo fetch and diff against all the manifests while skipping the ignored fields; the problem is at the sync stage, where everything is applied as-is. Setting `RespectIgnoreDifferences=true` solved the issue because Argo then honors the ignore rules at the sync stage as well. That's explained here: https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/#respect-ignore-difference-configs.
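The fix described above can be sketched as an ArgoCD Application (a hypothetical example, not the poster's actual config; the repo URL, path, and names are placeholders):

```yaml
# Hypothetical ArgoCD Application combining the two fixes discussed:
# ignore the regenerated cert data, and respect those ignores at sync time.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dapr
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: dapr-system
  source:
    repoURL: https://example.com/my-gitops-repo.git  # placeholder
    path: overlays/dapr                              # placeholder
  syncPolicy:
    syncOptions:
      - RespectIgnoreDifferences=true
  ignoreDifferences:
    - kind: Secret
      name: dapr-trust-bundle
      jsonPointers:
        - /data
```

Without `RespectIgnoreDifferences=true`, the `ignoreDifferences` rules only suppress the diff display; the sync still overwrites the live fields.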
This saved my life today! I configured the ignore rules for just the `dapr-trust-bundle` secret, following the Argo diff guide here: https://argo-cd.readthedocs.io/en/stable/user-guide/diffing/
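A diff customization along those lines might look like the following Application-spec fragment (a sketch reconstructed from the Argo diffing docs, not the commenter's actual snippet; the webhook entry and resource names are my assumptions, since the chart also regenerates the injector's `caBundle`):

```yaml
# Hypothetical ignoreDifferences fragment for the ArgoCD Application spec.
ignoreDifferences:
  # Ignore the CA/issuer material Sentry regenerates.
  - kind: Secret
    name: dapr-trust-bundle
    jsonPointers:
      - /data
  # Assumed name; ignore the caBundle injected into the webhook config.
  - group: admissionregistration.k8s.io
    kind: MutatingWebhookConfiguration
    name: dapr-sidecar-injector
    jqPathExpressions:
      - .webhooks[].clientConfig.caBundle
```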
On closer inspection, I don’t seem to be able to provide my own CA and keypair for the webhook secret and config.
I also now understand this line: https://github.com/dapr/dapr/blob/20356ee82825c4c051c91ac9e81c2680da9c77a4/charts/dapr/charts/dapr_sidecar_injector/templates/dapr_sidecar_injector_webhook_config.yaml#L38
In my case, we're running Helm through Kustomize. Kustomize calls the `helm template` command, which I thought had access to the Kubernetes cluster (if configured), but maybe it doesn't. I need to take a closer look at that and see whether the rendered values change on each run.
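One way to check whether the rendered manifests actually change between runs (a sketch, assuming kustomize v4+ with Helm chart support and that it is run from the overlay directory):

```shell
# Render the same overlay twice and diff the output. If the chart
# generates certificates at template time, the two renders will differ.
if command -v kustomize >/dev/null 2>&1; then
  kustomize build --enable-helm . > render-1.yaml
  kustomize build --enable-helm . > render-2.yaml
  if diff -q render-1.yaml render-2.yaml >/dev/null; then
    echo "renders are identical"
  else
    echo "renders differ: likely regenerated cert/caBundle fields"
  fi
else
  echo "kustomize not installed; skipping check"
fi
```

If the renders differ in cert-like fields, every ArgoCD refresh will see a diff and every sync will rotate the certificates, which would match the TLS churn seen in the logs above.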