cert-manager: Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post https://rancher-cert-cert-manager-webhook.cert-manager.svc:443/mutate?timeout=30s: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "cert-manager-webhook-ca")

Describe the bug:

I am using cert-manager to generate the certificate for Rancher, and I deploy both with Helm charts (cert-manager 0.16.1 and Rancher 2.4.8). cert-manager deploys successfully, but while deploying Rancher the certificate generation fails with:

Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post https://rancher-cert-cert-manager-webhook.cert-manager.svc:443/mutate?timeout=30s: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "cert-manager-webhook-ca")

Expected Behavior: Rancher should deploy successfully and the certificate should be issued.

Environment:

  • Kubernetes: v1.15.11-eks-af3caf
  • kubectl: v1.18.6
  • Install method: Helm + Kustomize (via Argo CD)

There are no changes to the charts’ default values.
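For anyone debugging this: the “unknown authority” error usually means the caBundle that cainjector is supposed to patch into the webhook configuration is missing or stale. A quick check is sketched below — the webhook configuration name follows the error message above, and the secret name is an assumption based on the “cert-manager-webhook-ca” authority named in the error:

# Show the CA bundle currently injected into the mutating webhook
# configuration; an empty result means cainjector never patched it:
kubectl get mutatingwebhookconfiguration rancher-cert-cert-manager-webhook \
  -o jsonpath='{.webhooks[0].clientConfig.caBundle}'

# Compare it with the CA certificate in the webhook's serving secret
# (secret name is an assumption; check the secrets in your namespace):
kubectl -n cert-manager get secret cert-manager-webhook-ca \
  -o jsonpath='{.data.ca\.crt}'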

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 29 (4 by maintainers)

Most upvoted comments

I’m getting pissed off by people who say the issue is resolved without explaining what they did to solve it.

Hi @meyskens, this issue got resolved for me, so I’m closing it.

Hi, the problem eventually heals itself… After deploying cert-manager, simply wait ~20 seconds before creating the Issuer, to give cainjector time to inject the CA certificates into the webhook configuration. It’s not really a bug… but the behaviour should be documented in the cert-manager “Getting Started” document.
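For anyone who would rather not hard-code a sleep: a minimal sketch, assuming the default Helm install into the cert-manager namespace and the default deployment names, is to wait for the rollouts to finish before applying any Issuer:

# kubectl rollout status blocks until each deployment is fully available
kubectl -n cert-manager rollout status deploy/cert-manager
kubectl -n cert-manager rollout status deploy/cert-manager-cainjector
kubectl -n cert-manager rollout status deploy/cert-manager-webhook

Note that even with all three deployments available, cainjector may still need a few seconds to patch the CA into the webhook configuration, so a short retry around the first Issuer creation is still prudent.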

The problem I encountered is covered in https://github.com/jetstack/cert-manager/pull/3425#issuecomment-719690301, but to echo it here: setting namespace: cert-manager (or anything else) in Kustomize overwrites the kube-system namespace specified in some RBAC-related resources, which caused this failure. Removing namespace: entirely, so each resource keeps whatever namespace its own manifest declares, resolved the problem (this does mean setting an alternate namespace would require more effort).
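For reference, a minimal sketch of the Kustomize shape being described (file contents are illustrative, not the exact kustomization from this thread):

# kustomization.yaml
resources:
  - cert-manager.yaml
# The line below rewrites the namespace on EVERY resource, including the
# leader-election Role/RoleBinding that cert-manager places in kube-system.
# Removing it lets each resource keep the namespace from its own manifest.
namespace: cert-manager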

I’m also seeing this using v1.0.3. I’m not familiar with the codebase, but do we need to patch clusterrole.rbac.authorization.k8s.io/v1/cert-manager-cainjector to include access to configmaps?

I just ran into this issue, and I patched the ClusterRole with the following rule to access configmaps:

  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - create
      - update

Note that if you’re using Kustomize, you’ll have to provide all the required permissions because the patch will replace the rules in the ClusterRole, instead of appending the new rule to the list. Here’s a gist of the patch I’m using with Kustomize.
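A quick way to confirm whether the ClusterRole change took effect — the service-account name here assumes the default Helm chart naming:

# Impersonate the cainjector service account and test the new permission
kubectl auth can-i get configmaps \
  --namespace kube-system \
  --as system:serviceaccount:cert-manager:cert-manager-cainjector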

I hit nearly the same problem. It had probably persisted at least since the cert-manager pod restarted 4 days ago.

The cainjector logs did not mention anything about RBAC. It looks like cainjector lost its leader-election lease and then just hung silently, neither recovering nor exiting in a way that would trigger a proper restart:

$ k logs -f -n cert-manager cert-manager-cainjector-65cf6db95b-7lsll
I0519 16:42:35.196412       1 start.go:107] "starting" version="v1.3.1" revision="614438aed00e1060870b273f2238794ef69b60ab"
I0519 16:42:36.255632       1 request.go:645] Throttling request took 1.020767978s, request: GET:https://10.100.0.1:443/apis/k8s.nginx.org/v1?timeout=32s
I0519 16:42:36.883635       1 leaderelection.go:243] attempting to acquire leader lease  kube-system/cert-manager-cainjector-leader-election...
I0519 16:42:54.924205       1 leaderelection.go:253] successfully acquired lease kube-system/cert-manager-cainjector-leader-election
I0519 16:42:54.979075       1 reflector.go:207] Starting reflector *v1.CustomResourceDefinition (9h12m4.563327028s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:42:54.982629       1 reflector.go:207] Starting reflector *v1.MutatingWebhookConfiguration (10h36m40.91118976s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:42:54.982673       1 reflector.go:207] Starting reflector *v1.ValidatingWebhookConfiguration (10h47m53.83000074s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:42:54.982717       1 reflector.go:207] Starting reflector *v1.APIService (9h37m19.690882125s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:42:54.983629       1 reflector.go:207] Starting reflector *v1.MutatingWebhookConfiguration (9h4m26.672522691s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:42:54.983695       1 reflector.go:207] Starting reflector *v1.ValidatingWebhookConfiguration (10h17m2.654837575s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:42:54.983726       1 reflector.go:207] Starting reflector *v1.APIService (10h13m16.290195373s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:42:54.983763       1 reflector.go:207] Starting reflector *v1.CustomResourceDefinition (10h13m41.638735515s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:42:54.983899       1 recorder.go:52] cert-manager/controller-runtime/manager/events "msg"="Normal"  "message"="cert-manager-cainjector-65cf6db95b-7lsll_8de00aa3-0325-4443-96c4-a9e079cec6dc became leader" "object"={"kind":"ConfigMap","namespace":"kube-system","name":"cert-manager-cainjector-leader-election","uid":"3e33c0ae-7b1f-43d1-92a8-cf33a67d627e","apiVersion":"v1","resourceVersion":"113075082"} "reason"="LeaderElection"
I0519 16:43:21.442620       1 trace.go:205] Trace[405694791]: "Reflector ListAndWatch" name:external/io_k8s_client_go/tools/cache/reflector.go:156 (19-May-2021 16:42:54.987) (total time: 26454ms):
Trace[405694791]: ---"Objects listed" 26454ms (16:43:00.442)
Trace[405694791]: [26.454674694s] [26.454674694s] END
I0519 16:43:21.486841       1 controller.go:139] cert-manager/certificate/customresourcedefinition/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-customresourcedefinition" "source"={}
I0519 16:43:21.487145       1 controller.go:139] cert-manager/certificate/customresourcedefinition/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-customresourcedefinition" "source"={}
I0519 16:43:21.487438       1 controller.go:139] cert-manager/certificate/mutatingwebhookconfiguration/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-mutatingwebhookconfiguration" "source"={}
I0519 16:43:21.487669       1 controller.go:139] cert-manager/certificate/mutatingwebhookconfiguration/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-mutatingwebhookconfiguration" "source"={}
I0519 16:43:21.487849       1 controller.go:139] cert-manager/certificate/validatingwebhookconfiguration/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-validatingwebhookconfiguration" "source"={}
I0519 16:43:21.487992       1 controller.go:139] cert-manager/certificate/validatingwebhookconfiguration/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-validatingwebhookconfiguration" "source"={}
I0519 16:43:21.488149       1 controller.go:139] cert-manager/certificate/apiservice/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-apiservice" "source"={}
I0519 16:43:21.488349       1 controller.go:139] cert-manager/certificate/apiservice/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-apiservice" "source"={}
I0519 16:43:21.488636       1 reflector.go:207] Starting reflector *v1.Certificate (10h29m52.932041773s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:43:21.611834       1 controller.go:139] cert-manager/certificate/apiservice/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-apiservice" "source"={}
I0519 16:43:21.612282       1 controller.go:139] cert-manager/certificate/customresourcedefinition/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-customresourcedefinition" "source"={}
I0519 16:43:21.612538       1 controller.go:139] cert-manager/certificate/mutatingwebhookconfiguration/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-mutatingwebhookconfiguration" "source"={}
I0519 16:43:21.612774       1 controller.go:139] cert-manager/certificate/validatingwebhookconfiguration/controller "msg"="Starting EventSource" "controller"="controller-for-certificate-validatingwebhookconfiguration" "source"={}
I0519 16:43:21.613284       1 reflector.go:207] Starting reflector *v1.Secret (10h52m29.259025116s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:43:36.425442       1 trace.go:205] Trace[42536113]: "Reflector ListAndWatch" name:external/io_k8s_client_go/tools/cache/reflector.go:156 (19-May-2021 16:42:54.996) (total time: 41429ms):
Trace[42536113]: ---"Objects listed" 41429ms (16:43:00.425)
Trace[42536113]: [41.429357121s] [41.429357121s] END
I0519 16:43:36.876033       1 controller.go:139] cert-manager/secret/customresourcedefinition/controller "msg"="Starting EventSource" "controller"="controller-for-secret-customresourcedefinition" "source"={}
I0519 16:43:39.347055       1 controller.go:139] cert-manager/secret/customresourcedefinition/controller "msg"="Starting EventSource" "controller"="controller-for-secret-customresourcedefinition" "source"={}
I0519 16:43:36.876602       1 controller.go:139] cert-manager/secret/mutatingwebhookconfiguration/controller "msg"="Starting EventSource" "controller"="controller-for-secret-mutatingwebhookconfiguration" "source"={}
I0519 16:43:39.347389       1 controller.go:139] cert-manager/secret/mutatingwebhookconfiguration/controller "msg"="Starting EventSource" "controller"="controller-for-secret-mutatingwebhookconfiguration" "source"={}
I0519 16:43:39.358642       1 reflector.go:207] Starting reflector *v1.Secret (9h13m8.302608063s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:43:36.876893       1 controller.go:139] cert-manager/secret/validatingwebhookconfiguration/controller "msg"="Starting EventSource" "controller"="controller-for-secret-validatingwebhookconfiguration" "source"={}
I0519 16:43:39.447011       1 controller.go:139] cert-manager/secret/validatingwebhookconfiguration/controller "msg"="Starting EventSource" "controller"="controller-for-secret-validatingwebhookconfiguration" "source"={}
I0519 16:43:36.877203       1 controller.go:139] cert-manager/secret/apiservice/controller "msg"="Starting EventSource" "controller"="controller-for-secret-apiservice" "source"={}
I0519 16:43:40.028730       1 controller.go:139] cert-manager/secret/apiservice/controller "msg"="Starting EventSource" "controller"="controller-for-secret-apiservice" "source"={}
E0519 16:44:51.623376       1 leaderelection.go:321] error retrieving resource lock kube-system/cert-manager-cainjector-leader-election: Get "https://10.100.0.1:443/api/v1/namespaces/kube-system/configmaps/cert-manager-cainjector-leader-election": context deadline exceeded
I0519 16:44:51.663010       1 leaderelection.go:278] failed to renew lease kube-system/cert-manager-cainjector-leader-election: timed out waiting for the condition
E0519 16:44:51.663783       1 start.go:138] cert-manager/ca-injector "msg"="manager goroutine exited" "error"="error running manager: leader election lost"
I0519 16:45:20.046454       1 recorder.go:52] cert-manager/controller-runtime/manager/events "msg"="Normal"  "message"="cert-manager-cainjector-65cf6db95b-7lsll_8de00aa3-0325-4443-96c4-a9e079cec6dc stopped leading" "object"={"kind":"ConfigMap","apiVersion":"v1"} "reason"="LeaderElection"
I0519 16:45:20.355718       1 trace.go:205] Trace[1276394915]: "Reflector ListAndWatch" name:external/io_k8s_client_go/tools/cache/reflector.go:156 (19-May-2021 16:43:21.613) (total time: 118742ms):
Trace[1276394915]: [1m58.742295434s] [1m58.742295434s] END
I0519 16:45:20.355757       1 reflector.go:213] Stopping reflector *v1.Secret (10h52m29.259025116s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:45:20.355904       1 reflector.go:213] Stopping reflector *v1.Certificate (10h29m52.932041773s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:45:20.356120       1 reflector.go:213] Stopping reflector *v1.CustomResourceDefinition (9h12m4.563327028s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:45:20.356299       1 reflector.go:213] Stopping reflector *v1.ValidatingWebhookConfiguration (10h47m53.83000074s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:45:20.356572       1 reflector.go:213] Stopping reflector *v1.APIService (9h37m19.690882125s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:45:20.356827       1 reflector.go:213] Stopping reflector *v1.MutatingWebhookConfiguration (10h36m40.91118976s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:45:20.357103       1 reflector.go:213] Stopping reflector *v1.CustomResourceDefinition (10h13m41.638735515s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:45:20.357367       1 trace.go:205] Trace[2073588449]: "Reflector ListAndWatch" name:external/io_k8s_client_go/tools/cache/reflector.go:156 (19-May-2021 16:43:39.358) (total time: 100998ms):
Trace[2073588449]: [1m40.998608269s] [1m40.998608269s] END
I0519 16:45:20.357534       1 reflector.go:213] Stopping reflector *v1.Secret (9h13m8.302608063s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:45:20.357812       1 reflector.go:213] Stopping reflector *v1.ValidatingWebhookConfiguration (10h17m2.654837575s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:45:20.358032       1 reflector.go:213] Stopping reflector *v1.MutatingWebhookConfiguration (9h4m26.672522691s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:45:20.358197       1 reflector.go:213] Stopping reflector *v1.APIService (10h13m16.290195373s) from external/io_k8s_client_go/tools/cache/reflector.go:156
I0519 16:45:34.243983       1 request.go:995] Stream error http2.StreamError{StreamID:0xdb, Code:0x2, Cause:error(nil)} when reading response body, may be caused by closed connection.
I0519 16:45:34.345150       1 request.go:995] Stream error http2.StreamError{StreamID:0xcf, Code:0x2, Cause:error(nil)} when reading response body, may be caused by closed connection.

Needless to say this behavior is not optimal.
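Until the underlying hang is fixed, the pragmatic workaround when cainjector wedges like this is to bounce it (deployment name assumes the default Helm install):

# Recreate the cainjector pod so it re-acquires the leader-election lease
# and re-injects the CA bundle into the webhook configurations
kubectl -n cert-manager rollout restart deployment cert-manager-cainjector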

Hi, the problem eventually heals itself… After deploying cert-manager, simply wait ~20 seconds before creating the Issuer, to give cainjector time to inject the CA certificates into the webhook configuration. It’s not really a bug… but the behaviour should be documented in the cert-manager “Getting Started” document.

Hi @alexsorkin, I tried this as well: I waited 10 minutes after the cert-manager deployment, but still hit the same issue.

I’m also seeing this using v1.0.3. I’m not familiar with the codebase, but do we need to patch clusterrole.rbac.authorization.k8s.io/v1/cert-manager-cainjector to include access to configmaps?

I just ran into this issue, and I patched the ClusterRole with the following rule to access configmaps:

  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - create
      - update

Note that if you’re using Kustomize, you’ll have to provide all the required permissions because the patch will replace the rules in the ClusterRole, instead of appending the new rule to the list. Here’s a gist of the patch I’m using with Kustomize.

This also works for Kustomize, rather than re-including all the original rules:

patchesJson6902:
  - target:
      kind: ClusterRole
      name: cert-manager-cainjector
      version: v1
      group: rbac.authorization.k8s.io
    patch: |-
      - op: add
        path: /rules/-
        value:
          - apiGroups:
            - ""
            resources:
              - configmaps
            verbs:
              - get
              - create
              - update

value needs to be a map instead of an array: the JSON Patch op add with path /rules/- appends a single element to the rules list, so the value must be the rule object itself, not a list wrapping it:

patchesJson6902:
  - target:
      kind: ClusterRole
      name: cert-manager-cainjector
      version: v1
      group: rbac.authorization.k8s.io
    patch: |-
      - op: add
        path: /rules/-
        value:
          apiGroups:
            - ""
          resources:
            - configmaps
          verbs:
            - get
            - create
            - update
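You can sanity-check the rendered patch before applying it; kubectl kustomize renders the overlay without touching the cluster (this assumes the kustomization above lives in the current directory):

# Render the overlay and eyeball the appended rule — it should be a map
# with apiGroups/resources/verbs keys, not a nested list
kubectl kustomize . | grep -B 2 -A 6 configmaps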
