cert-manager: ClusterIssuer not found
Describe the bug: When attempting to request a certificate, the operation fails.

kubectl describe certificaterequest xx
Name:          xx-staging-3078285176
Namespace:     default
Labels:        cattle.io/creator=norman
Annotations:   cert-manager.io/certificate-name: xx-staging
               cert-manager.io/private-key-secret-name: xx-staging
API Version:   cert-manager.io/v1alpha2
Kind:          CertificateRequest
Metadata:
  Creation Timestamp:  2019-12-26T23:40:35Z
  Generation:          1
  Owner References:
    API Version:           cert-manager.io/v1alpha2
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Certificate
    Name:                  xx-staging
    UID:                   1719a084-ad5d-4a8c-a89e-0e66906103fc
  Resource Version:  663190
  Self Link:         /apis/cert-manager.io/v1alpha2/namespaces/default/certificaterequests/xx-staging-3078285176
  UID:               b00bfa66-592c-4255-b7e2-e182af064449
Spec:
  Csr:  --
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   letsencrypt-staging
Status:
  Conditions:
    Last Transition Time:  2019-12-26T23:40:35Z
    Message:               Referenced issuer does not have a Ready status condition
    Reason:                Pending
    Status:                False
    Type:                  Ready
Events:
  Type    Reason          Age                From          Message
  ----    ------          ---                ----          -------
  Normal  IssuerNotFound  19m (x5 over 19m)  cert-manager  Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt-staging" not found
kubectl describe clusterissuers letsencrypt-
Name:         letsencrypt-staging
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1alpha2
Kind:         ClusterIssuer
Metadata:
  Creation Timestamp:  2019-12-26T23:43:38Z
  Generation:          1
  Resource Version:    663214
  Self Link:           /apis/cert-manager.io/v1alpha2/clusterissuers/letsencrypt-staging
  UID:                 ad0fb84d-cf45-4b47-87ba-c44539d54acb
Spec:
  Acme:
    Email:  xx
    Private Key Secret Ref:
      Name:  letsencrypt-staging-account-key
    Server:  https://acme-staging-v02.api.letsencrypt.org/directory
    Solvers:
      Http 01:
        Ingress:
          Class:  nginx
Status:
  Acme:
  Conditions:
    Last Transition Time:  2019-12-26T23:45:47Z
    Message:               Failed to verify ACME account: Get https://acme-staging-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout
    Reason:                ErrRegisterACMEAccount
    Status:                False
    Type:                  Ready
Events:
  Type     Reason                Age                  From          Message
  ----     ------                ---                  ----          -------
  Warning  ErrVerifyACMEAccount  4m28s (x9 over 23m)  cert-manager  Failed to verify ACME account: Get https://acme-staging-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout
  Warning  ErrInitIssuer         4m28s (x9 over 23m)  cert-manager  Error initializing issuer: Get https://acme-staging-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout
Anything else we need to know?:
When checking the logs for the cert-manager pod I get the following:
ACME server URL host and ACME private key registration host differ. Re-checking ACME account registration
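The dial tcp: i/o timeout above, together with that log line, points to the cert-manager pod not being able to reach the ACME endpoint at all. One hedged way to check outbound connectivity from inside the cluster (the pod name and curl image are just examples, not anything from this report):

```shell
# Run a throwaway pod and try to reach the ACME staging directory from inside the cluster
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -sv https://acme-staging-v02.api.letsencrypt.org/directory
```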
Environment details:
- Kubernetes version (e.g. v1.10.2): 1.16.3
- cert-manager version (e.g. v0.4.0): v0.13.0-alpha.0
- Install method (e.g. helm or static manifests): helm
/kind bug
About this issue
- State: closed
- Created 5 years ago
- Comments: 21 (3 by maintainers)
Just came across this issue, and the problem was specifying the right kind of issuer (Issuer vs. ClusterIssuer) in the Certificate.
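For illustration, a minimal Certificate sketch using the names from this issue (the DNS name is a placeholder; this is not the commenter's actual manifest):

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: xx-staging
  namespace: default
spec:
  secretName: xx-staging
  dnsNames:
    - example.com                # placeholder hostname
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer          # must say ClusterIssuer here, not Issuer, for a cluster-scoped issuer
    group: cert-manager.io
```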
"ClusterIssuer not found" can also occur because of a subtle upgrade detail when Helm charts are involved: the annotations in the Helm chart (or the certificate's issuer reference) don't match the latest versions of cert-manager.
https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/
In particular, the ingress annotations in your Helm chart's ingress definitions need to change from the "certmanager.k8s.io" group to "cert-manager.io", e.g. certmanager.k8s.io/cluster-issuer: becomes cert-manager.io/cluster-issuer:.
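Sketched on an Ingress (the name, host, and API version are placeholders for whatever your chart actually renders):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # old annotation, for cert-manager <= v0.10:
    # certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    # new annotation, for cert-manager >= v0.11:
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  rules:
    - host: example.com
```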
I'm adding this because the issue name is confusing and the problem can crop up for multiple reasons. The OP looks like a timeout issue reaching the host.
Notes from a serial upgrader
So here's a bizarre twist.

BUT I can list my certificates and they all come back as Ready. This is after an upgrade from v0.9 to v0.12 directly.
In general, it seems the upgraded cert-manager pod has to come online before kubectl will find your new (or old) resource versions. You'll see the errors until the upgraded version is running.
Generally it seems you need to restart the cert-manager pod in order for this to work. Not sure why that is, or how many times are necessary.
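A minimal sketch of that restart, assuming cert-manager was installed into the cert-manager namespace with the default deployment name:

```shell
# Restart the controller so it reconciles against the upgraded CRDs, then wait for it
kubectl -n cert-manager rollout restart deployment cert-manager
kubectl -n cert-manager rollout status deployment cert-manager
```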
Another fun thing: kubectl seems to cache the CRDs, so client-go lookups for specific resources and versions work great, but kubectl returns an error. Running kubectl api-resources will get it to resync the resources.
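A quick way to check whether that resync worked, using the resource names from this thread:

```shell
# Refresh kubectl's discovery information and confirm the new API group is served
kubectl api-resources | grep cert-manager.io

# The issuer should now be found under the new group
kubectl get clusterissuers.cert-manager.io letsencrypt-staging
```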
I had the same problem after an upgrade on AKS 1.22.11, going from cert-manager 1.4 to 1.5 (1.5.5 at the time).
(output omitted)
But the issuer was here:
(output omitted)
Reading through this doc:
https://cert-manager.io/docs/faq/acme/#1-troubleshooting-clusterissuers
I found some request-related issues and a method for troubleshooting.
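That FAQ walks the chain of resources one by one; roughly, with the names from this thread (the order and challenge names are placeholders you read off the previous resource's events):

```shell
kubectl describe clusterissuer letsencrypt-staging
kubectl describe certificate xx-staging -n default
kubectl describe certificaterequest xx-staging-3078285176 -n default
kubectl describe order <order-name> -n default          # name is listed in the certificaterequest's events
kubectl describe challenge <challenge-name> -n default  # name is listed in the order's events
```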
While you are kubectl describe-ing the elements one by one, each one gives you the link to the next. At one point I saw secrets older than my upgrade; I deleted them, and the certificate too, etc. Once I finally arrived at the challenge, it showed that the IP was resolving but the HTTP request was not getting through.
The solution was to upgrade my nginx-ingress-controller too.
It was related to the fact that the IngressClass did not exist at all in the previous one (an old legacy install in my case).
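A hedged way to check that side of things (the nginx class name comes from the issuer's http01 solver above):

```shell
# List the IngressClasses the cluster actually has; the http01 solver expects "nginx" to exist
kubectl get ingressclass
kubectl describe ingressclass nginx
```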
Hope this input helps someone searching around for this error message.
I have the same problem. Kubernetes version: v1.17.0 (same with v1.16.3). cert-manager version: v0.12.0. Install method: static manifests, and Helm after uninstalling the static install.
It works with a selfSigned issuer but fails with ACME Let's Encrypt staging & production.
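For reference, minimal sketches of the two issuer kinds being compared, with the ACME fields reconstructed from the spec shown in the original report (the email is a placeholder):

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: user@example.com            # placeholder; redacted as "xx" in the report
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```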