cert-manager: Waiting for http-01 challenge propagation: wrong status code '404', expected '200'
Describe the bug:
I have an nginx ingress controller on my cluster and I'm also using cert-manager version v0.9.1. I have an ingress that I'm trying to get a certificate for. The issue is that the created challenge goes to the `pending` state, and the reason is `Waiting for http-01 challenge propagation: wrong status code '404', expected '200'`.
I did some digging and found a post on the Let's Encrypt forums stating that this issue might be caused by cert-manager's own pre-flight (self-check) request. My question is: is this a bug, or is the problem caused by my using an old version of cert-manager?
Expected behaviour:
A new certificate in the `Ready` state for the ingress I created.
Steps to reproduce the bug: Create the manifest provided below
Anything else we need to know?: My ingress manifest:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/acme-http01-edit-in-place: "false"
    ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/tls-acme: "true"
  name: test-ingress
  namespace: default
spec:
  rules:
    - host: test.domain.com
      http:
        paths:
          - backend:
              serviceName: backendservice
              servicePort: 80
            path: /
  tls:
    - hosts:
        - test.domain.com
      secretName: test-tls-secret
```
I also have a backend that serves static content on its index page.
The created challenge fails fairly quickly, within something like 7 seconds, which made me think the failure comes from a pre-flight self-check rather than the actual validation.
Environment details:
- Kubernetes version: v1.14.8
- cert-manager version: v0.9.1
/kind bug
/kind support
About this issue
- State: closed
- Created 4 years ago
- Comments: 17 (1 by maintainers)
In case this helps anyone else - I had this issue on a vanilla MicroK8s install. Eventually, I tracked it down to the fact that the built-in ingress class is called `public` in MicroK8s, not `nginx`. After I modified my issuer definition so that the HTTP-01 solver uses the `public` ingress class, all was good - a rough sketch of that change is below.
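The comment doesn't include the actual issuer that was used, so the following is only a sketch of what such a change might look like: the issuer name, email, ACME server, and secret name are placeholders, and the apiVersion assumes a recent cert-manager release (on the v0.9.x API from this issue the group is `certmanager.k8s.io/v1alpha1`, but the solver's ingress class works the same way).

```yaml
# Hypothetical sketch only - names, email, and server are placeholders.
# The relevant part is the http01 solver's ingress class, which on MicroK8s
# must be "public" (the built-in ingress class) rather than "nginx".
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: public   # MicroK8s' built-in ingress class
```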
Okay, I think I found my issue - I kept thinking about what the check could be and suddenly realised that my cluster sits on an internal network while the domain name is a public record. So when cert-manager resolves the name and tries to connect to it, the request has to go out to the internet and then back in through the firewall. I have seen several kinds of issues where such connections fail, so I added an internal DNS record as well, so that when cert-manager resolves the name it gets the internal address (and doesn't go through the firewall to reach my ingress). Only 5 minutes after that, my certificates were installed and running fine.
I was misled by the message saying the propagation failed - in reality, cert-manager was simply unable to verify that the propagation had succeeded.
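The comment doesn't say where the internal record was added - it could just as well live on a corporate DNS server. If the cluster's own resolver is CoreDNS, one common way to pin a public hostname to an internal address is the `hosts` plugin in the CoreDNS ConfigMap; the snippet below is a rough sketch using the hostname from this issue and a placeholder internal IP.

```yaml
# Hypothetical sketch: resolve test.domain.com inside the cluster to the
# internal ingress address, so pods (including cert-manager's self-check)
# reach the ingress directly instead of going out through the firewall.
# 10.0.0.50 is a placeholder for the ingress controller's internal IP.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        hosts {
            10.0.0.50 test.domain.com
            fallthrough
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
```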