cert-manager: istio letsencrypt clouddns dns01 failing with Google API Error 403
Hi All,
I have installed cert-manager on GKE, with issuers using the clouddns provider for dns01 validation. The issuers use a GCP service account that has the Cloud DNS admin role.
However, on deploying a certificate, all challenges fail with this error:
error processing: GoogleCloud API call failed: googleapi: Error 403: Forbidden, forbidden
What could I be missing here?
My issuers are:
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: internal-issuer
spec:
  selfSigned: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: mail@org.com
    privateKeySecretRef:
      name: letsencrypt-prod
    dns01:
      providers:
        - name: clouddns
          clouddns:
            serviceAccountSecretRef:
              name: clouddns-dns01-solver-svc-acct
              key: service-account.json
            project: mygcpproject
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: mail@org.com
    privateKeySecretRef:
      name: letsencrypt-staging
    dns01:
      providers:
        - name: clouddns
          clouddns:
            serviceAccountSecretRef:
              name: clouddns-dns01-solver-svc-acct
              key: service-account.json
            project: mygcpproject
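In case it matters, the secret referenced by serviceAccountSecretRef was created along these lines (the namespace and key-file path are placeholders for my actual values):

kubectl create secret generic clouddns-dns01-solver-svc-acct \
  --namespace cert-manager \
  --from-file=service-account.json=/path/to/service-account.json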
This actually worked for me.
Creating a new service account with the Owner permission solved the problem; updating the old key did not work.
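For reference, the recreation looked roughly like this (the account name and key path are examples, and roles/owner is deliberately broad):

# create a fresh service account, grant it Owner, and mint a new key
gcloud iam service-accounts create dns01-solver-v2 --project mygcpproject
gcloud projects add-iam-policy-binding mygcpproject \
  --member "serviceAccount:dns01-solver-v2@mygcpproject.iam.gserviceaccount.com" \
  --role roles/owner
gcloud iam service-accounts keys create key.json \
  --iam-account dns01-solver-v2@mygcpproject.iam.gserviceaccount.com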
I finally figured it out. It's all about node-pool permissions (scopes).
I verified my node's scopes and found that it was missing
https://www.googleapis.com/auth/cloud-platform
So what you need to do is add the Cloud Platform scope; one way to check and fix this is sketched below.
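For anyone else checking, this is roughly how to inspect and fix it (the cluster, node-pool, and zone names are placeholders):

# list the OAuth scopes on the existing node pool
gcloud container node-pools describe default-pool \
  --cluster my-cluster --zone us-central1-a \
  --format="value(config.oauthScopes)"

# scopes can't be changed on an existing pool, so create a
# replacement pool that includes the cloud-platform scope
gcloud container node-pools create pool-with-cloud-platform \
  --cluster my-cluster --zone us-central1-a \
  --scopes=cloud-platform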
OK, I managed to find some reproducible steps, as this drove me nuts for about 3 hours yesterday.
I have a cluster where this works fine; it uses a domain like *.knative.example.com.
If I launch a second cluster in the same project and want to use, say, *.svc.example.com, I get 403 errors. This is odd, as the service account clearly has access. I also tried *.svc.different.com with the same result, so it isn't domain-specific.
This project also has Google App Engine and Firebase accounts managing domains.
When I created a clean project and deployed a cluster with *.svc.different.com, it worked the first time.
I am going to raise this internally with the Cloud DNS team to see if we can get to the bottom of it, because this kind of opaque error causes a lot of wasted human heartbeats.
I am having the same issue, but I ran a role binding giving the SA the dns.admin permission (roughly the command below), and it worked.
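The binding was something along these lines (the service-account email is an example; substitute your own):

gcloud projects add-iam-policy-binding mygcpproject \
  --member "serviceAccount:clouddns-dns01-solver@mygcpproject.iam.gserviceaccount.com" \
  --role roles/dns.admin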
For any weary travelers getting here: I found this post: https://github.com/jetstack/cert-manager/issues/2069#issuecomment-531428320 . I recreated my service account with a different ID and things started working.
Same issue here; it was working fine a month ago. I just forked a new cluster with nothing changed in my configuration, so I'm guessing the last gcloud components update or the upgrade of the nodes' Kubernetes version messed something up. If anyone has any clue, that would be great.