cert-manager: customresourcedefinitions.apiextensions.k8s.io "certificates.certmanager.k8s.io" already exists

Bugs should be filed for issues encountered whilst operating cert-manager. You should first attempt to resolve your issues through the community support channels, e.g. Slack, in order to rule out individual configuration errors. Please provide as much detail as possible.

Describe the bug: I’ve installed cert-manager twice, into two different custom namespaces (stage and demo). The first installation, into stage, works flawlessly.

helm upgrade --namespace stage --install --wait --set ingressShim.defaultIssuerName=letsencrypt-prod --set ingressShim.defaultIssuerKind=Issuer --set rbac.create=false --set serviceAccount.create=false stage-cert-manager stable/cert-manager

The second installation, into demo, fails:

helm upgrade --namespace demo --install --wait --set ingressShim.defaultIssuerName=letsencrypt-prod --set ingressShim.defaultIssuerKind=Issuer --set rbac.create=false --set serviceAccount.create=false demo-cert-manager stable/cert-manager
> Error: release demo-cert-manager failed: customresourcedefinitions.apiextensions.k8s.io "certificates.certmanager.k8s.io" already exists

These are the existing customresourcedefinitions in my cluster:

kubectl get customresourcedefinitions --all-namespaces=true
NAME                                AGE
apprepositories.kubeapps.com        2d
certificates.certmanager.k8s.io     19m
clusterissuers.certmanager.k8s.io   19m
issuers.certmanager.k8s.io          19m

And this is the definition of certificates.certmanager.k8s.io:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: 2018-09-06T09:22:39Z
  generation: 1
  labels:
    app: cert-manager
    chart: cert-manager-v0.4.1
    heritage: Tiller
    release: demo-cert-manager
  name: certificates.certmanager.k8s.io
  resourceVersion: "7379719"
  selfLink: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/certificates.certmanager.k8s.io
  uid: 668f36bf-b1b6-11e8-a174-ee96761aa8f6
spec:
  additionalPrinterColumns:
  - JSONPath: .metadata.creationTimestamp
    description: |-
      CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.

      Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    name: Age
    type: date
  group: certmanager.k8s.io
  names:
    kind: Certificate
    listKind: CertificateList
    plural: certificates
    shortNames:
    - cert
    - certs
    singular: certificate
  scope: Namespaced
  version: v1alpha1
  versions:
  - name: v1alpha1
    served: true
    storage: true
status:
  acceptedNames:
    kind: Certificate
    listKind: CertificateList
    plural: certificates
    shortNames:
    - cert
    - certs
    singular: certificate
  conditions:
  - lastTransitionTime: 2018-09-06T09:22:39Z
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: null
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established
  storedVersions:
  - v1alpha1

I guess the CRD is now assigned to the demo release, right? See release: demo-cert-manager in the labels above.
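A quick way to confirm which release last wrote the CRD (a sketch; the jsonpath query just reads the label shown in the YAML above):

kubectl get crd certificates.certmanager.k8s.io -o jsonpath='{.metadata.labels.release}'
# -> demo-cert-manager; CRDs are cluster-scoped, so both releases contend for the same object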

Expected behaviour: cert-manager should be installable without issues in two namespaces.

Steps to reproduce the bug: see above.

Anything else we need to know?:

Environment details:

  • Kubernetes version (e.g. v1.10.2): 1.11.2
  • Cloud-provider/provisioner (e.g. GKE, kops AWS, etc): azure
  • cert-manager version (e.g. v0.4.0): 0.4.1
  • Install method (e.g. helm or static manifests): helm

/kind bug

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 13
  • Comments: 38 (3 by maintainers)

Most upvoted comments

I worked around this with:

kubectl get customresourcedefinition
kubectl delete customresourcedefinition xxxxxxxx

But that may well do horrible things if used in production, I don’t know.

I had the same problem with a fresh install on an AWS cluster. Like @michaelsteven, I solved it with:

helm install --name cert-manager --namespace yournamespace stable/cert-manager --set createCustomResource=false
helm upgrade --install --namespace yournamespace cert-manager stable/cert-manager --set createCustomResource=true

I am getting this issue even on 0.5.0

Ran into these symptoms deploying on a new/clean AKS cluster today, and the above workarounds didn’t work. I was able to work around it by first doing a helm install with --set createCustomResource=false, then following that up with a helm upgrade without setting that variable.
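A sketch of that two-step sequence, with a hypothetical release name and namespace (adjust both to your setup):

helm install --name cert-manager --namespace cert-manager stable/cert-manager --set createCustomResource=false
helm upgrade --install --namespace cert-manager cert-manager stable/cert-manager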

> I worked around this with:
>
> kubectl get customresourcedefinition
> kubectl delete customresourcedefinition xxxxxxxx
>
> But that may well do horrible things if used in production, I don’t know.

This works… Thanks for sharing

This is a bug introduced in Helm v2.12.0 and corrected in v2.12.1. Just upgrade Helm and it will work.
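To apply that fix (a sketch for Helm v2, where the in-cluster Tiller must be upgraded along with the client; how you update the client binary depends on your install method):

helm version --short   # shows both client and server (Tiller) versions
helm init --upgrade    # after updating the client binary, brings Tiller up to match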

We’ve changed the way we install the CRDs in the helm chart for the next release due to issues with Helm. You can see a bit more info here! https://github.com/jetstack/cert-manager/pull/1138

On Thu, 6 Dec 2018 at 10:44, Piotr Kula notifications@github.com wrote:

I had this problem because I ran the initial command and forgot to set rbac=false - helm managed to install these custom resources and purge did not clean up after it. 🤔

I tried with `--set cert-manager.createCustomResource=false` but still got the same error 🤔

So I followed the tip kubectl get issuer,clusterissuer,certificate -o yaml --all-namespaces > cert-manager-resources-backup.yaml and the YAML was empty… so nothing important there 😄

I just deleted the custom resources and ran the Helm install again with rbac=false, and it installed properly.

Who is responsible for cleaning up custom resources after delete/purge?


Workaround.

I added a simple check before making the Helm release for our deployments.

set +e
kubectl api-resources -o name | grep certificates.certmanager.k8s.io
create_custom_resource=$( if [[ $? == 0 ]]; then echo false; else echo true; fi; )
set -e
helm upgrade --install \
  ... \
  --set cert-manager.createCustomResource=$create_custom_resource \
  ...
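An equivalent way to write the check without the set +e / $? dance, if that reads clearer (a sketch; same behaviour, assuming kubectl 1.11+ for api-resources):

if kubectl api-resources -o name | grep -q certificates.certmanager.k8s.io; then
  create_custom_resource=false   # CRD already present, tell the chart not to create it
else
  create_custom_resource=true    # fresh cluster, let the chart create the CRDs
fi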

There’s an issue tracking this upstream: https://github.com/helm/helm/issues/4259

The only solution for mitigating the issue currently is to delete the CRD, which will delete all data.

The ‘proper’ workaround for the time being is something like:

$ kubectl get issuer,clusterissuer,certificate -o yaml --all-namespaces > cert-manager-resources-backup.yaml
$ kubectl delete crd issuers.certmanager.k8s.io clusterissuers.certmanager.k8s.io certificates.certmanager.k8s.io
$ {run cert-manager install via Helm}
$ kubectl create -f cert-manager-resources-backup.yaml
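For the step in braces, any of the install commands above will do; e.g. with the names from this report (adjust release name and namespace to yours):

$ helm upgrade --namespace demo --install demo-cert-manager stable/cert-manager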

My shortcut to purge:

kubectl delete customresourcedefinitions clusterissuers.certmanager.k8s.io issuers.certmanager.k8s.io certificates.certmanager.k8s.io
helm delete --purge cert-manager
kubectl delete namespaces cert-manager
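To verify the purge before reinstalling, a simple check (no output means the cert-manager CRDs are gone):

kubectl get customresourcedefinitions | grep certmanager.k8s.io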

Same issue with fresh cluster & Helm v2.12.1 😦

I used it and it worked flawlessly here… on 2.12.0 I had the issue.

> I had the same problem with a fresh install on an AWS cluster. Like @michaelsteven, I solved it with:
>
> helm install --name cert-manager --namespace yournamespace stable/cert-manager --set createCustomResource=false
> helm upgrade --install --namespace yournamespace cert-manager stable/cert-manager --set createCustomResource=true

Same issue with a fresh cluster in GKE, this works perfectly!

Ok, I found how to fix my problem.

I’m deploying cert-manager via Helm in my “infrastructure as code” repo’s deployment process (which creates the k8s instances, etc., via Terraform).

For each app, I use a Helm chart to deploy it. In these app charts, I used to declare cert-manager as a dependency. If I remove this dependency, the deployment of my apps works.
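For illustration, that kind of dependency would sit in each app chart’s requirements.yaml (Helm v2); the entry below is hypothetical (version and repository guessed from the chart label above). Since CRDs are cluster-scoped, every app release carrying this dependency tried to create the same three CRDs, hence the collision:

dependencies:
  - name: cert-manager
    version: v0.4.1
    repository: https://kubernetes-charts.storage.googleapis.com/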