helm: `Error: rendered manifests contain a resource that already exists` but nothing shows up on `helm list --all`

Output of helm version: version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}

Output of kubectl version: Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): Minikube

I’m trying to helm install, but Helm comes back with the following error: Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: CustomResourceDefinition, namespace: , name: scheduledsparkapplications.sparkoperator.k8s.io

but when I do helm list --all, nothing comes up, and therefore I am unable to remove any resource to proceed with the installation.

Any help appreciated! Thanks!

See the attached screenshot: Screen Shot 2020-01-16 at 3 36 00 PM

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 31
  • Comments: 33 (4 by maintainers)

Most upvoted comments

Solution:

kubectl delete clusterrole nginx-ingress

kubectl delete clusterrolebinding nginx-ingress

Check for the clusterrole and clusterrolebinding and delete them all; that should solve the problem:

kubectl get clusterrole | grep ingress
kubectl get clusterrolebinding | grep ingress

It solved it in my case.
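
A small sketch generalizing that approach, in the same style as the script further down in the thread; the nginx-ingress pattern is only an example, and the list is worth reviewing before running the deletes:

# Destructive: removes every clusterrole/clusterrolebinding whose name matches PATTERN.
PATTERN=nginx-ingress
kubectl get clusterrole,clusterrolebinding -o name | grep "${PATTERN}"
for R in $(kubectl get clusterrole,clusterrolebinding -o name | grep "${PATTERN}"); do
    kubectl delete "${R}"
done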

Further to the above

 kubectl get crd

Returns just one item, which was created last September and so presumably isn’t relevant.
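
If it helps to judge which CRDs are stale, the creation timestamp can be printed next to each name with a plain kubectl invocation:

kubectl get crd -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp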

I encountered this problem with CRDs in a situation where deletion was not an option and solved the problem with the following script:

#!/bin/bash
# Adopt existing CRDs into a Helm release by adding the ownership metadata
# that Helm 3 checks before it will manage an existing resource.

set -euo pipefail

# GROUP, HELM_RELEASE and HELM_RELEASE_NAMESPACE are expected in the environment.
for CRD in $(kubectl get crds -o=name | grep "${GROUP}")
do
    kubectl label "${CRD}" app.kubernetes.io/managed-by=Helm --overwrite
    kubectl annotate "${CRD}" meta.helm.sh/release-name="${HELM_RELEASE}" --overwrite
    kubectl annotate "${CRD}" meta.helm.sh/release-namespace="${HELM_RELEASE_NAMESPACE}" --overwrite
done

Run with:

$ GROUP=<your crd group> HELM_RELEASE=<your release name> HELM_RELEASE_NAMESPACE=<your release namespace> ./importCRDs.sh

After that the upgrade succeeded and the CRDs became managed by the release. I was using Helm v3.7.0.
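
A quick way to verify that the script took effect is to read the label and annotations back from one of the CRDs; the CRD name here is simply the one from the original error, and Helm 3 (3.2 and later) checks this metadata when deciding whether it may take over an existing resource:

kubectl get crd scheduledsparkapplications.sparkoperator.k8s.io --show-labels
kubectl get crd scheduledsparkapplications.sparkoperator.k8s.io -o jsonpath='{.metadata.annotations}{"\n"}'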

Running kubectl get all and deleting the reported conflicting resource (or all related resources, if needed) worked for me.

Just go on deleting the conflicting resources (as you see in the error message) and it should work. In my case I had to delete 3 resources:

kubectl delete rolebinding -n kube-system aws-load-balancer-controller-leader-election-rolebinding
kubectl delete mutatingwebhookconfiguration aws-load-balancer-webhook
kubectl delete validatingwebhookconfiguration aws-load-balancer-webhook
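
If you are not sure which other leftovers belong to the same install, the common app.kubernetes.io/instance label is one way to find them, assuming the chart sets it (the instance name below is only a placeholder):

kubectl get all -A -l app.kubernetes.io/instance=aws-load-balancer-controller
kubectl get clusterrole,clusterrolebinding,mutatingwebhookconfiguration,validatingwebhookconfiguration \
    -l app.kubernetes.io/instance=aws-load-balancer-controller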

The CRD could exist because the chart that deployed it may not remove the CRD object from the cluster. In Helm 3 this is expected behaviour, as Helm will only install CRDs (it never upgrades or deletes them).
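
If the conflicting CRD ships in the chart’s crds/ directory and keeping the existing copy is acceptable, Helm 3 can be told not to install CRDs at all; the release and chart names here are only placeholders:

# Skips everything under crds/; CRDs rendered from templates/ are not affected.
helm install sparkoperator ./spark-operator-chart --skip-crds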

I just mentioned the --all-namespaces flag for your information.

Getting the same error. helm version: version.BuildInfo{Version:"v3.0.3", GitCommit:"ac925eb7279f4a6955df663a0128044a8a6b7593", GitTreeState:"clean", GoVersion:"go1.13.7"}

--skip-crds also doesn’t help… Also, is there a way to make this operation idempotent instead of passing different flags each time? For example, say I have a chart that deploys an application and also has some CRDs in its crds/ directory. If I want to install 2 instances of this application, I cannot do it, as the second install gives a resource conflict with kind: CustomResourceDefinition.
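
One way to check whether the conflicting CRDs are rendered from templates/ (which --skip-crds does not touch) or shipped under crds/ (which it skips) is to render the chart locally; release name and chart path are placeholders:

# CRDs that appear here come from templates/ and are rendered on every install.
helm template my-app ./my-app-chart | grep -n "kind: CustomResourceDefinition"

# With --include-crds the crds/ directory is rendered as well, for comparison.
helm template my-app ./my-app-chart --include-crds | grep -n "kind: CustomResourceDefinition"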

Wondering why there are suggestions and a lot of support for deleting resources 🤔 At least I cannot do that straight in my production clusters… 😃

A leftover CRD scheduledsparkapplications.sparkoperator.k8s.io seems to be the cause. In my case removing the previously created CRD resources did the trick (these are not removed when removing the release).

@saicharanduppati It seems like CRD scheduledsparkapplications.sparkoperator.k8s.io already exists in your cluster. This is causing the conflict.

BTW, to get all releases in all namespaces, use --all-namespaces flag with helm ls.
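
Putting those two together, a minimal check-and-clean sequence might look like the following; the last step is destructive (deleting a CRD also deletes all custom resources of that type), and the CRD name is the one from the original error:

# Confirm that no release in any namespace owns the resource.
helm ls --all-namespaces

# Inspect the leftover CRD, then remove it if it is really orphaned.
kubectl get crd scheduledsparkapplications.sparkoperator.k8s.io -o yaml
kubectl delete crd scheduledsparkapplications.sparkoperator.k8s.io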

In my case I had to add:

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
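
To see whether similar admission-webhook leftovers exist before deleting anything (the grep pattern is only an example; these objects are cluster-scoped):

kubectl get validatingwebhookconfiguration,mutatingwebhookconfiguration | grep -i ingress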


kubectl get CustomResourceDefinition --all-namespaces works for me to list the CustomResourceDefinitions, which are not attached to any namespace.
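
CRDs are cluster-scoped, which is also why the error message shows an empty namespace; this can be confirmed with a plain kubectl call:

kubectl api-resources --namespaced=false | grep customresourcedefinitions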

In my case add: kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

You are a genius and saved my life. Thank you.

I guess you installed the services with Helm 2, and now you are using Helm 3.

I ran the following commands, and it works now:

kubectl delete service -n <namespace> <service-name>
kubectl delete deployment -n <namespace> <deployment-name>
kubectl delete ingress -n <namespace> <ingress-name>

It works on v3.2.0.
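
If the suspicion about Helm 2 is right, the old release records can be checked directly; Helm 2 with the default Tiller setup stores them as ConfigMaps in kube-system labelled OWNER=TILLER:

kubectl get configmaps -n kube-system -l OWNER=TILLER

If anything shows up, the helm-2to3 plugin is one option for migrating those releases instead of deleting and recreating the resources.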

Adding --skip-crds to the helm install... command gives the same error.

Same issue here.

Followed the steps in https://docs.microsoft.com/en-us/azure/aks/ingress-basic.

Then deleted the namespace:

kubectl delete namespace <mynamespace>

Then performed the steps again, got the ‘resource that already exists’ error at the helm install nginx-ingress... step.

If I do

helm list --all

…I get an empty list.

I’m a beginner at all of this, but it seems to me the process should be idempotent.
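
When helm list --all comes back empty, it is also worth checking whether the release record simply lives in another namespace; Helm 3 stores release state as Secrets labelled owner=helm in the release’s own namespace:

helm list --all-namespaces
kubectl get secrets --all-namespaces -l owner=helm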