helm: UPGRADE FAILED: rendered manifests contain a new resource that already exists

I have a release of stable/wordpress that was deployed with Helm 2. Earlier I followed the instructions to migrate it to Helm 3. Now I want to make some changes in values.yaml and apply them to the release, as I always did with Helm 2, but I run into:

Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: namespace: default, name: [deployment name], existing_kind: apps/v1, Kind=Deployment, new_kind: apps/v1, Kind=Deployment

I'm confused. I searched this error message on Google and got the idea that "if the Kind is changed, it cannot be automatically upgraded". But I'm using exactly the same version of the chart, so there shouldn't be any difference. Also, from the error message, the existing_kind and new_kind are the same.

My command is: helm upgrade [release name] stable/wordpress --version 8.1.1 -f values.yaml

Output of helm version: version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"}

Output of kubectl version: Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.3-tke.9", GitCommit:"0a5b3103ef7653b37fdff6fc19a72a509e5f2698", GitTreeState:"clean", BuildDate:"2020-01-07T02:21:33Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.3-tke.9", GitCommit:"0a5b3103ef7653b37fdff6fc19a72a509e5f2698", GitTreeState:"clean", BuildDate:"2020-01-07T02:21:45Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): Tencent Cloud

Is this a bug, or did I miss some information? I'm not a Kubernetes expert, but I'll try to solve it if I can get a clue. Thanks.

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 19
  • Comments: 29 (3 by maintainers)

Most upvoted comments

Apologies, you are right. @fr0der1c’s issue in the OP has been solved with #7649.

The issue described in https://github.com/helm/helm/issues/7697#issuecomment-595984123 is a duplicate of #7219.

Once Helm 3.2.0 has been released, attach the following annotations and label to any resource you wish to adopt into the release:

KIND=deployment
NAME=my-app-staging
RELEASE=staging
NAMESPACE=default
kubectl annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE
kubectl annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE
kubectl label $KIND $NAME app.kubernetes.io/managed-by=Helm

Note that from Helm 3.2.0 onwards these annotations and labels are attached to all resources automatically; this workaround is only needed for resources deployed with older versions of Helm.
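
If it helps, here is a quick sanity check that the ownership metadata actually landed on the resource. This is just a sketch, using the same placeholder variables as above and assuming kubectl points at the right cluster:

kubectl get $KIND $NAME -n $NAMESPACE -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}'
kubectl get $KIND $NAME -n $NAMESPACE -o jsonpath='{.metadata.labels.app\.kubernetes\.io/managed-by}'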

Once #2730 is implemented, the process to adopt existing resources into a release will be drastically simpler.

In any case, it looks like this has been resolved and this issue can now be closed.

I have this exact same issue. If you change the apiVersion of a kind, say Deployment, Helm won't be able to change it because it "already exists". If you deploy a chart with a Deployment at apiVersion apps/v1beta2 and then update the chart to change the apiVersion to apps/v1, you will hit this exact bug.
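
One rough way to see the mismatch for yourself (the release and deployment names below are placeholders, not taken from this thread) is to compare the apiVersion Helm recorded for the previous revision with the live object and its Helm ownership metadata:

helm get manifest my-release | grep -B3 'kind: Deployment'
kubectl get deployment my-app -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}'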

Hi,

We have the same problem when upgrading the stable/sysdig chart. In version 1.7.6 there is a change from apiVersion: rbac.authorization.k8s.io/v1beta1 to apiVersion: rbac.authorization.k8s.io/v1. We get this message:

Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: namespace: , name: sysdig, existing_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole, new_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole

Helm version: 3.1.2

@hickeyma This really works well.

@archonic The only approach I know is to actually delete the existing resource that is referenced in the error and upgrade.

Maybe some core maintainer (@bacongobbler?) can point out whether a --overwrite-existing-resource <kind>:<namespace>:<name> parameter or something like that could be implemented, or whether there is an existing workaround for this problem.
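
For reference, a bare sketch of that delete-and-re-upgrade workaround (the resource and release names are placeholders; only the chart and command shape come from the OP; note the deleted resource is gone until the upgrade recreates it, so expect downtime):

kubectl delete deployment my-wordpress -n default
helm upgrade my-release stable/wordpress --version 8.1.1 -f values.yaml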

After more digging in issues, posts and documentation, this seems to be normal behavior for Helm.

When helm upgrade renders a resource that did not exist among the resources created by the previous revision of the release, and a resource with the same kind, namespace and name already exists in Kubernetes, this error is thrown.

In my case I had a secret with a certificate that could either be passed via values or created by a job, so the secret was created directly by Helm or indirectly by the job. If the chart was installed and the secret was created indirectly by the job, Helm did not see it as part of the resources created by the chart. Later, when an upgrade added the secret as part of the chart, Helm threw this error. The solution was to always create the secret in the chart and make the job patch it instead.
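
A minimal sketch of what that job-side patch could look like (the secret name, key, and certificate file are made up for illustration; the chart itself now always creates the secret, possibly with empty data):

# Inside the job: fill in the secret the chart already created, instead of creating it.
CERT=$(base64 -w0 tls.crt)   # GNU coreutils; use "base64 < tls.crt" on macOS
kubectl patch secret my-cert-secret -n default --type merge \
  -p "{\"data\":{\"tls.crt\":\"$CERT\"}}"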

The annotation workaround above doesn't seem to be working for me. I updated our helm deploy container to use 3.2.3 and added the annotations to the offending deployment. Same error.

I can confirm this behaviour too.

Hey @bacongobbler, I can confirm the same issue with helm 3.3.0 👍

The deployed chart version is 8.1.1. I'm using helm upgrade [release name] stable/wordpress --version 8.1.1 -f values.yaml, so all the chart manifests should be exactly the same.

Also, the error message existing resource conflict: namespace: default, name: [deployment name], existing_kind: apps/v1, Kind=Deployment, new_kind: apps/v1, Kind=Deployment says the existing kind is the same as the new kind, so I don't think this is the apps/v1beta2-to-apps/v1 problem.

I only changed the ingress part: I added a few domains to ingress.hosts and ingress.tls, and a rewrite rule to ingress.annotations.

Thanks @hickeyma, manually upgrading the release against the existing chart with the new apiVersions worked. It seems the issue is probably FluxCD using an older version of Helm under the hood.

A possible workaround is to play with including or removing --reuse-values; depending on some internal Helm state, one or the other seems to make the upgrade work.
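
For anyone trying that, the two variants would look roughly like this (the release name is a placeholder, the chart and version are from the OP's command; which one works seems to depend on the release state, as noted above):

helm upgrade my-release stable/wordpress --version 8.1.1 --reuse-values
helm upgrade my-release stable/wordpress --version 8.1.1 -f values.yaml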

+1 to the problem. Output of helm version: version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"}

Release "travel-front" does not exist. Installing it now.
+ helm upgrade travel-front --install --namespace dev --force --atomic --timeout 120s -f helm/dev/values.yaml --set image.tag=3.0.65.262-SNAPSHOT --set image.repository=my.reg.local/travel-front goharbor/allinone
Release "travel-front" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: namespace: dev, name: travel-front, existing_kind: /v1, Kind=ConfigMap, new_kind: /v1, Kind=ConfigMap