helm: Unable to perform helm upgrade due to resource conflict

Output of helm version: v3.0.0-rc.1

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T12:36:28Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}

Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.10-eks-5ac0f1", GitCommit:"5ac0f1d9ab2c254ea2b0ce3534fd72932094c6e1", GitTreeState:"clean", BuildDate:"2019-08-20T22:39:46Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): AWS

We seem to be hitting a strange bug when running helm upgrade. The error is: "Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: ServiceMonitor, namespace: dcd, name: bid-management".

We’ve tested on the following helm versions:

Helm Version: "v3.0.0-beta.2", "v3.0.0-beta.3"

With these versions we get the following error: "Error: UPGRADE FAILED: no ServiceMonitor with the name "bid-management" found", though I can confirm the resource exists.
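(For reference, a quick way to confirm the resource really is present, assuming the Prometheus Operator CRD for ServiceMonitor is installed:)

$ kubectl -n dcd get servicemonitor bid-management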

Helm Version: "v3.0.0-rc.1", "v3.0.0-beta.4", "v3.0.0-beta.5"

With these versions we get the error above: "Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: ServiceMonitor, namespace: dcd, name: bid-management".

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 32
  • Comments: 61 (18 by maintainers)

Most upvoted comments

This doesn't really help, because I still need to remove old resources manually. I'd expect a flag for the helm upgrade command, such as --force, to automatically remove and re-add resources with API incompatibilities, but that's not the case. This makes upgrading apps in Kubernetes clusters very cumbersome. If --force isn't meant for this, another flag would be useful.

This is a very important issue right now because Kubernetes 1.16 drops support for the old APIs, so we need to upgrade.

Also having the same problem when upgrading a resource to a new apiVersion using Helm 3.

For the time being, we’re in need of a clean workaround as well. Deleting the old resource isn’t really an option for our production workloads.

I have the same issue. Our production environment is blocked and cannot be fixed.

$ helm version

output: version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: Service, namespace: mynamespace, name: my-service

Also Amazon EKS

Please add a warning such as "Warning! There is a risk of ending up with a bricked release instead of a working chart after the upgrade" to https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/

Amazing work guys. I’m proud!

Hey,

I am having the same issue. My problem concerns a pre-existing StatefulSet. Any advice would be much appreciated.

Thanks, Richard

Also bumped into this problem.

After migrating to v3 with the helm-v2-to-helm-v3 plugin, I'm unable to upgrade charts:

Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: Deployment, namespace: default, name: grafana-main

This would be totally understandable if it had happened in a beta version, but I'm running v3.0.2 after following the instructions on the official blog, and I'm hitting an issue/bug that was reported during the beta. It doesn't feel nice.

Is there any non-destructive workaround for the time being? What this comment proposes feels quite destructive: https://github.com/helm/helm/issues/6646#issuecomment-546603596

@vakaobr I doubt it's the same issue. When the first install fails (and only the first install), as you noticed, Helm doesn't record a release. Hence Helm won't have any information about a release to compare with the already-deployed resources, and it will try to install them again, showing that message because some of the resources actually were installed. You can probably solve this by using --atomic with the installation, or by using helm upgrade --install --force, being careful with --force since it will delete and re-create resources. Here we are facing an issue that happens with charts that were already installed successfully.
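For illustration, a minimal sketch of the two suggestions above, with placeholder release and chart names:

# --atomic rolls back a failed install and removes what it created,
# so a failed first attempt doesn't leave conflicting leftovers behind:
$ helm install my-release ./my-chart --atomic

# Or upgrade-or-install over the leftovers; --force replaces conflicting
# resources (delete + re-create), so use it with care:
$ helm upgrade my-release ./my-chart --install --force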

I don't know if this has been floated before, but maybe there could be a way to tell Helm to "adopt" resources if they already exist, i.e., in the case of an existing resource it would be patched with the user-supplied manifest and understood to be managed by Helm from that point.

See #2730
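For context, this is roughly the approach later Helm releases (3.2+) took: an existing object can be adopted by a release if it carries Helm's ownership metadata. A rough sketch, using the grafana-main Deployment from the earlier comment as an illustrative example (the release name here is a placeholder):

$ kubectl -n default label deployment grafana-main app.kubernetes.io/managed-by=Helm
$ kubectl -n default annotate deployment grafana-main \
    meta.helm.sh/release-name=grafana \
    meta.helm.sh/release-namespace=default

# After this, helm upgrade treats the existing object as owned by the release
# instead of reporting an existing resource conflict.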

Same problem here upgrading stable/nginx-ingress by running:

# helm upgrade nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.publishService.enabled=true --set controller.tcp.configMapNamespace=tcp-services

output: Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: ClusterRole, namespace: , name: main-nginx-ingress

# helm version

output: version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}

I agree. It’s frustrating that the Kubernetes API treats updating the apiVersion as a breaking change.

If anyone is aware of a solution within Kubernetes that allows one to upgrade apiVersions without having to re-create the object, we'd love to hear about it. We are currently unaware of an API endpoint available for third-party tools like Helm to "convert" an object from one apiVersion to the next.

The only option we’re aware of from Kubernetes’ API is to delete and create the object. It is certainly not ideal, but it’s the only option we are aware of at this time.

It was mentioned in #7219 that upgrading from Kubernetes 1.15 to 1.16 migrated the objects from a deprecated apiVersion (like extensions/v1beta1) to apps/v1 on the backend. If someone could confirm that behaviour and gain a better understanding of how Kubernetes achieves this, that could give us a possible solution to this problem.

What is the real problem here? It's possible to update an object with kubectl even across API changes without any issues. The object does not have to be deleted (it can simply be kubectl apply/replace), so why can't Helm do the same?
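For illustration, the manual path described above, assuming a Deployment manifest that was switched from extensions/v1beta1 to apps/v1 (the file and object names are placeholders):

# Re-apply the same object under the new apiVersion; the API server updates
# the existing Deployment in place rather than requiring delete + re-create:
$ kubectl apply -f deployment-apps-v1.yaml
$ kubectl get deployment my-app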

@bacongobbler I agree that, from the Kubernetes point of view, it's a breaking change between API versions. However, Kubernetes is designed to handle such cases by serving one object under multiple versions. For example, on a 1.14 cluster, a Deployment created as 'apps/v1' is also available as 'apps/v1beta1', 'apps/v1beta2', and 'extensions/v1beta1'. See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#deployment-v1-apps. So I think the GVK design of Helm 3 is fine, but the implementation needs to be more sophisticated: the old release object should be retrieved not only from Helm's release storage but also from the currently running environment (see the sketch after this comment).

Thanks
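A quick way to see this multi-version serving in action, with a hypothetical Deployment name (the fully-qualified resource.version.group form asks the API server for a specific version of the same object):

# Same object, requested under two different API versions:
$ kubectl get deployments.v1.apps my-app -o yaml | grep ^apiVersion
$ kubectl get deployments.v1beta1.extensions my-app -o yaml | grep ^apiVersion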

Helm 3.0.2. Can't deploy or even roll back when the previous deploy changed the number of Deployments (removed or added one). It fails with:

Error: no Deployment with the name "server2" found

Extremely frustrating.

If you are having to use these workarounds: https://github.com/helm/helm/issues/6646#issuecomment-546603596, you can use the following script I created to automate that process: https://gist.github.com/techmexdev/5183be77abb26679e3f5d7ff99171731

@sheerun did you see my answer regarding apiVersion changes in this comment: https://github.com/helm/helm/issues/6646#issuecomment-547650430?

The tl;dr is that you have to manually remove the old object in order to "upgrade". The two schemas are incompatible with each other and therefore cannot be cleanly upgraded from one to the next. Are you aware of any tooling that handles this case?
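For anyone who needs that manual workaround spelled out, a minimal sketch using a Deployment as an example (all names are placeholders; deleting the object disrupts whatever it runs, so plan for downtime):

# 1. Delete the object that exists under the old apiVersion.
#    Optionally add --cascade=false to orphan the ReplicaSet/Pods and reduce
#    downtime; verify that behaviour on a non-production release first:
$ kubectl -n default delete deployment my-app

# 2. Re-run the upgrade so Helm re-creates the object from the rendered manifest:
$ helm upgrade my-release ./my-chart -n default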

I get this error when trying to change the apiVersion of a Deployment from the deprecated extensions/v1beta1 to apps/v1. Helm refuses to deploy unless I manually remove the old Deployment.