helm-2to3: Upgrade of Converted Releases Fails

Using Helm 3.2.0 and version 0.51 of the 2to3 converter, I converted some releases from Helm 2 to Helm 3.

Now, when I try to upgrade those releases using helm upgrade … I get the error message:

[18458 18208 :007316: nftcx 04/28/2020 20:07:55 -0600 ERROR   ]             
"Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. 
Unable to continue with update: PodSecurityPolicy "common-services-grafana" in 
namespace "" exists and cannot be imported into the current release: invalid 
ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": 
must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": 
must be set to "common-services"; annotation validation error: missing key 
"meta.helm.sh/release-namespace": must be set to "default"

So it looks like the upgrade is failing because Helm 3 expects a label and some annotations to be present that the converter isn’t creating. Is that a good analysis?

Is there any way to get the converter to add these keys?

Thanks.

George

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 25
  • Comments: 28

Most upvoted comments

This helped me with a broken upgrade of the prometheus-operator Helm chart after migrating with helm-2to3:

kubectl -n monitoring label ingress -l "heritage=Tiller" "app.kubernetes.io/managed-by=Helm"
kubectl -n monitoring annotate ingress -l "heritage=Tiller" "meta.helm.sh/release-name=prometheus-operator" "meta.helm.sh/release-namespace=monitoring"
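
Presumably the same pattern works for any other kinds in the release that still carry the Helm 2 heritage label; which kinds need patching depends on what the chart actually creates. For example, for the deployments:

kubectl -n monitoring label deployment -l "heritage=Tiller" "app.kubernetes.io/managed-by=Helm"
kubectl -n monitoring annotate deployment -l "heritage=Tiller" "meta.helm.sh/release-name=prometheus-operator" "meta.helm.sh/release-namespace=monitoring"
# ...and likewise for service, configmap, serviceaccount, and so on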

Hello everyone,

I was facing the exact same issue after updating Helm from version 2 to 3. I was able to resolve it by purging the old Helm 2 release with helm delete <name> --purge. After that I was able to upgrade the services with helm upgrade successfully.

I hope it helps someone!

We’re encountering the same issue with an nginx-ingress release. We tried to manually add the missing labels / annotations, but ran into the following problem:

cannot patch "nginx-ingress-controller-default-backend" with kind Deployment: Deployment.apps "nginx-ingress-controller-default-backend" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"nginx-ingress", "release":"nginx-ingress-controller", "app.kubernetes.io/component":"default-backend"}: `selector` does not match template `labels`

I’m quite new to Helm and k8s in general; I’d be grateful if someone could decipher it for me.

@hickeyma Thanks for your response. I apologize for the delay in coming back to this.

I tried creating a simple test case, and it works as expected. Here’s some additional information. I dumped the PodSecurityPolicy that’s erroring out during the upgrade. Here’s the object on the original cluster AFTER the migration to Helm 3:

apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
  creationTimestamp: "2020-05-07T00:03:39Z"
  labels:
    app: grafana
    chart: grafana-3.3.9
    heritage: Tiller
    release: common-services
  name: common-services-grafana
  resourceVersion: "6461"

Here’s what it looked like on another cluster where it was installed via Helm 3:

apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    meta.helm.sh/release-name: common-services
    meta.helm.sh/release-namespace: default
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
  creationTimestamp: "2020-05-08T13:29:52Z"
  labels:
    app: grafana
    app.kubernetes.io/managed-by: Helm
    chart: grafana-3.3.9
    heritage: Helm
    release: common-services
  name: common-services-grafana
  resourceVersion: "240767"

And here’s the error message when the helm upgrade command is executed.

Error: UPGRADE FAILED: rendered manifests contain a resource that already 
exists. Unable to continue with update: PodSecurityPolicy "common-services-grafana" 
in namespace "" exists and cannot be imported into the current release: invalid 
ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": 
must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": 
must be set to "common-services"; annotation validation error: missing key 
"meta.helm.sh/release-namespace": must be set to "default"

I looked through the source code for the converter, and it looks like it’s just copying the existing helm 2 annotations.

I diffed the grafana chart and can see where the PodSecurityPolicy apiVersion has been updated:

+++ common-services-1.1.91871/common-services/charts/prometheus-operator/charts/grafana/templates/podsecuritypolicy.yaml        2020-05-08 09:17:56.000000000 -0600
@@ -1,7 +1,7 @@
 {{- if .Values.rbac.pspEnabled }}
-apiVersion: extensions/v1beta1
+apiVersion: policy/v1beta1
 kind: PodSecurityPolicy

So, it looks to me like the issue here is that the API version has changed, and Helm 3 is trying to reconcile ownership using the additional metadata label and annotations, but they’re not present.

Does that sound right to you?

Do you see a way forward?

I found one way to reduce the work at least. You can set the label and annotations with kubectl:

kubectl -n xxx annotate --overwrite --all Deployment meta.helm.sh/release-name=xxx

This could of course be further improved by writing a script that fetches all the objects in the release and fixes them all, but it’s still a hack. I’m still interested to know whether there is a better way to do this!
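
For what it’s worth, a rough sketch of such a script might look like the following. The release name, namespace, and list of kinds are assumptions you would adjust for your own release, and it relies on the release=<name> label that Helm 2 charts conventionally set:

#!/usr/bin/env bash
# Rough sketch: patch Helm 3 ownership metadata onto every object that still
# carries the Helm 2 "release" label. Adjust RELEASE, NAMESPACE and KINDS
# for your own release before running.
RELEASE="my-release"
NAMESPACE="my-namespace"
KINDS="deployment statefulset daemonset service configmap secret ingress serviceaccount"

for kind in $KINDS; do
  for obj in $(kubectl -n "$NAMESPACE" get "$kind" -l "release=$RELEASE" -o name); do
    kubectl -n "$NAMESPACE" label --overwrite "$obj" "app.kubernetes.io/managed-by=Helm"
    kubectl -n "$NAMESPACE" annotate --overwrite "$obj" \
      "meta.helm.sh/release-name=$RELEASE" \
      "meta.helm.sh/release-namespace=$NAMESPACE"
  done
done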

I have the same problem. After running the conversion for my release I tried upgrading it with the --dry-run flag, but I got this error:

Error: rendered manifests contain a resource that already exists. Unable to continue with install: Deployment "xyz" in namespace "xxx" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "abc"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "xxx"

I added the missing annotations and fixed the label, and sure enough that error was gone, but then I got the same error on the next object… After fixing each and every Kubernetes object in the release with the same annotations and label, the upgrade finally worked. Is there any way to automate this? I still have a pile of releases to convert and upgrade.

Anyone experiencing the original issue:

  1. Upgrade Helm itself to the latest version (3.2.1), then upgrade your release to the same chart version that is already deployed, i.e. an upgrade with no changes (see the example after this list).
  2. Then upgrade your release to your latest chart version with your changes.
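
For example, assuming a release called my-release installed from stable/my-chart and currently on chart version 1.2.3 (the names and versions here are placeholders), that would be roughly:

# 1. No-change upgrade on the new Helm client, pinned to the chart version already deployed
helm upgrade my-release stable/my-chart --version 1.2.3 -n my-namespace
# 2. Then upgrade to the newer chart version with your actual changes
helm upgrade my-release stable/my-chart --version 1.3.0 -n my-namespace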

With regard to annotations:

  • meta.helm.sh/release-name: <RELEASE_NAME>
  • meta.helm.sh/release-namespace: <RELEASE_NAMESPACE>

They can be used to adopt existing, already-created resources and are not mandatory in Helm templates (https://github.com/helm/helm/pull/7649).

I’ve had the same issue. After migrating from Helm 2 to Helm 3, my new manifests used a new API version for the Ingress, and Helm 3 raised an error:

Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Ingress "test-ingress" in namespace "helm-migration-experiments" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "test-helm-migration-release"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "helm-migration-experiments"

I did these steps (roughly sketched after the list) and was able to upgrade my release:

  1. Set the previously working API version in the Ingress manifest
  2. Deployed it
  3. Once that succeeded, changed the Ingress manifest to the new API version
  4. Deployed it again, and the deployment succeeded.
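
Roughly, the sequence was (the chart path is a placeholder, and which Ingress API versions apply depends on your cluster version):

# 1. With the template still on the previously working apiVersion (e.g. extensions/v1beta1)
helm upgrade test-helm-migration-release ./my-chart -n helm-migration-experiments
# 2. Then switch the template to the new apiVersion (e.g. networking.k8s.io/v1beta1) and deploy again
helm upgrade test-helm-migration-release ./my-chart -n helm-migration-experiments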

@Hashfyre A new issue would be great, thanks

Hi @hickeyma,

I’m getting a similar kind of issue; please tell me the solution for it.

Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: ServiceMonitor "monitor" in namespace "nha-app" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "monitor"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "nha-app"

@vcorr Added some background in https://github.com/helm/helm-2to3/issues/162#issuecomment-682581264. Hope this helps shed some light on your issue.

@dza89 It would be better to raise a new issue and provide more details. I would need to know the steps, using a chart as an example, through the different stages: the initial deployment with v2, the migration of that release, and then the upgrade with v3, and where/how it fails.

@kiyutink We ran into something similar. In our case, there was a StatefulSet that didn’t have a selector defined in the template, so Kubernetes/Helm supplied a default selector that matched all the labels. What I did was write a little script to patch the set of selector labels to be the release and namespace. I then set the StatefulSet’s template selector to match. Once I did that, I could upgrade.

Start by looking at your deployment with kubectl get deployment <name> -o yaml and examine the selector. Then compare it with the selector in your chart.
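
A quick way to compare the two (a rough sketch, with placeholder names) is:

# live selector vs. live pod-template labels
kubectl -n <namespace> get deployment <name> -o jsonpath='{.spec.selector.matchLabels}'
kubectl -n <namespace> get deployment <name> -o jsonpath='{.spec.template.metadata.labels}'
# what the chart renders for the same Deployment
helm template <release> <chart> | grep -A5 'selector:'

In apps/v1 the selector is immutable and must match the template labels, which is why a mismatch shows up as a patch error like the one you saw.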