helm-2to3: Upgrade of Converted Releases Fails
Using Helm 3.2.0 and version 0.51 of the 2to3 converter, I converted some releases from Helm 2 to Helm 3.
Now, when I try to upgrade those releases with helm upgrade … I get the following error:
"Error: UPGRADE FAILED: rendered manifests contain a resource that already exists.
Unable to continue with update: PodSecurityPolicy "common-services-grafana" in
namespace "" exists and cannot be imported into the current release: invalid
ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by":
must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name":
must be set to "common-services"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
So it looks like the upgrade is failing because helm 3 expects some annotations to be present that the converter isn’t creating. Is that a good analysis?
Is there any way to get the converter to add these keys?
Thanks.
George
About this issue
- State: closed
- Created 4 years ago
- Reactions: 25
- Comments: 28
This helped me with a broken upgrade process on the prometheus-operator helm chart, after migration with helm-2to3:

Hello everyone,
I was facing the exact same issue after updating Helm from version 2 to 3. I was able to resolve it by purging the old Helm 2 release with helm delete <name> --purge, after which I was able to upgrade the services with helm upgrade successfully.
I hope it helps someone!
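For anyone following along, the sequence described in that comment is roughly the following (a sketch; <name> and <chart> are placeholders, and bear in mind that purging a Helm 2 release also deletes the Kubernetes resources it owns):

```bash
# Helm 2 CLI: delete the release and purge its record from the release store
helm delete <name> --purge

# Helm 3 CLI: install the chart again, recreating the resources under Helm 3 ownership
helm upgrade --install <name> <chart> --namespace <namespace>
```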
We're encountering the same issue with a release of nginx-ingress. We tried to manually add the missing labels/annotations, but ran into the following problem:
I'm quite new to Helm and k8s in general; I'd be grateful if someone could decipher it for me.
@hickeyma Thanks for your response. I apologize for the delay in coming back to this.
I tried creating a simple test case, and it works as expected. Here’s some additional information. I dumped the PodSecurityPolicy that’s erroring out during the upgrade. Here’s the version of the original chart AFTER upgrade to Helm 3:
Here’s what it looked like on another cluster where it was installed via helm 3
And here’s the error message when the helm upgrade command is executed.
I looked through the source code for the converter, and it looks like it’s just copying the existing helm 2 annotations.
I diffed the grafana chart and can see where the PodSecurityPolicy apiVersion has been updated.
So, it looks to me like the issue here is that the API version has changed, and Helm 3 is trying to reconcile ownership using the additional metadata annotations, but they’re not present.
Does that sound right to you?
Do you see a way forward?
I found one way to reduce the work at least. You can set the label and annotations with kubectl:
kubectl -n xxx annotate --overwrite --all Deployment meta.helm.sh/release-name=xxx
This could of course be improved further by making a script that fetches all the objects in the release and fixes them all, but it's still a hack. I'm still interested to know if there's a better way to do this!
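A rough version of such a script, assuming the chart labels its objects with the common release: <name> label (the release name, namespace, and kind list below are placeholders; adjust the label selector and kinds to whatever your chart actually uses):

```bash
#!/usr/bin/env bash
# Sketch: stamp Helm 3 ownership metadata onto every object of a converted release.
# RELEASE, NAMESPACE, and KINDS are assumptions -- edit them for your own chart.
RELEASE="my-release"
NAMESPACE="my-namespace"
KINDS="deployments,statefulsets,daemonsets,services,serviceaccounts,configmaps,secrets,ingresses"

for obj in $(kubectl -n "$NAMESPACE" get "$KINDS" -l "release=$RELEASE" -o name); do
  kubectl -n "$NAMESPACE" label --overwrite "$obj" app.kubernetes.io/managed-by=Helm
  kubectl -n "$NAMESPACE" annotate --overwrite "$obj" \
    meta.helm.sh/release-name="$RELEASE" \
    meta.helm.sh/release-namespace="$NAMESPACE"
done
```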
I have the same problem. After running the conversion for my release, I tried upgrading it with the --dry-run flag, but got this error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Deployment "xyz" in namespace "xxx" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "abc"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "xxx"
I added the missing annotations and fixed the label, and sure enough that error was gone, but then I got the same error on the next object… After fixing each and every Kubernetes object in the release with the same annotations and label, the upgrade finally worked. Is there any way to automate this? I still have a pile of releases to convert and upgrade.
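One way to automate it (a sketch, not an official helm-2to3 feature, and a variant of the kubectl loop sketched earlier in the thread): render the chart with Helm 3 and stamp the ownership metadata onto every object it produces. The release name, namespace, and chart path below are placeholders.

```bash
#!/usr/bin/env bash
RELEASE="abc"        # placeholder release name
NAMESPACE="xxx"      # placeholder namespace
CHART="./my-chart"   # placeholder chart path

# Render the manifests, look up the matching live objects, and patch each one.
helm template "$RELEASE" "$CHART" --namespace "$NAMESPACE" \
  | kubectl -n "$NAMESPACE" get -f - -o name \
  | while read -r obj; do
      kubectl -n "$NAMESPACE" label --overwrite "$obj" app.kubernetes.io/managed-by=Helm
      kubectl -n "$NAMESPACE" annotate --overwrite "$obj" \
        meta.helm.sh/release-name="$RELEASE" \
        meta.helm.sh/release-namespace="$NAMESPACE"
    done
```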
Anyone experiencing the original issue:
With regard to the annotations: they can be used for adopting existing, already-created resources and are not mandatory in Helm templates (https://github.com/helm/helm/pull/7649).
I had the same issue. After migrating from Helm 2 to Helm 3, my new manifests included a new API version of Ingress, and Helm 3 raised the error: Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Ingress "test-ingress" in namespace "helm-migration-experiments" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "test-helm-migration-release"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "helm-migration-experiments"
I did these steps and was able to upgrade my release:
@Hashfyre A new issue would be great, thanks
Hi @hickeyma,
I'm getting a similar kind of issue; could you please tell me how to resolve it?
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: ServiceMonitor "monitor" in namespace "nha-app" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "monitor"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "nha-app"
@vcorr Added some background in https://github.com/helm/helm-2to3/issues/162#issuecomment-682581264. Hope this helps shed some light on your issue.
@dza89 It would be better to raise a new issue and provide more details. I would need to know the steps, using a chart as an example: how the release was first deployed with v2, the migration of that release, and then the upgrade in v3, and where/how it fails.
@kiyutink We ran into something similar. In our case, there was a StatefulSet that didn't have a selector defined in the template, so Kubernetes/Helm supplied a default selector that matched all labels. What I did was write a little script to patch the set of selector labels to be the release and namespace, then set the StatefulSet's template selector to match. Once I did that, I could upgrade.
Start by looking at your deployment with 'kubectl get deployment <name> -o yaml' and examining the selector. Then compare it with your chart's.
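A quick way to compare the two sides (a sketch; the names and chart path are placeholders):

```bash
# The selector the live object actually has
kubectl -n <namespace> get deployment <name> -o jsonpath='{.spec.selector.matchLabels}'

# The selector the chart renders for it (look at the selector block in the output)
helm template <release> <chart-path> --namespace <namespace> | grep -A 5 'selector:'
```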