helm: helm template is unable to upgrade deployment apiVersion to apps/v1
Output of helm version:
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider/Platform (AKS, GKE, Minikube etc.): Openstack private cloud
Using helm template, I've been creating releases with no problem for a long time, with my deployment hard-coded to an apiVersion of extensions/v1beta1. I updated my template to change it to apps/v1 in preparation for Kubernetes 1.16.0 later this year. I use helm upgrade --install when I create releases.
The deployment still ends up having extensions/v1beta1.
helm template produces output with apps/v1, and after helm upgrade --install (with or without helm delete --purge beforehand) helm get manifest shows apps/v1 for the deployment. However, when you look at the deployment itself, its apiVersion remains extensions/v1beta1.
Any other change in my values file or in the template itself works fine. I have an annotation that generates a UUID, which forces the pods to roll on every upgrade even if there are no changes to the manifest. However, the deployment's apiVersion remains extensions/v1beta1 no matter what.
I even dumped the existing deployment YAML, edited the apiVersion to apps/v1, and applied it using kubectl apply -f. The apply succeeded, but when I get the deployment YAML again, the version remains extensions/v1beta1.
I suppose this last bit might prove it not to be a Helm issue but rather a Kubernetes one…
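For reference, a minimal sketch of that kubectl-only test (the deployment name my-app is a placeholder, not from the original report):

```bash
# Dump the live Deployment, change apiVersion to apps/v1 in the file, and re-apply.
kubectl get deployment my-app -o yaml > my-app-deploy.yaml
# ... edit my-app-deploy.yaml: apiVersion: extensions/v1beta1 -> apps/v1 ...
kubectl apply -f my-app-deploy.yaml

# Reading the object back through the generic resource name still reports the old group
# on this cluster version:
kubectl get deployment my-app -o yaml | grep '^apiVersion'
# apiVersion: extensions/v1beta1
```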
About this issue
- State: closed
- Created 5 years ago
- Reactions: 8
- Comments: 18 (7 by maintainers)
I was about to open an issue at https://github.com/kubernetes/kubernetes, but they already closed a similar issue with the same problem.
https://github.com/kubernetes/kubernetes/issues/62283
Safe to say, the k8s API is showing extensions/v1beta1 as the default, giving API clients a chance to migrate without breaking compatibility. The apiVersion returned to kubectl does not necessarily reflect the apiVersion used to create the object; you can view the actual apiVersion of the object using kubectl get deployments.apps.
Yes, can confirm this bug. Steps to reproduce:
- v1.15.x Kubernetes cluster
Hi, I have the same issue: my current release uses apiVersion: extensions/v1beta1 and I want to upgrade to apiVersion: apps/v1 without downtime. Is it possible?
Thanks
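A quick way to see both views of the same object, per the kubectl get deployments.apps suggestion above (the deployment name is a placeholder):

```bash
# Generic resource name: on a v1.15 cluster this is served through the extensions group.
kubectl get deployment my-app -o yaml | grep '^apiVersion'
# apiVersion: extensions/v1beta1

# Fully qualified resource name: shows the same object in the apps group.
kubectl get deployments.apps my-app -o yaml | grep '^apiVersion'
# apiVersion: apps/v1
```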
Thanks, @hickert33. I just confirmed this behavior as well.
Even after manually deleting the Deployment and issuing a helm upgrade --install with a new chart that declares apps/v1, the Deployment object created in Kubernetes still shows up as extensions/v1beta1.
Strangely, though, the resulting Pod does have apiVersion: v1. This is really bizarre, and honestly, I'm not even sure what to expect now. This is very unpredictable.
EDIT: Even a helm del --purge does not help.
@achisolomon @ofiryy You will find details on the issue and how to fix it in the following docs:
- Helm 3: https://helm.sh/docs/topics/kubernetes_apis/
- Helm 2: https://github.com/helm/helm/blob/dev-v2/docs/kubernetes_apis.md
Marking this as question/support because it’s not a bug, but an intended “feature” of Kubernetes. 😃
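One route the docs linked above discuss for existing releases is rewriting the API versions recorded in the release metadata with the helm-mapkubeapis plugin. A hedged sketch of its Helm 3 style usage (the release name, namespace, and flags shown are assumptions based on the plugin's documented usage):

```bash
# Install the plugin, then rewrite deprecated apiVersions stored in the release record.
helm plugin install https://github.com/helm/helm-mapkubeapis

# Preview the changes first, then run for real.
helm mapkubeapis my-release --namespace my-namespace --dry-run
helm mapkubeapis my-release --namespace my-namespace
```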
Just ran into this issue today…
I’m trying to update all workloads to the most recent apiVersion as a prep to upgrade the cluster to k8s 1.16.
I've updated my charts to reflect the new apiVersion, but a helm upgrade --install updates every other field of the Deployment object except the apiVersion.
One option is to delete the Helm release entirely and reinstall it, but that would be highly undesirable, and most likely not acceptable, in a production environment.
EDIT: btw, using the most recent v2.14.3
FYI, for future people running up against this: the steps outlined in https://github.com/helm/helm/issues/7219#issuecomment-567256858 are a workaround.
Is there a workaround that I can use to upgrade everything I currently have in Helm without using helm --purge first? I understand that this is expected behavior from Kubernetes, but I really think this should be handled more gracefully.
Also, confirming, this does not appear to be a Helm issue. The underlying k8s API seems to ignore or wrongly report the Deployment apiVersion all by itself.
To reproduce, issue the following under a bash terminal:
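A minimal bash sketch of such a reproduction (the deployment name repro-nginx is a placeholder; assumes access to a v1.15.x cluster):

```bash
# Create a Deployment that explicitly declares apps/v1.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: repro-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: repro-nginx
  template:
    metadata:
      labels:
        app: repro-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
EOF

# Generic resource name: on 1.15 the apiVersion appears unchanged.
kubectl get deployment repro-nginx -o yaml | grep '^apiVersion'
# apiVersion: extensions/v1beta1

# Fully qualified resource name: the apps group is reported.
kubectl get deployments.apps repro-nginx -o yaml | grep '^apiVersion'
# apiVersion: apps/v1
```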
I am still not able to force the deployment to apps/v1, even after deleting with --purge. The pods come up with apps/v1, but the deployment still shows extensions/v1beta1. I am using secrets for Helm to store its config, and I verified there were none related to my deployment after a delete --purge of my release. I hard-coded the deployment.yaml (via the template) to apps/v1, and after releasing, the pods are fine and show apps/v1, but the deployment still shows extensions/v1beta1. How can this be? There is no mention of extensions/v1beta1 in my resource YAMLs. Also, the ReplicaSet is extensions/v1beta1, yet it has an ownerReference to the Deployment that shows apiVersion: apps/v1.
What am I missing to delete to make the new deployment 100% apps/v1?
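A sketch of the checks described above, assuming Helm 2 with the secrets backend and Tiller in kube-system (the release and ReplicaSet names are placeholders):

```bash
# Helm 2 release records are labelled with the owning release.
kubectl get secrets -n kube-system -l OWNER=TILLER,NAME=my-release

# The ReplicaSet's ownerReference reports the group/version the Deployment controller used.
kubectl get rs my-replicaset -o jsonpath='{.metadata.ownerReferences[0].apiVersion}'
```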