helm: helm template is unable to upgrade deployment apiVersion to apps/v1

Output of helm version: Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"} Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}

Output of kubectl version: Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): Openstack private cloud

Using helm template, I've been creating releases without problems for a long time, with my deployment hard-coded to apiVersion extensions/v1beta1. I updated my template to change it to apps/v1 in preparation for Kubernetes 1.16.0 later this year. I use helm upgrade --install when I create releases.

The deployment still ends up with apiVersion extensions/v1beta1.

helm template produces output with apps/v1. After helm upgrade --install (with or without helm delete --purge beforehand), helm get manifest shows apps/v1 for the deployment. However, when you look at the deployment itself, it remains at apiVersion extensions/v1beta1.

Any other changes in my values file or in the template itself work fine. I have an annotation that generates a UUID, which forces the pods to roll on every upgrade even if there are no changes to the manifest. However, the deployment's apiVersion remains at extensions/v1beta1 no matter what.

I even dumped the existing deployment YAML, edited the apiVersion to apps/v1, and applied it using kubectl apply -f. This was applied successfully, but when I get the deployment YAML again, the version remains extensions/v1beta1.
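
Roughly, the steps were (with a placeholder deployment name):

kubectl get deployment my-app -o yaml > my-app.yaml
# edit my-app.yaml: apiVersion: extensions/v1beta1 -> apps/v1
kubectl apply -f my-app.yaml
kubectl get deployment my-app -o yaml | grep apiVersion   # still shows extensions/v1beta1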

I suppose this last bit might prove it's not a Helm issue but rather a Kubernetes one…

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 8
  • Comments: 18 (7 by maintainers)

Most upvoted comments

I was about to open an issue at https://github.com/kubernetes/kubernetes, but they already closed a similar issue with the same problem.

https://github.com/kubernetes/kubernetes/issues/62283

kubectl get uses server-preferred order, which will prefer the extensions API group for backward compatibility, until extensions is removed. That is to say, kubectl get deployment uses the extensions/v1beta1 endpoint by default.

To get deployments under the apps API group, you can use kubectl get deployment.apps, which returns apps/v1 deployments.

Note that the API version used to create deployments does not affect what API version you get from kubectl. The differences between API versions are things like default values and field names. Because API versions are round-trippable, you can safely get the same deployment object with different API version endpoints.
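
For illustration (the deployment name nginx is just a placeholder here), fetching the same object through two fully qualified endpoints returns two different apiVersions:

kubectl get deployment.v1beta1.extensions nginx -o yaml | grep apiVersion
# apiVersion: extensions/v1beta1
kubectl get deployment.v1.apps nginx -o yaml | grep apiVersion
# apiVersion: apps/v1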

Closing because it’s not a bug.

Safe to say, the k8s API shows extensions/v1beta1 by default, giving API clients a chance to migrate without breaking compatibility.

The apiVersion returned to kubectl does not necessarily reflect the apiVersion used to create the object, and you can view the object under the apps API group using kubectl get deployments.apps.

Yes, I can confirm this bug. Steps to reproduce:

  1. Have a v1.15.x kubernetes cluster
  2. Have Helm v2.14.3
  3. Run these commands
$ helm create chart
$ # change apiVersion in the deployment template to extensions/v1beta1
$ vi chart/templates/deployment.yaml
$ helm upgrade --install dummy chart
$ # check apiVersion of the deployment in the cluster and it will be extensions/v1beta1
$ kubectl get deploy dummy-chart -o yaml | grep apiVersion
$ # now change apiVersion of the deployment to apps/v1
$ vi chart/templates/deployment.yaml
$ helm upgrade --install dummy chart
$ # check apiVersion of the deployment in the cluster and it will still be extensions/v1beta1
$ kubectl get deploy dummy-chart -o yaml | grep apiVersion
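
For comparison, querying the apps endpoint directly for the same object reports the new group/version (as the maintainer notes above):

$ kubectl get deployments.apps dummy-chart -o yaml | grep apiVersion
apiVersion: apps/v1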

Hi, I have the same issue. My current release is at apiVersion: extensions/v1beta1, and I want to upgrade to apiVersion: apps/v1 without downtime. Is that possible?

Thanks

Thanks, @hickert33. I just confirmed this behavior as well.

Even after manually deleting the Deployment and issuing a helm upgrade --install with a new chart that declares:

# Source: simpleapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment

the Deployment object created in Kubernetes still shows up as:

apiVersion: extensions/v1beta1
kind: Deployment

Strangely, though, the resulting Pod does have apiVersion: v1.

apiVersion: v1
kind: Pod

This is really bizarre, and honestly, I’m not even sure what to expect now. This is very unpredictable.

EDIT: Even a helm del --purge does not help.

@achisolomon @ofiryy You will find details on the issue and how to fix it in the following docs:

Helm 3: https://helm.sh/docs/topics/kubernetes_apis/
Helm 2: https://github.com/helm/helm/blob/dev-v2/docs/kubernetes_apis.md
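
The linked pages describe the fix in detail; very roughly, the Helm 3 procedure boils down to editing the release record that Helm stores in the cluster. A sketch of that idea, assuming Helm 3 with the default Secret storage backend and placeholder names (Helm 2, as used in this thread, stores releases differently, as gzipped protobuf in kube-system ConfigMaps, so follow the Helm 2 doc for that case):

# Back up first! This edits Helm's own record of the release.
NS=my-namespace                                   # placeholder namespace
NAME=my-release                                   # placeholder release name
REV=7                                             # latest revision from `helm history`
SECRET="sh.helm.release.v1.${NAME}.v${REV}"

# Decode the stored release: Secret base64 -> Helm base64 -> gzip -> JSON.
kubectl get secret -n "$NS" "$SECRET" -o jsonpath='{.data.release}' \
  | base64 -d | base64 -d | gunzip -c > release.json

# Swap the deprecated apiVersion in the stored manifest. Check that only
# Deployment/ReplicaSet/DaemonSet manifests are affected (Ingress also used
# extensions/v1beta1 but maps to a different group).
sed -i 's|apiVersion: extensions/v1beta1|apiVersion: apps/v1|g' release.json

# Re-encode and patch the Secret (.data needs one extra base64 layer).
INNER=$(gzip -c release.json | base64 -w0)
NEW=$(printf '%s' "$INNER" | base64 -w0)
kubectl patch secret -n "$NS" "$SECRET" --type merge \
  -p "{\"data\":{\"release\":\"${NEW}\"}}"

After that, the next helm upgrade starts from a stored manifest that already says apps/v1. Again, this is only a sketch; verify against the linked docs before running it.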

Marking this as question/support because it’s not a bug, but an intended “feature” of Kubernetes. 😃

Just ran into this issue today…

I’m trying to update all workloads to the most recent apiVersion as a prep to upgrade the cluster to k8s 1.16.

I've updated my charts to reflect the new apiVersion, but a helm upgrade --install updates all the other fields of the Deployment object, except the apiVersion.

One option is to delete the helm release entirely and reinstall it, but this would be highly undesirable, and most likely not acceptable, in a production environment.
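
For completeness, that route looks roughly like this with Helm 2 (placeholder names; it removes the running release first, hence the downtime):

helm delete --purge my-release            # removes the release and its history
helm install --name my-release ./my-chart # recreates it from the updated chart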

EDIT: btw, using the most recent v2.14.3

Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-19T05:14:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

FYI, the steps outlined in https://github.com/helm/helm/issues/7219#issuecomment-567256858 are a workaround for future people running up against this.

I'm closing this as it's intended behavior and more Kubernetes-related than Helm-related.

Is there a workaround that I can use to upgrade everything I currently have in Helm without using helm delete --purge first? I understand that this is expected behavior from Kubernetes, but I really think this should be handled more gracefully.

Also, confirming, this does not appear to be a helm issue.

The underlying k8s API seems to ignore, or wrongly report, the Deployment apiVersion all by itself.

To reproduce, issue the following under a bash terminal:

echo '
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: testing
      app.kubernetes.io/name: testing
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: testing
        app.kubernetes.io/name: testing
    spec:
      containers:
      - image: nginx
        name: testing
        resources:
          limits:
            cpu: 1
            memory: 256Mi
          requests:
            cpu: 30m
            memory: 32Mi
' | kubectl apply -f -

kubectl get deploy testing -oyaml | grep apiVersion
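
For comparison, requesting the same object via the apps group endpoint (as suggested earlier in the thread):

kubectl get deployments.apps testing -o yaml | grep apiVersion
# apiVersion: apps/v1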

I am still not able to force the deployment to apps/v1, even after deleting with --purge. The pods come up with apps/v1, but the deployment still shows extensions/v1beta1. I am using secrets for Helm to store its config, and I verified there were none related to my deployment after a delete --purge of my release. I hard-coded the deployment.yaml (via the template) to apps/v1, and after releasing, the pods are fine and show apps/v1, but the deployment still shows extensions/v1beta1. How can this be? There is no mention of extensions/v1beta1 in my resource YAMLs. Also, the ReplicaSet is extensions/v1beta1, but it has an ownerReference to the Deployment that shows apiVersion: apps/v1.

What am I missing that I need to delete to make the new deployment 100% apps/v1?