helm: Helm init fails on Kubernetes 1.16.0

  • Output of helm version: v2.14.3
  • Output of kubectl version: client v1.15.3, server v1.16.0-rc.1
  • Cloud Provider/Platform (AKS, GKE, Minikube etc.): IBM Cloud Kubernetes Service

$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/xxxx/.helm.
Error: error installing: the server could not find the requested resource

$ helm init --debug --service-account tiller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
. 
.
.

Looks like helm is trying to create the tiller Deployment with apiVersion: extensions/v1beta1. According to https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/, that is no longer supported.
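
You can confirm what the API server still serves with kubectl; on a 1.16 cluster the extensions group no longer includes Deployment:

$ kubectl api-versions | grep -E '^(extensions|apps)/'
$ kubectl api-resources --api-group=extensions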

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 64
  • Comments: 83 (6 by maintainers)

Most upvoted comments

If you want to use one less sed 😃

helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Thanks!

The following sed works-for-me:

helm init --service-account tiller --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@  replicas: 1@  replicas: 1\n  selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' | kubectl apply -f -

The issue with @mattymo's solution (using kubectl patch --local) is that it seems to not work when its input contains multiple resources (here a Deployment and a Service).

The current workaround seems to be something like this:

helm init --output yaml > tiller.yaml and update the tiller.yaml:

  • change the apiVersion to apps/v1
  • add the selector field
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
....

Kubernetes 1.16.0 was released yesterday, 9/18/2019.
Helm is broken on this latest Kubernetes release unless the workaround above is used.

When will this issue be fixed, and when will Helm 2.15.0 be released?

As a helm n00b who is using minikube, I was able to get around this issue by setting a kubernetes version like so:

$ minikube delete
$ minikube start --kubernetes-version=1.15.4

Hope it helps!

#6462 has been merged and will be available in the next release (2.15.0). For now, feel free to use the workarounds provided above or use the canary release.

Thanks everyone!

This has been fixed in Helm 2.16.0.
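
If you already installed tiller via one of the workarounds above, then once you have the 2.16.0 client, upgrading the existing tiller deployment in place should just be a matter of:

helm init --upgrade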

@puww1010 I just redirected the output to a file and then used VIM to change it. Commands below for reference.

# helm init --service-account tiller --tiller-namespace kube-system --debug >> helm-init.yaml
# vim helm-init.yaml
# kubectl apply -f helm-init.yaml
helm init spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Slight correction

helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Today I met the same issue, so I changed it myself. I changed the apiVersion to apps/v1 and added the selector part; as of now it performs great. Below is my yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}

We’ve avoided updating tiller to apps/v1 in the past due to complexity with having helm init --upgrade reconciling both extensions/v1beta1 and apps/v1 tiller Deployments. It looks like once we start supporting Kubernetes 1.16.0 we will have to handle that case going forward and migrate to the newer apiVersion.

After a successful init you won't be able to install a chart package from the repository until you replace extensions/v1beta1 in it as well. Here is how to adapt any chart from the repository for k8s v1.16.0. The example is based on the prometheus chart.

git clone https://github.com/helm/charts
cd charts/stable

Replace extensions/v1beta1 with policy/v1beta1 for PodSecurityPolicy:

sed -i 's@apiVersion: extensions/v1beta1@apiVersion: policy/v1beta1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+PodSecurityPolicy/ {print FILENAME}' {} +`

NetworkPolicy apiVersion is handled well by _helpers.tpl for those charts where it is used.

Replace extensions/v1beta1 and apps/v1beta2 with apps/v1 in Deployment, StatefulSet, ReplicaSet, DaemonSet:

sed -i 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+(Deployment|StatefulSet|ReplicaSet|DaemonSet)/ {print FILENAME}' {} +`
sed -i 's@apiVersion: apps/v1beta2@apiVersion: apps/v1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+(Deployment|StatefulSet|ReplicaSet|DaemonSet)/ {print FILENAME}' {} +`

Create a new package:

helm package ./prometheus
Successfully packaged chart and saved it to: /home/vagrant/charts/stable/prometheus-9.1.1.tgz

Install it: helm install /home/vagrant/charts/stable/prometheus-9.1.1.tgz

Based on https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/

P.S. For some charts with dependencies you might need to run helm dependency update and replace the dependent tgz files with patched ones where applicable.
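
A rough sketch of that (the chart name here is just a placeholder):

helm dependency update ./mychart   # pulls the dependency .tgz files into ./mychart/charts/
# unpack any dependency that still uses extensions/v1beta1, apply the same sed replacements as above,
# re-package it and put the patched .tgz back into ./mychart/charts/, then:
helm package ./mychart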

Here’s a short-term workaround:

helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Actually, it’s not good enough. I still get an error:

error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec

This can be patched in with:

/usr/local/bin/kubectl patch --local -oyaml -f - -p '{"spec":{"selector":{"matchLabels":{"app":"helm","name":"tiller"}}}}'

Workaround, using jq:

helm init -o json | jq '(select(.apiVersion == "extensions/v1beta1") .apiVersion = "apps/v1")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.app = "helm")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.name = "tiller")' | kubectl create -f -

@DanielIvaylov

kubectl --namespace kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

Just a side note to @PierreF's and @mihivagyok's solutions. Those did not work for me when using private helm repos.

$ helm repo add companyrepo https://companyrepo
Error: Couldn't load repositories file (/home/username/.helm/repository/repositories.yaml).

I guess that happens because helm init is not actually run; it just generates the yaml file. I fixed that by running helm init -c as an extra step.
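
In other words, roughly this sequence (the repo URL is just the placeholder from above):

helm init --client-only
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
helm repo add companyrepo https://companyrepo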

I’m also getting the error:

$ helm init
$HELM_HOME has been configured at C:\Users\user\.helm.
Error: error installing: the server could not find the requested resource

I’m trying a solution proposed in this issue, particularly this one. However, after modifying the tiller.yaml file accordingly, I’m not able to update the configuration. I’m trying the following command in order to apply the changes/update the configuration:

$ kubectl apply -f tiller.yaml
deployment.apps/tiller-deploy configured
service/tiller-deploy configured

But then, if I run:

$ helm init --output yaml > tiller2.yaml

The tiller2.yaml file shows:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  template:

Basically, the changes are not reflected. So I assume that I’m not updating the configuration properly. What would be the correct way to do it?


EDIT: I managed to get it working. I'm using Minikube, and in order to get it running I first downgraded the Kubernetes version to 1.15.4.

minikube delete
minikube start --kubernetes-version=1.15.4

Then, I was using a proxy, so I had to add Minikube’s IP to the NO_PROXY list: 192.168.99.101 in my case. See: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/

Note: After some further testing, perhaps the downgrade is not necessary, and maybe all I was missing was the NO_PROXY step. I added all 192.168.99.0/24, 192.168.39.0/24 and 10.96.0.0/12 to the NO_PROXY setting and now it seems to work fine.
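
Roughly, the shell setup now looks like this (adjust the ranges to your own minikube network):

export NO_PROXY=$NO_PROXY,192.168.99.0/24,192.168.39.0/24,10.96.0.0/12
minikube start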

@cyrilthank if it’s minikube, minikube config set kubernetes-version v1.15.4

If you have applied the workaround mentioned above when working with helm init and still get the following error when trying things like helm version, it's because the tiller deployment cannot be found.

Error: could not find tiller

You need to run kubectl get events --all-namespaces | grep -i tiller to know why it’s not ready.

For example, my issue was simply the one below, because I don't need a serviceaccount "tiller" with microk8s.

microk8s.kubectl get events --all-namespaces | grep -i tiller
kube-system    23m         Warning   FailedCreate                   replicaset/tiller-deploy-77855d9dcf            Error creating: pods "tiller-deploy-77855d9dcf-" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found

So I did the workaround without the service account:

- helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
+ helm init spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Nice! You might be able to achieve the same effect with the --override flag instead of crazy sed hacks 😃

Yes, but the crazy sed hacks I can copy & paste, whereas helm init --override "apiVersion"="apps/v1" just does not work. OK, the sed hack does not work either.

The tiller.yaml workaround above works. Since Kubernetes changed the Deployment apiVersion to apps/v1, the one thing that needs to change is adding selector.matchLabels to the spec.

helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

It worked for me, thank you so much!

Thanks @UmamaheshMaxwell for sharing this. Can you please share the steps you used to roll back the Kubernetes version?

@cyrilthank we have been using our own VMs (Ubuntu 18+); below are the steps to install k8s version 1.15.4:

kubeadm reset
sudo apt-get install kubelet=1.15.4-00 kubectl=1.15.4-00 kubeadm=1.15.4-00
sudo kubeadm init --pod-network-cidr=10.244.10.0/16 --apiserver-advertise-address=x.x.x.x --apiserver-cert-extra-sans=x.x.x.x --kubernetes-version "1.15.4"

  • --pod-network-cidr=10.244.10.0/16 - flannel
  • --apiserver-advertise-address=x.x.x.x - private IP of your VM (Master)
  • --apiserver-cert-extra-sans=x.x.x.x - public IP of your VM (Master). This is required if you are trying to access your Master from your local machine.

Note: Follow this link to set up a kubeconfig file for a self-hosted Kubernetes cluster: http://docs.shippable.com/deploy/tutorial/create-kubeconfig-for-self-hosted-kubernetes-cluster/. Let me know if you still have any questions.

Thanks @UmamaheshMaxwell for your patient reply

I have an existing Kubernetes 1.16 setup; can you please confirm whether I can try running these steps?

@cyrilthank if it’s minikube, minikube config set kubernetes-version v1.15.4

Thanks @MrSimonEmms, mine is not minikube; I think I will have to go with @UmamaheshMaxwell's steps.

The canary image still produces the same error, unless it doesn't include this merge yet.

@DanielIvaylov

kubectl --namespace kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

This solved my problem!

@puww1010 I just redirected the output to a file and then used VIM to change it. Commands below for reference.

# helm init --service-account tiller --tiller-namespace kube-system --debug >> helm-init.yaml
# vim helm-init.yaml
# kubectl apply -f helm-init.yaml

I tried doing this. After editing the file in VIM I ran kubectl apply, but it doesn't seem to do anything. When I run helm init --service-account tiller --tiller-namespace kube-system --debug >> helm-init.yaml again, or helm init --output yaml, the changes haven't been applied. Has anyone else experienced this?

@cyrilthank It seems the tiller error is because there is no tiller deployment running; try running this command to install tiller:

helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

helm version -s should return the server (tiller) version if it's up and running properly.
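
You can also check the deployment itself, for example:

kubectl -n kube-system rollout status deployment/tiller-deploy
kubectl -n kube-system get pods -l app=helm,name=tiller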

@gm12367 Yes, I can see the printed output, but it is just output. So, what command can I use to change the output?