helm: Helm error with forwarding ports

Hello,

I have a fresh, clean installation of K8s, on top of which I installed Helm. If I try to install anything (e.g. mysql) I get this error message: Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist
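For reference, this is the sort of command that fails (using the stable repo’s mysql chart as in the example):

# helm install stable/mysql

The chart never gets deployed; the error occurs while the helm client sets up its port-forward tunnel to tiller on port 44134.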

# kubectl -n kube-system get pods                                                                                                 
NAME                             READY     STATUS    RESTARTS   AGE
etcd-master                      1/1       Running   2          3h
kube-apiserver-master            1/1       Running   3          3h
kube-controller-manager-master   1/1       Running   3          3h
kube-dns-3913472980-5wj0x        3/3       Running   6          3h
kube-proxy-tmh3k                 1/1       Running   2          3h
kube-proxy-vssfr                 1/1       Running   0          3h
kube-scheduler-master            1/1       Running   3          3h
tiller-deploy-1491950541-5crrg   1/1       Running   0          2h

Next I tried to set up the port-forwarding myself, but I still get the same error:

# kubectl -n kube-system port-forward $(kubectl -n kube-system get pod -l app=helm -o jsonpath='{.items[0].metadata.name}') 44134 
error: error upgrading connection: unable to upgrade connection: pod does not exist
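One way to narrow this down is to check which node the tiller pod landed on and what addresses the nodes advertise, since “pod does not exist” typically comes back when the API server reaches a kubelet that doesn’t actually own the pod. A diagnostic sketch (pod name taken from the listing above):

# kubectl -n kube-system get pod tiller-deploy-1491950541-5crrg -o wide
# kubectl get nodes -o wide

If the INTERNAL-IP column shows the same address for more than one node, the API server has no reliable way to reach the right kubelet.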

K8s and Helm are running on Ubuntu 16.04 in VirtualBox, where I have two network interfaces. I’m not sure whether the problem lies there. My networks:

  • enp0s3 - NAT
  • enp0s8 - host only adapter (for connecting to Node01)

It is almost the same problem as issue 1770.
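If the NAT adapter is indeed the culprit, the usual mechanism is that every VirtualBox VM gets the same NAT address (10.0.2.15 by default), the kubelet advertises it as the node’s InternalIP, and the API server then dials the wrong kubelet during a port-forward, which answers “pod does not exist”. A possible fix, assuming a kubeadm-based install, is to pin each kubelet to its host-only address:

# cat /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=192.168.56.10
# systemctl daemon-reload && systemctl restart kubelet

Here 192.168.56.10 is a placeholder for each node’s own enp0s8 address, and the file location varies with the kubeadm version (older installs set KUBELET_EXTRA_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf).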

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 17 (5 by maintainers)

Most upvoted comments

Did anyone find a solution to this?

I’m having exactly the same problem. Any solution?

When I make the following request: GET https://192.168.0.10:6443/api/v1/namespaces/kube-system/pods/tiller-deploy-59988697b6-j47w7/portforward

I get the following response:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Upgrade request required",
  "reason": "BadRequest",
  "code": 400
}

which ultimately results in my original command kubectl -n kube-system port-forward $(kubectl -n kube-system get pod -l app=helm -o jsonpath='{.items[0].metadata.name}') 44134 producing this error:

error: error upgrading connection: unable to upgrade connection: pod does not exist
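For what it’s worth, the 400 “Upgrade request required” response is expected for a plain GET: the portforward endpoint only accepts connections upgraded to a streaming protocol (SPDY), which kubectl negotiates on your behalf. To watch the actual upgrade exchange rather than a bare GET, the client verbosity can be raised:

kubectl -n kube-system port-forward tiller-deploy-59988697b6-j47w7 44134 -v=8

So the 400 on its own is a red herring; the “pod does not exist” answer to the upgraded request is the real symptom.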

I was able to resolve this by restarting the node where tiller was installed and then running helm init again.

$ helm init --service-account tiller --wait
$HELM_HOME has been configured at C:\Users\arontx\home\.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)

$ helm version
Client: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}
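In case it helps, the restart part of that recovery can be scripted; a sketch, with <node> standing in for whichever node hosts the tiller pod:

$ kubectl drain <node> --ignore-daemonsets
$ # ... reboot the node, then bring it back:
$ kubectl uncordon <node>
$ helm init --service-account tiller --wait --upgrade

Passing --upgrade (as the warning above suggests) redeploys tiller at the client’s version instead of just reporting that it is already installed.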

I’m also hitting this problem on a Vagrant-installed multi-node cluster (based on centos/7 boxes).

> kubectl --namespace kube-system get pod -o wide
NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
coredns-78fcdf6894-cvwgc         1/1       Running   0          1d        10.244.0.3    master
coredns-78fcdf6894-vr9jw         1/1       Running   0          1d        10.244.0.2    master
etcd-master                      1/1       Running   0          1d        10.0.2.15     master
kube-apiserver-master            1/1       Running   0          1d        10.0.2.15     master
kube-controller-manager-master   1/1       Running   0          1d        10.0.2.15     master
kube-flannel-ds-kngs5            1/1       Running   0          1d        10.0.2.15     master
kube-flannel-ds-kr7cc            1/1       Running   0          23h       10.0.2.15     node2
kube-flannel-ds-n7rr6            1/1       Running   2          23h       10.0.2.15     node3
kube-flannel-ds-pkzmw            1/1       Running   0          23h       10.0.2.15     node1
kube-flannel-ds-prrh5            1/1       Running   0          23h       10.0.2.15     node4
kube-proxy-94tmj                 1/1       Running   0          23h       10.0.2.15     node3
kube-proxy-jqgfb                 1/1       Running   0          23h       10.0.2.15     node2
kube-proxy-kgm2r                 1/1       Running   0          1d        10.0.2.15     master
kube-proxy-s9jfd                 1/1       Running   0          23h       10.0.2.15     node1
kube-proxy-xgxdv                 1/1       Running   0          23h       10.0.2.15     node4
kube-scheduler-master            1/1       Running   0          1d        10.0.2.15     master
tiller-deploy-64c9d747bd-csphl   1/1       Running   0          8m        10.244.3.15   node3

The cluster seems to operate fine (I’ve been able to install and access services OK). I installed tiller with helm init, using the latest 2.10.0 binary.
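One thing stands out in that listing: every host-network pod reports 10.0.2.15 regardless of node, which is the VirtualBox NAT address, so the nodes are almost certainly advertising the NAT interface as their InternalIP. That can be confirmed on the node hosting tiller:

> kubectl describe node node3 | grep -A4 Addresses:

If InternalIP comes back as 10.0.2.15 on every node, the kubelet --node-ip workaround mentioned earlier applies to this cluster too.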

I tried playing with the YAML definition from helm init --output yaml to see if I could “expose” the port somehow (for example, replacing “tiller” with “44134” as the targetPort), but without success (I’m not really sure what I need to do). The service section of that YAML looks like this:

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}
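Regarding the targetPort experiment: replacing the name tiller with the number 44134 is effectively a no-op, because a named targetPort just resolves to the container port with that name, and the tiller Deployment (in the same helm init --output yaml) names its 44134 port tiller, roughly like this:

    ports:
    - containerPort: 44134
      name: tiller

More to the point, kubectl port-forward to a pod never goes through the Service at all; the traffic path is API server → kubelet → pod, so no amount of Service editing can fix an “unable to upgrade connection” error.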

Any ideas?