helm: helm does not want to install
Hi,
I am using kubeadm and I am having difficulty deploying/running Helm.
kubernetes version (client v1.5.1, server v1.5.2)
[root@kub1 openstack-helm]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:52:34Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
helm version
[root@kub1 openstack-helm]# helm version
Client: &version.Version{SemVer:"v2.1.3", GitCommit:"5cbc48fb305ca4bf68c26eb8d2a7eb363227e973", GitTreeState:"clean"}
Error: cannot connect to Tiller
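"cannot connect to Tiller" only says the client could not reach the server side; Helm v2 reaches Tiller through a kubectl port-forward to gRPC port 44134, and that tunnel can be reproduced by hand to separate client-side problems from pod-side ones (pod name and port are taken from the output below; HELM_HOST is a standard Helm v2 environment variable):

# Manually forward Tiller's gRPC port (44134) and point the client at it.
# While the pod is crash-looping this fails the same way, which rules out
# a purely client-side problem.
kubectl port-forward -n kube-system tiller-deploy-3299276078-5kx5w 44134:44134 &
HELM_HOST=127.0.0.1:44134 helm version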
pods
[root@kub1 openstack-helm]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-n091c                          1/1       Running   1          26m
kube-system   calico-node-1t9f5                          2/2       Running   4          26m
kube-system   calico-node-dk3hn                          2/2       Running   0          26m
kube-system   calico-policy-controller-807063459-v07cl   1/1       Running   2          26m
kube-system   dummy-2088944543-c3mf9                     1/1       Running   1          27m
kube-system   etcd-kub1.localhost                        1/1       Running   7          26m
kube-system   kube-apiserver-kub1.localhost              1/1       Running   7          27m
kube-system   kube-controller-manager-kub1.localhost     1/1       Running   1          14m
kube-system   kube-discovery-1769846148-ts199            1/1       Running   1          27m
kube-system   kube-dns-2924299975-48p4m                  4/4       Running   7          27m
kube-system   kube-proxy-g1j0j                           1/1       Running   0          26m
kube-system   kube-proxy-hwkt2                           1/1       Running   1          27m
kube-system   kube-scheduler-kub1.localhost              1/1       Running   2          27m
kube-system   tiller-deploy-3299276078-5kx5w             0/1       Running   1          41s
[root@kub1 openstack-helm]#
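tiller-deploy is the only pod stuck at 0/1 and already restarting; it can be watched on its own using the labels shown in the pod description below:

# Watch just the tiller pod cycle through restarts (labels app=helm and
# name=tiller are from the describe output that follows).
kubectl get pods -n kube-system -l app=helm,name=tiller -o wide -w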
pod description
[root@kub1 openstack-helm]# kubectl describe pod tiller-deploy-3299276078-5kx5w -n kube-system
Name:           tiller-deploy-3299276078-5kx5w
Namespace:      kube-system
Node:           kub2.localhost/192.168.20.11
Start Time:     Thu, 19 Jan 2017 01:36:46 +1100
Labels:         app=helm
                name=tiller
                pod-template-hash=3299276078
Status:         Running
IP:             192.168.99.129
Controllers:    ReplicaSet/tiller-deploy-3299276078
Containers:
  tiller:
    Container ID:       docker://9ac08c4e29162901138605676004683c7ed22cfafe201935f1026723bd457720
    Image:              gcr.io/kubernetes-helm/tiller:v2.1.3
    Image ID:           docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:a5919b5a2a837fc265aeeb639413c653427332544c3a590bc8ed74370905d87a
    Port:               44134/TCP
    State:              Running
      Started:          Thu, 19 Jan 2017 01:38:27 +1100
    Last State:         Terminated
      Reason:           Error
      Exit Code:        2
      Started:          Thu, 19 Jan 2017 01:38:18 +1100
      Finished:         Thu, 19 Jan 2017 01:38:26 +1100
    Ready:              False
    Restart Count:      5
    Liveness:           http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:          http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9h4g9 (ro)
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-9h4g9:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-9h4g9
QoS Class:      BestEffort
Tolerations:    <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned tiller-deploy-3299276078-5kx5w to kub2.localhost
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Pulling pulling image "gcr.io/kubernetes-helm/tiller:v2.1.3"
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Pulled Successfully pulled image "gcr.io/kubernetes-helm/tiller:v2.1.3"
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Created Created container with docker id 09867cb36ac1; Security:[seccomp=unconfined]
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Started Started container with docker id 09867cb36ac1
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Killing Killing container with docker id 09867cb36ac1: pod "tiller-deploy-3299276078-5kx5w_kube-system(8a2bb598-dd8b-11e6-8b84-0050568da433)" container "tiller" is unhealthy, it will be killed and re-created.
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Created Created container with docker id 0f2583248a73; Security:[seccomp=unconfined]
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Started Started container with docker id 0f2583248a73
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Created Created container with docker id 859dc48c4d12; Security:[seccomp=unconfined]
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Killing Killing container with docker id 0f2583248a73: pod "tiller-deploy-3299276078-5kx5w_kube-system(8a2bb598-dd8b-11e6-8b84-0050568da433)" container "tiller" is unhealthy, it will be killed and re-created.
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Started Started container with docker id 859dc48c4d12
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Killing Killing container with docker id 859dc48c4d12: pod "tiller-deploy-3299276078-5kx5w_kube-system(8a2bb598-dd8b-11e6-8b84-0050568da433)" container "tiller" is unhealthy, it will be killed and re-created.
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Created Created container with docker id aab68c3cfb4f; Security:[seccomp=unconfined]
1m 1m 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Started Started container with docker id aab68c3cfb4f
51s 51s 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Killing Killing container with docker id aab68c3cfb4f: pod "tiller-deploy-3299276078-5kx5w_kube-system(8a2bb598-dd8b-11e6-8b84-0050568da433)" container "tiller" is unhealthy, it will be killed and re-created.
51s 35s 3 {kubelet kub2.localhost} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "tiller" with CrashLoopBackOff: "Back-off 20s restarting failed container=tiller pod=tiller-deploy-3299276078-5kx5w_kube-system(8a2bb598-dd8b-11e6-8b84-0050568da433)"
19s 19s 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Started Started container with docker id 6a8afb9e06e2
19s 19s 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Created Created container with docker id 6a8afb9e06e2; Security:[seccomp=unconfined]
11s 11s 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Created Created container with docker id 9ac08c4e2916; Security:[seccomp=unconfined]
11s 11s 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Killing Killing container with docker id 6a8afb9e06e2: pod "tiller-deploy-3299276078-5kx5w_kube-system(8a2bb598-dd8b-11e6-8b84-0050568da433)" container "tiller" is unhealthy, it will be killed and re-created.
1m 11s 5 {kubelet kub2.localhost} spec.containers{tiller} Normal Pulled Container image "gcr.io/kubernetes-helm/tiller:v2.1.3" already present on machine
10s 10s 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Started Started container with docker id 9ac08c4e2916
51s 1s 4 {kubelet kub2.localhost} spec.containers{tiller} Warning BackOff Back-off restarting failed docker container
1m 1s 8 {kubelet kub2.localhost} spec.containers{tiller} Warning Unhealthy Liveness probe failed: Get http://192.168.99.129:44135/liveness: dial tcp 192.168.99.129:44135: getsockopt: connection refused
1m 1s 8 {kubelet kub2.localhost} spec.containers{tiller} Warning Unhealthy Readiness probe failed: Get http://192.168.99.129:44135/readiness: dial tcp 192.168.99.129:44135: getsockopt: connection refused
1s 1s 1 {kubelet kub2.localhost} spec.containers{tiller} Normal Killing Killing container with docker id 9ac08c4e2916: pod "tiller-deploy-3299276078-5kx5w_kube-system(8a2bb598-dd8b-11e6-8b84-0050568da433)" container "tiller" is unhealthy, it will be killed and re-created.
1s 1s 1 {kubelet kub2.localhost} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "tiller" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=tiller pod=tiller-deploy-3299276078-5kx5w_kube-system(8a2bb598-dd8b-11e6-8b84-0050568da433)"
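Both probes hit port 44135 on the pod IP and get "connection refused", so the kubelet keeps killing the container. What the kubelet's probe sees can be reproduced directly (run on kub2, the node hosting the pod; IP and paths are from the describe output above):

# Hit the liveness endpoint the same way the kubelet does. If this is
# refused from the node, the tiller process never came up (or the pod
# network is not routing to the pod IP).
curl -v http://192.168.99.129:44135/liveness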
I can’t see the logs and can’t log in to the container…
Any advice?
Thank you very much.
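For reference, logs from the last terminated tiller container (Reason: Error, Exit Code: 2 above) can usually still be fetched with --previous, even after a restart:

# Retrieve logs from the previously terminated container of the pod
# named in the describe output above.
kubectl logs tiller-deploy-3299276078-5kx5w -n kube-system --previous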
Same issue here on CentOS 7.3, using weaveworks as the network provider: kubectl apply -f https://git.io/weave-kube
Will switch to flannel.
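If the network provider is switched (e.g. to flannel), Tiller can be redeployed afterwards; a minimal sketch using Helm v2's installer:

# Remove the crash-looping tiller deployment and let the helm v2 client
# reinstall it once the new pod network is healthy.
kubectl delete deployment tiller-deploy -n kube-system
helm init --upgrade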