dashboard: dashboard pod cannot run on kubeadm

Issue details

I can’t make the dashboard run. I am using a fresh kubeadm installation + Calico.

Kubernetes version: 1.5.1
Operating system: CentOS 7
Steps to reproduce

kubeadm init
kubeadm join …        --> join a new node
kubectl apply -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Observed result
[root@kub1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY     STATUS             RESTARTS   AGE
kube-system   calico-etcd-xnb3q                          1/1       Running            0          44m
kube-system   calico-node-7ccs2                          2/2       Running            0          44m
kube-system   calico-node-zgww7                          2/2       Running            0          44m
kube-system   calico-policy-controller-807063459-r5k6f   1/1       Running            0          44m
kube-system   dummy-2088944543-xx6hb                     1/1       Running            0          52m
kube-system   etcd-kub1.localhost                        1/1       Running            0          51m
kube-system   kube-apiserver-kub1.localhost              1/1       Running            0          52m
kube-system   kube-controller-manager-kub1.localhost     1/1       Running            0          52m
kube-system   kube-discovery-1769846148-2znmc            1/1       Running            0          52m
kube-system   kube-dns-2924299975-mjcll                  4/4       Running            0          52m
kube-system   kube-proxy-393q6                           1/1       Running            0          52m
kube-system   kube-proxy-lhzpw                           1/1       Running            0          52m
kube-system   kube-scheduler-kub1.localhost              1/1       Running            0          52m
kube-system   kubernetes-dashboard-3203831700-sz5kr      0/1       CrashLoopBackOff   11         39m
[root@kub1 ~]#
[root@kub1 ~]# kubectl describe pod kubernetes-dashboard-3203831700-sz5kr -n kube-system
Name:           kubernetes-dashboard-3203831700-sz5kr
Namespace:      kube-system
Node:           kub2.localhost/192.168.20.11
Start Time:     Fri, 20 Jan 2017 02:00:57 +1100
Labels:         app=kubernetes-dashboard
                pod-template-hash=3203831700
Status:         Running
IP:             192.168.99.129
Controllers:    ReplicaSet/kubernetes-dashboard-3203831700
Containers:
  kubernetes-dashboard:
    Container ID:       docker://61448d97cbbcea7900def2f9252b186ba09b3bfde5fcc761fd5a69d30ef9e63e
    Image:              gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
    Image ID:           docker-pullable://gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:46a09eb9c611e625e7de3fcf325cf78e629d002e57dc80348e9b0638338206b5
    Port:               9090/TCP
    State:              Waiting
      Reason:           CrashLoopBackOff
    Last State:         Terminated
      Reason:           Error
      Exit Code:        1
      Started:          Fri, 20 Jan 2017 02:39:20 +1100
      Finished:         Fri, 20 Jan 2017 02:39:50 +1100
    Ready:              False
    Restart Count:      11
    Liveness:           http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g8c6f (ro)
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-g8c6f:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-g8c6f
QoS Class:      BestEffort
Tolerations:    dedicated=master:Equal:NoSchedule
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath                           Type            Reason          Message
  ---------     --------        -----   ----                            -------------                           --------        ------          -------
  40m           40m             1       {default-scheduler }                                                    Normal          Scheduled       Successfully assigned kubernetes-dashboard-3203831700-sz5kr to kub2.localhost
  40m           40m             1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal          Created         Created container with docker id a49cd03e9777; Security:[seccomp=unconfined]
  40m           40m             1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal          Started         Started container with docker id a49cd03e9777
  39m           39m             1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal          Created         Created container with docker id ce3d37ca7822; Security:[seccomp=unconfined]
  39m           39m             1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal          Started         Started container with docker id ce3d37ca7822
  39m           39m             2       {kubelet kub2.localhost}                                                Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  38m   38m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id cd022645360a
  38m   38m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id cd022645360a; Security:[seccomp=unconfined]
  37m   37m     3       {kubelet kub2.localhost}                                                Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  37m   37m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id 62be00de3036; Security:[seccomp=unconfined]
  37m   37m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id 62be00de3036
  36m   36m     4       {kubelet kub2.localhost}                                                Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  35m   35m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id 8375b55999c9; Security:[seccomp=unconfined]
  35m   35m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id 8375b55999c9
  35m   33m     7       {kubelet kub2.localhost}                                                Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  33m   33m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id abf92039a988; Security:[seccomp=unconfined]
  33m   33m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id abf92039a988
  33m   30m     14      {kubelet kub2.localhost}                                                Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  30m   30m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id 019b1fa3d8f1; Security:[seccomp=unconfined]
  30m   30m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id 019b1fa3d8f1
  24m   24m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id d787df99e676; Security:[seccomp=unconfined]
  24m   24m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id d787df99e676
  19m   19m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id d7c318d46200
  19m   19m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id d7c318d46200; Security:[seccomp=unconfined]
  39m   18m     2       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Warning Unhealthy       Liveness probe failed: Get http://192.168.99.129:9090/: dial tcp 192.168.99.129:9090: getsockopt: connection refused
  40m   2m      12      {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Pulling         pulling image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1"
  40m   2m      12      {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Pulled          Successfully pulled image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1"
  13m   1m      3       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         (events with common reason combined)
  13m   1m      3       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         (events with common reason combined)
  29m   5s      125     {kubelet kub2.localhost}                                                Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  39m   5s      155     {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Warning BackOff Back-off restarting failed docker container
[root@kub1 ~]#
[root@kub1 ~]# kubectl logs kubernetes-dashboard-3203831700-sz5kr -n kube-system
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
[root@kub1 ~]#
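The i/o timeout to 10.96.0.1:443 above means the pod never reached the API server’s service VIP. As a quick check (not part of the original report, so treat it as a hedged sketch), you can test that VIP directly from the node hosting the pod, since kube-proxy programs the same iptables NAT rules for node-originated traffic:

# Run on kub2, the node hosting the dashboard pod. -k skips TLS
# verification; we only care about raw connectivity to the service VIP.
curl -k https://10.96.0.1:443/version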
Expected result

dashboard --> Running

Comments

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Comments: 44 (10 by maintainers)

Most upvoted comments

If you follow the kubeadm instructions to the letter, which means installing Docker, Kubernetes (kubeadm, kubectl, and kubelet), and Calico with the kubeadm hosted instructions, and your nodes have physical IP addresses in the 192.168.x.x range, then you will end up with the non-working dashboard described above. This is because the node IP addresses clash with Calico’s internal pod IP addresses. To fix it, do the following during installation:

During the master node cluster creation step:

export CALICO_IPV4POOL_CIDR=172.16.0.0
kubeadm init --pod-network-cidr=$CALICO_IPV4POOL_CIDR/16

When you install the pod network and have chosen Calico, download calico.yaml and patch in the alternate CIDR:

wget https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml -O calico.yaml

sed -i "s/192.168.0.0/$CALICO_IPV4POOL_CIDR/g" calico.yaml

kubectl apply -f calico.yaml
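As a sanity check after the cluster is rebuilt this way (my addition, not part of the original comment), the pod IPs should now come from the new pool instead of overlapping the node network:

# Pod IPs should fall inside 172.16.0.0/16 rather than the 192.168.x.x
# range used by the node NICs.
kubectl get pods --all-namespaces -o wide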

@j0nesin @IanLewis I just realised that the dashboard works fine if it is running on the same node as the apiserver.

I am using Calico.

I’m out of ideas. I don’t know why it might not be working. Even so, it’s very likely network or Kubernetes core related rather than a bug in the Dashboard.

You should try to actually send a request to the API server and make sure you can get a response. It may be that the API server cannot reach etcd.
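A hedged sketch of both checks (the pod name curl-test and the tutum/curl image are illustrative choices, not from this thread):

# Check that core components, including etcd, report healthy.
kubectl get componentstatuses

# Send a real request to the API server's service VIP from inside a pod.
kubectl run curl-test --rm -it --restart=Never --image=tutum/curl -- \
  curl -ks https://10.96.0.1:443/version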

Same issue for me. I followed the kubeadm install steps for a master plus two-node cluster on CentOS 7. When adding the dashboard I hit the same issues described here when using Flannel, but when I recreated the cluster with the same steps using Weave, the dashboard works. Hope that helps someone narrow down where the issue is.

Same issue here. I used kubeadm; looking at the docker logs, it seems the dashboard doesn’t pass the CA cert.
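If you want to test that theory, one hedged check is whether the mounted token secret actually carries a CA bundle (the secret name is taken from the describe output earlier in this issue):

# The default service-account secret should contain ca.crt and token keys.
kubectl get secret default-token-g8c6f -n kube-system -o yaml | grep -E 'ca.crt|token:'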

Things started working for me, and I suspect it was due to using the latest kubernetes-dashboard.yaml with the annotation for running the dashboard on the master.

https://github.com/kubernetes/dashboard/blob/master/src/deploy/kubernetes-dashboard.yaml
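For reference, on 1.5-era manifests that scheduling hint was the alpha tolerations annotation on the pod template; the same dedicated=master toleration shows up in the describe output above. A hedged patch sketch, with the annotation payload recalled from the 1.5 alpha API and therefore an assumption:

# Add the alpha toleration annotation so the dashboard pod may schedule
# onto the master; this triggers a deployment rollout.
kubectl patch deployment kubernetes-dashboard -n kube-system -p \
  '{"spec":{"template":{"metadata":{"annotations":{"scheduler.alpha.kubernetes.io/tolerations":"[{\"key\":\"dedicated\",\"value\":\"master\",\"effect\":\"NoSchedule\"}]"}}}}}'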

I used Weave as suggested in the initial issue description

So, I used the installation guide on two clean Ubuntu 16 VMs.

apt list --installed | grep kube

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

kubeadm/kubernetes-xenial,now 1.6.0-alpha.0-2074-a092d8e0f95f52-00 amd64 [installed]
kubectl/kubernetes-xenial,now 1.5.2-00 amd64 [installed]
kubelet/kubernetes-xenial,now 1.5.2-00 amd64 [installed]
kubernetes-cni/kubernetes-xenial,now 0.3.0.1-07a8a2-00 amd64 [installed]

The first attempt failed. Some pods could not be scheduled because a single-core VM could not satisfy the CPU requests. I increased to two CPU cores, but the master did not start up properly. A bit strange. My observation was similar to https://github.com/kubernetes/kubernetes/issues/33671.

In the second attempt I started from scratch with a two-core VM and everything worked smoothly. It did not matter whether the Dashboard was scheduled on the master or the node; both worked.

So, I could not find anything Dashboard-related.