dashboard: Kubernetes dashboard unauthorized & endpoints failed problem
Issue details


localhost:8001

localhost:8001/ui
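For reference, URLs like the two above are normally served through the API server proxy. A minimal sketch of how that proxy URL is formed (the namespace and service name are taken from the kubectl get svc output further down; the kubectl commands themselves are shown as comments since they need a live cluster):

```shell
# Sketch: build the API-server proxy URL for the dashboard service,
# following the same pattern kubectl cluster-info prints for kube-dns.
NAMESPACE=kube-system
SERVICE=kubernetes-dashboard
PROXY_URL="http://localhost:8001/api/v1/proxy/namespaces/${NAMESPACE}/services/${SERVICE}/"
echo "${PROXY_URL}"

# Against a real cluster (not executed here):
#   kubectl proxy --port=8001 &
#   curl "${PROXY_URL}"
```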

kubeadm-1.5.0-0.alpha.0.1534.gcf7301f.x86_64
kubelet-1.4.0-0.x86_64
kubernetes-cni-0.3.0.1-0.07a8a2.x86_64
kubectl-1.4.0-0.x86_64
CentOS 7
Steps to reproduce
Observed result
[root@cloud ~]# kubectl get svc --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             100.64.0.1      <none>        443/TCP         17h
kube-system   kube-dns               100.64.0.10     <none>        53/UDP,53/TCP   17h
kube-system   kubernetes-dashboard   100.73.172.90   <nodes>       80/TCP          17h
[root@cloud kubernetes]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
kube-dns is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
[root@cloud kubernetes]# kubectl describe nodes
Name: cloud
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubeadm.alpha.kubernetes.io/role=master
kubernetes.io/hostname=cloud
Taints: dedicated=master:NoSchedule
CreationTimestamp: Sat, 29 Oct 2016 20:35:47 +0800
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Sun, 30 Oct 2016 13:23:04 +0800 Sat, 29 Oct 2016 20:35:47 +0800 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Sun, 30 Oct 2016 13:23:04 +0800 Sat, 29 Oct 2016 20:35:47 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 30 Oct 2016 13:23:04 +0800 Sat, 29 Oct 2016 20:35:47 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Sun, 30 Oct 2016 13:23:04 +0800 Sat, 29 Oct 2016 20:35:47 +0800 KubeletReady kubelet is posting ready status
Addresses: 131.160.56.47,131.160.56.47
Capacity:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 16
memory: 49283376Ki
pods: 110
Allocatable:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 16
memory: 49283376Ki
pods: 110
System Info:
Machine ID: 6361e3b01be64734877ddc707f6564a4
System UUID: 44454C4C-4700-1059-8059-B9C04F38344A
Boot ID: ee853c9a-3c53-4800-b1e3-c9b1d474f3f4
Kernel Version: 3.10.0-327.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.10.3
Kubelet Version: v1.4.0
Kube-Proxy Version: v1.4.0
ExternalID: cloud
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-cloud 200m (1%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-cloud 250m (1%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-cloud 200m (1%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-discovery-982812725-z4b0v 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-dns-2247936740-j6pq1 210m (1%) 210m (1%) 390Mi (0%) 390Mi (0%)
kube-system kube-proxy-amd64-d2nr2 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-cloud 100m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
960m (6%) 210m (1%) 390Mi (0%) 390Mi (0%)
No events.
[root@cloud kubernetes]# kubectl get events --namespace=kube-system
LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
1s 16h 26052 kube-dns-2247936740-j6pq1 Pod Warning FailedSync {kubelet cloud} Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-2247936740-j6pq1_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-2247936740-j6pq1_kube-system(472b334c-9dd4-11e6-bb85-002219c0946b)\" using network plugins \"cni\": cni config unintialized; Skipping pod"
2s 16h 3302 kubernetes-dashboard-1655269645-km0gq Pod Warning FailedScheduling {default-scheduler } pod (kubernetes-dashboard-1655269645-km0gq) failed to fit in any node
fit failure on node (cloud): PodToleratesNodeTaints
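Two distinct failures show up in the events above: the kube-dns pod cannot be networked because no CNI configuration is present ("cni config unintialized"), and the dashboard pod cannot schedule because of the master taint. For the first, the kubelet expects a network config file under the CNI config directory. A hedged sketch of what a minimal bridge-plugin config file looks like (the subnet and names here are illustrative assumptions, not values from this cluster; on a real node the file goes in /etc/cni/net.d/ and the plugin binaries come from the kubernetes-cni package):

```shell
# Illustrative only: write a minimal CNI bridge-plugin config into a demo
# directory. On a real node this would live in /etc/cni/net.d/.
CNI_DIR=/tmp/cni-demo
mkdir -p "${CNI_DIR}"
cat > "${CNI_DIR}/10-demo-bridge.conf" <<'EOF'
{
    "name": "demo-net",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
    }
}
EOF
cat "${CNI_DIR}/10-demo-bridge.conf"
```

In practice, applying a pod network add-on (Weave, Flannel, etc.) lays down this configuration for you; kubeadm expects one to be installed before kube-dns can start.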
Expected result
I want to log in to the Kubernetes dashboard from an external browser.
Comments
I may need some help to have a look at this. Thank you.
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Comments: 22 (16 by maintainers)
Most users want the Dashboard to be deployed. Setting up the master first and validating that everything works before adding worker nodes is a possible workflow with kubeadm. So, considering the many problems, I would rather make it simple for the moment. Minimizing the master for production would apply to DNS too, I assume. Anyway, overall the problem is more related to kubeadm documentation. I will create a PR for that too.
@naisanza it’s an old issue, but if I recall correctly,
kubeadm taints the master by default so that workloads won't normally run there. Since April, though, it's possible that kubeadm now tolerates the taint on the master to avoid this issue. I'm not sure, as I've not been paying a lot of attention in this area. If you're having issues and you're not sure it's the same issue as this one, it might be best to open a new issue and describe your problem fresh.
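For the 1.4-era scheduler discussed in this thread, tolerations were expressed as an alpha annotation rather than a first-class pod-spec field. A sketch of what tolerating the dedicated=master:NoSchedule taint from the node description above could look like in a dashboard deployment's pod template (hedged: this is illustrative, not the actual manifest from this issue):

```yaml
# Sketch (Kubernetes 1.4-era alpha syntax): tolerate the kubeadm master
# taint so the pod may schedule on the master. Key/value/effect match the
# "Taints: dedicated=master:NoSchedule" line from `kubectl describe nodes`.
spec:
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule"}]
```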
My master is marked unschedulable. I don't want things scheduled on it. I don't want anything other than the master components running on it.
To be honest, my concern is that this is being done for single-node clusters… but if the user has the master marked unschedulable, then basically every single thing they try to deploy is going to fail to schedule anyway.
Either way, I can remove the toleration from the yaml or leave it commented out; it's not really that big of a deal to me. I vendor my own copy of the dashboard files anyway, for posterity.