dashboard: kube-dashboard generates 502 error
Environment
Dashboard version: 1.6.3
Kubernetes version: 1.6.6
Operating system: CentOS 7
Steps to reproduce
Installed kube-dashboard according to the instructions provided on the website. When I run
kubectl proxy, I’m unable to access the Dashboard UI at http://localhost:8001/ui/. When I access http://localhost:8001/, I see the Kubernetes API responses, so kubectl itself is working fine.
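In other words (sketched here with curl; a browser behaves the same way):

$ kubectl proxy &
$ curl http://localhost:8001/        # Kubernetes API root: works fine
$ curl http://localhost:8001/ui/     # Dashboard UI: 502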
Observed result
Getting a 502 error on the following URL: http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
console output of kubectl proxy:
Starting to serve on 127.0.0.1:8001
I0906 16:46:06.470208   31586 logs.go:41] http: proxy error: unexpected EOF
pod log of the dashboard container:
$ kubectl logs kubernetes-dashboard-2870541577-czpdb --namespace kube-system
Using HTTP port: 8443
Using in-cluster config to connect to apiserver
Using service account token for csrf signing
No request provided. Skipping authorization header
Successful initial request to the apiserver, version: v1.6.6
No request provided. Skipping authorization header
Creating in-cluster Heapster client
Expected result
Expected to see the Dashboard
Comments
I also have Heapster installed, and it is able to access the Kubernetes API just fine. So I assume that pod networking, service-CIDR networking, and service accounts themselves are working fine. It’s only kube-dashboard that is giving me issues.
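To double-check that, the Heapster connectivity can be verified like this (the pod name is a placeholder; look it up first):

$ kubectl --namespace kube-system get pods | grep heapster
$ kubectl --namespace kube-system logs heapster-<pod-suffix>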
This is the YAML file I used to deploy the Dashboard:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
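The manifest is applied in the usual way (the filename here is just a placeholder):

$ kubectl apply -f kubernetes-dashboard.yaml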
About this issue
- State: closed
- Created 7 years ago
- Comments: 23 (11 by maintainers)
Hi, yes, I solved the issue. I think the problem was caused by the fact that my API server was not running in a pod (with kubeadm, the master processes run in pods). The Dashboard requests were proxied through the API server, but the API server had no access to the pod network or the service network (kube-proxy wasn’t installed on my master nodes either), so kube-apiserver was unable to reach any Services.
I now run all my master processes as pods (using static pod manifests) on the master nodes, and everything works fine. It makes sense when I think about it.
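For anyone who finds this later, a minimal sketch of what such a static pod manifest looks like, placed in the kubelet’s manifest directory (e.g. /etc/kubernetes/manifests). The image tag, flags, and CIDR below are assumptions and must be adapted to your own cluster:

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true            # the apiserver listens on the node's own interfaces
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.6  # assumption: match the cluster version
    command:
    - kube-apiserver
    - --etcd-servers=http://127.0.0.1:2379       # assumption: etcd on the same host
    - --service-cluster-ip-range=10.32.0.0/24    # assumption: example service CIDR
    - --allow-privileged=true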
Thanks for your assistance with this issue. Next time, I should think a little harder about how all the components work together 😃
Do you have another service running in your cluster that you could try to access via kubectl proxy?
If not, you can create a very simple one like this:
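A minimal sketch of such a test (the exact commands below are an assumption; any deployment plus service named nginx works the same way):

$ kubectl run nginx --image=nginx --port=80
$ kubectl expose deployment nginx --port=80
# with kubectl proxy still running:
$ curl http://localhost:8001/api/v1/namespaces/default/services/nginx/proxy/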
This will create a deployment and a service, both named nginx, in your default namespace; then try to contact the running nginx via kubectl proxy. If this also doesn’t work, something in the proxy path is broken. If it does work, however, we can investigate further why the Dashboard is not playing nicely with the proxy.
Also, don’t just suggest another deployment tool. I NEED to figure out how Kubernetes works; it’s part of my job. And the K8s reference manuals are vague on the system operations part.
Oh, maybe this is important: I’m not running kube-apiserver, kube-scheduler, and kube-controller-manager as pods. This is a cluster built by following the “Kubernetes the Hard Way” tutorial. I don’t know whether this makes a difference for kube-dashboard and the proxy method.