minikube: Multinode Minikube + Dashboard => Error (related to dashboard-metrics-scraper?)
Thanks very much for minikube and Kubernetes!
Steps to reproduce the issue:
- minikube start --nodes=4 --cpus=2 --memory=3000MB --driver=docker (running on a 2020 Mac mini with a 12-core i7 and 32 GB of RAM)
- minikube dashboard
P.S. I know multi-node support is experimental, and I am willing to help if I can 😃
Full output of failed command: `minikube dashboard`
I0716 18:20:59.052468 78440 mustload.go:64] Loading cluster: minikube
I0716 18:20:59.052981 78440 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0716 18:20:59.086087 78440 host.go:65] Checking if "minikube" exists ...
I0716 18:20:59.086411 78440 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0716 18:20:59.121296 78440 api_server.go:146] Checking apiserver status ...
I0716 18:20:59.121441 78440 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0716 18:20:59.121507 78440 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0716 18:20:59.160084 78440 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32887 SSHKeyPath:/Users/tom/.minikube/machines/minikube/id_rsa Username:docker}
I0716 18:20:59.270791 78440 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/1775/cgroup
I0716 18:20:59.281732 78440 api_server.go:162] apiserver freezer: "7:freezer:/docker/76a5ba36b02d6ad01e4b24432b13c165d0b2072ae5c6048c1938cc2df00a1a01/kubepods/burstable/pod484e7c0718c2559ba40cc73195f5d1a3/ec76ebe6d79061b1c93e000041aa422f3ab9d6322f4b244620d7f87b72d2cc69"
I0716 18:20:59.281838 78440 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/76a5ba36b02d6ad01e4b24432b13c165d0b2072ae5c6048c1938cc2df00a1a01/kubepods/burstable/pod484e7c0718c2559ba40cc73195f5d1a3/ec76ebe6d79061b1c93e000041aa422f3ab9d6322f4b244620d7f87b72d2cc69/freezer.state
I0716 18:20:59.292002 78440 api_server.go:184] freezer state: "THAWED"
I0716 18:20:59.292038 78440 api_server.go:215] Checking apiserver healthz at https://127.0.0.1:32884/healthz ...
I0716 18:20:59.299163 78440 api_server.go:235] https://127.0.0.1:32884/healthz returned 200:
ok
W0716 18:20:59.299189 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299553 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299563 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299568 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299573 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299579 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299583 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299587 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299591 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299596 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299601 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299605 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299609 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299613 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299617 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299622 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299628 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299633 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299637 78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299642 78440 proxy.go:117] fail to check proxy env: Error ip not in block
🤔 Verifying dashboard health ... [translated from Chinese locale output]
I0716 18:20:59.314916 78440 service.go:212] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard /api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard 67f1b30c-86c5-49dc-8a3f-d14d764ebfd9 1029 0 2020-07-16 18:09:41 +0800 CST <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl Update v1 2020-07-16 18:09:41 +0800 CST FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 111 110 109 97 110 97 103 101 114 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 111 100 101 34 58 123 125 44 34 102 58 107 56 115 45 97 112 112 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 105 110 105 107 117 98 101 45 97 100 100 111 110 115 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 112 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 112 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 44 34 102 58 116 97 114 103 101 116 80 111 114 116 34 58 123 125 125 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 46 34 58 123 125 44 34 102 58 107 56 115 45 97 112 112 34 58 123 125 125 44 34 102 58 115 101 115 115 105 111 110 65 102 102 105 110 105 116 121 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125],}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,},},Selector:map[string]string{k8s-app: 
kubernetes-dashboard,},ClusterIP:10.105.36.106,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamily:nil,TopologyKeys:[],},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}
🚀 Launching proxy ...
I0716 18:20:59.315194 78440 dashboard.go:144] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context minikube proxy --port=0]
I0716 18:20:59.317861 78440 dashboard.go:149] Waiting for kubectl to output host:port ...
I0716 18:20:59.364538 78440 dashboard.go:167] proxy stdout: Starting to serve on 127.0.0.1:54967
🤔 Verifying proxy health ... [translated from Chinese locale output]
I0716 18:20:59.380697 78440 dashboard.go:204] http://127.0.0.1:54967/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[86] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 16 Jul 2020 10:20:59 GMT] X-Content-Type-Options:[nosniff]] Body:0xc00009a8c0 ContentLength:86 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004fe100 TLS:<nil>}
I0716 18:20:59.380762 78440 retry.go:30] will retry after 110.466µs: Temporary Error: unexpected response code: 503
I0716 18:20:59.387440 78440 dashboard.go:204] http://127.0.0.1:54967/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[86] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 16 Jul 2020 10:20:59 GMT] X-Content-Type-Options:[nosniff]] Body:0xc000718200 ContentLength:86 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ec800 TLS:<nil>}
I0716 18:20:59.387481 78440 retry.go:30] will retry after 216.077µs: Temporary Error: unexpected response code: 503
I0716 18:20:59.393636 78440 dashboard.go:204] http://127.0.0.1:54967/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[86] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 16 Jul 2020 10:20:59 GMT] X-Content-Type-Options:[nosniff]] Body:0xc00009ae40 ContentLength:86 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000346600 TLS:<nil>}
I0716 18:20:59.393685 78440 retry.go:30] will retry after 262.026µs: Temporary Error: unexpected response code: 503
[...this then repeats over and over; for instance:]
I0716 18:22:25.484676 79413 retry.go:30] will retry after 4.744335389s: Temporary Error: unexpected response code: 503
I0716 18:22:30.239387 79413 dashboard.go:204] http://127.0.0.1:55060/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[86] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 16 Jul 2020 10:22:30 GMT] X-Content-Type-Options:[nosniff]] Body:0xc000432e80 ContentLength:86 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000b6c700 TLS:<nil>}
I0716 18:22:30.239422 79413 retry.go:30] will retry after 4.014454686s: Temporary Error: unexpected response code: 503
[...and more...]
Full output of `minikube start` command used, if not already included:
minikube start --nodes=4 --cpus=2 --memory=3000MB --driver=docker
Optional: Full output of `minikube logs` command:
[minikube_logs.txt](https://github.com/kubernetes/minikube/files/4930857/minikube_logs.txt)
Extra information that may be useful
- The dashboard-metrics-scraper pod is logging errors:
k logs --namespace=kubernetes-dashboard dashboard-metrics-scraper-dc6947fbf-69lcp
{"level":"info","msg":"Kubernetes host: https://10.96.0.1:443","time":"2020-07-16T10:09:43Z"}
172.18.0.1 - - [16/Jul/2020:10:10:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
172.18.0.1 - - [16/Jul/2020:10:10:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
172.18.0.1 - - [16/Jul/2020:10:10:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2020-07-16T10:10:43Z"}
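The `get nodes.metrics.k8s.io` error above suggests the scraper is querying the metrics API, which the apiserver only serves when a metrics provider (in minikube, the metrics-server addon) is running. A quick way to check this, assuming a standard kubectl/minikube setup:

```shell
# Check whether the metrics.k8s.io API is registered with the apiserver
kubectl get apiservices | grep metrics.k8s.io

# If it is missing, the addon can be enabled in minikube
minikube addons enable metrics-server
```

Note this may only explain the scraper's log noise, not necessarily the dashboard 503 itself.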
- More detail on that pod:
k describe --namespace=kubernetes-dashboard pods dashboard-metrics-scraper-dc6947fbf-69lcp
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf-69lcp to minikube-m04
Normal Pulled 16m kubelet, minikube-m04 Container image "kubernetesui/metrics-scraper:v1.0.4" already present on machine
Normal Created 16m kubelet, minikube-m04 Created container dashboard-metrics-scraper
Normal Started 16m kubelet, minikube-m04 Started container dashboard-metrics-scraper
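The events show the scraper was scheduled onto the worker node minikube-m04 rather than the control-plane node, which may matter in a multi-node cluster. To see which node each dashboard component landed on:

```shell
# List the dashboard pods along with the node each one runs on
kubectl get pods -n kubernetes-dashboard -o wide
```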
- The dashboard itself is also complaining, which seems related to the scraper:
k logs --namespace=kubernetes-dashboard kubernetes-dashboard-6dbb54fd95-4tcm7
2020/07/16 10:09:43 Starting overwatch
2020/07/16 10:09:43 Using namespace: kubernetes-dashboard
2020/07/16 10:09:43 Using in-cluster config to connect to apiserver
2020/07/16 10:09:43 Using secret token for csrf signing
2020/07/16 10:09:43 Initializing csrf token from kubernetes-dashboard-csrf secret
2020/07/16 10:09:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2020/07/16 10:09:43 Successful initial request to the apiserver, version: v1.18.3
2020/07/16 10:09:43 Generating JWE encryption key
2020/07/16 10:09:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2020/07/16 10:09:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2020/07/16 10:09:43 Initializing JWE encryption key from synchronized object
2020/07/16 10:09:43 Creating in-cluster Sidecar client
2020/07/16 10:09:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2020/07/16 10:09:43 Serving insecurely on HTTP port: 9090
2020/07/16 10:09:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2020/07/16 10:10:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
...more...
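As a possible workaround while the proxy keeps returning 503: since the log above shows the dashboard serving plain HTTP on port 9090 (exposed as port 80 on its Service), port-forwarding directly to the Service may reach it even when `minikube dashboard` fails:

```shell
# Forward local port 8080 to the kubernetes-dashboard Service (port 80 -> targetPort 9090)
kubectl port-forward -n kubernetes-dashboard svc/kubernetes-dashboard 8080:80
# Then open http://localhost:8080/ in a browser
```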
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 22 (4 by maintainers)
I tried this with minikube v1.22.0 and the dashboard command worked with multi-node. Could someone who reported this problem confirm that this is fixed now? @fzyzcjy @xhebox @chatterjeesunit
My minikube version is : v1.15.1
I started minikube using this command: `minikube start --memory 6000 --cpus=4 --nodes=2 --disk-size='5gb'`. `minikube dashboard` never works for me if nodes are greater than 1.