metrics-server: kubectl top pod -A returns "No resources found"
Hi,
I am currently experiencing an issue where `kubectl top nodes` returns metrics but `kubectl top pod -A` returns "No resources found". I would like to know if there are any solutions or workarounds to rectify it. Thank you.
```
k8s-admin@k8s-master:~$ kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master     116m         5%     1415Mi          18%
k8s-worker01   36m          1%     943Mi           12%
k8s-admin@k8s-master:~$ kubectl top pod -A
No resources found
k8s-admin@k8s-master:~$
```
The metrics-server pod log shows multiple "Failed getting complete container metric" and "Failed getting complete Pod metric" entries.
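For reference, the log below was captured with something like this (the deployment name comes from the stock components.yaml):

```shell
# Tail the metrics-server log in kube-system.
kubectl -n kube-system logs deploy/metrics-server --tail=200
```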
```
I0603 10:37:15.901762 1 serving.go:342] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0603 10:37:15.902130 1 dynamic_serving_content.go:112] "Loaded a new cert/key pair" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0603 10:37:16.201660 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
I0603 10:37:16.306680 1 scraper.go:115] "Scraping metrics from nodes" nodeCount=2
I0603 10:37:16.308353 1 genericapiserver.go:406] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
I0603 10:37:16.310463 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0603 10:37:16.310493 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0603 10:37:16.310512 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0603 10:37:16.310515 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0603 10:37:16.310531 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0603 10:37:16.310535 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0603 10:37:16.310795 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key" certDetail="\"localhost@1654252635\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1654252635\" (2022-06-03 09:37:15 +0000 UTC to 2023-06-03 09:37:15 +0000 UTC (now=2022-06-03 10:37:16.31077715 +0000 UTC))"
I0603 10:37:16.311066 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1654252636\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1654252635\" (2022-06-03 09:37:15 +0000 UTC to 2023-06-03 09:37:15 +0000 UTC (now=2022-06-03 10:37:16.31105512 +0000 UTC))"
I0603 10:37:16.311128 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0603 10:37:16.311128 1 secure_serving.go:266] Serving securely on [::]:4443
I0603 10:37:16.311216 1 genericapiserver.go:462] [graceful-termination] waiting for shutdown to be initiated
I0603 10:37:16.311263 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0603 10:37:16.311342 1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed
I0603 10:37:16.314054 1 scraper.go:137] "Scraping node" node="k8s-worker01"
I0603 10:37:16.318748 1 scraper.go:137] "Scraping node" node="k8s-master"
I0603 10:37:16.411146 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0603 10:37:16.411184 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0603 10:37:16.411220 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I0603 10:37:16.411304 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"front-proxy-ca\" [] validServingFor=[front-proxy-ca] issuer=\"<self>\" (2022-06-03 08:03:19 +0000 UTC to 2032-05-31 08:03:19 +0000 UTC (now=2022-06-03 10:37:16.411290744 +0000 UTC))"
I0603 10:37:16.411389 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key" certDetail="\"localhost@1654252635\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1654252635\" (2022-06-03 09:37:15 +0000 UTC to 2023-06-03 09:37:15 +0000 UTC (now=2022-06-03 10:37:16.411380334 +0000 UTC))"
I0603 10:37:16.411726 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1654252636\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1654252635\" (2022-06-03 09:37:15 +0000 UTC to 2023-06-03 09:37:15 +0000 UTC (now=2022-06-03 10:37:16.4117039 +0000 UTC))"
I0603 10:37:16.412046 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubernetes\" [] validServingFor=[kubernetes] issuer=\"<self>\" (2022-06-03 08:03:19 +0000 UTC to 2032-05-31 08:03:19 +0000 UTC (now=2022-06-03 10:37:16.411776692 +0000 UTC))"
I0603 10:37:16.412078 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"front-proxy-ca\" [] validServingFor=[front-proxy-ca] issuer=\"<self>\" (2022-06-03 08:03:19 +0000 UTC to 2032-05-31 08:03:19 +0000 UTC (now=2022-06-03 10:37:16.41206886 +0000 UTC))"
I0603 10:37:16.412144 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key" certDetail="\"localhost@1654252635\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1654252635\" (2022-06-03 09:37:15 +0000 UTC to 2023-06-03 09:37:15 +0000 UTC (now=2022-06-03 10:37:16.412137253 +0000 UTC))"
I0603 10:37:16.412413 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1654252636\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1654252635\" (2022-06-03 09:37:15 +0000 UTC to 2023-06-03 09:37:15 +0000 UTC (now=2022-06-03 10:37:16.412402424 +0000 UTC))"
I0603 10:37:18.353326 1 decode.go:189] "Failed getting complete container metric" containerName="kubernetes-dashboard" containerMetric={StartTime:2022-06-03 08:10:28 +0000 UTC Timestamp:2022-06-03 10:37:18.35 +0000 UTC CumulativeCpuUsed:2892890089 MemoryUsage:0}
I0603 10:37:18.353362 1 decode.go:97] "Failed getting complete Pod metric" pod="kubernetes-dashboard/kubernetes-dashboard-5676d8b865-9m8qs"
I0603 10:37:18.353372 1 decode.go:189] "Failed getting complete container metric" containerName="metrics-server" containerMetric={StartTime:2022-06-03 10:37:15 +0000 UTC Timestamp:2022-06-03 10:37:18.351 +0000 UTC CumulativeCpuUsed:587419181 MemoryUsage:0}
I0603 10:37:18.353379 1 decode.go:97] "Failed getting complete Pod metric" pod="kube-system/metrics-server-f54c8cf58-jj7xw"
I0603 10:37:18.353385 1 decode.go:189] "Failed getting complete container metric" containerName="dashboard-metrics-scraper" containerMetric={StartTime:2022-06-03 08:10:15 +0000 UTC Timestamp:2022-06-03 10:37:17.339 +0000 UTC CumulativeCpuUsed:958244446 MemoryUsage:0}
I0603 10:37:18.353396 1 decode.go:97] "Failed getting complete Pod metric" pod="kubernetes-dashboard/dashboard-metrics-scraper-8c47d4b5d-7trhp"
```
Installation method: I downloaded the manifest from https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml and modified it to add `--kubelet-insecure-tls` and `hostNetwork: true` (the relevant excerpt is sketched below).
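A minimal sketch of the modified Deployment, assuming the stock v0.6.1 components.yaml layout (only the relevant fields shown):

```yaml
# Excerpt from the modified components.yaml (metrics-server Deployment).
spec:
  template:
    spec:
      hostNetwork: true                 # added
      containers:
        - name: metrics-server
          args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
            - --metric-resolution=15s
            - --kubelet-insecure-tls    # added
```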
I have already added `--enable-aggregator-routing=true` to the kube-apiserver (see the manifest excerpt below).
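On a kubeadm-managed control plane that flag goes into the static pod manifest; a sketch, assuming the usual path /etc/kubernetes/manifests/kube-apiserver.yaml:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --enable-aggregator-routing=true   # added for metrics-server
        # ...existing flags unchanged...
```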
Kubernetes: v1.24.1
OS: Ubuntu 20.04
Container runtime: Docker 20.10.16 with Mirantis cri-dockerd v0.2.1
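Since `kubectl top nodes` works, the aggregated metrics API itself is registered and reachable; these standard checks (a sketch) show node metrics returning items while the pod metrics list comes back empty:

```shell
# The APIService should report Available=True.
kubectl get apiservice v1beta1.metrics.k8s.io

# Query the aggregated API directly: nodes return items,
# pods come back as an empty list (hence "No resources found").
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
```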
Output from `kubectl get --raw /api/v1/nodes/k8s-master/proxy/metrics/resource`. Note that container_memory_working_set_bytes is 0 for every container, which matches the MemoryUsage:0 values in the failed container metrics above:
```
# HELP container_cpu_usage_seconds_total [ALPHA] Cumulative cpu time consumed by the container in core-seconds
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{container="coredns",namespace="kube-system",pod="coredns-6d4b75cb6d-b9xxq"} 8.194643127 1654252257283
container_cpu_usage_seconds_total{container="coredns",namespace="kube-system",pod="coredns-6d4b75cb6d-gmpwx"} 8.344803163 1654252257294
container_cpu_usage_seconds_total{container="etcd",namespace="kube-system",pod="etcd-k8s-master"} 111.35739673 1654252257288
container_cpu_usage_seconds_total{container="kube-apiserver",namespace="kube-system",pod="kube-apiserver-k8s-master"} 282.483272116 1654252257279
container_cpu_usage_seconds_total{container="kube-controller-manager",namespace="kube-system",pod="kube-controller-manager-k8s-master"} 100.571676766 1654252257291
container_cpu_usage_seconds_total{container="kube-proxy",namespace="kube-system",pod="kube-proxy-5h4nt"} 2.282856193 1654252257293
container_cpu_usage_seconds_total{container="kube-scheduler",namespace="kube-system",pod="kube-scheduler-k8s-master"} 15.673070345 1654252257281
container_cpu_usage_seconds_total{container="tigera-operator",namespace="tigera-operator",pod="tigera-operator-5fb55776df-kd9hl"} 13.443719557 1654252257284
# HELP container_memory_working_set_bytes [ALPHA] Current working set of the container in bytes
# TYPE container_memory_working_set_bytes gauge
container_memory_working_set_bytes{container="coredns",namespace="kube-system",pod="coredns-6d4b75cb6d-b9xxq"} 0 1654252257283
container_memory_working_set_bytes{container="coredns",namespace="kube-system",pod="coredns-6d4b75cb6d-gmpwx"} 0 1654252257294
container_memory_working_set_bytes{container="etcd",namespace="kube-system",pod="etcd-k8s-master"} 0 1654252257288
container_memory_working_set_bytes{container="kube-apiserver",namespace="kube-system",pod="kube-apiserver-k8s-master"} 0 1654252257279
container_memory_working_set_bytes{container="kube-controller-manager",namespace="kube-system",pod="kube-controller-manager-k8s-master"} 0 1654252257291
container_memory_working_set_bytes{container="kube-proxy",namespace="kube-system",pod="kube-proxy-5h4nt"} 0 1654252257293
container_memory_working_set_bytes{container="kube-scheduler",namespace="kube-system",pod="kube-scheduler-k8s-master"} 0 1654252257281
container_memory_working_set_bytes{container="tigera-operator",namespace="tigera-operator",pod="tigera-operator-5fb55776df-kd9hl"} 0 1654252257284
# HELP container_start_time_seconds [ALPHA] Start time of the container since unix epoch in seconds
# TYPE container_start_time_seconds gauge
container_start_time_seconds{container="coredns",namespace="kube-system",pod="coredns-6d4b75cb6d-b9xxq"} 1.654243427e+09 1654243427000
container_start_time_seconds{container="coredns",namespace="kube-system",pod="coredns-6d4b75cb6d-gmpwx"} 1.654243426e+09 1654243426000
container_start_time_seconds{container="etcd",namespace="kube-system",pod="etcd-k8s-master"} 1.654243406e+09 1654243406000
container_start_time_seconds{container="kube-apiserver",namespace="kube-system",pod="kube-apiserver-k8s-master"} 1.654244775e+09 1654244775000
container_start_time_seconds{container="kube-controller-manager",namespace="kube-system",pod="kube-controller-manager-k8s-master"} 1.654244744e+09 1654244744000
container_start_time_seconds{container="kube-proxy",namespace="kube-system",pod="kube-proxy-5h4nt"} 1.654243426e+09 1654243426000
container_start_time_seconds{container="kube-scheduler",namespace="kube-system",pod="kube-scheduler-k8s-master"} 1.654244744e+09 1654244744000
container_start_time_seconds{container="tigera-operator",namespace="tigera-operator",pod="tigera-operator-5fb55776df-kd9hl"} 1.654244786e+09 1654244786000
# HELP node_cpu_usage_seconds_total [ALPHA] Cumulative cpu time consumed by the node in core-seconds
# TYPE node_cpu_usage_seconds_total counter
node_cpu_usage_seconds_total 1207.791651824 1654252255672
# HELP node_memory_working_set_bytes [ALPHA] Current working set of the node in bytes
# TYPE node_memory_working_set_bytes gauge
node_memory_working_set_bytes 1.47912704e+09 1654252255672
# HELP pod_cpu_usage_seconds_total [ALPHA] Cumulative cpu time consumed by the pod in core-seconds
# TYPE pod_cpu_usage_seconds_total counter
pod_cpu_usage_seconds_total{namespace="kube-system",pod="coredns-6d4b75cb6d-b9xxq"} 8.205269333 1654252250472
pod_cpu_usage_seconds_total{namespace="kube-system",pod="coredns-6d4b75cb6d-gmpwx"} 8.354866424 1654252243508
pod_cpu_usage_seconds_total{namespace="kube-system",pod="etcd-k8s-master"} 111.343763613 1654252254633
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kube-apiserver-k8s-master"} 282.501425633 1654252257125
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kube-controller-manager-k8s-master"} 118.740043889 1654252245088
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kube-proxy-5h4nt"} 2.291610019 1654252248901
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kube-scheduler-k8s-master"} 19.463919807 1654252254169
pod_cpu_usage_seconds_total{namespace="tigera-operator",pod="tigera-operator-5fb55776df-kd9hl"} 15.76809826 1654252249512
# HELP pod_memory_working_set_bytes [ALPHA] Current working set of the pod in bytes
# TYPE pod_memory_working_set_bytes gauge
pod_memory_working_set_bytes{namespace="kube-system",pod="coredns-6d4b75cb6d-b9xxq"} 1.3320192e+07 1654252250472
pod_memory_working_set_bytes{namespace="kube-system",pod="coredns-6d4b75cb6d-gmpwx"} 1.3705216e+07 1654252243508
pod_memory_working_set_bytes{namespace="kube-system",pod="etcd-k8s-master"} 4.6534656e+07 1654252254633
pod_memory_working_set_bytes{namespace="kube-system",pod="kube-apiserver-k8s-master"} 4.14420992e+08 1654252257125
pod_memory_working_set_bytes{namespace="kube-system",pod="kube-controller-manager-k8s-master"} 5.7974784e+07 1654252245088
pod_memory_working_set_bytes{namespace="kube-system",pod="kube-proxy-5h4nt"} 1.9468288e+07 1654252248901
pod_memory_working_set_bytes{namespace="kube-system",pod="kube-scheduler-k8s-master"} 2.301952e+07 1654252254169
pod_memory_working_set_bytes{namespace="tigera-operator",pod="tigera-operator-5fb55776df-kd9hl"} 2.6734592e+07 1654252249512
# HELP scrape_error [ALPHA] 1 if there was an error while getting container metrics, 0 otherwise
# TYPE scrape_error gauge
scrape_error 0
```
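To see whether the worker's kubelet reports the same zeros, the same endpoint can be filtered per node (a sketch; metrics-server appears to discard any container whose memory working set is 0, which would explain the empty pod list):

```shell
# Compare the working-set gauges reported by each node's kubelet.
kubectl get --raw /api/v1/nodes/k8s-master/proxy/metrics/resource \
  | grep container_memory_working_set_bytes
kubectl get --raw /api/v1/nodes/k8s-worker01/proxy/metrics/resource \
  | grep container_memory_working_set_bytes
```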
Thanks.
/kind support
A later comment from another user: I opened a ticket with Docker (I'm using the Docker Desktop Kubernetes setup). They told me they don't support Kubernetes itself; they just deploy it with Docker Desktop if you enable it. They said they should be releasing something soon that fixes this, since the issue stems from an update to dockerd, but they did not provide a timeline.