metrics-server: couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
API Server Logs :-
1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E1012 08:23:25.282353 1 controller.go:111] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1012 08:23:25.282377 1 controller.go:119] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1012 08:23:25.396126 1 memcache.go:134] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1012 08:23:25.991550 1 available_controller.go:311] v1beta1.metrics.k8s.io failed with: Get https://10.105.54.184:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1012 08:23:46.469237 1 available_controller.go:311] v1beta1.metrics.k8s.io failed with: Get https://10.105.54.184:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1012 08:23:55.440941 1 memcache.go:134] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1012 08:23:55.789103 1 available_controller.go:311] v1beta1.metrics.k8s.io failed with: Get https://10.105.54.184:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1012 08:24:25.477704 1 memcache.go:134] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1012 08:24:25.705399 1 available_controller.go:311] v1beta1.metrics.k8s.io failed with: Get https://10.105.54.184:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1012 08:24:55.516394 1 memcache.go:134] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1012 08:24:55.719712 1 available_controller.go:311] v1beta1.metrics.k8s.io failed with: Get https://10.105.54.184:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1012 08:25:13.395961 1 available_controller.go:311] v1beta1.metrics.k8s.io failed with: Get https://10.105.54.184:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1012 08:25:25.282682 1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
E1012 08:25:25.282944 1 controller.go:111] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[X-Content-Type-Options:[nosniff] Content-Type:[text/plain; charset=utf-8]]
I1012 08:25:25.282969 1 controller.go:119] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E1012 08:25:25.563266 1 memcache.go:134] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Controller Logs :-
E1012 08:26:57.910695 1 memcache.go:134] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1012 08:27:13.214427 1 resource_quota_controller.go:430] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1012 08:27:17.126343 1 garbagecollector.go:647] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
Metric Server Logs :-
I1012 08:22:11.248135 1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
[restful] 2018/10/12 08:22:12 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi
[restful] 2018/10/12 08:22:12 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/
I1012 08:22:12.537437 1 serve.go:96] Serving securely on [::]:443
Kubernetes Version :- 1.12.1
Metric Server Deployment YAML :-
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
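
For context, the Deployment above is only part of the stock metrics-server install: the apiserver errors refer to the aggregated APIService object, which points the metrics.k8s.io/v1beta1 group at the metrics-server Service (the 10.105.54.184 ClusterIP in the logs). The registration in the upstream 0.3.x manifests looks roughly like this (a sketch of the stock objects, not necessarily the exact files deployed here):

---
# registers metrics.k8s.io/v1beta1 with the aggregation layer and points it at the Service below
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true   # metrics-server serves a self-signed cert by default
  groupPriorityMinimum: 100
  versionPriority: 100
---
# the ClusterIP Service the kube-apiserver proxies aggregated requests to
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: 443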
Any help is appreciated.
About this issue
- State: closed
- Created 6 years ago
- Reactions: 39
- Comments: 30 (2 by maintainers)
As an update to this, all my issues with metrics-server went away after I set
in the stable helm chart
https://github.com/helm/charts/tree/master/stable/metrics-server
Closing per Kubernetes issue triage policy
GitHub is not the right place for support requests. If you’re looking for help, check Stack Overflow and the troubleshooting guide. You can also post your question on the Kubernetes Slack or the Discuss Kubernetes forum. If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
I can confirm https://github.com/kubernetes-incubator/metrics-server/issues/157#issuecomment-484047998 helps
Not using the helm chart, I added hostNetwork: true to the manifest under spec/template/spec, and now it is working. I am also using the flags
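An equivalent way to add that field without editing the file, assuming the deployment name and namespace from the manifest above:

# run the metrics-server pod in the node's network namespace (same effect as editing the manifest)
kubectl -n kube-system patch deployment metrics-server \
  -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'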
Coming back around to this, I am still seeing these errors with metrics-server.
Here is my config:
If I spam kubectl top node I will most times get a response; however, randomly I will get this error:
I'm seeing this in the apiserver logs:
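When it fails intermittently like this, the aggregation layer's view of the metrics API is worth checking directly; the Available condition on the APIService usually carries the same timeout message seen in the apiserver logs (standard kubectl, default object name assumed):

kubectl get apiservice v1beta1.metrics.k8s.io
# the Conditions section shows whether the apiserver currently considers the backend reachable
kubectl describe apiservice v1beta1.metrics.k8s.io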
My solution was this:
I did not have the metrics server installed, nor did I need it. At some point somebody installed it and uninstalled it. But the uninstallation was not complete. We had these lingering resources:
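In a situation like that, the leftover registration can be located and removed with something like the following (a sketch; confirm what actually exists in the cluster before deleting anything):

# find the dangling aggregated API registration and any leftover objects
kubectl get apiservice | grep metrics
kubectl -n kube-system get deployment,service,serviceaccount | grep metrics-server
# remove the registration if nothing is backing it anymore
kubectl delete apiservice v1beta1.metrics.k8s.io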
The comment in values.yaml https://github.com/helm/charts/blob/master/stable/metrics-server/values.yaml mentions that it might be required if you use the Weave network on EKS. We faced a similar problem on EKS using the AWS CNI, and the fix from this issue seems to resolve it. I believe this is more of a band-aid, though, and the root cause is somewhere else.
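For reference, the chart option that comment sits next to is the hostNetwork toggle; enabling it would look roughly like this (a sketch against stable/metrics-server, assuming its hostNetwork.enabled key and a hypothetical values-override.yaml file), applied with something like helm upgrade --install metrics-server stable/metrics-server --namespace kube-system -f values-override.yaml:

# values-override.yaml: run the metrics-server pod on the node's network
hostNetwork:
  enabled: true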
Check that your control plane can reach your data plane on port 443 (I had to modify the security groups for both to allow this, and it worked).
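One quick way to test that path from a control-plane host is to probe the metrics-server Service IP (the 10.105.54.184 address in the logs) directly; a sketch, assuming the Service name and namespace used above. An immediate TLS/401-style response means the network path is fine; a hang until the timeout reproduces the problem:

# resolve the ClusterIP of the metrics-server Service and probe it on 443
SVC_IP=$(kubectl -n kube-system get svc metrics-server -o jsonpath='{.spec.clusterIP}')
curl -k --max-time 5 "https://${SVC_IP}:443/"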
Note that if you are using GKE (Google Kubernetes Engine) and your cluster has been without containers for a long time (multiple days), then GKE decommissions the nodes from the cluster (to save you costs). Without nodes, the control plane processes cannot start. So if that's your case, all is good! Just run an image or deploy a deployment and everything should start working as per usual 😃
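If that is your case, any throwaway workload is enough to make GKE bring a node back, for example (deployment name and image are arbitrary):

kubectl create deployment wakeup --image=nginx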
It works, thanks!
I had this problem. In my case I am using Kops 1.10 with a gossip-based cluster. I added two lines in my deploy/1.8+/metrics-server-deployment.yaml file, and after this, kubectl top ... worked after 5 minutes.

I added hostNetwork: true but my problem is not fixed; the apiserver reports this log:
kube-controller-manager: E1011 13:37:24.015616 33182 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
In the above-mentioned step, I am actually using Calico. The relevant Calico ports are open and pods on other nodes are reachable.
How do I check this? My cluster is hosted on Azure AKS.
Thanks, this is gold!!!
Ah, there's a GKE troubleshooting guide here: https://cloud.google.com/kubernetes-engine/docs/troubleshooting#namespace_stuck_in_terminating_state
I don't think metrics-server was meant to run on the host network. I think it's a problem with a particular overlay network, but that's not my area of expertise.
Metrics Server uses https://github.com/kubernetes/kube-aggregator to register with the apiserver; maybe you could find answers there?
Still, it would be useful to document how metrics-server provides the Metrics API and what requirements it places on the network.
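For what it's worth, once the APIService reports Available, that registration can be exercised directly; kubectl top reads the same aggregated endpoints:

# raw queries against the aggregated Metrics API (what kubectl top consumes)
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods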