metrics-server: Metrics Server API not getting registered
I have deployed the metrics API in Kubernetes following https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B. Metrics Server is running fine, but I am not able to get metrics from it. I am using Kubernetes 1.9.
```
[demo@dev-demo metrics-server]$ kubectl get --raw "/apis/metrics.k8s.io"
Error from server (NotFound): the server could not find the requested resource
[demo@dev-demo metrics-server]$
```
```
[demo@dev-demo metrics-server]$ k get hpa -n default
NAME         REFERENCE               TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown> / 50%   1         10        1          19h

[demo@dev-demo metrics-server]$ k describe hpa -n default
Name:               php-apache
Namespace:          default
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Wed, 21 Mar 2018 04:57:32 -0400
Reference:          Deployment/php-apache
Metrics:            ( current / target )
  resource cpu on pods (as a percentage of request):  <unknown> / 50%
Min replicas:       1
Max replicas:       10
Conditions:
  Type           Status  Reason                   Message
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target’s current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
Events:
  Type     Reason                        Age                 From                       Message
  Warning  FailedComputeMetricsReplicas  43m (x13 over 49m)  horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       4m (x91 over 49m)   horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
```
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 3
- Comments: 17 (2 by maintainers)
@groob Has this problem been solved?
FYI, I fixed this problem on Amazon EKS by updating the Kubernetes nodes' security groups to allow ingress/incoming HTTPS connections from the EKS masters' security group.
SOLUTION (if you are behind a corporate proxy)

1. `kubectl -n=kube-system get services`
2. `vi /etc/kubernetes/manifests/kube-apiserver.yaml` and set the `no_proxy` environment variable
3. `systemctl daemon-reload && systemctl restart kubelet`
4. `kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"`

Same problem when deploying on Amazon EKS… and there I don't think I can edit apiserver.yaml.
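For anyone wondering what the `no_proxy` change in step 2 of the proxy solution above might look like: a minimal sketch for a kubeadm-managed control plane, where kube-apiserver runs as a static pod. The exact addresses to exclude are cluster-specific; the values below are assumptions, not from the original comment.

```yaml
# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm static pod).
# Assumption: kube-apiserver inherits HTTP(S)_PROXY from its environment, so
# in-cluster addresses must be excluded via NO_PROXY, or requests to the
# aggregated metrics API get routed through the corporate proxy.
spec:
  containers:
  - name: kube-apiserver
    env:
    - name: NO_PROXY
      # Placeholder values: service CIDR, pod CIDR, and cluster-local DNS suffixes.
      # Replace with your cluster's actual ranges.
      value: "10.96.0.0/12,10.244.0.0/16,.svc,.svc.cluster.local"
```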
If you have the same problem on a GKE cluster, you need to allow the prometheus-adapter port (not the Kubernetes service port) in the GCP firewall rules. By default the kube master is only allowed to make requests to ports 10250 and 443. Port 6443 (the default prom-adapter port) needs to be added to that rule (or you can add a separate rule with this port and filter by the master IP).
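One way to add such a rule with the gcloud CLI is sketched below; the rule name, network, master CIDR, and node tag are placeholders to replace with your cluster's values.

```sh
# Placeholders: adjust the network, source range (the GKE master CIDR), and node tag.
gcloud compute firewall-rules create allow-master-to-prom-adapter \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:6443 \
  --source-ranges=172.16.0.0/28 \
  --target-tags=gke-my-cluster-node
```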
As a follow-up to @zknill's comment: for EKS to work, you must also allow 443 egress from the control plane to the nodes.
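For reference, a sketch of the corresponding rules with the AWS CLI; the security group IDs are placeholders standing in for your cluster's control-plane and node groups.

```sh
# Hypothetical IDs: sg-0controlplane... is the EKS cluster/master security group,
# sg-0nodes... is the worker node security group.

# Allow the control plane to reach the nodes on 443 (egress from the masters).
aws ec2 authorize-security-group-egress \
  --group-id sg-0controlplane000000 \
  --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,UserIdGroupPairs=[{GroupId=sg-0nodes00000000000}]'

# Allow the nodes to accept 443 from the control plane (ingress on the nodes).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0nodes00000000000 \
  --protocol tcp --port 443 \
  --source-group sg-0controlplane000000
```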
If you do `kubectl get apiservice v1beta1.metrics.k8s.io -o yaml`, does the status look OK?

@antcs In step 2, what do you mean by the `no_proxy` variable? I got the same problem here.
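For comparison with the apiservice check mentioned above, a healthy status roughly looks like the sketch below (illustrative only; exact fields and messages vary).

```yaml
# Trimmed output of `kubectl get apiservice v1beta1.metrics.k8s.io -o yaml`
# on a working cluster.
status:
  conditions:
  - type: Available
    status: "True"
    reason: Passed
    message: all checks passed
# An unhealthy aggregation layer typically shows status: "False" with a reason
# such as FailedDiscoveryCheck or MissingEndpoints instead.
```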
@antcs please elaborate: can you give a YAML snippet?
Same issue as @sujithvs74
Initially `kubectl get --raw /apis/metrics.k8s.io` works, but after a short time the resource becomes unavailable. I'm running 1.9.6 on GKE.
Whatever your cluster networking is, it needs to permit your API server to be able to talk to pods. Please ensure that this is the case, and ensure (as mentioned above) that your master isn’t being routed through a proxy.
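A quick way to check the proxy point on a kubeadm-style cluster is sketched below (an assumption on my part; managed control planes such as GKE or EKS don't expose the API server pod this way).

```sh
# Look for HTTP_PROXY / HTTPS_PROXY / NO_PROXY in the API server's environment.
# Assumes a kubeadm-managed control plane where kube-apiserver runs as a static pod.
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml \
  | grep -iE 'http_proxy|https_proxy|no_proxy'
```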
Thanks @zknill and @craftyc0der for this. Updating security groups on worker and control plane (both ingress and egress) worked 😃
I am facing the same issue and getting the following error:

`Error from server (ServiceUnavailable): the server is currently unable to handle the request`

Tried the above suggestions but still stuck. Any more suggestions, @zknill?
Thanks buddy… You had the perfect reply for this problem on Amazon EKS. If, after a basic installation, you still get `reason: FailedDiscoveryCheck` with `status: "False"` and `type: Available`, it's simply a security group inbound rule issue. Giving the cluster security group proper HTTPS access to the node security group solves the problem. Thanks…