metrics-server: Unable to get CPU for container discarding data: missing cpu usage metric
I added the following to the 1.8+ manifest and deployed it to EKS:
imagePullPolicy: Always
command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
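(For context, these lines sit under the metrics-server container in the Deployment from that manifest; a rough sketch of that part of the spec, with the image tag only an assumption:)
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.6   # tag assumed for illustration
  imagePullPolicy: Always
  command:
  - /metrics-server
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP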
Yet, I see the following errors:
I0123 02:04:30.824820 1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
W0123 02:04:31.420786 1 authentication.go:166] cluster doesn't provide client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication to extension api-server won't work.
W0123 02:04:31.448142 1 authentication.go:210] cluster doesn't provide client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication to extension api-server won't work.
[restful] 2019/01/23 02:04:31 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi
[restful] 2019/01/23 02:04:31 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/
I0123 02:04:31.493567 1 serve.go:96] Serving securely on [::]:443
E0123 02:04:34.290945 1 reststorage.go:144] unable to fetch pod metrics for pod default/iconverse-converse-1: no metrics known for pod
E0123 02:04:34.290964 1 reststorage.go:144] unable to fetch pod metrics for pod default/iconverse-converse-0: no metrics known for pod
E0123 02:05:04.301881 1 reststorage.go:144] unable to fetch pod metrics for pod default/iconverse-converse-1: no metrics known for pod
E0123 02:05:04.301998 1 reststorage.go:144] unable to fetch pod metrics for pod default/iconverse-converse-0: no metrics known for pod
E0123 03:11:31.654059 1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-0-1-40.ap-southeast-1.compute.internal: unable to get CPU for container "iconverse-nlp" in pod default/iconverse-nlp-1 on node "10.0.1.40", discarding data: missing cpu usage metric, unable to fully scrape metrics from source kubelet_summary:ip-10-0-0-168.ap-southeast-1.compute.internal: unable to get CPU for container "iconverse-converse" in pod default/iconverse-converse-0 on node "10.0.0.168", discarding data: missing cpu usage metric]
E0123 03:11:35.994020 1 reststorage.go:144] unable to fetch pod metrics for pod default/iconverse-converse-0: no metrics known for pod
E0123 03:12:06.018005 1 reststorage.go:144] unable to fetch pod metrics for pod default/iconverse-converse-0: no metrics known for pod
E0123 04:31:31.563306 1 manager.go:102] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-10-0-2-172.ap-southeast-1.compute.internal: unable to get CPU for container "iconverse-nlp" in pod default/iconverse-nlp-0 on node "10.0.2.172", discarding data: missing cpu usage metric
$ k --kubeconfig=$KUBECONFIG get hpa
NAME                 REFERENCE                        TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
iconverse-converse   StatefulSet/iconverse-converse   <unknown>/50%   2         5         2          4h
$ k --kubeconfig=$KUBECONFIG describe hpa iconverse-converse
Error from server (NotFound): the server could not find the requested resource
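(The <unknown> target suggests the HPA is getting no CPU data from the metrics API at all; a quick way to check whether that API is serving anything, assuming the standard APIService name registered by metrics-server, is something like:)
$ kubectl --kubeconfig=$KUBECONFIG get apiservice v1beta1.metrics.k8s.io   # should report Available=True
$ kubectl --kubeconfig=$KUBECONFIG top nodes
$ kubectl --kubeconfig=$KUBECONFIG top pods -n default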
This is my HPA manifest:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapplication
  namespace: default
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: myapplication
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 50
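(One thing I noticed while writing this up: the metrics list above belongs to the autoscaling/v2beta1 schema rather than autoscaling/v1. A sketch of the same HPA with a matching apiVersion, assuming v2beta1 is enabled on the cluster, though that alone would not explain the missing usage data:)
apiVersion: autoscaling/v2beta1   # the API version that actually defines spec.metrics
kind: HorizontalPodAutoscaler
metadata:
  name: myapplication
  namespace: default
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: myapplication
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 50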
Any insight and advice is appreciated.
Closing per Kubernetes issue triage policy
GitHub is not the right place for support requests. If you’re looking for help, check Stack Overflow and the troubleshooting guide. You can also post your question on the Kubernetes Slack or the Discuss Kubernetes forum. If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
Maybe you didn't provide ca.crt and ca.key to the nodes of your cluster.
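(For what it's worth, one way to check what the kubelet itself is reporting, going through the API server proxy and using one of the node names from the error log above, is roughly:)
$ NODE=ip-10-0-1-40.ap-southeast-1.compute.internal   # node taken from the error log above
$ kubectl get --raw "/api/v1/nodes/$NODE/proxy/stats/summary"
# Look at the "cpu" block for the affected pods/containers in the JSON that comes
# back; if it is empty or missing, metrics-server has nothing to scrape and the
# problem is with the node's stats rather than with certificates.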
Same here in K8s 1.13.5
There are 200 forks; does anyone know which of them (if any) fixes metrics-server?
I had the same problem:
kubectl version: v1.15.0
metrics-server-amd64:v0.3.6
Thanks @zouyee. I changed from using "args" to "command" and no longer see the 401 Unauthorized. However, kubectl logs -f metrics-server... -n kube-system still shows "no metrics known for pod".
Same issue in an EKS cluster with version 1.13.
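(For reference, the args-to-command change mentioned above amounts to roughly this in the metrics-server container spec, with the same flags as in the original post:)
# Before: flags passed as args to the image's default entrypoint
#   args:
#   - --kubelet-insecure-tls
#   - --kubelet-preferred-address-types=InternalIP
# After: the entrypoint plus flags spelled out as an explicit command
command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP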