kubernetes: HPA problem on k8s 1.9.0
/kind bug
What happened:
- First I set up the k8s app like this:
kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m --expose --port=80
- Then I created the HPA, which didn't work:
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
Error info:
# kubectl describe hpa php-apache
Name: php-apache
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 27 Dec 2017 14:36:38 +0800
Reference: Deployment/php-apache
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 3m (x2231 over 18h) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
# kubectl get hpa php-apache -o yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
annotations:
autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2017-12-27T06:37:08Z","reason":"SucceededGetScale","message":"the
HPA controller was able to get the target''s current scale"},{"type":"ScalingActive","status":"False","lastTransitionTime":"2017-12-27T06:37:08Z","reason":"FailedGetResourceMetric","message":"the
HPA was unable to compute the replica count: unable to get metrics for resource
cpu: unable to fetch metrics from API: the server could not find the requested
resource (get pods.metrics.k8s.io)"}]'
creationTimestamp: 2017-12-27T06:36:38Z
name: php-apache
namespace: default
resourceVersion: "20568"
selfLink: /apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/php-apache
uid: 4ad25969-ead0-11e7-8a38-525400cecc16
spec:
maxReplicas: 10
minReplicas: 1
scaleTargetRef:
apiVersion: extensions/v1beta1
kind: Deployment
name: php-apache
targetCPUUtilizationPercentage: 50
status:
currentReplicas: 1
desiredReplicas: 0
# journalctl -u kube-controller-manager.service -f
Dec 28 09:17:19 k8s401 kube-controller-manager[32278]: E1228 09:17:19.914013 32278 horizontal.go:189] failed to compute desired number of replicas based on listed metrics for Deployment/default/php-apache: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
Dec 28 09:17:19 k8s401 kube-controller-manager[32278]: I1228 09:17:19.914015 32278 event.go:218] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"default", Name:"php-apache", UID:"4ad25969-ead0-11e7-8a38-525400cecc16", APIVersion:"autoscaling/v2beta1", ResourceVersion:"20568", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
Dec 28 09:17:19 k8s401 kube-controller-manager[32278]: I1228 09:17:19.914094 32278 event.go:218] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"default", Name:"php-apache", UID:"4ad25969-ead0-11e7-8a38-525400cecc16", APIVersion:"autoscaling/v2beta1", ResourceVersion:"20568", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
The strange thing was: when I created the HPA it used apiVersion autoscaling/v1, but in kube-controller-manager's error log it used autoscaling/v2beta1.
Is this the default behavior or a misconfigured cluster?
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?: Heapster was running normally.
# kubectl logs -n kube-system heapster-7b76dbf757-xzxhc
I1227 01:38:17.584695 1 heapster.go:72] /heapster --source=kubernetes:https://kubernetes.default --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
I1227 01:38:17.584756 1 heapster.go:73] Heapster version v1.3.0
I1227 01:38:17.585111 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default" and version v1
I1227 01:38:17.585169 1 configs.go:62] Using kubelet port 10255
I1227 01:38:17.625239 1 influxdb.go:252] created influxdb sink with options: host:monitoring-influxdb.kube-system.svc:8086 user:root db:k8s
I1227 01:38:17.625261 1 heapster.go:196] Starting with InfluxDB Sink
I1227 01:38:17.625265 1 heapster.go:196] Starting with Metric Sink
I1227 01:38:17.637983 1 heapster.go:106] Starting heapster on port 8082
I1227 01:39:05.104481 1 influxdb.go:215] Created database "k8s" on influxDB server at "monitoring-influxdb.kube-system.svc:8086"
Environment:
- Kubernetes version (use kubectl version):
# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: bare metal
- OS (e.g. from /etc/os-release):
# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
- Kernel (e.g. uname -a):
Linux k8s401 4.4.0-97-generic #120-Ubuntu SMP Tue Sep 19 17:28:18 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: https://github.com/gjmzj/kubeasz
- Others:
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 7
- Comments: 40 (9 by maintainers)
I found a solution in my case:
kube-controller-manager's parameter --horizontal-pod-autoscaler-use-rest-clients defaults to true in k8s 1.9.0, while in k8s 1.8.x it defaults to false. Change it to false and it works.
Same problem on version 1.10.0.
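Since this cluster runs the controller manager under systemd (per the journalctl output above), applying the flag change might look like the sketch below. The unit file path is an assumption, not taken from this thread; verify it on your own nodes first.

```shell
# Sketch only: the unit path below is an assumption -- confirm it with:
#   systemctl status kube-controller-manager
# 1) Edit the unit file and add this flag to the kube-controller-manager
#    command line:
#      --horizontal-pod-autoscaler-use-rest-clients=false
vi /etc/systemd/system/kube-controller-manager.service
# 2) Reload systemd and restart the service so the flag takes effect:
systemctl daemon-reload
systemctl restart kube-controller-manager
# 3) Confirm the running process actually carries the flag:
ps aux | grep [k]ube-controller-manager
```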
With --horizontal-pod-autoscaler-use-rest-clients=true, HPA uses the new resource metrics API instead of the old way of getting metrics. Setting it to false as you did works, but the correct long-term solution is probably to run metrics-server as part of your cluster setup. This is documented here: https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/
@MaciekPytel
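A minimal sketch of that longer-term fix, running metrics-server, assuming the deploy manifests live where the metrics-server repo kept them at the time (the repo location and deploy directory are assumptions; check the project README for current paths):

```shell
# Sketch: deploy metrics-server from its bundled manifests.
# Repo URL and manifest directory are assumptions for this k8s era.
git clone https://github.com/kubernetes-incubator/metrics-server.git
kubectl apply -f metrics-server/deploy/1.8+/

# Once the metrics-server pod is Running, the aggregated API should respond:
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
```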
Hi, I used the metrics server, but there are still no values for the pods' resource metrics. Here is what happened:
OS:
Kernel:
And I didn't find the parameter --horizontal-pod-autoscaler-use-rest-clients=false in the file /etc/kubernetes/manifests/kube-controller-manager.yaml. Also, I didn't run Heapster.
Could you please give me some suggestions? Thanks!
For information (because I got a similar issue), here is the solution for a Kubernetes cluster created using kops: the setting is added in the cluster config (via kops edit cluster <clustername>) and applied using kops commands (kops update cluster <clustername> --yes + kops rolling-update cluster --name <clustername> --yes). Once done, you can play with autoscaling on CPU / memory / custom metrics (well, as soon as you have installed the required components to get the custom metrics API, RTFM).
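The actual cluster-config snippet did not survive in this thread. Assuming it toggled the same flag discussed above via kops's kubeControllerManager spec section, it plausibly looked something like this hypothetical excerpt (the field name is an assumption; verify against the kops ClusterSpec documentation):

```yaml
# Hypothetical kops cluster spec excerpt (edited via `kops edit cluster`).
# Field name is an assumption -- check the kops ClusterSpec reference.
spec:
  kubeControllerManager:
    horizontalPodAutoscalerUseRestClients: false
```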
I had somewhat the same problem. This solution worked for me: https://stackoverflow.com/q/54106725/2291510
Is there a way to set this property on Docker for Mac?
@amirzaidi2002
If you are using minikube, there is a manifest file (.yaml) for kube-controller-manager at /etc/kubernetes/manifests/kube-controller-manager.yaml. You can add new lines and config info there, e.g. the parameter --horizontal-pod-autoscaler-use-rest-clients=false.
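For illustration, the edit described above might look like this in the static pod manifest. This is an excerpt sketch, not the full file; surrounding fields vary by setup.

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (illustrative excerpt)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-use-rest-clients=false  # flag added here
```

The kubelet watches the static manifests directory, so saving the file should restart the controller-manager pod with the new flag.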
That's strange. How are you deploying your cluster? In 1.10, all the RBAC for HPA should be set up by default, though that probably also depends on how the cluster is deployed. Also, RBAC-related failures in my experience produce more explicit errors.
My guess would be that the problem is either missing RBAC or the certs for the aggregated API server.
Some things I would check to debug:
- kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods should print JSON with metrics for your pods (assuming you have any pods running in the default namespace). If you get it, it means metrics-server is running as expected. If not, check whether you have the extension-apiserver-authentication ConfigMap in the kube-system namespace; missing that would probably mean the problem is with the apiserver certs.
- kubectl get clusterrole system:controller:horizontal-pod-autoscaler -o yaml should return a YAML containing the relevant rules; if it's not there you may need to add it.
If it’s none of the above you can try to look into kube-apiserver and metrics-server logs or open an issue against metrics-server repo (or the repo of your deployment tool).
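The exact ClusterRole snippet the maintainer referred to was lost in this transcript. A sketch of what the HPA controller's access to the resource metrics API looks like is below; treat it as an assumption and compare against the role actually present in your cluster:

```yaml
# Illustrative excerpt of ClusterRole system:controller:horizontal-pod-autoscaler
# (the rule shape is an assumption -- verify with `kubectl get clusterrole ... -o yaml`)
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  verbs:
  - list
```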
@gjmzj how did you set the parameter --horizontal-pod-autoscaler-use-rest-clients=false? Please help. I am having the same problem, running on Kubernetes 1.9 (minikube).
It works well on k8s 1.8.6.