prometheus-operator: kube-prometheus kubelet exporter target forbidden
What did you do?
helm install coreos/prometheus-operator --name prometheus-operator --namespace=monitoring
helm install coreos/kube-prometheus --name kube-prometheus --namespace=monitoring
What did you expect to see? All Prometheus exporter targets up on the Prometheus dashboard.
What did you see instead? Under which circumstances?
The kubelet target on the Prometheus dashboard reports: server returned HTTP status 403 Forbidden
Environment: RedHat
- Kubernetes version information:
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1+coreos.0", GitCommit:"59359d9fdce74738ac9a672d2f31e9a346c5cece", GitTreeState:"clean", BuildDate:"2017-10-12T21:53:13Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1+coreos.0", GitCommit:"59359d9fdce74738ac9a672d2f31e9a346c5cece", GitTreeState:"clean", BuildDate:"2017-10-12T21:53:13Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes cluster kind: kubespray / kubeadm
- Manifests: kubelet ServiceMonitor
kubectl get ServiceMonitor -n monitoring kube-prometheus-exporter-kubelets -oyaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  clusterName: ""
  creationTimestamp: 2018-01-08T14:11:35Z
  deletionGracePeriodSeconds: null
  deletionTimestamp: null
  generation: 0
  labels:
    chart: exporter-kubelets-0.1.2
    component: kubelets
    heritage: Tiller
    prometheus: kube-prometheus
    release: kube-prometheus
  name: kube-prometheus-exporter-kubelets
  namespace: monitoring
  resourceVersion: "66465"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/servicemonitors/kube-prometheus-exporter-kubelets
  uid: d5b50f43-f47d-11e7-9293-005056a4032b
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 15s
    port: https-metrics
    scheme: https
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecureSkipVerify: true
  - honorLabels: true
    interval: 30s
    port: cadvisor
  jobLabel: kube-prometheus-exporter-kubelets
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      k8s-app: kubelet
- Prometheus Operator Logs: Kubelet
Jan 08 13:41:27 tapps773 kubelet[54973]: goroutine 27204 [running]:
Jan 08 13:41:27 tapps773 kubelet[54973]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc4205ea620, 0x193)
Jan 08 13:41:27 tapps773 kubelet[54973]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd
Jan 08 13:41:27 tapps773 kubelet[54973]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc4205ea620, 0x193)
Jan 08 13:41:27 tapps773 kubelet[54973]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35
Jan 08 13:41:27 tapps773 kubelet[54973]: k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Response).WriteHeader(0xc420cc80c0, 0x193)
Jan 08 13:41:27 tapps773 kubelet[54973]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/response.go:201 +0x41
Jan 08 13:41:27 tapps773 kubelet[54973]: k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Response).WriteErrorString(0xc420cc80c0, 0x193, 0xc4228c4050, 0x50, 0x4, 0xc422
Jan 08 13:41:27 tapps773 kubelet[54973]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/response.go:181 +0x46
Jan 08 13:41:27 tapps773 kubelet[54973]: k8s.io/kubernetes/pkg/kubelet/server.(*Server).InstallAuthFilter.func1(0xc422ed30e0, 0xc420cc80c0, 0xc422ed3020)
Jan 08 13:41:27 tapps773 kubelet[54973]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/server.go:246 +0x4fa
Jan 08 13:41:27 tapps773 kubelet[54973]: k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*FilterChain).ProcessFilter(0xc422ed3020, 0xc422ed30e0, 0xc420cc80c0)
Jan 08 13:41:27 tapps773 kubelet[54973]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/filter.go:19 +0x68
Jan 08 13:41:27 tapps773 kubelet[54973]: k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).HandleWithFilter.func1(0x9a9d760, 0xc4205ea620, 0xc42304c500)
Jan 08 13:41:27 tapps773 kubelet[54973]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:313 +0x355
Jan 08 13:41:27 tapps773 kubelet[54973]: net/http.HandlerFunc.ServeHTTP(0xc420b59c60, 0x9a9d760, 0xc4205ea620, 0xc42304c500)
Jan 08 13:41:27 tapps773 kubelet[54973]: /usr/local/go/src/net/http/server.go:1942 +0x44
Jan 08 13:41:27 tapps773 kubelet[54973]: net/http.(*ServeMux).ServeHTTP(0xc4207df590, 0x9a9d760, 0xc4205ea620, 0xc42304c500)
Jan 08 13:41:27 tapps773 kubelet[54973]: /usr/local/go/src/net/http/server.go:2238 +0x130
Jan 08 13:41:27 tapps773 kubelet[54973]: k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).ServeHTTP(0xc4211fb830, 0x9a9d760, 0xc4205ea620, 0xc42304c500)
Jan 08 13:41:27 tapps773 kubelet[54973]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:292 +0x4d
Jan 08 13:41:27 tapps773 kubelet[54973]: k8s.io/kubernetes/pkg/kubelet/server.(*Server).ServeHTTP(0xc420aaa9b0, 0x9a9d760, 0xc4205ea620, 0xc42304c500)
Jan 08 13:41:27 tapps773 kubelet[54973]: /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/server.go:778 +0x110
Jan 08 13:41:27 tapps773 kubelet[54973]: net/http.serverHandler.ServeHTTP(0xc4205a2b00, 0x9a9e9e0, 0xc420791960, 0xc42304c500)
Jan 08 13:41:27 tapps773 kubelet[54973]: /usr/local/go/src/net/http/server.go:2568 +0x92
Jan 08 13:41:27 tapps773 kubelet[54973]: net/http.(*conn).serve(0xc421ab2140, 0x9aa2160, 0xc4213ec240)
Jan 08 13:41:27 tapps773 kubelet[54973]: /usr/local/go/src/net/http/server.go:1825 +0x612
Jan 08 13:41:27 tapps773 kubelet[54973]: created by net/http.(*Server).Serve
Jan 08 13:41:27 tapps773 kubelet[54973]: /usr/local/go/src/net/http/server.go:2668 +0x2ce
Jan 08 13:41:27 tapps773 kubelet[54973]: logging error output: "Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=metrics)"
Jan 08 13:41:27 tapps773 kubelet[54973]: [[Prometheus/2.0.0] 10.94.130.14:33920]
Jan 08 13:41:29 tapps773 kubelet[54973]: E0108 13:41:29.322199   54973 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 4; ignoring extra CPUs
Jan 08 13:41:42 tapps773 kubelet[54973]: I0108 13:41:42.027127   54973 server.go:245] Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=metrics)
Jan 08 13:41:42 tapps773 kubelet[54973]: I0108 13:41:42.027206   54973 server.go:779] GET /metrics: (123.477µs) 403
Note: the API server is able to communicate with the same config. It also works if I change the ServiceMonitor to use http instead of https.
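For reference, switching that ServiceMonitor to plain http means pointing the endpoints at the kubelet's read-only port; a minimal sketch of the corresponding endpoints section (the http-metrics port name is assumed from the operator-generated kubelet service, and the kubelet's read-only port has to be enabled):

  endpoints:
  - port: http-metrics   # plain-HTTP read-only kubelet port; no bearer token or TLS config needed
    scheme: http
    interval: 15s
  - port: cadvisor
    honorLabels: true
    interval: 30s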
About this issue
- State: closed
- Created 6 years ago
- Comments: 34 (18 by maintainers)
Hi everyone. I have another workaround. Some background: I'm on EKS and I didn't want to have to change the default kubelet arguments from what they specify in the AMI.
I came across this code from the core Prometheus library that scrapes metrics through the main k8s API proxy, rather than going directly to the kubelet port like all the examples do.
I tried that in the additionalScrapeConfigs and it worked! However, I didn't want to stop there; I wanted it inside a ServiceMonitor object. After a lot of hacking around I finally found a standard ServiceMonitor definition that uses relabelling to go through the main k8s proxy. Please find the code below. Note that you'll need prometheus-operator version > 0.24, which is where the relabelings parameter was introduced. This works by changing the address and metrics-path metadata just before ingestion, using the node name information coming from the endpoint. Hopefully there will be a better solution soon (via kube-rbac-proxy perhaps?), but I find this solution cleaner than having to alter the kubelet parameters.
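The comment's original manifest is not reproduced here; a minimal sketch of that relabeling approach, assuming the operator-managed kubelet service (k8s-app: kubelet in kube-system) and the endpoint's target-name meta label carrying the node name, would look roughly like this:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kubelet-via-apiserver-proxy   # hypothetical name
  namespace: monitoring
  labels:
    prometheus: kube-prometheus
spec:
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      k8s-app: kubelet
  endpoints:
  - port: https-metrics
    scheme: https
    interval: 30s
    honorLabels: true
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    relabelings:
    # Point every scrape at the API server instead of the kubelet itself.
    - targetLabel: __address__
      replacement: kubernetes.default.svc:443
    # Rewrite the path so the API server proxies the request to /metrics on the matching node.
    - sourceLabels: [__meta_kubernetes_endpoint_address_target_name]
      regex: (.+)
      targetLabel: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics

With this approach the Prometheus service account needs get access on nodes/proxy, and cAdvisor metrics can be reached the same way via the nodes/<node>/proxy/metrics/cadvisor path.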
Thanks!
Yes, I can confirm it works. --authentication-token-webhook did the trick for me; I also added the required permissions to the role (see the sketch below).
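The exact rule isn't quoted in the comment; with webhook authorization on the kubelet, the addition to the Prometheus ClusterRole is typically something like the following (the role name is assumed; use whatever ClusterRole your Prometheus service account is bound to):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-prometheus   # assumed name
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/metrics
  - nodes/proxy
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]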
It’s slightly more tricky with the kubelet. I actually have a branch ready to do this with the kubelet. See here: https://github.com/brancz/prometheus-operator/commit/0b96a8d86716e47b2c1028e2930c8aeca4ae8b3a
What you need to make sure of, though, is that these two flags are enabled on your kubelets:
- --authentication-token-webhook
- --authorization-mode=Webhook
I will open a PR and merge that branch as soon as there is a way to do that with minikube.
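For illustration, on a systemd-managed kubelet those two flags can usually be added via a drop-in like the one below (the file name and the KUBELET_EXTRA_ARGS variable are assumptions that depend on how your kubelet unit is set up):

$ cat /etc/systemd/system/kubelet.service.d/20-webhook-auth.conf   # hypothetical drop-in
[Service]
Environment="KUBELET_EXTRA_ARGS=--authentication-token-webhook --authorization-mode=Webhook"
$ systemctl daemon-reload && systemctl restart kubelet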
Environment
Server Version: version.Info{Major:"1", Minor:"10"
I used the manifest files with a different namespace + node ports. I too get a 403 on the kubelets, but my apiserver + operator targets are up.
I checked the service role and it looks like it's what was described above.
As suggested in this thread, KUBELET_EXTRA_ARGS fixes it.
$ cat /etc/systemd/system/kubelet.service.d/01-kubeadm.conf
# Ansible managed
[Service]
Environment="KUBELET_EXTRA_ARGS=--authentication-token-webhook"
Update: I don't think it's a permissions issue, because I was able to get the metrics using a different job.
@kedare you need to use KUBELET_EXTRA_ARGS instead of KUBELET_CUSTOM_ARGS (at least for a kubeadm installation).