prometheus-kubernetes: kubelet Kubernetes node labels are missing
Hi
I'm encountering the same issue as described in https://github.com/prometheus/prometheus/issues/3294 when deploying Prometheus.
Earlier (before the Prometheus Operator implementation), metrics like `container_memory_working_set_bytes{id='/'}` provided all the node labels, but unfortunately now most of the useful labels are missing.
About this issue
- Original URL
- State: open
- Created 6 years ago
- Reactions: 2
- Comments: 22 (4 by maintainers)
@camilb I'm facing the same issue with the node-exporter.

Hi guys, there is probably a cleaner way to do this (on the ServiceMonitor object). These values are for the kube-prometheus-stack Helm chart. Check service discovery on Prometheus for more labels; most of the useful ones are available.
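As a sketch of what such values might look like in the kube-prometheus-stack Helm values file (the target label name `node` is an illustrative assumption; `__meta_kubernetes_endpoint_node_name` is exposed by the endpoints role):

```yaml
# values.yaml for the kube-prometheus-stack Helm chart (hypothetical sketch)
kubelet:
  serviceMonitor:
    relabelings:
      # Copy the name of the node hosting the endpoint onto the scraped series.
      - sourceLabels: [__meta_kubernetes_endpoint_node_name]
        targetLabel: node
```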
I managed to sort out this issue. There are several things worth mentioning.

The Prometheus Operator ServiceMonitor object/kind/CRD does not provide access to the kubernetes_sd_config role=node, and therefore none of the `__meta_kubernetes_node_*` labels are available. This can be worked around using `additionalScrapeConfigs` in the PrometheusSpec (https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec).

Trying to use `targetLabels` in the ServiceMonitor will fail, because they translate into `__meta_kubernetes_service_label_<labelname>`, as the role of targets created by the automated processes in the operator is always service.

So I ended up removing the kubelet ServiceMonitor (prometheus-k8s-service-monitor-kubelet.yaml) and replacing it with a custom scraping config. To get this to work you have to do several things.
Create a secret with the content of the custom scraping config YAML:
kubectl create secret generic additional-scrape-configs --from-file=kubelet.yaml=.\kubelet.yaml
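For reference, a minimal sketch of what kubelet.yaml could contain: the node role (which exposes the `__meta_kubernetes_node_*` labels) plus a `labelmap` relabeling to copy the node labels onto the targets. The job name, port, and auth/TLS settings here are assumptions; adjust them to your cluster.

```yaml
# kubelet.yaml — custom scrape config (sketch, not the poster's exact file)
- job_name: kubelet
  scheme: https
  kubernetes_sd_configs:
    - role: node            # exposes the __meta_kubernetes_node_* labels
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true
  relabel_configs:
    # Copy every Kubernetes node label onto the scraped metrics.
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
```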
Modify the Prometheus config to include the custom scraping configs.
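In the Prometheus custom resource, `additionalScrapeConfigs` is a secret key selector pointing at the secret created above; a sketch (the resource name `k8s` is an assumption):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
spec:
  # Reference the secret created in the previous step.
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: kubelet.yaml
```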
All node labels are then propagated to metrics like `container_memory_working_set_bytes`, `machine_memory_bytes`, etc.