kube-state-metrics: pod labels do not exist in the kube_pod_labels metric
What happened:
After updating to version 2.0.0 I lost the pod labels I need in the kube_pod_labels and kube_pod_info metrics.
What you expected to happen: In version 1.9.7 I had metrics like:

```
kube_pod_labels{container="kube-rbac-proxy-main", instance="10.244.5.180:8443", job="kube-state-metrics", label_app="main-api", label_component="api", label_part_of="some_site", label_pod_template_hash="7f99f8b877", label_version="0.0.1243", namespace="production", pod="some-service-7f99f8b877-fn6gv"}
```

but after upgrading to version 2.0.0 the metric looks like:

```
kube_pod_labels{container="kube-state-metrics", endpoint="http", instance="10.244.1.225:8080", job="kube-state-metrics", namespace="production", pod="some-service-6b474ff44-gqplt", service="prometheus-stack-kube-state-metrics"}
```
How to reproduce it (as minimally and precisely as possible): Downgrade to version 1.9.7 and compare the kube_pod_labels output with version 2.0.0.
Anything else we need to know?: The documentation at https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md describes kube_pod_labels with the behavior I need, but it doesn't work that way.
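As the maintainer reply further down in this thread explains, this is expected in v2.x: object labels are no longer exposed by default and must be opted into with the `--metric-labels-allowlist` flag. A minimal sketch of the extra container argument, assuming the same Deployment shown below:

```yaml
# Sketch: opt back into exposing all pod labels on kube_pod_labels.
# Appended to the kube-state-metrics container args; per the maintainer
# comment in this thread, this works properly from v2.2.0.
args:
  - --metric-labels-allowlist=pods=[*]
```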
Generated deployment:

```yaml
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: prometheus-stack
    meta.helm.sh/release-namespace: monitoring
  generation: 44
  labels:
    app.kubernetes.io/instance: prometheus-stack
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/version: 2.0.0
    helm.sh/chart: kube-state-metrics-3.1.1
  name: prometheus-stack-kube-state-metrics
  namespace: monitoring
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: prometheus-stack
        app.kubernetes.io/name: kube-state-metrics
    spec:
      containers:
      - args:
        - --resources=certificatesigningrequests
        - --resources=configmaps
        - --resources=cronjobs
        - --resources=daemonsets
        - --resources=deployments
        - --resources=endpoints
        - --resources=horizontalpodautoscalers
        - --resources=ingresses
        - --resources=jobs
        - --resources=limitranges
        - --resources=mutatingwebhookconfigurations
        - --resources=namespaces
        - --resources=networkpolicies
        - --resources=nodes
        - --resources=persistentvolumeclaims
        - --resources=persistentvolumes
        - --resources=poddisruptionbudgets
        - --resources=pods
        - --resources=replicasets
        - --resources=replicationcontrollers
        - --resources=resourcequotas
        - --resources=secrets
        - --resources=services
        - --resources=statefulsets
        - --resources=storageclasses
        - --resources=validatingwebhookconfigurations
        - --resources=volumeattachments
        - --telemetry-port=8081
        image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.0.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: kube-state-metrics
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 65534
        runAsGroup: 65534
        runAsUser: 65534
      serviceAccount: prometheus-stack-kube-state-metrics
      serviceAccountName: prometheus-stack-kube-state-metrics
      terminationGracePeriodSeconds: 30
```
Environment:
- kube-state-metrics version: 2.0.0
- Kubernetes version (use `kubectl version`): 1.20.4
- Cloud provider or hardware configuration: bare-metal
- Other info: kube-state-metrics is deployed as part of the chart https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Reactions: 17
- Comments: 24 (9 by maintainers)
This `--metric-labels-allowlist=pods=[*],deployments=[*]` works properly in version 2.2.0.

Please see https://github.com/kubernetes/kube-state-metrics/issues/1489, that should answer your question. Also let us know where we should add this information so it’s easier to find, thanks!
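With the allowlist in place, the `label_*` labels from the v1.9.7 sample at the top of this issue should reappear in the exposition output; a sketch of the expected series (labels abbreviated):

```
kube_pod_labels{namespace="production", pod="some-service-7f99f8b877-fn6gv", label_app="main-api", label_component="api", label_version="0.0.1243", ...} 1
```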
Hi,
I have the same issue: with v2.1.1 the metric `kube_pod_labels` does not contain any pod labels, even though the pod clearly has labels on it.
Note that I’m not using the `--metric-labels-allowlist` argument when launching kube-state-metrics. Is that the expected behavior? Is `--metric-labels-allowlist=pods=[*]` now required to get all of a pod’s labels in `kube_pod_labels`?
Thanks,
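For context on why the missing labels matter: `kube_pod_labels` is commonly joined onto other metrics to enrich them with pod labels, so dropping the `label_*` labels silently breaks those queries. A sketch of the usual PromQL join, assuming the `label_app` label from the v1.9.7 example above is exposed:

```promql
# Attach the pod's "app" label to kube_pod_info via a one-to-one join
# on (namespace, pod); group_left copies label_app from kube_pod_labels.
kube_pod_info
  * on (namespace, pod) group_left (label_app)
kube_pod_labels
```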
I’m facing the same issue after upgrading kube-state-metrics from 1.9.8 to 2.0.0 or 2.1.0.
kube-state-metrics version: 2.1.0
metric:

```
kube_pod_labels{app_kubernetes_io_instance="prometheus", app_kubernetes_io_managed_by="Helm", app_kubernetes_io_name="kube-state-metrics", helm_sh_chart="kube-state-metrics-3.1.1", instance="0.0.0.0:8080", job="kubernetes-service-endpoints", kubernetes_name="prometheus-kube-state-metrics", kubernetes_namespace="monitoring", kubernetes_node="docker-desktop", namespace="default", pod="argocd-00000"}
```

The argocd-00000 pod has the label `app.kubernetes.io/name: argocd-server`, but there is no `label_*` label in the `kube_pod_labels` metric.