kube-state-metrics: pod labels missing from the kube_pod_labels metric

What happened: After updating to version 2.0.0, the pod labels I need disappeared from the kube_pod_labels and kube_pod_info metrics.

What you expected to happen: In version 1.9.7 I had metrics like:

kube_pod_labels{container="kube-rbac-proxy-main", instance="10.244.5.180:8443", job="kube-state-metrics", label_app="main-api", label_component="api", label_part_of="some_site", label_pod_template_hash="7f99f8b877", label_version="0.0.1243", namespace="production", pod="some-service-7f99f8b877-fn6gv"}

but after upgrading to version 2.0.0 the same metric looks like:

kube_pod_labels{container="kube-state-metrics", endpoint="http", instance="10.244.1.225:8080", job="kube-state-metrics", namespace="production", pod="some-service-6b474ff44-gqplt", service="prometheus-stack-kube-state-metrics"}

How to reproduce it (as minimally and precisely as possible): Downgrade to version 1.9.7 and check the metric again.

Anything else we need to know?: The documentation at https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md describes kube_pod_labels with the behavior I need, but it does not work that way in 2.0.0.

Generated deployment:

kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: prometheus-stack
    meta.helm.sh/release-namespace: monitoring
  generation: 44
  labels:
    app.kubernetes.io/instance: prometheus-stack
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/version: 2.0.0
    helm.sh/chart: kube-state-metrics-3.1.1
  name: prometheus-stack-kube-state-metrics
  namespace: monitoring
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: prometheus-stack
        app.kubernetes.io/name: kube-state-metrics
    spec:
      containers:
      - args:
        - --resources=certificatesigningrequests
        - --resources=configmaps
        - --resources=cronjobs
        - --resources=daemonsets
        - --resources=deployments
        - --resources=endpoints
        - --resources=horizontalpodautoscalers
        - --resources=ingresses
        - --resources=jobs
        - --resources=limitranges
        - --resources=mutatingwebhookconfigurations
        - --resources=namespaces
        - --resources=networkpolicies
        - --resources=nodes
        - --resources=persistentvolumeclaims
        - --resources=persistentvolumes
        - --resources=poddisruptionbudgets
        - --resources=pods
        - --resources=replicasets
        - --resources=replicationcontrollers
        - --resources=resourcequotas
        - --resources=secrets
        - --resources=services
        - --resources=statefulsets
        - --resources=storageclasses
        - --resources=validatingwebhookconfigurations
        - --resources=volumeattachments
        - --telemetry-port=8081
        image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.0.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: kube-state-metrics
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 65534
        runAsGroup: 65534
        runAsUser: 65534
      serviceAccount: prometheus-stack-kube-state-metrics
      serviceAccountName: prometheus-stack-kube-state-metrics
      terminationGracePeriodSeconds: 30
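
A sketch of the fix for the deployment above, assuming v2.x semantics where kube_pod_labels only carries labels that are explicitly allowlisted: add the --metric-labels-allowlist flag to the container args (the existing --resources flags stay as they are).

```yaml
# Sketch: relevant part of the container spec with the allowlist flag added.
# pods=[*] exposes every pod label; a named list such as pods=[app,version]
# exposes only those keys (the key names here are illustrative).
args:
  # ... existing --resources flags unchanged ...
  - --telemetry-port=8081
  - --metric-labels-allowlist=pods=[*]
```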

Environment:

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 17
  • Comments: 24 (9 by maintainers)

Most upvoted comments

This works properly in version 2.2.0: --metric-labels-allowlist=pods=[*],deployments=[*]
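
Once the labels are exposed again, a typical use is joining them onto another metric; a sketch, assuming the label_app label from the 1.9.7 example above and a standard cAdvisor CPU metric:

```promql
# Attach the pod's app label to a CPU usage rate via an info-style join.
sum by (namespace, pod) (rate(container_cpu_usage_seconds_total[5m]))
  * on (namespace, pod) group_left (label_app)
  kube_pod_labels
```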

Please see https://github.com/kubernetes/kube-state-metrics/issues/1489; that should answer your question. Also let us know where we should add this information so it's easier to find, thanks!

Hi,

I have the same issue: with v2.1.1 the metric kube_pod_labels does not contain any pod label, e.g. the exporter exposes this:

kube_pod_labels{namespace="default",pod="mysite-nginx-6958b75c8c-p97xv",uid="a54274d5-c433-49f9-bc18-df9b05125b95"} 1

while that pod clearly has some labels:

> kubectl  describe pod mysite-nginx-6958b75c8c-p97xv
Name:         mysite-nginx-6958b75c8c-p97xv
Namespace:    default
Priority:     0
Node:         bm-debian-1/172.16.21.11
Start Time:   Mon, 23 Aug 2021 18:51:53 +0200
Labels:       app=mysite-nginx
              pod-template-hash=6958b75c8c
...

Note that I’m not using the --metric-labels-allowlist argument when launching kube-state-metrics.

Is that the expected behavior? Is --metric-labels-allowlist=pods=[*] now required to get all of a pod’s labels in kube_pod_labels?
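
One way to tell whether the missing labels are the exporter's doing or a Prometheus relabeling effect is to scrape the exporter directly; a sketch, assuming the port and resource names from the deployment shown earlier:

```shell
# Bypass Prometheus: read the raw /metrics output from kube-state-metrics.
kubectl -n monitoring port-forward deploy/prometheus-stack-kube-state-metrics 8080:8080 &
curl -s localhost:8080/metrics | grep '^kube_pod_labels'
```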

thanks,

I’m facing the same issue after upgrading kube-state-metrics from 1.9.8 to 2.0.0 or 2.1.0.

kube-state-metrics version: 2.1.0

containers:
  - name: kube-state-metrics
    image: 'k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.1.0'
    args:
      - '--port=8080'
      - >-
        --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,verticalpodautoscalers,volumeattachments
      - '--telemetry-port=8081'
      - '--metric-labels-allowlist=pods=[*]'

metric: kube_pod_labels{app_kubernetes_io_instance="prometheus", app_kubernetes_io_managed_by="Helm", app_kubernetes_io_name="kube-state-metrics", helm_sh_chart="kube-state-metrics-3.1.1", instance="0.0.0.0:8080", job="kubernetes-service-endpoints", kubernetes_name="prometheus-kube-state-metrics", kubernetes_namespace="monitoring", kubernetes_node="docker-desktop", namespace="default", pod="argocd-00000"}

The argocd-00000 pod has the label app.kubernetes.io/name: argocd-server, but no corresponding label appears in the kube_pod_labels metric:

kind: Pod
apiVersion: v1
metadata:
  name: argocd-00000
  namespace: default
  labels:
    app.kubernetes.io/name: argocd-server
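
For reference, when the allowlist is honored the exporter emits pod labels in sanitized form (dots and slashes become underscores, prefixed with label_); a sketch based on the pod above, with an illustrative uid:

```
kube_pod_labels{namespace="default",pod="argocd-00000",uid="...",label_app_kubernetes_io_name="argocd-server"} 1
```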