kube-state-metrics: Label information is not exported to kube_node_labels metrics

Is this a BUG REPORT or FEATURE REQUEST?: /kind bug

What happened: The Node Metrics documentation describes kube_node_labels, but kube_node_labels does not export label information when using v2.0.0-alpha.1.

  • kube-state-metrics:v2.0.0-alpha.1

    Label information is not exported:

    # HELP kube_node_labels Kubernetes labels converted to Prometheus labels.
    # TYPE kube_node_labels gauge
    - kube_node_labels 1
    - kube_node_labels 1
    - kube_node_labels 1
    - kube_node_labels 1
    # HELP kube_node_role The role of a cluster node.
    
  • kube-state-metrics:v1.9.7 exports the labels in the same environment:

    # HELP kube_node_labels Kubernetes labels converted to Prometheus labels.
    # TYPE kube_node_labels gauge
    + kube_node_labels{node="kind-worker",label_beta_kubernetes_io_arch="amd64",label_beta_kubernetes_io_os="linux",label_kubernetes_io_arch="amd64",label_kubernetes_io_hostname="kind-worker",label_kubernetes_io_os="linux"} 1
    + kube_node_labels{node="kind-control-plane",label_beta_kubernetes_io_arch="amd64",label_beta_kubernetes_io_os="linux",label_ingress_ready="true",label_kubernetes_io_arch="amd64",label_kubernetes_io_hostname="kind-control-plane",label_kubernetes_io_os="linux",label_node_role_kubernetes_io_master=""} 1
    + kube_node_labels{node="kind-worker3",label_beta_kubernetes_io_arch="amd64",label_beta_kubernetes_io_os="linux",label_kubernetes_io_arch="amd64",label_kubernetes_io_hostname="kind-worker3",label_kubernetes_io_os="linux"} 1
    + kube_node_labels{node="kind-worker2",label_beta_kubernetes_io_arch="amd64",label_beta_kubernetes_io_os="linux",label_kubernetes_io_arch="amd64",label_kubernetes_io_hostname="kind-worker2",label_kubernetes_io_os="linux"} 1
    # HELP kube_node_role The role of a cluster node.
    

What you expected to happen: kube_node_labels exports label information when using v2.0.0-alpha.1.

How to reproduce it (as minimally and precisely as possible):

  1. Create a cluster using kind.

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
      image: kindest/node:v1.19.1
    - role: worker
      image: kindest/node:v1.19.1
    - role: worker
      image: kindest/node:v1.19.1
    - role: worker
      image: kindest/node:v1.19.1
    
  2. Install kube-state-metrics from https://github.com/kubernetes/kube-state-metrics/tree/master/examples/standard.

Anything else we need to know?: The other “kube_XXXX_labels” metrics are affected in the same way. I have confirmed that the following metrics behave the same in my environment:

# TYPE kube_daemonset_labels gauge
kube_daemonset_labels 1

# TYPE kube_deployment_labels gauge
kube_deployment_labels 1

# TYPE kube_endpoint_labels gauge
kube_endpoint_labels 1

# TYPE kube_ingress_labels gauge
kube_ingress_labels 1

# TYPE kube_job_labels gauge
kube_job_labels 1

# TYPE kube_namespace_labels gauge
kube_namespace_labels 1

# TYPE kube_node_labels gauge
kube_node_labels 1

# TYPE kube_pod_labels gauge
kube_pod_labels 1

# TYPE kube_replicaset_labels gauge
kube_replicaset_labels 1

# TYPE kube_secret_labels gauge
kube_secret_labels 1

# TYPE kube_service_labels gauge
kube_service_labels 1

# TYPE kube_statefulset_labels gauge
kube_statefulset_labels 1
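The symptom above can be spotted programmatically by scanning the Prometheus text exposition for kube_*_labels series that carry no labels at all. A minimal sketch (the sample text is taken from the issue; against a real cluster you would feed it the body of kube-state-metrics' /metrics endpoint):

```python
import re

def empty_label_series(metrics_text):
    """Return names of kube_*_labels series emitted without any labels."""
    names = set()
    for line in metrics_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        # An affected series looks like "kube_node_labels 1" (no {...} part).
        m = re.match(r"^(kube_\w+_labels)\s+\S+$", line)
        if m:
            names.add(m.group(1))
    return sorted(names)

sample = """\
# TYPE kube_node_labels gauge
kube_node_labels 1
# TYPE kube_pod_labels gauge
kube_pod_labels{namespace="default",pod="web-0",label_app="web"} 1
"""
print(empty_label_series(sample))  # → ['kube_node_labels']
```

Here only kube_node_labels is flagged, because the kube_pod_labels sample still carries its label set.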

Environment:

  • Kubernetes version (use kubectl version): v1.19.3
  • Kube-state-metrics image version: v2.0.0-alpha.1

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 23 (10 by maintainers)

Most upvoted comments

Yes, I am currently working on a fix and will open a PR sometime this week if all goes well. The new format will be a bit different from the current one, due to some performance regressions we noticed that the new --labels-allow-list flag introduced. Note that the fix will also make the metric always carry the resource name and namespace by default; their absence was a bug.

New format should be something like this:

--labels-allow-list nodes=[your-actual-label-name-as-seen-in-k8s, your-other-label] 

Will mention this issue on the PR when I open it!

I happened to stumble upon this bug/feature as I was working on metric discovery. It might be useful to mention this in the documentation.

@dgrisonnet I agree that only the label_* labels should be gated; otherwise it’s impossible to relate them back without also specifying their “deployment/node/etc.” label and namespace. I would also say that it would be nicer not to require the label_ prefix, as it adds quite some redundancy when more labels are specified. After all, this seems to be limited to the *_labels metrics, whose labels all start with label_ anyway. It would also be useful to allow some pattern matching on both sides of the =, even though this would reopen the door to cardinality explosion. But I do think it should be possible to do *=[*] to get the old behavior, which I think is acceptable since users can choose whether to opt into it. For my current workflow, *=[label1,label2,label3] would be sufficient.

Edit: Now I notice it was written in the release notes, I should have seen that. Though it might still be useful to document it a bit more.
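For context on the label_ prefix discussed above, this is a sketch of the name conversion kube-state-metrics applies when turning Kubernetes labels into Prometheus labels, inferred from the v1.9.7 output earlier in this issue (e.g. beta.kubernetes.io/arch becomes label_beta_kubernetes_io_arch): characters outside [a-zA-Z0-9_] are replaced with underscores and the result is prefixed with label_.

```python
import re

def to_prometheus_label(k8s_label_name):
    """Convert a Kubernetes label name to its Prometheus label form."""
    # Replace every character that is invalid in a Prometheus label
    # name with "_", then add the "label_" prefix.
    return "label_" + re.sub(r"[^a-zA-Z0-9_]", "_", k8s_label_name)

print(to_prometheus_label("beta.kubernetes.io/arch"))
# → label_beta_kubernetes_io_arch
print(to_prometheus_label("app.kubernetes.io/component"))
# → label_app_kubernetes_io_component
```

This is why accepting the raw Kubernetes label name in the allowlist (and converting internally) removes redundancy for users: they never have to perform this mangling by hand.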

Yes, that is the current plan; note this will always be converted to Prometheus format later. The idea is that it’s easier for users to know the k8s label name and pass that name to the allowlist, rather than converting it by hand beforehand. It is also more efficient, for reasons that will hopefully be evident in the PR.

But yes, the current plan is for something like this to work:

--labels-allow-list="pods=[app.kubernetes.io/component]"

Like I said, it’s still WIP, so I can’t promise it will be that format, but I will detail the reasons in the PR if something does not work. 😃
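For anyone wanting to try this out, the flag would be passed via the kube-state-metrics container args in the Deployment manifest. A hypothetical fragment, assuming the flag name and bracket syntax from the comment above (both may differ in the released version):

```yaml
# Hypothetical kube-state-metrics Deployment fragment; flag name and
# syntax follow the comment above and may have changed since.
containers:
  - name: kube-state-metrics
    args:
      - --labels-allow-list=nodes=[kubernetes.io/hostname],pods=[app.kubernetes.io/component]
```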

@ntavares ah, yes, you mean the node default label (the one without the label_ prefix) that should have been included by default. Yes, it should get added back in. In the meantime it is possible to add them to the same allow list, which is why I initially added this as a remark to the documentation. I deliberately wrote the documentation to be as accurate to the current state of master as possible, since it would have been trivial to update it as soon as a fix was out. Hence why I added --labels-allow-list kube_node_labels=[node,namespace,labels_your_own] to the example, as anyone trying out alpha.x would hit the same problem.

I’m more in the “do what you can now, and quickly, to help as much as possible” camp regarding documentation, instead of delaying it until bugs are fixed. People read the docs first, then fail, and eventually end up at the issue.