prometheus-operator: New serviceMonitor does not show up in prometheus

What did you do?

Added a new serviceMonitor in values.yaml for kube-prometheus

    - name: kube-prometheus-nginx-ingress
      selector:
        matchLabels:
          app: nginx-ingress
      endpoints:
        - port: metrics
          interval: 30s
      namespaceSelector:
        any: true

What did you expect to see?

I’d expect to see kube-prometheus-nginx-ingress show up under Targets in Prometheus and to see Prometheus start scraping those targets.

What did you see instead? Under which circumstances?

No new targets were added and kube-prometheus-nginx-ingress does not show up in the Prometheus configuration. The serviceMonitor is created but never “applied”.

$ kubectl get servicemonitor kube-prometheus-nginx-ingress
NAME                            AGE
kube-prometheus-nginx-ingress   11m
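
A quick way to see why a created ServiceMonitor is being ignored is to compare the Prometheus resource’s serviceMonitorSelector with the labels on the new ServiceMonitor (a sketch only; the monitoring namespace and the kube-prometheus resource name are assumptions based on a default chart install):

$ kubectl -n monitoring get prometheus kube-prometheus -o jsonpath='{.spec.serviceMonitorSelector}'
$ kubectl -n monitoring get servicemonitor kube-prometheus-nginx-ingress --show-labels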

Environment

  • Kubernetes version information:
kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster kind:

    KOPS on AWS

  • Manifests:

prometheus:
  rbacEnable: false
  alertingEndpoints: []
  config:
    specifiedInValues: true
    value: {}
  externalUrl: ""
  image:
    repository: quay.io/prometheus/prometheus
    tag: v2.0.0

  ingress:
    enabled: false
    annotations: {}
    fqdn: ""
    tls: []
  nodeSelector: {}
  paused: false
  replicaCount: 1
  resources: {}
  retention: 24h
  routePrefix: /
  rulesSelector: {}
  rules:
    specifiedInValues: true
    value: {}

  service:
    annotations: {}
    clusterIP: ""
    externalIPs: []
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    nodePort: 30900
    type: ClusterIP

  serviceMonitorsSelector: {}
  serviceMonitors:
    - name: kube-prometheus-nginx-ingress
      selector:
        matchLabels:
          app: nginx-ingress
      endpoints:
        - port: metrics
          interval: 30s
      namespaceSelector:
        any: true

  • Prometheus Operator Logs:
ts=2018-01-14T22:23:58Z caller=operator.go:980 component=prometheusoperator msg="updating config skipped, no configuration change"
ts=2018-01-14T22:24:09Z caller=operator.go:671 component=prometheusoperator msg="sync prometheus" key=monitoring/kube-prometheus
ts=2018-01-14T22:24:09Z caller=operator.go:980 component=prometheusoperator msg="updating config skipped, no configuration change"

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 25 (21 by maintainers)

Most upvoted comments

I finally figured out what the issue is 💣

In your Prometheus CRD you have the serviceMonitorSelector below:

serviceMonitorSelector:
      matchLabels:
        prometheus: kube-prometheus

That means the operator will only pick up serviceMonitors carrying these labels. On the kube-prometheus-nginx-ingress serviceMonitor you have all the labels below except prometheus: kube-prometheus.

  labels:
      app: prometheus
      chart: prometheus-0.0.9
      heritage: Tiller
      release: kube-prometheus
    name: kube-prometheus-nginx-ingress
...

My guess is that when you created the serviceMonitor, the helm chart didn’t inject the same labels used by the other serviceMonitors that are working. The quick fix is to edit the kube-prometheus-nginx-ingress serviceMonitor and manually add the label prometheus: kube-prometheus. Let me know if this fixes your issue and I’ll open a PR to address it.
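
If you want to apply that quick fix from the command line, something like this should do it (a sketch; the monitoring namespace is an assumption, adjust it to wherever the ServiceMonitor lives):

$ kubectl -n monitoring label servicemonitor kube-prometheus-nginx-ingress prometheus=kube-prometheus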

I had the same problem and solved it by changing the serviceMonitorSelector.

Prometheus object generated by helm:

kube-prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  creationTimestamp: 2018-07-21T19:52:34Z
  generation: 1
  labels:
    app: prometheus
    chart: prometheus-0.0.44
    heritage: Tiller
    prometheus: kube-prometheus
    release: kube-prometheus
  name: kube-prometheus
  namespace: monitoring
  resourceVersion: "4255820"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheuses/kube-prometheus
  uid: 9cd1f2d5-8d1f-11e8-9982-9600000c29a0
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app: prometheus
              prometheus: kube-prometheus
          topologyKey: kubernetes.io/hostname
        weight: 100
  alerting:
    alertmanagers:
    - name: kube-prometheus-alertmanager
      namespace: monitoring
      port: http
  baseImage: quay.io/prometheus/prometheus
  externalUrl: http://kube-prometheus.monitoring:9090
  logLevel: info
  paused: false
  replicas: 1
  resources: {}
  retention: 48h
  routePrefix: /
  ruleSelector:
    matchLabels:
      prometheus: kube-prometheus
      role: alert-rules
  serviceAccountName: kube-prometheus
  serviceMonitorSelector:
    # This is really counterintuitive! Why does the selector need to block new monitors from being picked up?
    matchExpressions:
    - key: app
      operator: In
      values:
      - alertmanager
      - exporter-coredns
      - exporter-kube-controller-manager
      - exporter-kube-dns
      - exporter-kube-etcd
      - exporter-kube-scheduler
      - exporter-kube-state
      - exporter-kubelets
      - exporter-kubernetes
      - exporter-node
      - grafana
      - prometheus
      - prometheus-operator
  storage:
    volumeClaimTemplate:
      selector: {}
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: general-storage
  version: v2.2.1

Solution:

Change the serviceMonitorSelector in the Prometheus resource (using kubectl edit prometheus -n monitoring kube-prometheus):

...
  serviceMonitorSelector:
    matchLabels:
      prometheus: kube-prometheus
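
If you’d rather not use kubectl edit, a JSON patch can swap out the whole selector non-interactively (a sketch against the resource name and namespace shown above; --type json is used so the old matchExpressions block is replaced outright instead of being merged with the new matchLabels):

$ kubectl -n monitoring patch prometheus kube-prometheus --type json \
    -p '[{"op": "replace", "path": "/spec/serviceMonitorSelector", "value": {"matchLabels": {"prometheus": "kube-prometheus"}}}]'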

Adding Metrics example:

Service to expose nginx metrics

apiVersion: v1
kind: Service
metadata:
  namespace: ingress-nginx
  name: ingress-nginx-metrics
  labels:
    app: ingress-nginx-metrics
spec:
  ports:
    - name: metrics
      port: 10254
      protocol: TCP
      targetPort: 10254
  selector:
    app: ingress-nginx
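
Before creating the ServiceMonitor it’s worth checking that this Service actually selects the ingress controller pods; if the Endpoints object is empty, Prometheus will have nothing to scrape (the app: ingress-nginx pod label above is an assumption about how the controller was deployed):

$ kubectl -n ingress-nginx get endpoints ingress-nginx-metrics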

Service monitor

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  namespace: monitoring
  name: ingress-nginx
  labels:
    component: ingress-nginx
    prometheus: kube-prometheus
spec:
  endpoints:
    - interval: 30s
      port: metrics
      path: /metrics
  jobLabel: component
  namespaceSelector:
    matchNames:
    - ingress-nginx
  selector:
    matchLabels:
      app: ingress-nginx-metrics

After adding this you should be able to see the new target in the Prometheus UI: https://your-prometheus-ui/targets (screenshot of the Targets page).
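
If Prometheus isn’t exposed externally, a port-forward is an easy way to reach that page (a sketch; the kube-prometheus Service name, the monitoring namespace and port 9090 are assumptions based on the chart defaults above; if your kubectl version can’t port-forward to a Service, target the prometheus-kube-prometheus-0 pod instead):

$ kubectl -n monitoring port-forward svc/kube-prometheus 9090:9090
# then open http://localhost:9090/targets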

Adding nginx dashboard to grafana:

  1. Save the dashboard from https://github.com/kubernetes/ingress-nginx/blob/master/deploy/grafana/dashboards/nginx.yaml (it’s a json file)
  2. Change the following (a one-liner for this is sketched after the list):
  • "pluginName": "Prometheus" -> "pluginName": "prometheus"
  • "datasource": "Prometheus" -> "datasource": "prometheus"
  3. Import it into grafana (you need to log in first; the default credentials are admin/admin)
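
Something like this sed one-liner handles step 2, assuming the dashboard JSON was saved as nginx.json (it lowercases every quoted "Prometheus" string, which covers the two fields above; GNU sed syntax, on macOS use sed -i ''):

$ sed -i 's/"Prometheus"/"prometheus"/g' nginx.json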

You should then see the nginx dashboard populated in Grafana.

@brancz Wouldn’t it be nice to add docs on how to integrate with the most common services, like ingress?

#895 fixed this issue