prometheus-operator: Prometheus-operator doesn't scrape metrics from node-exporter

What did you do that produced an error?

Here’s the generated config for the node_exporter scrape job:

- job_name: staging/node-exporter/0
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - api_server: null
    role: endpoints
    namespaces:
      names:
      - staging
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_service_monitor]
    separator: ;
    regex: node-exporter
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: http-metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_service_label_service_monitor]
    separator: ;
    regex: (.+)
    target_label: job
    replacement: ${1}
    action: replace
  - source_labels: []
    separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace
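
The two keep rules at the top of relabel_configs mean a discovered endpoint only becomes a target if its Service carries the label service-monitor=node-exporter and the endpoint port is named http-metrics. Put differently, a target has to come with these discovery meta labels before relabeling (a sketch, derived from the two keep rules above):

# Meta labels an endpoint must carry to survive the two `keep` rules (sketch):
__meta_kubernetes_service_label_service_monitor: node-exporter
__meta_kubernetes_endpoint_port_name: http-metrics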

And this is the node_exporter deployment itself:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  template:
    metadata:
      labels:
        app: node-exporter
      name: node-exporter
    spec:
      hostNetwork: true
      hostPID: true
      containers:
      - image: quay.io/prometheus/node-exporter:v0.14.0
        args:
        - "-collector.procfs=/host/proc"
        - "-collector.sysfs=/host/sys"
        name: node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: scrape
        resources:
          requests:
            memory: 30Mi
            cpu: 100m
          limits:
            memory: 50Mi
            cpu: 200m
        volumeMounts:
        - name: proc
          readOnly: true
          mountPath: /host/proc
        - name: sys
          readOnly: true
          mountPath: /host/sys
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: node-exporter
    service-monitor: node-exporter
  name: node-exporter
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: http-metrics
    port: 9100
    protocol: TCP
  selector:
    app: node-exporter
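
Since the Service is headless and selects the DaemonSet pods, the control plane builds an Endpoints object with one address per node (the pods run with hostNetwork, so the pod IP is the node IP) and a port named after the Service port. Roughly, endpoint discovery should see something like this (IP and pod name are illustrative):

apiVersion: v1
kind: Endpoints
metadata:
  name: node-exporter
  namespace: staging
subsets:
- addresses:
  - ip: 10.0.0.11                  # node IP, because of hostNetwork
    targetRef:
      kind: Pod
      name: node-exporter-abc12    # illustrative pod name
  ports:
  - name: http-metrics             # inherited from the Service port name
    port: 9100
    protocol: TCP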

The ServiceMonitor for node_exporter:

apiVersion: monitoring.coreos.com/v1alpha1
kind: ServiceMonitor
metadata:
  name: node-exporter
  labels:
    service-monitor: node-exporter
spec:
  jobLabel: service-monitor
  selector:
    matchLabels:
      service-monitor: node-exporter
  namespaceSelector:
    matchNames:
    - staging
  endpoints:
  - port: http-metrics
    interval: 30s
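
Here jobLabel: service-monitor tells the operator to take the job label from the Service’s service-monitor label (that is what the last job relabeling in the generated config above does), and endpoints[0].port has to match a named port on the Service, not on the pod. Assuming the manifests above, a scraped sample should end up with labels roughly like this (pod and instance values are illustrative):

job: node-exporter            # value of the Service's service-monitor label, via jobLabel
namespace: staging
service: node-exporter
pod: node-exporter-abc12      # illustrative
endpoint: http-metrics
instance: 10.0.0.11:9100      # illustrative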

Here’s my Prometheus resource that should match the ServiceMonitor:

apiVersion: monitoring.coreos.com/v1alpha1
kind: Prometheus
metadata:
  name: alertmanager-lumin-document-exporter
spec:
  replicas: 3
  externalUrl: xxxxxxxx
  serviceAccountName: prometheus-operator
  alerting:
    alertmanagers:
    - namespace: staging
      name: alertmanager-lumin-document-exporter
      port: web
  serviceMonitorSelector:
    matchExpressions:
    - {key: service-monitor, operator: Exists}
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      role: prometheus-rulefiles
      prometheus: alertmanager-lumin-document-exporter
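
The serviceMonitorSelector uses an Exists expression, so it picks up any ServiceMonitor that carries a service-monitor label, whatever its value; the ServiceMonitor above qualifies. Separately, for the kubernetes_sd_configs in the generated scrape job to return anything at all, the account referenced by serviceAccountName: prometheus-operator needs permission to list and watch services, endpoints and pods in staging. A minimal sketch of such RBAC (resource names are illustrative, not taken from this setup):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-staging          # illustrative name
  namespace: staging
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-staging          # illustrative name
  namespace: staging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-staging
subjects:
- kind: ServiceAccount
  name: prometheus-operator         # must match serviceAccountName above
  namespace: staging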

What did you expect to see?

I expected node_exporter to be scraped and its targets to show as UP in Prometheus.

What did you see instead?

(attached screenshot not reproduced here)

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 33 (14 by maintainers)

Most upvoted comments

This sounds like a problem related to Rancher rather than to the prometheus-operator, and as I don’t have any experience using Rancher I can’t help on that front. If this is not a production cluster, I’d recommend having a look at other solutions, for example the tectonic-installer (which can also create vanilla Kubernetes clusters), and/or re-creating this cluster to see whether the issue persists. If it does, I’d open an issue with Rancher, as they will have better insight. Feel free to reference this issue if you open one there.