node_exporter: memory usage too high (OOME)

Host operating system: output of uname -a

3.10.0-862.3.2.el7.x86_64

node_exporter version: output of node_exporter --version

sh-4.2$ node_exporter --version
node_exporter, version 0.16.0 (branch: HEAD, revision: d42bd70f4363dced6b77d8fc311ea57b63387e4f)
build user: root@node-exporter-binary-3-build
build date: 20180606-16:48:15
go version: go1.10

node_exporter command line flags

--path.procfs=/host/proc --path.sysfs=/host/sys

Are you running node_exporter in Docker?

Yes, in OpenShift.

Hi,

I run node-exporter (openshift/prometheus-node-exporter:v0.16.0) in OpenShift alongside Prometheus and Grafana. I have a problem with memory not being reclaimed: memory usage increases continuously, and the pod is killed each time with an OOME (OutOfMemory).

By default, the limits were (template here):

          resources:
            requests:
              memory: 30Mi
              cpu: 100m
            limits:
              memory: 50Mi
              cpu: 200m

I have tested several configurations without success. The current configuration is:

          resources:
            limits:
              cpu: 250m
              memory: 250Mi
            requests:
              cpu: 100m
              memory: 75Mi
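
For reference, here is a minimal sketch of how the flags above and these limits fit together in the node-exporter DaemonSet container spec. The container name, volume names, and mount layout are illustrative assumptions, not copied from the actual OpenShift template:

    # Illustrative excerpt of the node-exporter DaemonSet container spec.
    # Container name, volume names, and mounts are assumptions, not taken
    # verbatim from the OpenShift template.
    containers:
      - name: node-exporter
        image: openshift/prometheus-node-exporter:v0.16.0
        args:
          - --path.procfs=/host/proc
          - --path.sysfs=/host/sys
        resources:
          requests:
            cpu: 100m
            memory: 75Mi
          limits:
            cpu: 250m
            memory: 250Mi
        volumeMounts:
          - name: proc
            mountPath: /host/proc
            readOnly: true
          - name: sys
            mountPath: /host/sys
            readOnly: true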

Do you have any idea what could be causing this? Is there an adjustment I should make? Do you have recommended resource limits?

Thanks in advance for your help!

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 1
  • Comments: 44 (22 by maintainers)

Most upvoted comments

There have been a number of releases since this issue was reported. The only identified root cause, the wifi collector, has been disabled by default for quite a while. I think we can close this.
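
For anyone who still hits this on 0.16.x before upgrading: node_exporter collectors can be disabled with --no-collector.<name>, so the wifi collector can be turned off explicitly. A minimal sketch of the corresponding container args (only the last flag is added; the other two are the flags reported in this issue, and their placement in an args list is an assumption about the template):

    # Sketch: explicitly disabling the wifi collector on node_exporter 0.16.x.
    args:
      - --path.procfs=/host/proc
      - --path.sysfs=/host/sys
      - --no-collector.wifi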