windows_exporter: High memory usage on 0.16.0

Hello!

I have a couple of servers reporting high memory usage by windows_exporter. I am using the default configuration/collectors. Is there any way to limit the exporter's resource usage?

[Screenshot: Screen Shot 2021-07-01 at 3.05.13 PM]

Exporter version:
windows_exporter, version 0.16.0 (branch: master, revision: f316d81d50738eb0410b0748c5dcdc6874afe95a)
  build user: appveyor-vm\appveyor@appveyor-vm
  build date: 20210225-10:52:19
  go version: go1.15.6
  platform:   windows/amd64

OS Name:    Microsoft Windows Server 2016 Standard
OS Version: 10.0.14393 N/A Build 14393

Handles  NPM(K)     PM(K)     WS(K)      CPU(s)     Id  SI  ProcessName
-------  ------     -----     -----      ------     --  --  -----------
   1260     142  12748860  12496424  110,102.69  11388   0  windows_exporter

About this issue

  • State: open
  • Created 3 years ago
  • Reactions: 3
  • Comments: 53 (13 by maintainers)


Most upvoted comments

The scheduled_task memory leak has been addressed in #1080. I'll try to investigate the service collector when there's time.

It’s been some time since I last looked at this, but I believe my intention with #998 was to remove a potential leak in the process collector. I suspect we have multiple leaks in each of the collectors. A fix for one won’t resolve them all.

I’ve reopened #998 in #1062 if anyone would like to test the branch.

@breed808 I ran a few tests and the most stable config was disabling the scheduled_task collector completely. It has been running OK for 1 day now; I will report back if that changes. We only monitor 2 scheduled tasks, so I'm not sure why such an aggressive leak is happening.

# working config
collectors:
  enabled: cpu,cs,logical_disk,net,os,service,system,memory,tcp,vmware,process,iis,netframework_clrexceptions
collector:
  process:
    whitelist: "xService.?|windows_exporter.?|app.+|antivirus.+|w3wp|Scheduler|xConnector|Ccm.+|xClient|inetinfo|.+agent|.+Agent"
  service:
    services-where: Name LIKE 'appname%'
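
(For anyone copying this: the snippet is windows_exporter's YAML config-file format, so the exporter needs to be started with the --config.file flag pointing at it; the same settings can also be passed as the equivalent command-line flags.)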

@datamuc Thanks for all the work you are putting into this.

Hi, same problem here. I built a new package with @datamuc's perflib commit and ran some tests (100 req/s on /metrics for 30 minutes) to compare both versions; the configured collectors are cpu,cs,logical_disk,net,os,system,textfile,process.
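
For anyone who wants to reproduce a similar load, a minimal sketch of an equivalent test loop in Go (the localhost target and the default listen port 9182 are assumptions; any HTTP load generator would do the same job):

package main

import (
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Assumes windows_exporter is listening on its default port 9182.
	const target = "http://localhost:9182/metrics"

	ticker := time.NewTicker(10 * time.Millisecond) // ~100 requests per second
	defer ticker.Stop()
	deadline := time.Now().Add(30 * time.Minute)

	for time.Now().Before(deadline) {
		<-ticker.C
		go func() {
			resp, err := http.Get(target)
			if err != nil {
				log.Println(err)
				return
			}
			// Drain and close the body so connections can be reused.
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
		}()
	}
}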

But the leak is still there. Do you have any other ideas to fix this?

We removed the process and service collectors from our configuration (and added the tcp collector, so it is: [defaults] - service + tcp); the memory usage is now stable. It seems that every collector that makes use of github.com/StackExchange/wmi leaks.
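
For context, the query pattern those collectors use through github.com/StackExchange/wmi looks roughly like the sketch below (the Win32_Service class and the WHERE clause are illustrative, borrowed from the services-where example earlier in the thread; the actual collector code differs):

package main

import (
	"fmt"
	"log"

	"github.com/StackExchange/wmi"
)

// Win32_Service mirrors a few properties of the WMI class of the same name;
// the wmi package maps result rows onto struct fields by name.
type Win32_Service struct {
	Name      string
	State     string
	StartMode string
}

func main() {
	var dst []Win32_Service
	// CreateQuery derives "SELECT Name, State, StartMode FROM Win32_Service ..." from the struct type.
	q := wmi.CreateQuery(&dst, "WHERE Name LIKE 'appname%'")
	if err := wmi.Query(q, &dst); err != nil {
		log.Fatal(err)
	}
	for _, s := range dst {
		fmt.Printf("%s: %s (%s)\n", s.Name, s.State, s.StartMode)
	}
}

Each scrape re-runs queries like this in the affected collectors, so any memory the query path fails to release would add up quickly under frequent scraping.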