ModSecurity: memory leak on nginx reload

RAM usage constantly grows on nginx -s reload

Having ModSecurity rules loaded (even with modsecurity off) causes RAM usage to grow with each nginx -s reload, and eventually nginx gets stuck with messages like:

Logs and dumps (/var/log/nginx/error.log):

2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)
2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)
2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "cache manager process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)

To Reproduce

  1. Configure nginx to load the rules (/etc/nginx/nginx.conf):
http {
...
   modsecurity off;
   modsecurity_rules_file /etc/nginx/modsec/main.conf;
   ...
}
  2. Restart nginx and check that the rules were loaded (/var/log/nginx/error.log):
2020/08/06 08:57:13 [notice] 13627#13627: ModSecurity-nginx v1.0.1 (rules loaded inline/local/remote: 0/911/0)
  3. Generate load:
./nikto.pl -h https://your-site-name
  4. Run nginx -s reload several times (at 3-4 minute intervals) and check RAM consumption with free -m before and after each reload; a scripted version of this loop is sketched after the output below:
# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         433        2122          30        1395        3136
Swap:          2043          49        1994

# nginx -s reload

# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         451        2103          30        1395        3117
Swap:          2043          49        1994

# nginx -s reload

# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         464        2083          30        1404        3104
Swap:          2043          49        1994

# nginx -s reload

# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         481        2051          30        1417        3086
Swap:          2043          49        1994

.....

# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         901        1534          30        1515        2666
Swap:  
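
For convenience, the reload/measure cycle can be scripted. The following is a minimal sketch, assuming a POSIX shell and that nginx and free are on PATH; the iteration count and interval are arbitrary choices, not from the original report:

#!/bin/sh
# Repro loop: record 'used' memory from free -m, reload nginx, wait 3 minutes.
for i in 1 2 3 4 5 6 7 8 9 10; do
    used=$(free -m | awk 'NR==2 {print $3}')   # column 3 of the "Mem:" row
    echo "$(date +%T) used: ${used} MB"
    nginx -s reload
    sleep 180
done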

Expected behavior

‘RAM used’ should not grow steadily; it should stay at roughly the same level, as it does, for example, without ModSecurity rules loaded (in which case ‘RAM used’ stays around 300 MB).

Additional context

The same happens with modsecurity on in the server context. SecResponseBodyAccess Off is set in modsecurity.conf.
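
For illustration, a server block matching that note might look like the one below; this is only a sketch, and the server name and certificate paths are placeholders rather than values from the original report:

server {
    listen 443 ssl;
    server_name your-site-name;
    ssl_certificate     /etc/nginx/ssl/site.crt;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/site.key;   # placeholder

    # ModSecurity enabled in the server context, same rules file as above
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
}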

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 41 (14 by maintainers)

Most upvoted comments

Is anyone there?

We are also experiencing a memory leak on the reload signal. It seems the leak started manifesting when we enabled SecAuditLog /dev/stdout in our rule set (though we have not confirmed this yet). On each reload, the memory of the master process consistently grows by about 6 MB.
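
To isolate that hypothesis, one could toggle only the audit-log destination in an otherwise minimal configuration and compare reload behavior. The directives below are standard ModSecurity v3 directives, but the specific values are illustrative and not from the original report:

# Minimal audit-log setup for A/B testing the leak:
# run one series of reloads logging to /dev/stdout, one to a regular file.
SecAuditEngine RelevantOnly
SecAuditLogParts ABIJDEFHZ
SecAuditLog /dev/stdout
# SecAuditLog /var/log/modsec_audit.log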

The leak is definitely present in the latest v3/master branch (afefda53c69bb17e2ec16d24dd6fbc2c8fd7d063), but version v3.0.4 seems fine. By bisecting the changes, I identified commit 7a48245aed517c5cba0455b5d4e99cdaea14129e as the culprit (a sketch of the bisect workflow follows the output below). I tried to track the leak to a specific call stack with the memleak script (top 10 outstanding allocations after 60 s); however, it was not very helpful, even for debug builds with optimizations disabled (this may be a limitation of the script):

1153930 bytes in 16780 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
1268410 bytes in 18320 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
1536770 bytes in 22510 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
1914850 bytes in 30490 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
2177110 bytes in 28440 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
3771110 bytes in 54810 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
4531060 bytes in 62420 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
4897110 bytes in 72520 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
5016560 bytes in 156793 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
5531980 bytes in 31644 allocations from stack
		operator new(unsigned long)+0x15 [libstdc++.so.6.0.28]
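
For reference, the bisect workflow amounts to roughly the following; the build commands match the usual ModSecurity source build, while the pass/fail test (a few reloads while watching the master process's RSS) is my own procedure, not a standardized one:

# Bisect between the known-bad and known-good revisions.
cd ModSecurity
git bisect start
git bisect bad afefda53c69bb17e2ec16d24dd6fbc2c8fd7d063   # leak present
git bisect good v3.0.4                                    # no leak
# At each step: rebuild, reinstall, reload nginx a few times,
# then mark the revision based on whether the master's RSS grew.
sh build.sh && ./configure && make -j"$(nproc)" && make install
git bisect good   # or 'git bisect bad' if memory grew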

I am going to run this through Valgrind and gdb soon to find out more. HTH.
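
For anyone who wants to try the same, one way to capture leaks across reload cycles is to run nginx in the foreground under Valgrind. The Valgrind flags are standard; overriding with 'daemon off;' is just an assumed convenience to keep the master process in the foreground:

# Run the nginx master under Valgrind in the foreground.
valgrind --leak-check=full --show-leak-kinds=all \
         --log-file=/tmp/nginx-valgrind.log \
         /usr/sbin/nginx -g 'daemon off;'
# In another shell: nginx -s reload (repeat a few times), then
# nginx -s quit and inspect /tmp/nginx-valgrind.log.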

@hazardousmonk Have you had a chance to test v3/dev/3.1-experimental?

Sorry for the delayed response. I built it, but it crashed on launch. It was a production server down for maintenance, so I had to revert to safer territory. I will try this again and keep you posted, probably with nginx 1.19 and the dev version. I'm just curious how everyone is dealing with this so far? I've hacked together a quick script to restart nginx the moment it sees those evil lines in the error log.
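
Not the actual script, but a rough sketch of such a watchdog, assuming the default error-log path and a systemd-managed nginx:

#!/bin/sh
# Watch the error log and restart nginx when the fork() failure appears.
tail -Fn0 /var/log/nginx/error.log | while read -r line; do
    case "$line" in
        *'fork() failed while spawning'*)
            systemctl restart nginx
            ;;
    esac
done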

I’ve tried everything, even boosting the RAM on my instance, but it still crashes at times. Would using another operating system help? I have a similar setup running on Debian 10 (not as many vhosts, though) that has not yet crashed.

Hi @zimmerle, you are right: I can't reproduce this with v3.0.4.