ModSecurity: memory leak on nginx reload
RAM usage constantly grows on nginx -s reload
Having ModSecurity rules loaded (even with modsecurity off) causes RAM usage to grow with each nginx -s reload, and eventually nginx gets stuck, unable to fork new workers, with messages like:
Logs and dumps (/var/log/nginx/error.log)
Output:
2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)
2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)
2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "cache manager process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)
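A quick way to confirm that the growth is in the nginx master process itself (rather than in the workers or the page cache) is to check its resident set size directly. A minimal check, assuming the default pid file at /run/nginx.pid (adjust to your nginx.conf pid directive):
# grep VmRSS /proc/$(cat /run/nginx.pid)/status
# ps -o pid,rss,cmd -C nginx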
To Reproduce
- Configure nginx to load the rules in /etc/nginx/nginx.conf:
http {
    ...
    modsecurity off;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
    ...
}
- Restart nginx and check that the rules were loaded (/var/log/nginx/error.log):
2020/08/06 08:57:13 [notice] 13627#13627: ModSecurity-nginx v1.0.1 (rules loaded inline/local/remote: 0/911/0)
- Generate load:
./nikto.pl -h https://your-site-name
- Run nginx -s reload several times (at 3-4 minute intervals) and check RAM consumption with the free -m command before and after each reload (a scripted version of this step is sketched after the output below):
# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         433        2122          30        1395        3136
Swap:          2043          49        1994
# nginx -s reload
# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         451        2103          30        1395        3117
Swap:          2043          49        1994
# nginx -s reload
# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         464        2083          30        1404        3104
Swap:          2043          49        1994
# nginx -s reload
# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         481        2051          30        1417        3086
Swap:          2043          49        1994
.....
# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         901        1534          30        1515        2666
Swap:
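A minimal shell sketch of the reload-and-measure step above; the 180-second sleep mirrors the 3-4 minute interval, the iteration count is arbitrary, and /run/nginx.pid is assumed to be the pid file path:
for i in $(seq 1 10); do
    nginx -s reload
    sleep 180
    echo "=== after reload $i ==="
    free -m
    grep VmRSS /proc/$(cat /run/nginx.pid)/status   # RSS of the nginx master process
done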
Expected behavior
'RAM used' should not grow steadily; it should stay at roughly the same level, as it does without ModSecurity rules loaded (in which case 'used' stays at about 300 MB).
Server
- ModSecurity v3 master - https://github.com/SpiderLabs/ModSecurity/commit/51d06d7a8edc7a400782c06e24ebe71834736f77 with nginx-connector v1.0.1
- WebServer: nginx-1.18.0
- OS: Ubuntu 16.04
Rule Set:
Additional context
The same happens with modsecurity on in the server context. SecResponseBodyAccess Off is set in modsecurity.conf.
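For illustration, the per-server variant mentioned above would look roughly like this (the server block and paths are placeholders, not the reporter's actual configuration):
server {
    ...
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
    ...
}
with SecResponseBodyAccess Off set in modsecurity.conf.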
About this issue
- State: closed
- Created 4 years ago
- Comments: 41 (14 by maintainers)
Is anyone there?
We are also experiencing a memory leak on the reload signal. It seems that the leak started manifesting when we enabled
SecAuditLog /dev/stdout
in our rule set (however, we have not confirmed this yet). On reload, the memory of the master process consistently grows by about 6 MB. The leak is definitely present in the latest v3/master branch (afefda53c69bb17e2ec16d24dd6fbc2c8fd7d063), but it seems that version v3.0.4 is fine. By bisecting the changes I identified commit 7a48245aed517c5cba0455b5d4e99cdaea14129e as the culprit. I tried to track the leak to a specific call stack with the memleak script (top 10 outstanding allocations after 60s), however it was not very helpful (even for debug builds with optimizations disabled; this may be a limitation of the script). I am going to run this through Valgrind and gdb to find out more soon. HTH.
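A sketch of how such a Valgrind run could be set up (not the commenter's actual invocation): keep nginx in the foreground under the leak checker, then exercise the reload path by sending HUP to the master, assuming the pid file is /run/nginx.pid:
# valgrind --leak-check=full --show-leak-kinds=definite --trace-children=yes \
      nginx -g 'daemon off;'
From a second shell, trigger a few reloads and then shut down to get the leak report:
# kill -HUP $(cat /run/nginx.pid)
# kill -QUIT $(cat /run/nginx.pid)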
Sorry for the delayed response. I built it, however it crashed on launch - it was a production server down for maintenance, so I had to revert back to safer territory. I will try this again, probably with nginx 1.19 and the dev version, and keep you posted. I'm just curious how everyone is dealing with this so far? I've hacked together a quick script that restarts nginx the moment it sees those evil lines in the error log.
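A minimal sketch of such a watchdog (not the commenter's actual script; the log path and match string follow the fork() failure messages quoted at the top of this issue):
#!/bin/sh
# Restart nginx as soon as the out-of-memory fork failures appear in the error log
tail -Fn0 /var/log/nginx/error.log | while read -r line; do
    case "$line" in
        *"Cannot allocate memory"*)
            systemctl restart nginx
            ;;
    esac
done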
I've tried everything, even boosting the RAM on my instance, but it still crashes at times. Will using another operating system help? I have a similar setup running on Debian 10 (not as many vhosts though), which has not yet crashed.
Hi zimmerle, you are right - I can't reproduce this with v3.0.4.