framework: [5.3] Queue worker memory leak

  • Laravel Version: 5.3.26
  • PHP Version: 7.0
  • Database Driver & Version:

I run a single queue worker on my production server (EC2, Amazon Linux, nginx, PHP 7.0) with supervisor.

The supervisor config is:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work sqs --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=1
redirect_stderr=true
stdout_logfile=/tmp/supervisor_worker.log

The PHP process then slowly starts eating up memory, and after 3-4 days the server runs out of memory and becomes unresponsive.

I’m not even running jobs yet! It’s just idle. I’m tracking the memory usage now and can see that it slowly and steadily goes up.

[Screenshot: memory usage graph]

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Comments: 30 (8 by maintainers)

Most upvoted comments

+1, unresolved bug! @GrahamCampbell please reopen this issue.

I realize this issue is closed, but I am experiencing this problem too. I am running PHP 7.1.2 on Ubuntu 16.04 and CentOS (CentOS Linux release 7.3.1611 (Core)) systems, and I see a drastic difference between what top/htop/ps aux report and what PHP’s own memory_get_usage() reports (far less).

Thus, my system runs out of memory while the processes themselves think they’re well under the 128MB limit. I am not sure if this is a PHP internals issue or what.

I will say my current workaround is a scheduled hourly soft restart with this in Console/Kernel.php:

        // Used to combat memory creep issues that I can't solve otherwise at this moment.
        $schedule->command('queue:restart')->hourly();
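
For context, a minimal sketch of where that line lives, assuming the standard Laravel app/Console/Kernel.php scaffolding:

<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        // Ask every queue worker to exit cleanly once its current job finishes;
        // supervisor's autorestart=true then starts a fresh process with a clean heap.
        $schedule->command('queue:restart')->hourly();
    }
}

Note that queue:restart broadcasts the restart signal via the cache, so a working cache driver (and the usual schedule:run cron entry) is needed for this to take effect.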

5.3 is not supported. Please raise an issue against a current Laravel version.

I think I know what it is, but want to wait a few hours watching the metrics before reporting back with 100% certainty.

Also experiencing this issue. @mfn’s suggestion above to use gc_collect_cycles() works well!
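
For anyone wanting to try that, here is a minimal sketch of one place to hook in gc_collect_cycles(), assuming the Queue::looping() callback is available in your Laravel version (calling it at the end of each job’s handle() method is an alternative):

<?php

namespace App\Providers;

use Illuminate\Support\Facades\Queue;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot()
    {
        // Force collection of cyclic references each time the worker loops,
        // before it tries to pop the next job off the queue.
        Queue::looping(function () {
            gc_collect_cycles();
        });
    }

    public function register()
    {
        //
    }
}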

I have another workaround that is helpful for anyone using supervisord with long-running workers: the superlance plugin for supervisord can monitor memory usage for each program it runs via memmon. superlance is installed the same way supervisord is (through pip).

In supervisord.conf, add an event listener like so to automatically restart any process consuming more than 100MB of memory:

[eventlistener:memmonListener]
command=memmon -a 100MB
events=TICK_60

@GrahamCampbell But isn’t 164MB well past the point where a restart is due, even if there is a difference in what PHP reports? I still don’t get what you are trying to tell me.

I understand that the way the worker measures its memory usage might differ from what I see in the console with top, but what is the solution? I’m running out of RAM. Why is there such a big discrepancy? I got a couple of alarms from AWS today that my server is running low on memory.
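
On the discrepancy itself: PHP reports its own allocations, which can be much lower than the resident set size top shows, because memory PHP has freed internally (or that has fragmented) is not necessarily returned to the OS. A quick sketch of the two PHP-side numbers, for comparison with what top reports (the worker compares one of these against its --memory limit, depending on the Laravel version):

<?php

// Bytes currently allocated by PHP's own memory manager (emalloc).
$used = memory_get_usage();

// Bytes the memory manager has actually reserved from the system.
$real = memory_get_usage(true);

printf("used: %.1f MB, reserved: %.1f MB\n", $used / 1048576, $real / 1048576);

// Neither figure is the RSS that top/htop report for the process.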