horizon: Wrong time displayed, errors in the log for a completed job, and memory restrictions ignored for heavy tasks
- Horizon Version: 3.4.3
- Laravel Version: 6.6.0
- PHP Version: 7.2.24
- Redis Driver & Version: phpredis 5.1.1
- Database Driver & Version: mysql
Description:
When running Horizon, the following problems occur:
Steps To Reproduce:
- Create a job that needs more than 60 seconds to run and uses a lot of memory (I generated 3,000,000 unique strings using $faker->unique()->regexify() for the sake of testing). I'll attach the job file; a rough sketch is also included after these steps.
- (Re)start horizon
- Queue the job
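For reference, a job along these lines reproduces the conditions above. This is a minimal sketch, not the reporter's attached file; the class name, regex pattern and timeout are illustrative, and it assumes the Faker package is installed.

    <?php

    namespace App\Jobs;

    use Faker\Factory;
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class GenerateUniqueStrings implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        // Well above the 60-second default so the job itself does not time out.
        public $timeout = 300;

        public function handle()
        {
            $faker = Factory::create();
            $strings = [];

            // Generating millions of unique values keeps the worker busy for
            // well over a minute and pushes resident memory far past 64 MB.
            for ($i = 0; $i < 3000000; $i++) {
                $strings[] = $faker->unique()->regexify('[A-Za-z0-9]{20}');
            }
        }
    }

Once Horizon has been (re)started, dispatching it with GenerateUniqueStrings::dispatch() is enough to trigger the behaviour described below.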
What happens:
- Despite the memory limit being specified as 64 MB in config/horizon.php, the worker processes are started with the option --memory=128, which is also ignored; the processes are allowed to consume nearly a whole gigabyte of resident memory without failing.
- Despite the job completing without issue, Laravel logs an exception (local.ERROR: … attempted too many times or run too long. The job may have previously timed out. {"exception":"[object] (Illuminate\Queue\MaxAttemptsExceededException(code: 0): App\Jobs\SendOrderEmail … at /work/jsong/vendor/laravel/framework/src/Illuminate/Queue/Worker.php:612 …).
- A job that took 1 minute and 36 seconds to complete is displayed in the Horizon recent jobs list as having taken 3.63 seconds. Another that took 1m32s displays as 0.68s. Another that also took 1m32s displays as 0.18s.
Looking at the timing of the exceptions and when the job was completed, it seems that Horizon is logging the difference between when the exception was thrown and when the job was completed. I thought this might be more of a Laravel issue than a Horizon issue, but when I work the queue manually with php artisan queue:work or php artisan queue:work --daemon, the exception is not thrown and the Horizon interface reports the time correctly.
About this issue
- State: closed
- Created 5 years ago
- Comments: 30 (15 by maintainers)
Some people may want a more graceful memory limit option than PHP's memory_limit. For example, enforcing the limit through PHP's memory_limit will kill your job in the middle of processing, possibly leaving your system in an inconsistent state. The current memory limit functionality is graceful: it checks memory consumption after each job finishes processing.
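For context, the "graceful" behaviour described here is a check that runs between jobs rather than a hard interpreter limit. A rough paraphrase of that check (not the framework's exact code), assuming the limit is expressed in megabytes:

    // Paraphrase of the between-jobs check: compare the worker's own resident
    // memory usage with the configured limit and, if it has been crossed,
    // let the worker stop so the process manager can restart it.
    function memoryExceeded($memoryLimitInMb)
    {
        return (memory_get_usage(true) / 1024 / 1024) >= $memoryLimitInMb;
    }

Because the comparison only happens once a job has returned, a single job is free to grow far beyond the limit while it runs, which is consistent with the ~1 GB of resident memory observed above.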
I can see the use case for both… note that you can already set a memory_limit in your CLI php.ini file. If we want to make the PHP memory limit configurable from Horizon, I think it would need to be a configuration option in addition to the current memory limit option.
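For reference, the php.ini route mentioned above applies to every CLI PHP process, including the worker processes Horizon spawns, so it already gives a hard cap today without any Horizon changes; for example:

    ; in the CLI php.ini (applies to php-cli and therefore to the workers)
    memory_limit = 1G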
I personally do not want my jobs to be terminated in the middle of processing for going only 1 MB over the memory limit, for example. I would rather the worker restart after the job has finished.
Any activity on this lately? We switched from regular queue workers to Horizon recently, and are experiencing all the same issues reported in this thread. Additionally, the supervisor process (from Horizon, not the server's supervisord) is taking far more memory than I would expect. Here's a screenshot showing 2 Horizon instances (one with 1 worker, one with 2 workers) using almost 2 GB of memory, even though we have the Horizon config at the default of 64 MB.
Unfortunately this is impacting our server performance greatly, and we’re going to have to abandon horizon and go back to regular queue workers, where we did not have this issue.
It would also be nice if, at worker start-up, Horizon wrote to the log or console that its configured memory limit is higher than the interpreter's memory_limit, or even refused to start the worker in that case, to avoid false expectations.
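A hedged sketch of what such a start-up check could look like; the function name and the warning channel are hypothetical and not part of Horizon:

    // Hypothetical start-up check: warn when the configured worker memory
    // limit can never be reached because PHP's memory_limit is lower.
    function warnIfMemoryLimitsConflict(int $workerLimitInMb): void
    {
        $ini = ini_get('memory_limit'); // e.g. "128M", "1G" or "-1" (unlimited)

        if ($ini === false || $ini === '-1') {
            return; // no interpreter limit to conflict with
        }

        // Normalise the ini value to megabytes.
        $value = (float) $ini;
        switch (strtoupper(substr($ini, -1))) {
            case 'G': $iniInMb = $value * 1024; break;
            case 'M': $iniInMb = $value; break;
            case 'K': $iniInMb = $value / 1024; break;
            default:  $iniInMb = $value / 1024 / 1024; break; // plain bytes
        }

        if ($workerLimitInMb > $iniInMb) {
            error_log(sprintf(
                'Worker memory limit (%d MB) exceeds PHP memory_limit (%s); the interpreter limit will win.',
                $workerLimitInMb,
                $ini
            ));
        }
    }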
It's a setting you define in your php.ini on your server. Let's not get off track here by discussing how things are named.
Yeah agreed. I’ll run this over with the rest of the team.
I'm looking into it! Currently it looks more like an issue in Laravel's own queue worker, as the --memory flag is being ignored there as well.

So would having both be an option?
Maybe our current use case can shine some light on why we're looking to do this:
We've got 2 supervisors: one for a high-load queue and one for a queue of small jobs.
Currently the high-load queue requires us to raise PHP's memory_limit to 1 GB. This also means our small-queue jobs can now use up to 1 GB of memory each, worst case resulting in 12 GB of memory usage due to concurrent jobs.
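If it helps, the setup described above maps onto something like the following in config/horizon.php. This is an illustrative sketch assuming the per-supervisor memory option is used; supervisor names, queues and numbers are made up, and (as discussed earlier) memory here only drives the graceful between-jobs check, not a hard interpreter limit:

    // config/horizon.php (illustrative values only)
    'environments' => [
        'production' => [
            'supervisor-heavy' => [
                'connection' => 'redis',
                'queue'      => ['heavy'],
                'processes'  => 2,
                'memory'     => 1024, // heavy jobs legitimately need ~1 GB
            ],
            'supervisor-small' => [
                'connection' => 'redis',
                'queue'      => ['default'],
                'processes'  => 12,
                'memory'     => 128,  // these workers should stay small
            ],
        ],
    ],

With PHP's memory_limit raised to 1 GB globally, each of the 12 small workers can still balloon to 1 GB before the graceful check restarts it, which is presumably the 12 GB worst case mentioned above; a separate php_memory_limit per supervisor would close that gap.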
So for us the combination of the existing memory config and an additional php_memory_limit option that sets the worker's memory_limit would be ideal.

@driesvints Isn't it also just possible to assume, when memory is passed, that this is the memory limit we want the process to run with? Couldn't we just pass -d memory_limit=${$options->memory}M straight through?

I personally don't think it's a good idea to overwrite server settings from within Horizon (or the queue workers for that matter).
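For clarity, the pass-through being asked about (and declined) above would amount to prefixing the spawned worker command with an ini override. A hypothetical sketch, not Horizon's actual command builder, using queue:work purely for illustration and assuming $options->memory is an integer number of megabytes:

    // Hypothetical: reuse the configured memory value both as the worker's
    // graceful limit and as the interpreter's hard limit.
    $command = sprintf(
        '%s -d memory_limit=%dM artisan queue:work redis --memory=%d',
        PHP_BINARY,
        $options->memory,
        $options->memory
    );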