horizon: After performing a horizon:terminate, new config is not picked up

Hi,

I was under the impression that running horizon:terminate would cause changes to my .env file to be picked up. Apparently it doesn’t, and I had to manually trigger queue:restart.

Is queue:restart expected to be executed separately? Or is this a bug / missing feature in Horizon?

I noticed because I changed my database connection credentials and the queue worker didn’t pick them up, even though I had rebuilt the config cache. My app was working, but the jobs were failing.

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 5
  • Comments: 25 (10 by maintainers)

Most upvoted comments

Can confirm that executing

php artisan horizon:purge
php artisan horizon:terminate

alone do not make Horizon use updated configuration files. During a deploy to production, we’re automatically restarting Horizon using

sudo supervisorctl stop all
php artisan horizon:purge
php artisan horizon:terminate
sudo supervisorctl start all

But locally or without restarting the Supervisor program, a call to php artisan queue:restart is required.

IMO the documentation about queue commands is getting a bit confusing, especially since the release of Horizon. Perhaps we can optimize that, or even add a single command to prevent this confusion? We’d first need some clarification on the advised commands to execute, though. Should we stop the Supervisor program, purge all jobs, then terminate Horizon … or just purge, terminate, restart … or …? The goal would be to keep accepting new jobs, but halt processing during deployment and resume (with the new codebase and/or database) afterwards. Some additional advice about quickly restarting the queue in a local environment would be welcome too.

Hi.

After lots of testing, this seems to be rock solid in production; been running like this for almost a year:

// ... artisan down, composer install, migrate etc
php artisan config:cache
php artisan route:cache
php artisan horizon:purge
php artisan horizon:terminate
php artisan queue:restart
php artisan up

Quick follow-up: still an issue for me. I can pause, terminate, restart, etc., but the Supervisor job / Horizon won’t pick up the new release. The process stays alive and keeps using the old directory.

Anything I’m missing here? php artisan horizon:terminate should terminate the process and Supervisor should start a new one, right? If so, Supervisor would boot it again and it would navigate into the new current directory, which it doesn’t seem to do. The only thing that works is restarting the process via Supervisor itself using sudo supervisorctl restart project.

@georgeboot Can you confirm your setup works? In hindsight, regardless of whether Supervisor cd’s into the current dir first or the command itself does so, if the Supervisor process stays alive throughout releases, it will always use the release the current symlink pointed to when it started. So if you start the job with release 5 active and symlinked, no matter how many more you deploy, the active process will keep using that release, since the process is only started in that directory once.

For example, what doesn’t work:

$ php artisan queue:restart
Broadcasting queue restart signal.
$ php artisan horizon:terminate
Sending TERM Signal To Process: 1281
$ php artisan horizon:pause
Sending USR2 Signal To Process: 1281
$ php -r "posix_kill(1281, SIGTERM);"

Yet the process keeps running, and in the case of pause, it stays active on the dashboard. The last command is what is executed in the Laravel job (php artisan horizon:terminate). If it doesn’t work via the CLI, it surely won’t work in code. If anyone can confirm whether that last command does or does not work, we can rule out whether this is a bug or just something in my environment.
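For reference, a minimal sketch of how a job could invoke that same command from application code (this is an assumption about the setup, not the poster’s actual job):

use Illuminate\Support\Facades\Artisan;

// Hypothetical deployment step: ask the running Horizon master process to
// terminate so the process manager can start a fresh one.
Artisan::call('horizon:terminate');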

Any of the following commands do work:

$ sudo supervisorctl restart horizon
$ kill -SIGTERM 1281

Edit: it was a simple user/permissions issue 😶 If your Supervisor process runs as user A, but you terminate Horizon as user B, it won’t do anything and won’t even throw an error or warning. Perhaps we can implement http://php.net/manual/en/function.posix-get-last-error.php in the different Horizon jobs that call posix_kill()? Would’ve saved me a few days 😅
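For what it’s worth, a minimal sketch of what such a check could look like (this is a hypothetical helper, not Horizon’s actual code; looking up the master PID is assumed to happen elsewhere):

<?php

// Send SIGTERM and surface the reason instead of failing silently, e.g. when
// the signal is rejected with EPERM because the Horizon master process runs
// as a different user. The SIGTERM constant comes from the pcntl extension,
// which Horizon requires anyway.
function terminateMaster(int $masterPid): void
{
    if (! posix_kill($masterPid, SIGTERM)) {
        $errno = posix_get_last_error();

        throw new RuntimeException(
            "Failed to signal process {$masterPid}: " . posix_strerror($errno)
        );
    }
}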

We had the same issue and came across https://github.com/Supervisor/supervisor/issues/152.

Basically, we updated our Forge daemon to use /home/forge/site.com as the path and changed the command to php current/artisan horizon.
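For anyone setting this up by hand, a sketch of what such a Supervisor program definition could look like (the program name, paths, user, and timeouts are placeholders, not taken from this thread):

[program:horizon]
; Start from the release root and reach the code through the current symlink,
; so each restart re-resolves the symlink to the newest release instead of
; staying pinned to the directory Supervisor originally resolved.
directory=/home/forge/site.com
command=php current/artisan horizon
user=forge
autostart=true
autorestart=true
stopwaitsecs=3600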

Tested with Laravel 5.7.11 and Horizon 1.4.9, I cannot reproduce this

unless

I cache the config. Then environment variable changes don’t get picked up.

But without caching the config, every time I issue a horizon:terminate it terminates, Supervisor starts it again, and the new env and config are picked up. I see that jobs which were already in the queue get executed with the new values.

I can second this issue. I now restart Supervisor on each deploy through Envoyer.

cd {{release}}
php artisan horizon:terminate
supervisorctl restart all

This then moves all of Horizon over to the correct new code. Not sure if this is a bug or an issue then, since this seems to resolve the problem others are having.

Just for sanity’s sake, I run php artisan horizon:purge once a day to make sure I’m not leaving orphaned processes hanging around and using up CPU.
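If you’d rather not add a separate cron entry for that, a minimal sketch using Laravel’s scheduler in app/Console/Kernel.php (assuming the default Kernel with Illuminate\Console\Scheduling\Schedule already imported):

protected function schedule(Schedule $schedule)
{
    // Clean up orphaned Horizon worker processes once a day.
    $schedule->command('horizon:purge')->daily();
}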

I’m going to close this, as this obviously is a configuration/deployment setup issue and not an issue with Horizon itself. This can best be explained in a blog post or a tutorial somewhere. If you feel that anything is missing from the docs for deploying, feel free to send in a PR to https://laravel.com/docs/5.7/horizon#running-horizon