framework: Worker timeout has no effect when making cURL requests with no cURL timeout
- Laravel Version: 5.4.x
- PHP Version: 7.1
- Database Driver & Version: MySQL 5.7
Description:
When a Job makes API calls using cURL (Guzzle 6 in this case), if cURL has no timeout set (CURLOPT_TIMEOUT), the worker's timeout has no effect.
I do have the pcntl extension installed.
vagrant@homestead:/vagrant/dev/apptest$ php -i | grep pcntl
pcntl
pcntl support => enabled
Steps To Reproduce:
Have a job that makes API calls to an endpoint that takes longer to respond than the worker's timeout, and configure the client without a timeout. In this case I've commented out the timeout for the Guzzle client. When the timeout is used, everything works fine since cURL exits with an exception (cURL error 28) and the worker picks up the next job.
public function handle()
{
    $client = new \GuzzleHttp\Client([
        // 'timeout' => 3
    ]);

    $client->get('127.0.0.1/sleep');
}
and a route
$app->get('/sleep', function () {
    sleep(10);
});
and run the worker command with a timeout of 5
artisan queue:work --timeout=5
If the endpoint does not respond, the worker will not time out and will wait forever. If the endpoint responds after a while, once the worker timeout has already passed, then as soon as cURL closes the connection on the successful response the worker detects that the timeout has passed and kills the script. So it looks like cURL is somehow interfering with the SIGALRM set by the worker, but I have no clue as to how.
Video of the issue: http://screen.ac/1T2a2Z2J0I1Y
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 2
- Comments: 17 (15 by maintainers)
So on that note, basically there isn’t much that can be done here. I replicated the scenario a few days ago and had the same results. It would seem logical to also set a timeout on the cURL request if you want the worker to kill the job after a set amount of time anyway.
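A minimal sketch of that workaround, reusing the job from the reproduction (the values are arbitrary; the point is that the client's timeout sits at or below the worker's --timeout so cURL gives up first):

public function handle()
{
    // Give the HTTP client its own timeout so a hung endpoint cannot outlive
    // the worker: cURL aborts the transfer, Guzzle throws an exception
    // (cURL error 28), and the worker picks up the next job.
    $client = new \GuzzleHttp\Client([
        'timeout'         => 3, // total transfer time, in seconds
        'connect_timeout' => 3, // time allowed to establish the connection
    ]);

    $client->get('127.0.0.1/sleep');
}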
I wonder if this is worth noting somewhere, because I'm sure most people that use queues / background workers are probably forking other processes.
@arjasco When cURL does not have a timeout set, Guzzle will automatically set CURLOPT_NOSIGNAL: https://github.com/guzzle/guzzle/blob/9e28b54996d260184abc9c36c59e2740be2184a4/src/Handler/CurlFactory.php#L400. I've also thought that cURL might be setting its own alarm, which would cancel the previously set alarm. But that does not explain why, after cURL's alarm gets triggered, the worker's alarm is still triggered as well. It feels like the worker's alarm waits for the cURL alarm (or whatever cURL does) and only after that can it trigger its own alarm.
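For anyone who wants to experiment with the cURL side of this, Guzzle also accepts raw cURL options through its curl request option, so CURLOPT_NOSIGNAL and CURLOPT_TIMEOUT can be set explicitly and compared against the behavior described above. A rough sketch only; the endpoint and values mirror the reproduction and are not a recommended configuration:

$client = new \GuzzleHttp\Client();

$client->get('127.0.0.1/sleep', [
    'curl' => [
        // Forwarded to curl_setopt() on the underlying handle.
        CURLOPT_NOSIGNAL => false, // allow cURL to use its own signal-based timeouts
        CURLOPT_TIMEOUT  => 5,     // hard cap on the whole transfer, in seconds
    ],
]);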
From the RFC: Asynchronous Signal Handling:
This means that async signals will only execute on 1) loop iteration, 2) user function entry or 3) internal function exit. I'm unsure how sleep() works in this, but I presume it is interrupted by incoming signals (as sleeps often are) and the signal is then processed via the third type: internal function exit.
However, an external library that blocks on something wouldn't qualify for any of these categories. The signal is probably delivered to the process and is awaiting processing, but the handler isn't executed until the blocking code is done.
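A standalone way to see that distinction, assuming PHP 7.1 with the pcntl extension (this uses the same SIGALRM / pcntl_alarm mechanism the worker timeout relies on, but it is not Laravel's actual worker code):

<?php

// Run signal handlers as soon as PHP regains control (PHP 7.1+).
pcntl_async_signals(true);

pcntl_signal(SIGALRM, function () {
    echo "SIGALRM handled - the worker would kill the job here\n";
    exit(1);
});

// Ask the kernel to deliver SIGALRM in 2 seconds.
pcntl_alarm(2);

// sleep() is interrupted by the incoming signal, so the handler runs after
// roughly 2 seconds. A blocking curl_exec() (with CURLOPT_NOSIGNAL set) keeps
// running inside the C extension instead, so the pending SIGALRM is only
// handled once the transfer finishes - which matches the behavior in the video.
sleep(10);

echo "never reached\n";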
But shouldn't the worker supersede anything/everything? i.e. if the worker process stalls, for any reason, we should be in control and able to kill it.
Relying on other code to behave seems to reduce the resilience of the worker timeout protections?
Please see https://github.com/laravel/framework/issues/19584
It's working as expected. In GuzzleHttp\Client, a timeout of 0 means "wait indefinitely", and that is the default behavior.