httpx: PoolTimeout when num tasks in asyncio.gather() exceeds client max_connections
Checklist
- Reproducible on 0.13.3
- This issue seems similar, but it's closed and was supposedly fixed
Describe the bug
If the number of tasks executed via `asyncio.gather(...)` is greater than `max_connections`, I get a `PoolTimeout`. It seems like this is happening because tasks that have completed aren't releasing their connections back to the pool.
I’m new to asyncio so it’s possible I’m doing something wrong, but haven’t been able to find any documentation or issues that cover this case definitively.
To reproduce
```python
import asyncio

import httpx


async def main() -> None:
    url = "https://www.example.com"
    max_connections = 2
    timeout = httpx.Timeout(5.0, pool=2.0)
    limits = httpx.Limits(max_connections=max_connections)
    client = httpx.AsyncClient(timeout=timeout, pool_limits=limits)
    async with client:
        tasks = []
        for _ in range(max_connections + 1):
            tasks.append(client.get(url))
        await asyncio.gather(*tasks)


if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(main())
    finally:
        loop.close()
```
Expected behavior
I would expect all tasks to complete, rather than getting a PoolTimeout on the nth task, where n = max_connections + 1.
Actual behavior
Getting a PoolTimeout on the nth task, where n = max_connections + 1.
Debugging material
```
Traceback (most recent call last):
  File "test_async.py", line 21, in <module>
    loop.run_until_complete(main())
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
    return future.result()
  File "test_async.py", line 16, in main
    await asyncio.gather(*tasks)
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1416, in get
    timeout=timeout,
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1242, in request
    request, auth=auth, allow_redirects=allow_redirects, timeout=timeout,
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1273, in send
    request, auth=auth, timeout=timeout, allow_redirects=allow_redirects,
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1302, in _send_handling_redirects
    request, auth=auth, timeout=timeout, history=history
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1338, in _send_handling_auth
    response = await self._send_single_request(request, timeout)
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1374, in _send_single_request
    timeout=timeout.as_dict(),
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_exceptions.py", line 359, in map_exceptions
    raise mapped_exc(message, **kwargs) from None  # type: ignore
httpx._exceptions.PoolTimeout
```
Environment
- OS: macOS 10.14.6
- Python version: 3.6.9
- HTTPX version: 0.13.3
- Async environment: asyncio
- HTTP proxy: no
- Custom certificates: no
Additional context
I commented on that issue, but since it's closed I figured it would be better to create a new one.
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 11
- Comments: 22 (9 by maintainers)
Commits related to this issue
- Remove workaround from scraper Fixed in httpx 0.21, see https://github.com/encode/httpx/issues/1171#issuecomment-995799510 — committed to michael-k/awacs by michael-k 3 years ago
- Remove workaround from scraper Fixed in httpx 0.21, see https://github.com/encode/httpx/issues/1171#issuecomment-995799510 — committed to cloudtools/awacs by michael-k 3 years ago
Have confirmed that the given example now works in httpx 0.21. (Fixed due to the substantial reworking in the latest httpcore.)

I have verified that https://github.com/encode/httpcore/pull/880 resolves this issue. Using the server example at https://github.com/encode/httpx/issues/1171#issuecomment-1850923234 and the client example at https://github.com/encode/httpx/issues/1171#issuecomment-1864841152.
Until this is resolved, is there any reasonable way to work around this? Maybe we use our own `asyncio.Semaphore`, like this:

I'm planning on getting stuck into this one pretty soon, yup. It's a bit of an involved one, but I know what we need to do to resolve it.
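The Semaphore snippet referenced in that comment wasn't captured in this extract. A minimal sketch of the idea, with `asyncio.sleep()` standing in for the real `await client.get(url)` call, might look like:

```python
import asyncio

# Workaround sketch: cap the number of in-flight requests with a Semaphore
# so no task ever has to wait on the connection pool itself. The sleep is
# a placeholder for a real httpx.AsyncClient request.
MAX_CONNECTIONS = 2

in_flight = 0
peak = 0  # highest number of concurrently running "requests" observed

async def fetch(sem: asyncio.Semaphore, i: int) -> int:
    global in_flight, peak
    async with sem:  # at most MAX_CONNECTIONS tasks hold a slot at once
        in_flight += 1
        peak = max(peak, in_flight)
        await asyncio.sleep(0.01)  # placeholder for: await client.get(url)
        in_flight -= 1
        return i

async def main() -> list:
    sem = asyncio.Semaphore(MAX_CONNECTIONS)
    return await asyncio.gather(*(fetch(sem, i) for i in range(5)))

results = asyncio.run(main())
```

With this structure, all five tasks still complete in submission order, but only `MAX_CONNECTIONS` of them are ever inside the semaphore at once, so the pool is never oversubscribed.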
I’ve a reproducer:
Run this HTTP server script (a simple HTTP server that takes a long time to respond):
Then run this client code:
Looks like the slots in the connection pool are not released during cancellation.
This happens for me on both httpx 0.25.0 + httpcore 0.18.0 and httpx 0.25.2 + httpcore 1.0.2. Shielding the `get()` call or `stream()` call from cancellation is a workaround that works for us.

This was the fix:
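The shielding workaround itself isn't shown above. A hedged sketch of the general pattern with `asyncio.shield()`, again using a placeholder coroutine in place of the real `client.get()` call, might look like:

```python
import asyncio

# Workaround sketch: shield the request from outer cancellation so it runs
# to completion and its pool slot is released normally, instead of being
# abandoned mid-flight. request() is a stand-in for `await client.get(url)`.
done = False

async def request() -> str:
    global done
    await asyncio.sleep(0.05)  # placeholder for: await client.get(url)
    done = True
    return "response"

async def caller() -> None:
    task = asyncio.ensure_future(request())
    try:
        # The outer timeout fires first, but shield() keeps the inner
        # request task alive rather than cancelling it.
        await asyncio.wait_for(asyncio.shield(task), timeout=0.01)
    except asyncio.TimeoutError:
        # The request still finishes; awaiting it here lets the connection
        # be returned to the pool.
        await task

asyncio.run(caller())
```

The key point is that cancelling `shield(task)` cancels only the outer wrapper; the inner task keeps running and completes.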
Faced this error in 0.25.1. Falling back to 0.25.0 fixed the problem. @tomchristie
Is this still a problem in 0.19 or 1.0.0? I tried running @tomchristie's code sample but couldn't replicate the behavior on 0.18.x or 0.19.
We’ve held off updating beyond 0.17.1 due to this, but would really like to get back onto the latest.
Hello everyone,
I just want to make sure that this is what I'm looking for. The server I want to send requests to has a limited number of allowed connections. Currently I limit the number of async tasks using a `Semaphore`, but the `pool_limits` parameter for `AsyncClient` looks like it's intended for my use case. Am I right here? If so, any idea when this issue will be resolved? Thanks a lot!
fin swimmer
@pssolanki111 Sure is, yup… https://www.python-httpx.org/advanced/#pool-limit-configuration
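For reference, a sketch of that pool-limit configuration on a recent httpx release (the `limits=` keyword and `httpx.Limits` class replaced the older `pool_limits=` / `PoolLimits` names used in the 0.13.x report above; the specific numbers here are illustrative):

```python
import httpx

# Configure pool limits on the client rather than an external Semaphore.
limits = httpx.Limits(max_connections=10, max_keepalive_connections=5)
# pool=None means "wait indefinitely for a free connection slot" instead
# of raising PoolTimeout.
timeout = httpx.Timeout(5.0, pool=None)
client = httpx.AsyncClient(limits=limits, timeout=timeout)
```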