httpx: PoolTimeout when num tasks in asyncio.gather() exceeds client max_connections

Checklist

  • Reproducible on 0.13.3
  • This issue seems similar, but it’s closed and was supposedly fixed

Describe the bug

If the number of tasks executed via asyncio.gather(...) exceeds max_connections, I get a PoolTimeout. It seems like this may be happening because tasks that have finished aren’t releasing their connections upon completion.

I’m new to asyncio so it’s possible I’m doing something wrong, but haven’t been able to find any documentation or issues that cover this case definitively.

To reproduce

import asyncio
import httpx

async def main() -> None:
    url = "https://www.example.com"
    max_connections = 2
    timeout = httpx.Timeout(5.0, pool=2.0)
    limits = httpx.Limits(max_connections=max_connections)
    client = httpx.AsyncClient(timeout=timeout, pool_limits=limits)

    async with client:
        tasks = []
        for _ in range(max_connections + 1):
            tasks.append(client.get(url))
        await asyncio.gather(*tasks)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(main())
    finally:
        loop.close()

Expected behavior

I would expect all tasks to complete, rather than getting a PoolTimeout on the nth task, where n = max_connections + 1.

Actual behavior

Getting a PoolTimeout on the nth task, where n = max_connections + 1.

Debugging material

Traceback (most recent call last):
  File "test_async.py", line 21, in <module>
    loop.run_until_complete(main())
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
    return future.result()
  File "test_async.py", line 16, in main
    await asyncio.gather(*tasks)
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1416, in get
    timeout=timeout,
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1242, in request
    request, auth=auth, allow_redirects=allow_redirects, timeout=timeout,
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1273, in send
    request, auth=auth, timeout=timeout, allow_redirects=allow_redirects,
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1302, in _send_handling_redirects
    request, auth=auth, timeout=timeout, history=history
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1338, in _send_handling_auth
    response = await self._send_single_request(request, timeout)
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_client.py", line 1374, in _send_single_request
    timeout=timeout.as_dict(),
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "/Users/redacted/.pyenv/versions/3.6.9/lib/python3.6/site-packages/httpx/_exceptions.py", line 359, in map_exceptions
    raise mapped_exc(message, **kwargs) from None  # type: ignore
httpx._exceptions.PoolTimeout

Environment

  • OS: macOS 10.14.6
  • Python version: 3.6.9
  • HTTPX version: 0.13.3
  • Async environment: asyncio
  • HTTP proxy: no
  • Custom certificates: no

Additional context

I commented on this issue, but it’s closed so figured it would be better to create a new one.

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 11
  • Comments: 22 (9 by maintainers)

Most upvoted comments

Have confirmed that the given example now works in httpx 0.21 (fixed by the substantial reworking in the latest httpcore).

Until this is resolved, is there any reasonable way to work around this? Maybe we can use our own asyncio.Semaphore, like this:

async def main() -> None:
    url = "https://www.example.com"
    max_connections = 2
    timeout = httpx.Timeout(5.0, pool=2.0)
    limits = httpx.Limits(max_connections=max_connections)
    client = httpx.AsyncClient(timeout=timeout, pool_limits=limits)
    semaphore = asyncio.Semaphore(max_connections)
    
    async def aw_task(aw):
        async with semaphore:
            return await aw

    async with client:
        tasks = []
        for _ in range(max_connections + 1):
            tasks.append(aw_task(client.get(url)))
        await asyncio.gather(*tasks)

I’m planning on getting stuck into this one pretty soon, yup. It’s a bit of an involved one, but I know what we need to do to resolve it.

I’ve a reproducer:

Run this HTTP server script (a simple HTTP server that takes a long time to respond):

  import asyncio
  from hypercorn.asyncio import serve
  from hypercorn.config import Config
  from starlette.applications import Starlette
  from starlette.responses import JSONResponse
  from starlette.routing import Route

  async def homepage(request):
      await asyncio.sleep(10)
      return JSONResponse({})

  app = Starlette(
      routes=[
          Route("/", homepage),
      ],
  )

  config = Config.from_mapping({})
  config.bind = ["127.0.0.1:8001"]
  asyncio.run(serve(app, config))

Then run this client code:

  from anyio import create_task_group
  import asyncio
  import httpx

  async def main() -> None:
      async with httpx.AsyncClient(
          limits=httpx.Limits(max_connections=2),
          verify=False,
      ) as client:

          async def do_one_request() -> None:
              await client.get("http://localhost:8001/")

          # First, create many requests, then cancel while they are in progress.
          async with create_task_group() as tg:
              for i in range(5):
                  tg.start_soon(do_one_request)
              await asyncio.sleep(0.5)
              tg.cancel_scope.cancel()

          # Starting another request will now fail with a `PoolTimeout`.
          await do_one_request()

  asyncio.run(main())

Looks like the slots in the connection pool are not released during cancellation.

This happens for me on both httpx 0.25.0 + httpcore 0.18.0 as well as on httpx 0.25.2 + httpcore 1.0.2. Shielding the get() call or stream() call from cancellation is a workaround that works for us.

This was the fix for us (downgrading):

- httpcore==1.0.1
- httpx==0.25.1
+ httpcore==0.18.0
+ httpx==0.25.0

Faced this error in 0.25.1. Falling back to 0.25.0 fixed the problem. @tomchristie

Is this still a problem in 0.19 or 1.0.0? I tried running @tomchristie’s code sample but couldn’t replicate the behavior on 0.18.x or 0.19.

We’ve held off updating beyond 0.17.1 due to this, but would really like to get back onto the latest.

Hello everyone,

just want to make sure that this is what I’m looking for. The server I want to send requests to has a limited number of allowed connections. Currently I limit the number of async tasks by using a Semaphore, but the pool_limits parameter for AsyncClient looks like it is intended for exactly my use case. Am I right here? If so, any idea when this issue will be resolved?

Thanks a lot!

fin swimmer