redis-py: "No connection available" errors since 5.0.1

Version: 5.0.1

Platform: Python 3.11.5 (main, Sep 4 2023, 15:30:52) [GCC 10.2.1 20210110]

Description:

Since 5.0.1, we have noticed a significant increase in "No connection available." errors from redis. We are using cashews (6.3.0) + redis-py for request caching in our FastAPI application. We do not experience this with 5.0.0.
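
For context, a minimal sketch of such a setup (the URL, TTL and function are placeholders, assuming the cashews decorator API):

from cashews import cache

cache.setup("redis://localhost:6379/0")   # cashews talks to redis through redis-py

@cache(ttl="5m")
async def get_user(user_id: int) -> dict:
    ...   # any slow lookup; results are cached in redis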

Stack trace:

CancelledError: null
  File "redis/asyncio/connection.py", line 1170, in get_connection
    async with self._condition:
  File "asyncio/locks.py", line 15, in __aenter__
    await self.acquire()
  File "asyncio/locks.py", line 114, in acquire
    await fut
TimeoutError: null
  File "redis/asyncio/connection.py", line 1169, in get_connection
    async with async_timeout(self.timeout):
  File "asyncio/timeouts.py", line 111, in __aexit__
    raise TimeoutError from exc_val
ConnectionError: No connection available.
  File "cashews/backends/redis/client.py", line 26, in execute_command
    return await super().execute_command(command, *args, **kwargs)
  File "redis/asyncio/client.py", line 601, in execute_command
    conn = self.connection or await pool.get_connection(command_name, **options)
  File "redis/asyncio/connection.py", line 1174, in get_connection
    raise ConnectionError("No connection available.") from err

About this issue

  • State: open
  • Created 9 months ago
  • Reactions: 8
  • Comments: 30 (14 by maintainers)

Most upvoted comments

Redis can work in “single_connection_client” mode, but normally it doesn’t, and in Sentinel mode it does not; __aenter__() then does nothing. Don’t worry about single connection mode.

(Actually, a single connection client is created when you call redis.client(), … I’ll get to that later.)

In normal mode, this is what happens:

  1. redis = Redis() is created. An internal ConnectionPool is created.
  2. You do await redis.execute_command() indirectly, e.g. via await redis.get(). This is what happens: a. a connection is taken from the pool (and maybe created), b. the command is executed, c. the connection is put back into the pool.
  3. Repeat step 2 above, for many Tasks, in parallel, for a long time.
  4. Call await redis.aclose(). This closes all connections in the pool.
redis = sentinel.master_for(service_name)
async with redis:
    for request in requests:
        await redis.get(key)   # a connection is taken from the pool and returned automatically
# now the whole pool is closed.

The purpose of async with is chiefly to remember to call redis.aclose(), which in turn closes redis.connection_pool. You only call redis.aclose() if you do not want to use this redis instance again.
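
A minimal sketch of that lifecycle, assuming a locally running server (URL and key are placeholders):

import asyncio
from redis.asyncio import Redis

async def main():
    redis = Redis.from_url("redis://localhost:6379/0")  # the pool is created internally
    try:
        # many commands, from many tasks, all borrowing connections from the same pool
        await redis.set("key", "value")
        print(await redis.get("key"))
    finally:
        # only at shutdown: closes every connection in the pool
        await redis.aclose()

asyncio.run(main())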

Redis.client()

But it is also possible to use the client() method, to get a private client connection if you want to do many things with it on the same Task:

  1. redis = Redis() is created. An internal ConnectionPool is created.
  2. cli = redis.client(). This creates a single connection client, borrowing the same pool as redis.
  3. async with cli will get the connection, allowing you to use it, and return it to the pool afterwards.
redis = sentinel.master_for(service_name)
async with redis:
    for request in requests:
        async with redis.client() as cli:   # get a private single-connection client. makes sense if you want to do many operations.
            await cli.get(key)
        # now the connection goes back to the pool
# now the whole pool is closed.

Your case:

Now, as I understand it:

  1. You have a global Sentinel.
  2. You want a global master_for() that can re-use the connections for different "requests".

The answer is simple: do not call redis.aclose(). Keep it open. Do not use async with on the master Redis() object.

You can use redis.client() as in your code. That will create a new single_connection_client, using the same pool as the parent.

solution 1. No client()

You just use the same master Redis() object for each connection.

import functools
import fastapi
from redis.asyncio import Redis

app = fastapi.FastAPI()

# cache the returned redis object.  only create _one_ for each db
@functools.cache
def redis_master(db: int = REDIS_DB_0) -> Redis:
    global redis_sentinel
    # create a new Redis object, connected to this service, as master.
    # it has its own, _private_, ConnectionPool, which is bound to the unique
    # address of this particular server.
    return redis_sentinel.master_for(service_name=REDIS_SENTINEL_SERVICE_NAME, db=db)

# a fastAPI dependency
async def redis_master_0() -> Redis:
    # just a shortcut to get the master for db 0
    return redis_master(0)

# fastapi endpoint handler
@app.get("/something")  # or whatever
async def do_something(redis: Redis = fastapi.Depends(redis_master_0)):
    # each request is sharing the same master object
    value = await redis.get("key")

solution 2: with redis.client()

Here you create a private Redis object for each request.

import functools
import fastapi
from redis.asyncio import Redis

app = fastapi.FastAPI()

# cache the returned redis object.  only create _one_ for each db
@functools.cache
def redis_master(db: int = REDIS_DB_0) -> Redis:
    global redis_sentinel
    # create a new Redis object, connected to this service, as master.
    # it has its own, _private_, ConnectionPool, which is bound to the unique
    # address of this particular server.
    return redis_sentinel.master_for(service_name=REDIS_SENTINEL_SERVICE_NAME, db=db)

# a fastAPI dependency
async def redis_master_0():
    # get a single connection client to use for a request. It shares the pool with
    # the parent; closing it will not close the pool.
    async with redis_master(0).client() as single_connection_client:
        yield single_connection_client
    # here, the single_connection_client is closed at the end of the request.
    # closing it just means returning its connection to the pool.

# fastapi endpoint handler
@app.get("/something")  # or whatever
async def do_something(redis: Redis = fastapi.Depends(redis_master_0)):
    # each request gets a new private Redis, and must close it.
    # the fastapi dependency teardown will take care of that.
    value = await redis.get("key")

Thank you for your solution. The second solution feels more understandable to me, and it does reduce the amount of object creation. I still think the connection-recycling step is necessary, because I cannot guarantee that my program will never raise errors while running.

Thank you very much for your support
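
On that error-handling concern: the async with block in solution 2 returns the connection even when the handler raises, because __aexit__() still runs while the exception propagates. A minimal sketch, assuming a locally running server:

import asyncio
from redis.asyncio import Redis

async def main():
    redis = Redis()
    try:
        async with redis.client() as cli:   # a connection is checked out of the pool
            await cli.set("key", "value")
            raise RuntimeError("handler failed")
    except RuntimeError:
        pass  # the connection was already returned to the pool by __aexit__()
    await redis.aclose()

asyncio.run(main())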

Every time you call slave_for(), a new Redis() instance is created, with a new ConnectionPool… I see in your code that you do this globally, so these objects will not get deleted; they will live forever. The change in 5.0.1 was to make sure that when you close the client, you also close this automatically created connection pool; otherwise, there would be no way to close it.

Your problem is a different one. You don’t want the async with. You are keeping the Redis objects that you got from master_for() and slave_for() in a global dict, and not deleting or closing them. The async with is only about automatically calling Redis.aclose() for you, which closes the connection pool. Your solution is to not close them, and not use the async with.

Instead, just do the following:

async def redis_master(db: int = REDIS_DB_0) -> Redis:
    global redis_db_master_dict
    # look up the long-lived client created at startup; never close it per request
    return redis_db_master_dict[db]
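
For completeness, a sketch of how such a dict might be populated once at startup (the db numbers are placeholders):

redis_db_master_dict = {
    db: redis_sentinel.master_for(service_name=REDIS_SENTINEL_SERVICE_NAME, db=db)
    for db in (0, 1)   # whichever logical dbs the application uses
}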

the async with pattern is for ensuring that you close all connections after you are done with them:

async with sentinel.slave_for(...) as client:
    await client.do_something()
    await client.do_something_different()
# now, 'client' is closed, and all its connections too.
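
Putting the two points together, a minimal sketch of the anti-pattern versus the fix (the service name is a placeholder):

# anti-pattern: a new Redis object, and a new ConnectionPool, per request
async def handle_request_bad(sentinel, key):
    # since 5.0.1, leaving the async with also closes the auto-created pool
    async with sentinel.master_for("mymaster") as redis:
        return await redis.get(key)

# fix: create the Redis object once, keep it open, and never use async with on it
redis_master_client = None

async def handle_request_good(sentinel, key):
    global redis_master_client
    if redis_master_client is None:
        redis_master_client = sentinel.master_for("mymaster")  # one pool, reused
    return await redis_master_client.get(key)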

Cheers!

Hi. I was wondering, in addition to Tasks, maybe your web stack is also using threads? I’m just guessing here; it could be an avenue to explore, since your global ConnectionPool would then need to be thread safe.

Yes, we do spawn threads for some jobs, but not for request handling. The threads we spawn do not interfere with the asyncio/redis part, at least to my knowledge. We’re using FastAPI as our web framework. I’ll look it up, thanks for the tip!
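
Worth noting: pool checkout is guarded by an asyncio.Condition (visible as self._condition in the stack trace above), and asyncio primitives are bound to a single event loop. If multiple loops or threads were ever involved, one client per thread/loop would be the safe pattern; a sketch (names and URL are placeholders):

import threading
from redis.asyncio import Redis

_local = threading.local()

def get_redis() -> Redis:
    # one client, and therefore one pool, per thread/event loop
    if not hasattr(_local, "redis"):
        _local.redis = Redis.from_url("redis://localhost:6379/0")
    return _local.redis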

@kristjanvalur Based on my tests, it looks like your PR indeed fixed the issue with the ConnectionError raised! Thanks again for taking care of this so quickly.

@kristjanvalur Thanks for being so reactive! I’m currently trying to reproduce the issue, as I have not yet found a consistent trigger for the “No connection available” error. I’ll keep you posted!