redis-py: timeout connecting to redis hosted on aws ElastiCache
Hi,
I have a Redis server hosted on AWS ElastiCache. When I try to connect to it from a Lambda or from my local machine, I get a 'timeout connecting to server' error. However, my colleague could connect to the same Redis instance using another client written in PHP. I wonder what I did wrong.
```python
import redis

try:
    redisClient = redis.Redis(host='***.use1.cache.amazonaws.com', port=6379,
                              db=0, socket_timeout=10)
    print(redisClient)
    try:
        print(redisClient.ping())
        print(redisClient.set('foo', 'bar'))
        # redisClient.get('foo')
    except Exception as e:
        print('get set error: ', e)
except Exception as e:
    print('redis err: ', e)

# Output:
# redisClient: Redis<ConnectionPool<Connection<host=***.use1.cache.amazonaws.com,port=6379,db=0>>>
# get set error: Timeout connecting to server (for both ping and set)
```
Without setting the `socket_timeout` parameter, it times out on redis-py's default operation timeout instead: `Error 60 connecting to ***.use1.cache.amazonaws.com:6379. Operation timed out.`
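A quick way to tell whether this is a network-level problem (security group / VPC routing) rather than a Redis or redis-py problem is to attempt a raw TCP connection with the standard library. If this also times out, redis-py never had a chance. This is a minimal sketch; the endpoint below is a placeholder for your ElastiCache hostname:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, DNS failures, connection refused
        return False

# Example (placeholder endpoint):
# can_reach('***.use1.cache.amazonaws.com', 6379)
```

If `can_reach` returns False from inside your Lambda's VPC, the fix is in the security group or subnet routing, not in the client code.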
About this issue
- State: open
- Created 6 years ago
- Comments: 17 (3 by maintainers)
I ran into this issue as well, but from a Lambda. For me, there were a few problems that had to be ironed out: `redis.Redis(..., ssl=True)`. The redis-py page mentions that `ssl_cert_reqs` needs to be set to `None` for use with ElastiCache, but that didn't seem to be true in my case. I did, however, need to pass `ssl=True`. It makes sense that `ssl=True` needed to be set, but the connection was just timing out, so I went round and round trying to figure out what the problem with the permissions/VPC/SG setup was.

This is happening randomly. Is there a way to tell redis-py to reconnect on connection timeouts with some backoff strategy?
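redis-py does expose a `retry_on_timeout` flag (mentioned below), and newer releases also accept a retry object with a backoff strategy, but the general idea can be sketched without depending on redis-py at all. The wrapper below is an illustration, not redis-py's API; the `operation` callable and the exception types are placeholders for whatever your client raises:

```python
import time

def with_backoff(operation, retries=3, base_delay=0.5, exceptions=(TimeoutError,)):
    """Call operation(), retrying on timeout-like errors with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return operation()
        except exceptions:
            if attempt == retries:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Hypothetical usage with the client from the question:
# value = with_backoff(lambda: redisClient.ping())
```

For redis-py specifically, passing a timeout-aware retry policy at client construction (rather than wrapping every call) is the cleaner option when your version supports it.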
@chayim, do you by any chance know how to make Celery use the `retry_on_timeout` setting?

Yes, you need to be in the same region. Also test with the redis-cli, to rule out firewall rules, routing, etc.