langchain: Frequent request timed out error

I am getting this error whenever a request takes longer than 60 seconds. I tried passing timeout=120 seconds to ChatOpenAI().

Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).

What is the reason for this issue and how can I rectify it?
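For context on the log line above: the "Retrying ... in 4.0 seconds" message comes from langchain's retry wrapper (completion_with_retry, built on tenacity) around the OpenAI call. A minimal stdlib sketch of the same exponential-backoff pattern — the helper name and delay values here are illustrative, not langchain's actual implementation:

```python
import time

def retry_with_backoff(fn, max_attempts=4, base_delay=4.0, sleep=time.sleep):
    """Retry fn on TimeoutError with exponential backoff, similar in
    spirit to langchain's tenacity-based completion_with_retry."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the timeout to the caller
            delay = base_delay * (2 ** attempt)  # 4.0, 8.0, 16.0, ...
            print(f"Retrying in {delay} seconds as it raised Timeout")
            sleep(delay)
```

Note that retrying only masks the symptom: if the model reliably takes longer than the read timeout (60s by default), every attempt will time out, which is why the comments below focus on raising request_timeout instead.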

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Reactions: 1
  • Comments: 38 (4 by maintainers)

Most upvoted comments

gpt-4 is always timing out for me (gpt-3.5-turbo works fine). Increasing the request_timeout helps:

llm = ChatOpenAI(temperature=0, model_name=model, request_timeout=120)

+1, frequent timeouts with gpt-4. I increased request_timeout, but it didn't help much. Calling the OpenAI API directly works as expected. Any workaround or known root cause?

Usage: Refine summarization chain

Increasing the request_timeout helped. Thanks for the tip, @rafaelquintanilha!

+1, I’m seeing a lot of these as well with ChatOpenAI connected to retrievers.

The same problem is happening to me with model=gpt-3.5-turbo and request_timeout=120.

https://github.com/langchain-ai/langchain/issues/15222#issuecomment-1879607706

Basically it was an issue with gevent and grpc. After upgrading the library and doing the monkey patching, it worked for me. By a simple worker type I mean using the sync worker rather than the gevent one.
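A sketch of that workaround for a gunicorn deployment — the app module and worker count are illustrative, and the key point is that gevent's monkey patching (if you keep gevent at all) must run before grpc or any networking library is imported:

```shell
# Option 1: avoid gevent entirely by using the default sync worker class.
gunicorn --worker-class sync --workers 4 app:app

# Option 2: keep gevent workers, but ensure monkey patching happens first,
# e.g. at the very top of your entrypoint:
#   from gevent import monkey
#   monkey.patch_all()
# ...before importing grpc, langchain, or openai.
```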

Any updates on this issue?

Getting this same error. The code seems to be fine, but the problem is dramatically worse when executing inside AWS.