gpt-researcher: The model: `gpt-4` does not exist

From a fresh install. It seems everyone will get access to gpt-4 after the end of the month. After clicking Research, I get:

Exception in ASGI application
Traceback (most recent call last):
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/uvicorn/protocols/websockets/wsproto_impl.py", line 249, in run_asgi
    result = await self.app(self.scope, self.receive, self.send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/fastapi/applications.py", line 289, in __call__
    await super().__call__(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 149, in __call__
    await self.app(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/starlette/routing.py", line 82, in app
    await func(session)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/fastapi/routing.py", line 324, in app
    await dependant.call(**values)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/main.py", line 50, in websocket_endpoint
    await manager.start_streaming(task, report_type, agent, websocket)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/run.py", line 38, in start_streaming
    report, path = await run_agent(task, report_type, agent, websocket)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/run.py", line 50, in run_agent
    await assistant.conduct_research()
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/research_agent.py", line 133, in conduct_research
    search_queries = await self.create_search_queries()
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/research_agent.py", line 89, in create_search_queries
    result = await self.call_agent(prompts.generate_search_queries_prompt(self.question))
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/research_agent.py", line 76, in call_agent
    answer = create_chat_completion(
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/llm_utils.py", line 48, in create_chat_completion
    response = send_chat_completion_request(
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/agent/llm_utils.py", line 69, in send_chat_completion_request
    result = openai.ChatCompletion.create(
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/mnt/c/Users/Kontor/Github Repos/gpt-researcher/.venv/lib/python3.10/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model: gpt-4 does not exist
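The final `InvalidRequestError` means the API key behind the request has no access to the `gpt-4` model, so the call fails before any research starts. A minimal sketch of a model-fallback check is below; `pick_model` is a hypothetical helper for illustration, not part of gpt-researcher, and the commented `openai.Model.list()` call refers to the legacy `openai<1.0` SDK that this traceback shows in use.

```python
# Hypothetical helper: choose a model the current API key can actually use.
def pick_model(available_models, preferred="gpt-4", fallback="gpt-3.5-turbo"):
    """Return `preferred` if the account has access to it, else `fallback`."""
    return preferred if preferred in available_models else fallback

# With the legacy openai<1.0 SDK, the available set could be fetched once via:
#   available = {m["id"] for m in openai.Model.list()["data"]}
print(pick_model({"gpt-3.5-turbo", "text-davinci-003"}))  # falls back to gpt-3.5-turbo
print(pick_model({"gpt-4", "gpt-3.5-turbo"}))             # keeps gpt-4
```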

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Reactions: 1
  • Comments: 19 (1 by maintainers)

Most upvoted comments

Hi @rotemweiss57, thank you so much for the prompt reply! Indeed, I'm using GPT-3, as I cannot currently access the gpt-4 API due to whatever limit OpenAI has. Thank you for helping me with the issue; the package works excellently. I'm going to test it as a way to compile sources for a manuscript I'm working on. I'll update you on how well it works, and hopefully I'll be able to use GPT-4 soon.

Thanks again!

Hi @AHerik, of course! Do you currently have gpt-4 working, or are you using gpt-3?

If you have already changed the config file to gpt-3 (based on your error, I assume you have), all you need to do is open /agent/prompts.py and look for the function `generate_search_queries_prompt(question)`.

Inside that function, modify this line:

"You must respond with a list of strings in the following format: ["query 1", "query 2", "query 3", "query 4"]"

to:

"You must respond only with a list of strings in the following json format: ["query 1", "query 2", "query 3", "query 4"]"
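The tightened wording helps because the agent presumably parses the model's reply as a JSON list of strings (hence the "json format" phrasing), and gpt-3.5 is more likely than gpt-4 to wrap the list in extra chatter unless told to respond *only* with it. A minimal sketch of the parse the reply has to survive; the `reply` string is an example, not real model output:

```python
import json

# A reply that matches the requested format parses cleanly into a list of strings.
reply = '["query 1", "query 2", "query 3", "query 4"]'
queries = json.loads(reply)  # a chatty, non-JSON reply would raise ValueError here
print(queries[0])  # "query 1"
```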

Again, it is best to use gpt-4, but gpt-3 should be stable enough.

Hope it helps!