AutoGPT: The model: `gpt-4` does not exist

Hi, amazing work you’re doing here! ❤ I’m getting this message:

Traceback (most recent call last):
  File "/Users/stas/Auto-GPT/AutonomousAI/main.py", line 154, in <module>
    assistant_reply = chat.chat_with_ai(prompt, user_input, full_message_history, mem.permanent_memory, token_limit)
  File "/Users/stas/Auto-GPT/AutonomousAI/chat.py", line 48, in chat_with_ai
    response = openai.ChatCompletion.create(
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model: gpt-4 does not exist

Does it mean I don’t have API access to gpt-4 yet? How can I use previous versions?
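For reference, the failing call in chat.py is an openai.ChatCompletion.create(…) request; swapping the model string for one you have access to is the workaround explored in the comments below. A minimal sketch (the real surrounding code differs):

    import openai

    # Minimal sketch of the failing request; "gpt-4" requires API access,
    # so a model like "gpt-3.5-turbo" is the fallback discussed below.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello"}],
    )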

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 4
  • Comments: 15 (8 by maintainers)

Most upvoted comments

Wow! 🤯 I didn’t know that was possible, great work guys! @stan-voo @Koobah

I’d tried getting 3.5 to work in the past and it refused to acknowledge the prompt. Great idea asking it to parse its final output as JSON.

If you want to submit a pull request, that would be a huge help (a rough sketch follows the list below):

  • Add a model argument (if none is provided, default to gpt-4).
  • Create a different prompt.txt file for each model (this appears to be necessary) and load the appropriate one when building the prompt.
  • Add model to Config, set it on start-up, and read it everywhere gpt-4 is currently hard-coded.
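A minimal sketch of what those three steps might look like; Config and load_prompt are illustrative names here, not the actual Auto-GPT code:

    import argparse

    # Hypothetical sketch of the proposed change; names are illustrative.
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", default="gpt-4",
                        help="OpenAI chat model to use (defaults to gpt-4)")
    args = parser.parse_args()

    class Config:
        """Holds the model name so it can be read wherever gpt-4 is hard-coded."""
        def __init__(self, model="gpt-4"):
            self.model = model

    def load_prompt(model):
        # One prompt file per model, e.g. prompt_gpt-4.txt / prompt_gpt-3.5-turbo.txt
        with open(f"prompt_{model}.txt") as f:
            return f.read()

    cfg = Config(args.model)
    prompt = load_prompt(cfg.model)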

GPT-3.5 is so much cheaper, allowing for far more testing and development.

In the future we could even have GPT-4 instances of Auto-GPT cheaply spin up entire sub-instances of Auto-GPT running GPT-3.5 for multi-step tasks…

It did it! Completed a task on GPT-3.5! Here’s a screencast if anybody is curious: https://www.loom.com/share/9bf888d9c925474899257d072f1a562f

Guys, glad that it helped you. Just be very cautious when using my code. This is my very first programming attempt. I have no clue what I am doing 😃

Btw, the idea I am working on is to create specialist GPT instances (project managers, marketers, operators), where the bot would have its own complex prompt replicating what a person in such a role would do.

I agree, @jcp. I think this decision makes my PRs not useful anymore. Lessons learned: don’t format on save (when contributing to OSS), keep PRs small, and address each issue separately.

Nice!

I think we should use both. I prefer dotenv, and was thrilled to see that PR, but was trying to keep my PR from fixing all the things at once. 😉
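For anyone following along, the dotenv approach under discussion looks roughly like this (a minimal sketch, assuming a .env file containing OPENAI_API_KEY in the project root):

    import os
    import openai
    from dotenv import load_dotenv

    load_dotenv()  # reads key=value pairs from .env into the environment
    openai.api_key = os.getenv("OPENAI_API_KEY")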

Brilliant! Make sure you’re up-to-date, as I’m adding features all the time.

Please share your journey with us over at Discussions, I’d love to see how things progress as you go.

I’m playing with v3.5 thanks to @Koobah’s fork: https://github.com/Koobah/Auto-GPT. It seems to be able to do some things: https://www.loom.com/share/9ddf4b806dfb4a948c7d728669ebac28

Got stopped by this error:

Traceback (most recent call last):
  File "/Users/stas/dev/Auto-GPT/AutonomousAI/main.py", line 134, in <module>
    assistant_reply = chat.chat_with_ai(prompt, user_input, full_message_history, mem.permanent_memory, token_limit)
  File "/Users/stas/dev/Auto-GPT/AutonomousAI/chat.py", line 50, in chat_with_ai
    response = openai.ChatCompletion.create(
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4121 tokens. Please reduce the length of the messages.
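That error is the model’s 4097-token context window overflowing. One possible way around it (a hedged sketch, not the project’s actual fix; count_tokens and trim_history are illustrative names) is to drop the oldest history messages until the prompt fits:

    import tiktoken

    def count_tokens(messages, model="gpt-3.5-turbo"):
        # Rough estimate: tokens per message content plus a small
        # per-message overhead; good enough for trimming purposes.
        enc = tiktoken.encoding_for_model(model)
        return sum(len(enc.encode(m["content"])) + 4 for m in messages)

    def trim_history(messages, token_limit=4000):
        # Keep the system prompt (index 0); drop the oldest chat turns first.
        while count_tokens(messages) > token_limit and len(messages) > 2:
            del messages[1]
        return messages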