AutoGPT: Prompt overflows aren't handled gracefully

Duplicates

  • I have searched the existing issues

Steps to reproduce 🕹

No response

Current behavior 😯

When using Chinese text, the input grows much longer after encoding, which can push the token count past 8191 and result in the following error:

Traceback (most recent call last):
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "E:\openai\git\Auto-GPT\autogpt\__main__.py", line 53, in <module>
    main()
  File "E:\openai\git\Auto-GPT\autogpt\__main__.py", line 49, in main
    agent.start_interaction_loop()
  File "E:\openai\git\Auto-GPT\autogpt\agent\agent.py", line 65, in start_interaction_loop
    assistant_reply = chat_with_ai(
  File "E:\openai\git\Auto-GPT\autogpt\chat.py", line 85, in chat_with_ai
    else permanent_memory.get_relevant(str(full_message_history[-9:]), 10)
  File "E:\openai\git\Auto-GPT\autogpt\memory\local.py", line 122, in get_relevant
    embedding = create_embedding_with_ada(text)
  File "E:\openai\git\Auto-GPT\autogpt\llm_utils.py", line 136, in create_embedding_with_ada
    return openai.Embedding.create(
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 10681 tokens (10681 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
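
For reference, here is a minimal sketch (not AutoGPT's actual code) of guarding the embedding call with a token count before sending it; the 8191 limit and the cl100k_base encoding match text-embedding-ada-002:

import tiktoken
import openai

EMBEDDING_CTX_LENGTH = 8191  # maximum input size for text-embedding-ada-002

def safe_embedding(text: str) -> list[float]:
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    if len(tokens) > EMBEDDING_CTX_LENGTH:
        # Truncate rather than letting the API reject the whole request.
        tokens = tokens[:EMBEDDING_CTX_LENGTH]
        text = enc.decode(tokens)
    return openai.Embedding.create(
        input=[text], model="text-embedding-ada-002"
    )["data"][0]["embedding"]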

Expected behavior 🤔

If the result is not encoded, the problem does not occur:

autogpt\app.py

130: google_result = google_search(arguments["input"])
131: # safe_message = google_result.encode("utf-8", "ignore")
132: return str(google_result)
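
A small demonstration of the suspected cause (my interpretation, not confirmed by the maintainers): calling str() on a bytes object renders every non-ASCII byte as a four-character escape sequence, so CJK text balloons long before it is tokenized:

text = "中文" * 100                              # 200 CJK characters
encoded = str(text.encode("utf-8", "ignore"))    # str() of bytes, as in the old code

print(len(text))     # 200
print(len(encoded))  # 2403: each CJK character is three UTF-8 bytes, each byte
                     # is printed as a 4-character escape like \xe4, plus the
                     # surrounding b'...' wrapper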

Your prompt 📝

# Paste your prompt here

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Reactions: 9
  • Comments: 28 (8 by maintainers)

Most upvoted comments

I’ve drafted code that chunks the input, embeds each chunk, and averages the results into a single combined embedding. I won’t do anything with it if you’ve already covered all of this, @Pwuts.
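
A minimal sketch of that chunk-and-average idea (my reading of it, not the actual draft): split the input on token boundaries, embed the chunks in one batched call, and average the vectors:

import numpy as np
import openai
import tiktoken

EMBEDDING_CTX_LENGTH = 8191  # limit for text-embedding-ada-002

def chunked_embedding(text: str, model: str = "text-embedding-ada-002") -> list[float]:
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    # Split the token stream into chunks that each fit the model's context.
    chunks = [
        enc.decode(tokens[i : i + EMBEDDING_CTX_LENGTH])
        for i in range(0, len(tokens), EMBEDDING_CTX_LENGTH)
    ]
    response = openai.Embedding.create(input=chunks, model=model)
    ordered = sorted(response["data"], key=lambda d: d["index"])
    vectors = np.array([d["embedding"] for d in ordered])
    # Average the per-chunk vectors and re-normalise to unit length.
    mean = vectors.mean(axis=0)
    return (mean / np.linalg.norm(mean)).tolist()

Weighting each chunk's vector by its token count instead of a plain average would be a natural refinement.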

I have a PR in my review-inbox that should fix all of these issues. 😃

@Fadude working on it

Same here: it runs for a long time and then suddenly kills itself because the number of tokens exceeds 8191.

  1. Why does it run so long without ever summarizing a report?
  2. Why can’t it split the tokens before sending them to GPT? (A sketch of one mitigation follows.)
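
On point 2, a hedged sketch of the usual mitigation (illustrative names, not AutoGPT's actual API): count tokens per message and drop the oldest history entries until the prompt fits the budget, here 8191 as cited in this thread:

import tiktoken

def trim_history(messages: list[dict], budget: int = 8191) -> list[dict]:
    enc = tiktoken.get_encoding("cl100k_base")

    def total(msgs):
        # Assume roughly 4 extra tokens per message for role/format markers.
        return sum(len(enc.encode(m["content"])) + 4 for m in msgs)

    trimmed = list(messages)
    # Keep the system prompt at index 0; drop the oldest history after it.
    while len(trimmed) > 1 and total(trimmed) > budget:
        trimmed.pop(1)
    return trimmed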

Patched in #3646, with better fixes in the works.

I am facing the same issue. It gets to a certain point and dies when it tries to list the directory contents of a project that I cloned. AutoGPT should batch requests itself.

I checked PR #2542 and it works; I’ve had no more “maximum context length” issues since.

It seems the problem has been fixed for the browse_website command (according to the test under #2542). However, I’m still getting the same error when using the search_files command:

NEXT ACTION:  COMMAND = search_files ARGUMENTS = {'directory': 'my/path/to-dir/'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-= 
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.7/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/homebrew/Cellar/python@3.10/3.10.7/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/fadude/lab/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/opt/homebrew/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/opt/homebrew/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/opt/homebrew/lib/python3.10/site-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
  File "/opt/homebrew/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/homebrew/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/opt/homebrew/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/Users/fadude/lab/autogpt/cli.py", line 151, in main
    agent.start_interaction_loop()
  File "/Users/fadude/lab/autogpt/agent/agent.py", line 184, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "/Users/fadude/lab/autogpt/memory/milvus.py", line 66, in add
    embedding = get_ada_embedding(data)
  File "/Users/fadude/lab/autogpt/memory/base.py", line 19, in get_ada_embedding
    return openai.Embedding.create(input=[text], model="text-embedding-ada-002")[
  File "/opt/homebrew/lib/python3.10/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "/opt/homebrew/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/opt/homebrew/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/opt/homebrew/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/opt/homebrew/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 236485 tokens (236485 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

Same issue when using Google search and processing the results.

Goal 1: Search_files and make descriptions of all files.
Goal 2: Be aware that you cannot send long requests to the API; I think the max is 8k tokens.
Goal 3:

Using memory of type: LocalCache
Using Browser: chrome
THOUGHTS: I suggest we start by searching for the files in the current directory using the 'search_files' command.
REASONING: Before we can work on any files, we need to know what files are available in the current directory. This will help us plan our next steps.
PLAN:
  • Use the 'search_files' command to find all files in the current directory.
  • Save the file descriptions to a file for future reference.
CRITICISM: I need to ensure that I am saving the file descriptions to a file for future reference, so that I don't have to search for them again.
SPEAK: I suggest we start by searching for the files in the current directory using the 'search_files' command.
Attempting to fix JSON by finding outermost brackets
Apparently json was fixed.
NEXT ACTION: COMMAND = search_files ARGUMENTS = {'directory': '.'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for …
Input:y -10
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
Traceback (most recent call last):
  File "C:\Users\mcbub\anaconda3\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\mcbub\anaconda3\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\mcbub\Auto-GPT\Auto-GPT\autogpt\__main__.py", line 53, in <module>
    main()
  File "C:\Users\mcbub\Auto-GPT\Auto-GPT\autogpt\__main__.py", line 49, in main
    agent.start_interaction_loop()
  File "C:\Users\mcbub\Auto-GPT\Auto-GPT\autogpt\agent\agent.py", line 170, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "C:\Users\mcbub\Auto-GPT\Auto-GPT\autogpt\memory\local.py", line 76, in add
    embedding = create_embedding_with_ada(text)
  File "C:\Users\mcbub\Auto-GPT\Auto-GPT\autogpt\llm_utils.py", line 137, in create_embedding_with_ada
    return openai.Embedding.create(
  File "C:\Users\mcbub\anaconda3\lib\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "C:\Users\mcbub\anaconda3\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\mcbub\anaconda3\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\Users\mcbub\anaconda3\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\mcbub\anaconda3\lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 17113 tokens (17113 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

(base) C:\Users\mcbub\Auto-GPT\Auto-GPT>