AutoGPT: Prompt overflows aren't handled gracefully
Duplicates
- I have searched the existing issues
Steps to reproduce 🕹
No response
Current behavior 😯
When using Chinese text, the text becomes much longer after encoding, which can push the token count past the 8191 limit and produce the following error:
```
Traceback (most recent call last):
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "E:\openai\git\Auto-GPT\autogpt\__main__.py", line 53, in <module>
    main()
  File "E:\openai\git\Auto-GPT\autogpt\__main__.py", line 49, in main
    agent.start_interaction_loop()
  File "E:\openai\git\Auto-GPT\autogpt\agent\agent.py", line 65, in start_interaction_loop
    assistant_reply = chat_with_ai(
  File "E:\openai\git\Auto-GPT\autogpt\chat.py", line 85, in chat_with_ai
    else permanent_memory.get_relevant(str(full_message_history[-9:]), 10)
  File "E:\openai\git\Auto-GPT\autogpt\memory\local.py", line 122, in get_relevant
    embedding = create_embedding_with_ada(text)
  File "E:\openai\git\Auto-GPT\autogpt\llm_utils.py", line 136, in create_embedding_with_ada
    return openai.Embedding.create(
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\tony1\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 10681 tokens (10681 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
```
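For context, here is a standalone illustration (not AutoGPT code) of why stringifying UTF-8-encoded Chinese text inflates the prompt so much:

```python
# Each byte of a UTF-8-encoded CJK character becomes a four-character "\xNN"
# escape once the bytes object is stringified, so the tokenizer sees a far
# longer ASCII string than the original text.
text = "你好世界"
encoded = text.encode("utf-8", "ignore")
print(len(text))          # 4 characters
print(len(encoded))       # 12 bytes (3 per character)
print(len(str(encoded)))  # 51 characters: b'\xe4\xbd\xa0\xe5\xa5\xbd...'
```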
Expected behavior 🤔
If the result is not encoded, the problem does not occur. Commenting out the `encode` call in `autogpt\app.py` avoids the error:

```python
# autogpt\app.py, lines 130-132
google_result = google_search(arguments["input"])
# safe_message = google_result.encode("utf-8", "ignore")
return str(google_result)
```
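A more graceful way to handle the overflow would be to clamp the text to the embedding model's token limit before calling the API. The sketch below assumes `tiktoken` is installed; `truncate_to_token_limit` is a hypothetical helper, not AutoGPT's actual fix:

```python
import tiktoken

# Hypothetical guard (sketch): trim text so an embedding request stays within
# text-embedding-ada-002's 8191-token context limit instead of raising an error.
MAX_EMBED_TOKENS = 8191

def truncate_to_token_limit(text: str, model: str = "text-embedding-ada-002") -> str:
    encoding = tiktoken.encoding_for_model(model)
    tokens = encoding.encode(text)
    if len(tokens) <= MAX_EMBED_TOKENS:
        return text
    return encoding.decode(tokens[:MAX_EMBED_TOKENS])
```

A guard like this could wrap the `create_embedding_with_ada` call so oversized memory lookups degrade to a truncated embedding instead of crashing the interaction loop.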
Your prompt 📝
# Paste your prompt here
About this issue
- Original URL
- State: closed
- Created a year ago
- Reactions: 9
- Comments: 28 (8 by maintainers)
I have a PR in my review-inbox that should fix all of these issues. 😃
@Fadude working on it
Same here; it runs for a long time and then suddenly kills itself because the number of tokens exceeds 8191.
Patched in #3646, with better fixes in the works.
I am facing the same issue. It gets to a certain point and dies when it tries to list the directory contents of a project that I cloned. AutoGPT should batch requests itself.
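A batching approach could look roughly like the sketch below: split the text into token-limited chunks and embed each chunk separately. `chunk_by_tokens` is a hypothetical helper (again assuming `tiktoken`), not what the linked PRs implement:

```python
import tiktoken

# Hypothetical batching helper (sketch): split text into chunks that each fit
# under the embedding model's token ceiling, so no single request overflows.
def chunk_by_tokens(text: str, max_tokens: int = 8191,
                    model: str = "text-embedding-ada-002") -> list[str]:
    encoding = tiktoken.encoding_for_model(model)
    tokens = encoding.encode(text)
    return [
        encoding.decode(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]
```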
I checked PR #2542 and it works; I've had no more "maximum context length" issues since.
It seems like the problem has been fixed for the `browse_website` command (according to the test under #2542). However, I'm still having the same error when using the `search_files` command.
Same issue when using Google search and processing.
```
Goal 1: Search_files and make descriptions of all files.
Goal 2: Be aware that you cannot send long requests to the API; I think the max is 8k tokens.
Goal 3:
Using memory of type: LocalCache
Using Browser: chrome
THOUGHTS: I suggest we start by searching for the files in the current directory using the 'search_files' command.
REASONING: Before we can work on any files, we need to know what files are available in the current directory. This will help us plan our next steps.
PLAN:
(base) C:\Users\mcbub\Auto-GPT\Auto-GPT>
```