AutoGPT: Maximum context length exceeded after `read_file`, `ingest_file`, `search_files`
Description
When attempting to analyze a code file with Auto-GPT, I encountered an error caused by exceeding the maximum context length of the OpenAI model: the limit is 8191 tokens, but my request used 19023. Judging by the traceback below, the crash occurs when the result of `read_file` is embedded into local memory via `create_embedding_with_ada`.
Steps to Reproduce
- Run Auto-GPT in GPT3.5-only mode
- Set up an AI with the following parameters:
  - AI Name: Yoyo
  - Role: Lua coder
  - Goal 1: Improve the code file WoWinArabic_Chat.lua and document it, then save it.
- Authorize the `analyze_code` command with the code file WoWinArabic_Chat.lua
Expected Behavior
The AI should be able to analyze the code and suggest improvements without exceeding the maximum context length.
Actual Behavior
The program encountered an error due to exceeding the maximum context length of the OpenAI model (19023 tokens used, while the limit is 8191 tokens).
Possible Solution
Consider breaking the code file into smaller chunks, or trimming unnecessary content to reduce the context length, before passing it to the OpenAI model.
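A minimal sketch of the chunking idea, assuming a rough 4-characters-per-token heuristic rather than an exact tokenizer (the function name and constants are hypothetical, not Auto-GPT's actual API):

```python
# Hypothetical workaround sketch: split a large file into chunks whose
# estimated token count stays under the embedding model's context limit.
# The 4-chars-per-token ratio is a rough average for English text/code,
# not an exact count; a real fix would use the model's tokenizer.

MAX_TOKENS = 8191      # context limit of text-embedding-ada-002
CHARS_PER_TOKEN = 4    # rough heuristic

def split_into_chunks(text: str, max_tokens: int = MAX_TOKENS) -> list[str]:
    """Split `text` into pieces whose estimated token count fits the limit."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
```

Each chunk could then be embedded or analyzed separately instead of sending the whole file in one request.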
Additional Context
- Auto-GPT version: 0.2.2
- Python version: 3.11
- Operating System: Win11
Please let me know if there are any workarounds or if a fix is planned for this issue.
F:\Projects\autogpt\Auto-GPT>python -m autogpt --gpt3only
Warning: The file 'auto-gpt.json' does not exist. Local memory would not be saved to a file.
GPT3.5 Only Mode: ENABLED
NEWS: ::UPDATED:: Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here. If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag
Welcome to Auto-GPT! run with '--help' for more information.
Create an AI-Assistant: Enter the name of your AI and its role below. Entering nothing will load defaults.
Name your AI: For example, 'Entrepreneur-GPT'
AI Name: Yoyo
Yoyo here! I am at your service.
Describe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
Yoyo is: you are lua coder
Enter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Goal 1: improve the code file WoWinArabic_Chat.lua and documented it then save it.
Goal 2:
Using memory of type: LocalCache
Using Browser: chrome
THOUGHTS: I think I should start by analyzing the code file WoWinArabic_Chat.lua to see what improvements can be made.
REASONING: Analyzing the code file will give me a better understanding of the current state of the code and help me identify areas that can be improved.
PLAN:
- Analyze the code file
- Identify areas for improvement
- Make necessary changes
- Document changes
- Save the file
CRITICISM: I need to make sure that I am thorough in my analysis and that I document all changes that I make.
NEXT ACTION: COMMAND = analyze_code ARGUMENTS = {'code': 'WoWinArabic_Chat.lua'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
SYSTEM: Command analyze_code returned: I'm sorry, but you forgot to provide the code for me to analyze. Please provide the code so I can give you suggestions for improvements.
THOUGHTS: I need to read the code file WoWinArabic_Chat.lua to get the code and analyze it.
REASONING: I need to read the code file to get the code and analyze it.
PLAN:
- Read the code file
- Analyze the code
- Identify areas for improvement
- Make necessary changes
- Document changes
- Save the file
CRITICISM: I need to make sure that I am thorough in my analysis and that I document all changes that I make.
NEXT ACTION: COMMAND = read_file ARGUMENTS = {'file': 'WoWinArabic_Chat.lua'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "F:\Projects\autogpt\Auto-GPT\autogpt\__main__.py", line 5, in <module>
autogpt.cli.main()
File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1635, in invoke
rv = super().invoke(ctx)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Projects\autogpt\Auto-GPT\autogpt\cli.py", line 151, in main
agent.start_interaction_loop()
File "F:\Projects\autogpt\Auto-GPT\autogpt\agent\agent.py", line 184, in start_interaction_loop
self.memory.add(memory_to_add)
File "F:\Projects\autogpt\Auto-GPT\autogpt\memory\local.py", line 76, in add
embedding = create_embedding_with_ada(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Projects\autogpt\Auto-GPT\autogpt\llm_utils.py", line 155, in create_embedding_with_ada
return openai.Embedding.create(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 19023 tokens (19023 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
About this issue
- State: closed
- Created a year ago
- Reactions: 9
- Comments: 17 (3 by maintainers)
I am still having the issue after updating to master:

openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 8481 tokens (8481 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

I received this error after it read a local file I asked it to summarize.
This issue also applies to the `execute_shell` command, as shown in #3244.

FYI, this seems to be fixed in the master branch, if anyone needs a quick solution.