AutoGPT: Failure: command list_files and read_file returned too much output.

⚠️ Search for existing issues first ⚠️

  • I have searched the existing issues, and there is no existing issue for my problem

Which Operating System are you using?

Windows

Which version of Auto-GPT are you using?

Stable (branch)

GPT-3 or GPT-4?

GPT-3.5

Steps to reproduce 🕹

Put a bunch of files in the workspace folder, then ask Auto-GPT to list or read them (list_files / read_file).

Current behavior 😯

Error: Attempted to access absolute path 'C:\Users\test\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace' in workspace 'C:\Users\test\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace'. NEXT ACTION: COMMAND = list_files ARGUMENTS = {'directory': 'C:\Users\test\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace'} SYSTEM: Failure: command list_files returned too much output. Do not execute this command again with the same arguments.

Error: Attempted to access absolute path 'C:\Users\rrayapaati\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace\smoketests_basic_qemu.yml' in workspace 'C:\Users\rrayapaati\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace'. NEXT ACTION: COMMAND = read_file ARGUMENTS = {'filename': 'C:\Users\rrayapaati\Desktop\Auto-GPT-0.3.0\autogpt\auto_gpt_workspace\smoketests_basic_qemu.yml'} SYSTEM: Failure: command read_file returned too much output. Do not execute this command again with the same arguments.

Expected behavior 🤔

Large output from files and directory listings should be handled by splitting it into multiple segments and processing them one at a time, instead of failing outright.
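For example, a minimal sketch of segmented reading (read_file_in_segments is a hypothetical helper, not part of Auto-GPT's actual API):

def read_file_in_segments(path, max_chars=4000):
    """Yield the file contents in segments of at most max_chars characters."""
    with open(path, "r", encoding="utf-8") as f:
        while True:
            segment = f.read(max_chars)
            if not segment:
                break
            yield segment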

Your prompt 📝

# Paste your prompt here

Your Logs 📒

<insert your logs here>

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 7
  • Comments: 29 (9 by maintainers)

Most upvoted comments

Same issue. The JSON I want it to ingest is 16 KB, not huge.

Edit: I just tried it with a JSON file of 680 bytes; it still says too large.

If I remember correctly, there was no chunking taking place where I looked (somewhere in the file utils, I believe), so this may be one of those cases where nobody expected that dumping a directory tree or a text file might require chunking. We will see what things look like once the re-arch is done.

That is not due to AutoGPT; it is how Docker works. You should probably consider reading up on Docker itself.

Also, this seems unrelated to the original issue - please open your own, separate issue. Thank you.

Same issue, any solution?

We probably need to use wrappers to optionally chunk up output from commands - something along the lines of the sketch below.
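A rough sketch (the names chunk_output and count_tokens are illustrative stand-ins, not Auto-GPT's actual API):

import functools

def chunk_output(max_tokens, count_tokens):
    """Decorator that caps a command's string output at a token budget."""
    def decorator(command):
        @functools.wraps(command)
        def wrapper(*args, **kwargs):
            result = command(*args, **kwargs)
            used = count_tokens(result)
            if used <= max_tokens:
                return result
            # Naive proportional cut; a real version would chunk and summarize.
            keep = int(len(result) * max_tokens / used)
            return result[:keep] + "\n[output truncated]"
        return wrapper
    return decorator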

The error comes from agent.py:

if result_tlength + memory_tlength + 600 > cfg.fast_token_limit:
    result = f"Failure: command {command_name} returned too much output. \
        Do not execute this command again with the same arguments."

which refers to this .env variable in config.py:

self.fast_token_limit = int(os.getenv("FAST_TOKEN_LIMIT", 4000))

Setting that to a higher number reveals that it’s an OpenAI limitation:

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 24497 tokens (2527 in the messages, 21970 in the completion). Please reduce the length of the messages or completion.

It seems the only thing you can do is instruct AutoGPT to buffer the file and send it to OpenAI in pieces, which will result in more API calls. This is covered in the OpenAI docs under "What are tokens and how to count them?".
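For reference, counting and splitting by tokens can be done with the tiktoken library. A sketch (the 600-token headroom mirrors the agent.py check quoted above):

import tiktoken

def split_by_token_budget(text, model="gpt-3.5-turbo", budget=4000, headroom=600):
    """Split text into pieces that each fit within budget - headroom tokens."""
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(text)
    size = budget - headroom
    return [enc.decode(tokens[i:i + size]) for i in range(0, len(tokens), size)]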

Ran into this today with completely different code/commands - it's a hard-coded check that prevents the system from ingesting the data. I would have thought we could just as well add the output to memory and then chunk it back to the LLM using autogpt/processing/text.py (see the sketch below). Thoughts / ideas?
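Roughly what I mean (split_text and summarize_chunk are hypothetical stand-ins; I have not checked the real signatures in autogpt/processing/text.py):

def digest_large_output(output, split_text, summarize_chunk):
    """Chunk oversized command output and fold it back through the LLM."""
    summaries = [summarize_chunk(chunk) for chunk in split_text(output)]
    # If the joined summaries are still too long, a second pass could
    # summarize the summaries.
    return "\n".join(summaries)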

Haven't looked into the code, but it seems that it's actually hard-coded. Asking AutoGPT to read a small, local file got me the same error message. I just copied the contents of the file to a pastebin URL and asked AutoGPT to read it; this time, no problem.

Me too: I just had it read a Markdown file (test.md) with a few lines, and this error message appeared.