AutoGPT: Data Ingestion doesn't work
Duplicates
- [x] I have searched the existing issues
Steps to reproduce 🕹
Running autogpt/data_ingestion.py fails on stable, but I got it to work by borrowing code from an open PR, which adds the following to the file:
```python
import sys
from pathlib import Path

sys.path.append(str(Path(__file__).resolve().parent.parent))
```
Running:

```shell
python autogpt/data_ingestion.py --dir auto_gpt_workspace
```

Returns:

```
Using memory of type: LocalCache
Directory 'auto_gpt_workspace' ingested successfully.
```
I also tried inserting the contents of an older auto-gpt.json into the blank version, but it still got overridden.
Current behavior 😯
log-ingestion.txt also remains empty after this, and the script starts from the very beginning, i.e. by running a Google search.
Expected behavior 🤔
It should load the previous data from memory and start from there?
Your prompt 📝
No response
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 24 (5 by maintainers)
Once ingested, how do you actually have AutoGPT search its memory banks or "recall" information from memory?
Any successful data ingestions with local_json? (since other memory backends aren't currently supported) After fixing a few errors myself (a missing workspace_path in the config object, and an invalid number of arguments when calling ingest_file), still no luck; no luck with specific files either. What gives? Any idea?
The ingestion isn’t the problem. It is successfully ingested. The issue is instructing AutoGPT to use it.
I was able to ingest one file using PR #1679 (which moves data_ingestion.py from autogpt/ to the root dir). The file I ingested has been placed at:
And the command executed:
I'm using Redis as the backend, and I can see the data correctly embedded and ingested.