aider: Failed to apply edit
Amazing project! I love where this is going; this is the future of development!
I’m trying the ‘pong’ example, and it seems to get confused a lot but is making slow progress. At some point, when I ask it to fix the scores not getting incremented, it systematically fails to apply the edits to the file:
main.py
<<<<<<< ORIGINAL
# Main game loop
running = True
while running:
    # Set up the scores
    left_score = 0
    right_score = 0
=======
# Set up the scores
left_score = 0
right_score = 0
# Main game loop
running = True
while running:
>>>>>>> UPDATED
I tried clearing the chat and starting from scratch, but it always seems to get stuck applying that simple change. It has no issue making other, more complex changes in the same file; it only fails for that particular change.
I will look into it and submit a PR if I find the issue.
It’s actually quite challenging to convince even GPT-4 to always return properly formatted code edits. As you’ve noticed, it sometimes just skips a bunch of code with “…” even though the system prompt strongly disallows that. It can make other mistakes too.
If you see that it has returned a bad edit that fails to apply, you can reply and tell it to try again. Saying something like this might help: “your edit block is missing a bunch of code that you replaced with …, please give me a proper edit that doesn’t skip over any code”.
Those original/updated sections are called “edit blocks” and you can use that phrase to talk to GPT about mistakes it made with them.
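For context on why an edit block can fail to apply: conceptually, the ORIGINAL section has to be found in the file before the UPDATED section can replace it. The sketch below is only an illustration of that idea (the function name and the exact-match behavior are my assumptions, not aider’s actual code):

```python
# Minimal sketch of applying an ORIGINAL/UPDATED edit block by exact
# string matching; an illustration only, not aider's actual code.
from pathlib import Path

def apply_edit_block(path: str, original: str, updated: str) -> bool:
    """Replace the first exact occurrence of `original` with `updated`.

    Returns False when the ORIGINAL section is not found verbatim in the
    file, which is one way an edit can "fail to apply" (for example if
    the model reproduced the code with different whitespace or replaced
    some lines with "...").
    """
    file_path = Path(path)
    content = file_path.read_text()
    if original not in content:
        return False  # ORIGINAL does not match the file exactly
    file_path.write_text(content.replace(original, updated, 1))
    return True
```

Under that kind of scheme, an ORIGINAL section that skips lines or changes whitespace simply never matches, which lines up with the repeated failures described above.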
Absolutely awesome project. I had a similar idea but no clue how to implement it, and you pulled it off! For me, the issue was that whatever I typed into the aider prompt failed to apply, leaving me stuck like the others. I found a workaround that is likely useful for others as well: paste the failing file into GPT-4 and instruct it to merge in the git diff that was failing. After I updated the file, aider created the commit prompt, and after acknowledging with “y” I could proceed as normal.
Ya, I have spent a fair amount of effort improving and refining the prompts and edit formats that aider uses with GPT-3.5 and 4.
It can get tricky making ad-hoc prompt changes, because they often help in some situations and hurt in others. I ended up building a benchmark so that I could quantitatively measure how prompting changes affected the overall code editing performance.
I wrote up some notes on the results. If you’re thinking about experimenting with prompts, it might be useful background.
https://aider.chat/docs/benchmarks.html
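To give a concrete feel for what “quantitatively measure” means here, the core of such a benchmark can be a small loop: run the model on each exercise, apply its edits, and check whether the tests pass. The sketch below only illustrates that shape; the directory layout, the pytest-based checking, and the ask_model_for_edits callable are assumptions for illustration, not the actual aider benchmark:

```python
# Hedged sketch of a code-editing benchmark loop; the exercise layout,
# pytest usage, and ask_model_for_edits callable are assumptions,
# not the actual aider benchmark.
import subprocess
from pathlib import Path

def tests_pass(exercise: Path) -> bool:
    """Run the exercise's test suite (assumed to be pytest-compatible)."""
    result = subprocess.run(["pytest", "-q", str(exercise)], capture_output=True)
    return result.returncode == 0

def run_benchmark(exercise_dir: str, ask_model_for_edits) -> float:
    """Return the fraction of exercises the model solves end to end.

    `ask_model_for_edits(exercise)` is a stand-in callable that requests
    edit blocks from the model, tries to apply them, and returns True
    only when every block applied cleanly.
    """
    exercises = [p for p in Path(exercise_dir).iterdir() if p.is_dir()]
    solved = 0
    for exercise in exercises:
        if ask_model_for_edits(exercise) and tests_pass(exercise):
            solved += 1
    return solved / len(exercises) if exercises else 0.0
```

Comparing that single pass rate before and after a prompt or edit-format change makes it possible to tell whether the change helped overall, rather than only in one particular chat.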