AutoGPT: crashes with an error when using the GPT-4-32k model on Azure.

Duplicates

  • I have searched the existing issues

Steps to reproduce 🕹

While using the GPT-4-32k model (yes, I do have access to it), after entering my 5th "Goal" during the initial stage, I get this error message:

Current behavior 😯

Goal 5: this the goal text 5 blah…<enter>
Warning: The file 'auto-gpt.json' does not exist. Local memory would not be saved to a file.
Using memory of type: LocalCache
Traceback (most recent call last):
  File "C:\Users\Brentf\source\repos\auto-gpt\scripts\token_counter.py", line 17, in count_message_tokens
    encoding = tiktoken.encoding_for_model(model)
  File "C:\Users\Brentf\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\tiktoken\model.py", line 70, in encoding_for_model
    raise KeyError(
KeyError: 'Could not automatically map GPT-4-32k to a tokeniser. Please use tiktoken.get_encoding to explicitly get the tokeniser you expect.'
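For context on the KeyError: tiktoken's model-to-encoding table is keyed by exact lowercase names such as gpt-4-32k, so a name spelled GPT-4-32k (e.g. an Azure deployment name passed through as the model) misses the table. A minimal stdlib-only sketch of that lookup behaviour — the table below is an illustrative stand-in, not tiktoken's actual map:

```python
# Stand-in for tiktoken's model-to-encoding table (illustrative only)
MODEL_TO_ENCODING = {
    "gpt-4": "cl100k_base",
    "gpt-4-32k": "cl100k_base",
    "gpt-3.5-turbo": "cl100k_base",
}

def encoding_name_for(model: str) -> str:
    # Exact-match lookup: "GPT-4-32k" != "gpt-4-32k", hence the KeyError above
    if model in MODEL_TO_ENCODING:
        return MODEL_TO_ENCODING[model]
    raise KeyError(f"Could not automatically map {model} to a tokeniser.")

print(encoding_name_for("gpt-4-32k"))  # cl100k_base
# encoding_name_for("GPT-4-32k") raises KeyError, matching the trace above
```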

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Brentf\source\repos\auto-gpt\scripts\main.py", line 441, in <module>
    main()
  File "C:\Users\Brentf\source\repos\auto-gpt\scripts\main.py", line 304, in main
    agent.start_interaction_loop()
  File "C:\Users\Brentf\source\repos\auto-gpt\scripts\main.py", line 345, in start_interaction_loop
    assistant_reply = chat.chat_with_ai(
  File "C:\Users\Brentf\source\repos\auto-gpt\scripts\chat.py", line 77, in chat_with_ai
    next_message_to_add_index, current_tokens_used, insertion_index, current_context = generate_context(
  File "C:\Users\Brentf\source\repos\auto-gpt\scripts\chat.py", line 40, in generate_context
    current_tokens_used = token_counter.count_message_tokens(current_context, model)
  File "C:\Users\Brentf\source\repos\auto-gpt\scripts\token_counter.py", line 19, in count_message_tokens
    logger.warn("Warning: model not found. Using cl100k_base encoding.")
NameError: name 'logger' is not defined
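The second traceback points at the actual bug: the except branch in token_counter.py calls logger.warn, but no logger is defined in that module, so the intended fallback to cl100k_base never runs and the process dies with a NameError instead. A hedged sketch of the fix using the stdlib logging module — the lookup table is a stand-in so the sketch stays self-contained without tiktoken installed:

```python
import logging

logger = logging.getLogger(__name__)  # the definition missing from token_counter.py

def safe_encoding_name(model: str, table: dict) -> str:
    """The fallback token_counter.py intended: on an unknown model name,
    warn and use cl100k_base instead of crashing. `table` stands in for
    tiktoken's model-to-encoding map."""
    try:
        return table[model]
    except KeyError:
        # logging prefers warning(); warn() is a deprecated alias
        logger.warning("Model not found. Using cl100k_base encoding.")
        return "cl100k_base"

print(safe_encoding_name("GPT-4-32k", {"gpt-4-32k": "cl100k_base"}))  # cl100k_base
```

With the logger defined, the unknown model name degrades gracefully to the fallback encoding rather than aborting the run.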

Expected behavior 🤔

No error: the goals should be accepted and the run should continue.

Your prompt 📝

# Paste your prompt here

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Comments: 28 (2 by maintainers)

Most upvoted comments

Hi @Cytranics, thanks, but as you can see from my screenshot above, the "Model deployment name" I gave on my Azure OpenAI resource is gpt-4 (the same as the model name), so I don't understand what's wrong in my model map file (azure.yaml) 😢

[screenshot: Azure model deployment settings]
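For reference, a hedged sketch of the azure.yaml model map AutoGPT shipped a template for around that release — the field names may differ between versions and every value below is a placeholder; the deployment IDs must match the "Model deployment name" configured in the Azure portal:

```yaml
# Illustrative azure.yaml sketch — field names per the template of that era,
# all values are placeholders to be replaced with your own Azure settings
azure_api_type: azure
azure_api_base: https://your-resource-name.openai.azure.com
azure_api_version: 2023-03-15-preview
azure_model_map:
    fast_llm_model_deployment_id: your-gpt-deployment-name
    smart_llm_model_deployment_id: your-gpt-deployment-name
    embedding_model_deployment_id: your-embedding-deployment-name
```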

Could be the code. I'll be releasing an actual working autonomous bot here in a day or so, strictly built for GPT-4-32k. Mine wrote an entire HTML form with a Node.js backend in 10 minutes. https://i.imgur.com/80J6gsW.png

while using the GPT-4-32k model (yes, i do have access to it)

Now you have to tell us: how is it? When does it excel, and when is it barely better? Which tasks did you try it on?