AutoGPT: Azure support broken?

⚠️ Search for existing issues first ⚠️

  • I have searched the existing issues, and there is no existing issue for my problem

GPT-3 or GPT-4

  • I am using Auto-GPT with GPT-3 (GPT-3.5)

Steps to reproduce 🕹

azure.yaml:
azure_api_type: azure
azure_api_base: https://test.openai.azure.com/
azure_api_version: 2023-03-15-preview
azure_model_map:
    fast_llm_model_deployment_id: "gpt-35-turbo"
    smart_llm_model_deployment_id: "gpt-4"
    embedding_model_deployment_id: "emb-ada"  

Current behavior 😯

When I run "python -m autogpt", it just breaks:

Welcome back! Would you like me to return to being Entrepreneur-GPT?
Continue with the last settings?
Name: Entrepreneur-GPT
Role: an AI designed to autonomously develop and run businesses with the
Goals: ['Increase net worth', 'Grow Twitter Account', 'Develop and manage multiple businesses autonomously']
Continue (y/n): y
Using memory of type: LocalCache
Using Browser: chrome

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/data/Auto-GPT/autogpt/__main__.py", line 50, in <module>
    main()
  File "/data/Auto-GPT/autogpt/__main__.py", line 46, in main
    agent.start_interaction_loop()
  File "/data/Auto-GPT/autogpt/agent/agent.py", line 75, in start_interaction_loop
    assistant_reply = chat_with_ai(
                      ^^^^^^^^^^^^^
  File "/data/Auto-GPT/autogpt/chat.py", line 159, in chat_with_ai
    assistant_reply = create_chat_completion(
                      ^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/Auto-GPT/autogpt/llm_utils.py", line 84, in create_chat_completion
    deployment_id=CFG.get_azure_deployment_id_for_model(model),
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/Auto-GPT/autogpt/config/config.py", line 120, in get_azure_deployment_id_for_model
    return self.azure_model_to_deployment_id_map[
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: list indices must be integers or slices, not str
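
The TypeError at the bottom of the traceback means azure_model_to_deployment_id_map ended up as a Python list instead of a dict. A minimal sketch of one way that can happen, assuming the azure_model_map entries are parsed as a YAML sequence (dash-prefixed items) rather than plain key: value pairs, or that the loader falls back to a non-dict value when azure.yaml is not found at the expected path (see the path fix in the comments below):

import yaml

# Illustrative, assumed YAML: if the map entries parse as a sequence
# (note the leading dashes), indexing by a string key raises the same
# error as in the traceback above.
broken = """
azure_model_map:
    - fast_llm_model_deployment_id: "gpt-35-turbo"
    - smart_llm_model_deployment_id: "gpt-4"
    - embedding_model_deployment_id: "emb-ada"
"""
model_map = yaml.safe_load(broken)["azure_model_map"]
print(type(model_map))                      # <class 'list'>
model_map["fast_llm_model_deployment_id"]   # TypeError: list indices must be integers or slices, not str

Checking what yaml.safe_load returns for azure_model_map is a quick way to tell whether the file contents or the path it is read from is the problem.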

Expected behavior 🤔

It should work without errors.

Your prompt 📝

# Paste your prompt here

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Reactions: 1
  • Comments: 22 (5 by maintainers)

Most upvoted comments

I'll be interested to hear if you get it going, as the code doesn't seem to support the Azure implementation.

@xboxeer, are these the deployment IDs you actually created? They look rather generic. You have to add model deployments in Azure and name them there, then put those names into the azure_model_map above.
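
For context, the lookup the traceback points at is essentially a dict access keyed by the OpenAI model name; the sketch below is inferred from the stack trace, not the actual AutoGPT source, and the key names are assumptions. The values stored in azure_model_map are passed straight to the API as deployment_id, which is why they must match the deployment names created in Azure exactly:

# Sketch (assumed) of config.py's get_azure_deployment_id_for_model, based on
# the traceback: azure_model_to_deployment_id_map must be a dict whose values
# are your Azure deployment names, not the base model names.
class Config:
    def __init__(self, azure_model_to_deployment_id_map: dict):
        self.azure_model_to_deployment_id_map = azure_model_to_deployment_id_map

    def get_azure_deployment_id_for_model(self, model: str) -> str:
        key_for_model = {
            "gpt-3.5-turbo": "fast_llm_model_deployment_id",
            "gpt-4": "smart_llm_model_deployment_id",
            "text-embedding-ada-002": "embedding_model_deployment_id",
        }[model]
        return self.azure_model_to_deployment_id_map[key_for_model]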

Figured out the problem: the API version was incorrect; it seems it has to be 2023-03-15-preview. Now I face another error:

openai.error.APIError: Invalid response object from API: '{ "statusCode": 401, "message": "Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired." }' (HTTP response code was 401)

azure.yaml:

azure_api_type: azure_ad
azure_api_base: "https://xxxx.openai.azure.com/"
azure_api_version: "2023-03-15-preview"
azure_model_map:
    fast_llm_model_deployment_id: "gpt-35-turbo"
    smart_llm_model_deployment_id: "gpt-35-turbo" -> I don't have GPT4 access so I change it to gpt 35, won't be called anyway I assume
    embedding_model_deployment_id: "text-embedding-ada-002"

I have set up my OpenAI key in .env (screenshot attached).

The key should work, as I tested it in another project (semantic-kernel); I don't know why it is not working in the context of AutoGPT.
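
The 401 is a hint about the auth mode rather than the key itself: with azure_api_type set to azure_ad, the openai library sends the configured key as an Azure AD bearer token, so a plain key copied from the Azure portal into .env gets rejected with exactly this "audience is incorrect" message. A minimal sketch of the two modes, assuming the legacy openai (<1.0) Python client that AutoGPT used at the time (endpoint and credentials are placeholders):

import openai

# Option 1: api_type "azure" -- authenticate with the key from the Azure
# portal ("Keys and Endpoint" on the Azure OpenAI resource).
openai.api_type = "azure"
openai.api_base = "https://xxxx.openai.azure.com/"
openai.api_version = "2023-03-15-preview"
openai.api_key = "<key from the Azure portal>"

# Option 2: api_type "azure_ad" -- the "key" must be an Azure AD access token
# scoped to Cognitive Services, e.g. obtained via azure-identity.
from azure.identity import DefaultAzureCredential
token = DefaultAzureCredential().get_token("https://cognitiveservices.azure.com/.default")
openai.api_type = "azure_ad"
openai.api_key = token.token

In other words: either switch azure_api_type back to azure and keep the portal key in .env, or keep azure_ad and supply a real AD token.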

The issues I've run into could fill a book of Azure OpenAI FAQs for AutoGPT, I guess :)

Line 133 in /autogpt/config/config.py should read:

AZURE_CONFIG_FILE = os.path.join(os.path.dirname(__file__), "..", "..", "azure.yaml")

The azure.yaml file is two folders up (instead of one).
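
A quick way to sanity-check that relative path, assuming the layout from the traceback (config.py under autogpt/config/ and azure.yaml at the repository root):

import os

# Hypothetical stand-in for __file__ inside autogpt/config/config.py.
config_py = "/data/Auto-GPT/autogpt/config/config.py"

one_up = os.path.abspath(os.path.join(os.path.dirname(config_py), "..", "azure.yaml"))
two_up = os.path.abspath(os.path.join(os.path.dirname(config_py), "..", "..", "azure.yaml"))

print(one_up)  # /data/Auto-GPT/autogpt/azure.yaml -> one folder up, not where the file lives
print(two_up)  # /data/Auto-GPT/azure.yaml         -> repository root, where azure.yaml is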