llama_index: [Bug]: openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: xxxxx********************xxxx
Bug Description
I am calling the `aevaluate` method of `AnswerRelevancyEvaluator` and I keep getting this message: openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: xxxxx********************xxxx. The same key works for other OpenAI calls. Please help.
```python
judges["answer_relevancy"] = AnswerRelevancyEvaluator(
    service_context=ServiceContext.from_defaults(
        llm=OpenAI(temperature=0, model="gpt-3.5-turbo")
    )
)

eval_tasks.append(
    await judges["answer_relevancy"].aevaluate(
        query=query,
        response=answer,
        sleep_time_in_seconds=1.0,
    )
)
```
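(Not part of the original report, but a useful first sanity check for a 401: confirm the key the client actually sees is well-formed. The `openai` client falls back to the `OPENAI_API_KEY` environment variable when no key is passed explicitly, and a stale or placeholder value there is a common cause of this error. A minimal sketch, where `check_openai_key_env` is a hypothetical helper, not part of any library:)

```python
import os

def check_openai_key_env() -> bool:
    """Return True if OPENAI_API_KEY looks like a plausible key.

    Hypothetical helper: OpenAI keys currently start with "sk-", so a
    short value or one without that prefix usually means a placeholder
    leaked in from a .env file or shell profile.
    """
    key = os.environ.get("OPENAI_API_KEY", "")
    return key.startswith("sk-") and len(key) > 20
```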
Version
llama-index==0.9.44
Steps to Reproduce
None
Relevant Logs/Tracebacks
LOG BELOW
/bin/python3 /home/tlmalla/answerrelevancy.py
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 401 Unauthorized"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 401 Unauthorized"
Traceback (most recent call last):
File "/home/tlmalla/answerrelevancy.py", line 90, in <module>
asyncio.run(extract_json(json_file, output_file))
File "/home/tlmalla/.local/lib/python3.10/site-packages/nest_asyncio.py", line 30, in run
return loop.run_until_complete(task)
File "/home/tlmalla/.local/lib/python3.10/site-packages/nest_asyncio.py", line 98, in run_until_complete
return f.result()
File "/usr/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception.with_traceback(self._exception_tb)
File "/usr/lib/python3.10/asyncio/tasks.py", line 232, in __step
result = coro.send(None)
File "/home/tlmalla/answerrelevancy.py", line 76, in extract_json
await judges["answer_relevancy"].evaluate(
File "/home/tlmalla/.local/lib/python3.10/site-packages/llama_index/evaluation/base.py", line 62, in evaluate
return asyncio.run(
File "/home/tlmalla/.local/lib/python3.10/site-packages/nest_asyncio.py", line 30, in run
return loop.run_until_complete(task)
File "/home/tlmalla/.local/lib/python3.10/site-packages/nest_asyncio.py", line 98, in run_until_complete
return f.result()
File "/usr/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception.with_traceback(self._exception_tb)
File "/usr/lib/python3.10/asyncio/tasks.py", line 232, in __step
result = coro.send(None)
File "/home/tlmalla/.local/lib/python3.10/site-packages/llama_index/evaluation/answer_relevancy.py", line 119, in aevaluate
eval_response = await self._service_context.llm.apredict(
File "/home/tlmalla/.local/lib/python3.10/site-packages/llama_index/llms/llm.py", line 280, in apredict
chat_response = await self.achat(messages)
File "/home/tlmalla/.local/lib/python3.10/site-packages/llama_index/llms/base.py", line 57, in wrapped_async_llm_chat
f_return_val = await f(_self, messages, **kwargs)
File "/home/tlmalla/.local/lib/python3.10/site-packages/llama_index/llms/openai.py", line 494, in achat
return await achat_fn(messages, **kwargs)
File "/home/tlmalla/.local/lib/python3.10/site-packages/llama_index/llms/openai.py", line 538, in _achat
response = await aclient.chat.completions.create(
File "/home/tlmalla/.local/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1322, in create
return await self._post(
File "/home/tlmalla/.local/lib/python3.10/site-packages/openai/_base_client.py", line 1725, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
File "/home/tlmalla/.local/lib/python3.10/site-packages/openai/_base_client.py", line 1428, in request
return await self._request(
File "/home/tlmalla/.local/lib/python3.10/site-packages/openai/_base_client.py", line 1519, in _request
raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: xxx********************xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
About this issue
- Original URL
- State: closed
- Created 5 months ago
- Reactions: 1
- Comments: 17 (1 by maintainers)
🤖
Based on the information you've provided, it seems like you're using the Azure OpenAI service, which has a different authentication mechanism compared to the standard OpenAI API. The Azure OpenAI service uses Azure AD for token authentication, and the token might have expired, leading to the "Incorrect API key provided" error.
In the LlamaIndex codebase, the `AzureOpenAIMultiModal` class handles the API key authentication for OpenAI calls. If `use_azure_ad` is set to `True`, the `_get_credential_kwargs` method refreshes the Azure AD token and sets it as the API key. If the token has expired and is not refreshed properly, it could lead to this error.
You can find this method in the llama_index/llms/azure_openai.py file.
Ensure that `use_azure_ad` is set to `True` only if you are using Azure AD for token authentication and that the token is being refreshed correctly. If you are not using Azure AD for token authentication, make sure `use_azure_ad` is set to `False` and the correct API key is provided.

If you're still facing issues, could you please provide more details about your setup? Specifically, it would be helpful to know how you're initializing the `AzureOpenAIMultiModal` instance and whether `use_azure_ad` is set to `True` or `False`.

Sources
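To illustrate the failure mode described above (an expired Azure AD token being reused as the API key), here is a minimal, library-free sketch of refresh-if-expired caching. `TokenCache` and `fetch_token` are hypothetical names for illustration only, not the actual LlamaIndex implementation:

```python
import time

class TokenCache:
    """Illustrative sketch of refresh-if-expired token caching.

    Mimics the behavior _get_credential_kwargs is described as having:
    reuse a cached token while it is fresh, fetch a new one once it
    expires. If the refresh step is skipped, the stale token is sent
    as the API key and the server answers 401.
    """

    def __init__(self, fetch_token, ttl_seconds: float):
        self._fetch = fetch_token      # hypothetical stand-in for the Azure AD call
        self._ttl = ttl_seconds
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        now = time.monotonic()
        if self._token is None or now >= self._expires_at:
            self._token = self._fetch()
            self._expires_at = now + self._ttl
        return self._token
```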
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Thanks @logan-markewich for pointing out the direction! My issue is resolved.
For llama-index==0.9.48, if you use AzureOpenAI you need to add the following. Reference: https://docs.llamaindex.ai/en/latest/module_guides/supporting_modules/service_context_migration.html# However, because the code differs slightly across versions, the following code works for me.
```python
from llama_index import ServiceContext, set_global_service_context

service_context = ServiceContext.from_defaults(
    llm=AzureOpenAI(engine="gpt-4-32k", model="gpt-4-32k"),
)
set_global_service_context(service_context)
```
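(For completeness, a hedged sketch of what a fuller `AzureOpenAI` construction can look like on llama-index 0.9.x. The endpoint, environment-variable names, and API version below are placeholders for your own deployment values, not values from this thread; check them against your Azure resource.)

```python
import os

from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import AzureOpenAI

# All values below are placeholders for your own Azure deployment.
llm = AzureOpenAI(
    engine="gpt-4-32k",  # your Azure deployment name
    model="gpt-4-32k",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-07-01-preview",
)
set_global_service_context(ServiceContext.from_defaults(llm=llm))
```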
@YanZhangADS the pandas query engine uses an LLM to write pandas queries. You'll need to pass in the LLM if you want to use it with Azure 👍
Thank you for a quick response. It does not work for me. Still gives the same error.