openai-python: Exception is thrown during parsing of response to a request which triggered Azure's content management

Describe the bug

When the Azure content management system flags a request, the library fails to parse the error response and raises a TypeError instead of a meaningful API error.

Example stack trace:

Traceback (most recent call last):
  File "/Users/WSkinner/.pyenv/versions/3.10.7/envs/ml-generative/lib/python3.10/site-packages/openai/api_requestor.py", line 331, in handle_error_response
    error_data = resp["error"]
TypeError: string indices must be integers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/WSkinner/work/ripcord/ml-generative/bug.py", line 9, in <module>
    response = openai.Completion.create(
  File "/Users/WSkinner/.pyenv/versions/3.10.7/envs/ml-generative/lib/python3.10/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/WSkinner/.pyenv/versions/3.10.7/envs/ml-generative/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/Users/WSkinner/.pyenv/versions/3.10.7/envs/ml-generative/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Users/WSkinner/.pyenv/versions/3.10.7/envs/ml-generative/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/Users/WSkinner/.pyenv/versions/3.10.7/envs/ml-generative/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
  File "/Users/WSkinner/.pyenv/versions/3.10.7/envs/ml-generative/lib/python3.10/site-packages/openai/api_requestor.py", line 333, in handle_error_response
    raise error.APIError(
  File "/Users/WSkinner/.pyenv/versions/3.10.7/envs/ml-generative/lib/python3.10/site-packages/openai/error.py", line 32, in __init__
    self.error = self.construct_error_object()
  File "/Users/WSkinner/.pyenv/versions/3.10.7/envs/ml-generative/lib/python3.10/site-packages/openai/error.py", line 62, in construct_error_object
    or not isinstance(self.json_body["error"], dict)
TypeError: string indices must be integers

There is a related issue, from which I have copied the prompt that triggers the content management policy. That issue, however, treats the rejection itself (the 400 status code returned when content management flags a request) as the bug. This new issue is specifically about the openai-python library’s handling of that response, not the fact that the response was returned.

To Reproduce

To reproduce the issue, run the following code.

import os
import openai

openai.api_type = "azure"
openai.api_version = "2023-03-15-preview"
openai.api_base = "https://rc-ai.openai.azure.com/"
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")

response = openai.Completion.create(
    engine="gpt-35-turbo",
    prompt="SUBREDDIT: r/AskReddit TITLE: Cock blocked by a friend (Who's a girl). POST: So for the past week there's "
        "been this girl in one of my classes I've been talking to, she's pretty cute (dyed red hair, fair skin, "
        "a few freckles, not ginger), she loves star wars and I suspect she's a redditor. I was going to ask her for "
        "her number today, but a girl i met about a year ago came and sat right where the red head had been sitting, "
        "effectively cock-blocking me and driving the girl I was interested in away. Now it seems like the red head "
        "thinks I'm uninterested in her and has since found some other guy to talk to. Has anybody been in a similar "
        "scenario? Advice? \nTL;DR: Got cock blocked by a friend who's a girl."
)

Code snippets

No response

OS

macOS

Python version

3.10.7

Library version

0.27.2

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Comments: 18 (5 by maintainers)

Most upvoted comments

Hey folks! Any updates on this bug?

@JensMadsen the Azure service API was fixed to return the correct content type. As a result, the openai library now raises an openai.error.InvalidRequestError with error code "content_filter" when a prompt is filtered by the content policy. For now, I suspect you can write something like this to gate your retry logic:

try:
    openai.Completion.create(
        prompt="<prompt triggering content filter>",
        deployment_id="<deployment_id>"
    )
except openai.error.InvalidRequestError as e:
    if e.error.code == "content_filter":
        pass  # content was filtered; do not retry
    else:
        raise
Related: https://github.com/openai/openai-python/commit/931b4d23c742349bf5f82a2a995dd46ae684cdd5 was merged to at least prevent this from throwing an exception.

@JensMadsen that seems reasonable to me. Let me bring it up with the team.

@kristapratico Would it be an idea to raise a new error, e.g. ContentPolicyError? If we want to apply retry logic (as is the case in e.g. Langchain), it would be very practical to be able to distinguish “normal” API errors from errors caused by triggering the content policy. A normal API error could mean that you (or Azure) have a temporary problem; in that case I would like to wait a few seconds and retry to see if things are back to normal. On the other hand, if I triggered the content policy system, I do not want to retry. I was looking into solving this via a PR in Langchain, but it looks difficult because of the way the retry logic there is built (tenacity).
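The distinction being asked for here can be sketched without any openai-specific types. This is a minimal illustration of the proposed behavior, not real library code: `ContentPolicyError` and `TransientAPIError` are hypothetical stand-ins for the proposed error class and a generic retryable failure.

```python
# Sketch of the retry behavior proposed above. ContentPolicyError and
# TransientAPIError are hypothetical stand-ins, not real openai classes.

class APIError(Exception):
    """Base class for illustrative API failures."""

class ContentPolicyError(APIError):
    """Prompt rejected by the content policy; retrying cannot succeed."""

class TransientAPIError(APIError):
    """Temporary service problem; worth retrying after a short wait."""

def call_with_retry(fn, attempts=3):
    """Call fn, retrying transient failures but never content rejections."""
    for attempt in range(attempts):
        try:
            return fn()
        except ContentPolicyError:
            raise  # a filtered prompt will be filtered again; fail fast
        except TransientAPIError:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the transient error
```

With a dedicated exception type, a retry decorator (tenacity or hand-rolled) can fail fast on content rejections while still retrying genuinely transient errors.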

Looks like self.json_body is a string. Need to json.loads(self.json_body) somewhere.
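A minimal sketch of that fix, assuming the only problem is an error body delivered as a raw JSON string (`coerce_json_body` is a hypothetical helper for illustration, not the actual patch):

```python
import json

def coerce_json_body(json_body):
    """Return json_body as a dict, decoding it first if it is a raw string.

    Hypothetical helper illustrating the fix suggested above: the error
    body arrived as an undecoded JSON string, so indexing it with
    json_body["error"] raised TypeError ("string indices must be integers").
    """
    if isinstance(json_body, str):
        try:
            json_body = json.loads(json_body)
        except json.JSONDecodeError:
            return None  # not JSON at all; let the caller fall back
    return json_body if isinstance(json_body, dict) else None
```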