openai-python: [Audio.transcribe] JsonDecodeError when printing vtt from m4a

Describe the bug

This section of the codebase expects JSON even when response_format is not json:

https://github.com/openai/openai-python/blob/75c90a71e88e4194ce22c71edeb3d2dee7f6ac93/openai/api_requestor.py#L668C7-L673

I think I can contribute a quick bug fix PR today!

To Reproduce

  1. Open an m4a file in a Jupyter notebook (Python 3.10.10)
  2. Transcribe with whisper-1
  3. Print transcript

Stack trace:

JSONDecodeError                           Traceback (most recent call last)
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\openai\api_requestor.py:669, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
    668 try:
--> 669     data = json.loads(rbody)
    670 except (JSONDecodeError, UnicodeDecodeError) as e:

File C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\json\__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\json\decoder.py:337, in JSONDecoder.decode(self, s, _w)
    333 """Return the Python representation of ``s`` (a ``str`` instance
    334 containing a JSON document).
    335 
    336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338 end = _w(s, end).end()

File C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\json\decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
    354 except StopIteration as err:
--> 355     raise JSONDecodeError("Expecting value", s, err.value) from None

Code snippets

import openai

# Using a context manager so the file handle is closed after the request
with open("testing.m4a", "rb") as f:
    transcript = openai.Audio.transcribe("whisper-1", f, response_format="vtt")
print(transcript)



OS

Windows 11

Python version

Python v3.10.10

Library version

openai-python 0.27.0

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Reactions: 4
  • Comments: 20 (9 by maintainers)

Most upvoted comments

Background

  1. The Whisper repo uses ResultWriter in utils.py. That’s missing here.

  2. This is the relevant snippet before going into _interpret_response_line https://github.com/openai/openai-python/blob/1165639843d1be71b009e17b9c29686d05299d4e/openai/api_requestor.py#L216-L227

  3. Whisper uses a get_writer function, then writes the result to an output path. https://github.com/openai/whisper/blob/da600abd2b296a5450770b872c3765d0a5a5c769/whisper/transcribe.py#L313-L317. But in this repo, the result gets passed to _interpret_response and wrapped in an OpenAIResponse.

  4. A long time ago, this repo was forked from the Stripe Python library, and if you look at their APIRequestor, it's a lot cleaner: https://github.com/stripe/stripe-python/blob/master/stripe/api_requestor.py#L385.

Discussion

Fundamentally, the problem here is that we’re violating the assumption that the request always returns a JSON encoded response.

My proposal is to prepare requests to Whisper such that they always return serialized JSON, and if the user wants a vtt or srt file, we do the conversion on the client inside audio.py.

Concretely, a call to transcribe would first call transcribe_raw to get the raw JSON, then process it according to response_format, and finally return the content in the format the caller requested.

Evaluation

Let’s consider our options and tradeoffs in terms of the user experience, product roadmap, and technical architecture.

In some cases, it may be more beneficial to perform formatting on the client-side, especially when there are multiple clients consuming the same API resource, but each client requires a different data format. For instance, if Alice and Bob request a transcription of an identical audio file, Alice may require the transcription in vtt format, while Bob may need it in srt format.

It is more efficient to store a single canonical representation instead of multiple per-format copies. When sending text over the wire, a compact binary format that is deserialized on the client can also be more efficient than the I/O overhead of sending the formatted text directly.

In the future, as we support passing URLs to remote files, we may see a pattern of a few popular, large-size tracks being transcribed frequently, while the majority of requests will be for smaller one-off transcriptions.

It’s worth noting that we can cache the result of the first request and serve it from the cache on the second request. Cache management is more complex in practice, but let’s keep it simple for now.
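The caching idea above could be as simple as keying results by a hash of the audio bytes. This is an illustrative sketch only; cached_transcribe and the in-process dict cache are inventions for this example, not anything in the library.

```python
# Minimal sketch of result caching: identical audio bytes hash to the
# same key, so a repeated request is served without re-transcribing.
import hashlib
from typing import Callable

_cache: dict[str, str] = {}


def cached_transcribe(audio_bytes: bytes, transcribe_fn: Callable[[bytes], str]) -> str:
    """Return a cached transcription, calling transcribe_fn only on a miss."""
    key = hashlib.sha256(audio_bytes).hexdigest()
    if key not in _cache:
        _cache[key] = transcribe_fn(audio_bytes)
    return _cache[key]
```

A real implementation would need eviction and invalidation policies, but the sketch shows why the "few popular large tracks" pattern makes caching attractive.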

Biggest drawback I can see is that converting large files might be slow, especially on older hardware, and we should be careful to control the footprint of client workloads.

Alternatives

One potential fix may be to check and enforce the Content-Type HTTP header. Another might be to band-aid over this with a hack that peeks into rbody to check the type. I don’t think either of those would be good design decisions in this case, but I’m open to other perspectives, and it’s always possible I’m missing something.
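For comparison, the Content-Type alternative would look something like the sketch below. This is an assumption about what a patched _interpret_response_line might do, not the actual code; the interpret_body name is made up for illustration.

```python
# Hypothetical Content-Type check: only attempt json.loads when the
# server says the body is JSON; otherwise return the raw text as-is.
import json


def interpret_body(rbody: str, rheaders: dict):
    content_type = rheaders.get("Content-Type", "")
    if "application/json" in content_type:
        return json.loads(rbody)
    # Non-JSON formats (e.g. text/vtt, text/plain, application/x-subrip)
    # pass through untouched instead of raising JSONDecodeError.
    return rbody
```

It works, but it scatters format knowledge into the transport layer, which is part of why I lean toward the client-side conversion approach instead.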

If nobody suggests something more clever/elegant/scalable, I’ll just implement the best I can come up with right now and get that reviewed.

thanks for reporting @sheikheddy - we seem to have missed this. please tag me if you do end up putting a pr up!