instructor: Instructor's `response_model` doesn't typecheck

Describe the bug
Instructor's patch of OpenAI's `completions.create` doesn't typecheck when called with `response_model`.

To Reproduce
Use the sample code from the README and open the project in VSCode with Pylance installed.

Expected behavior
The code type-checks without errors.

Screenshots
(attached screenshot)

About this issue

  • Original URL
  • State: open
  • Created 7 months ago
  • Comments: 19 (11 by maintainers)

Most upvoted comments

i don’t hate it.

@savarin happy to take contribs into types

For the sync client, I wrote a simple wrapper class around the patched client so the rest of my code avoids the type issues.

from typing import TypeVar, Generic, Type
import instructor

from openai import OpenAI

# Define a generic type for the response model
T = TypeVar("T")


class StructuredOpenAI(Generic[T]):
    """A wrapper class around instructor's patched OpenAI client.

    This simple wrapper allows us to avoid type errors when using Instructor's patched OpenAI client.

    """

    def __init__(self, openai_client: OpenAI):
        self._client = instructor.patch(openai_client)

    def create(
        self,
        response_model: Type[T],
        max_retries: int = 1,
        validation_context=None,
        *args,
        **kwargs,
    ) -> T:
        # The patched create() accepts response_model, validation_context, and
        # max_retries at runtime, but the OpenAI type stubs don't know about
        # them, hence the type: ignore.
        return self._client.chat.completions.create(  # type: ignore
            *args,
            response_model=response_model,
            validation_context=validation_context,
            max_retries=max_retries,
            **kwargs,
        )