This is the official OpenAI Python SDK, generated from OpenAPI specs via Stainless. The codebase has three layers:

- Client Layer (`_client.py`): `OpenAI` and `AsyncOpenAI` classes expose API resources and HTTP methods (`get`, `post`, etc.)
- Resource Layer (`resources/`): Domain-specific classes (e.g., `Batches`, `Chat`, `Embeddings`) inherit from `SyncAPIResource` or `AsyncAPIResource`
- Type Layer (`types/`): Generated Pydantic models for request/response schemas and per-resource params
- Most SDK files are generated and will be overwritten by the Stainless generator
- Safe-to-edit directories: `src/openai/lib/`, `examples/`, `tests/`
- Manual patches: persist between generations but may cause merge conflicts
- See CONTRIBUTING.md for generator details
Every API endpoint follows this pattern:

```python
# src/openai/resources/batches.py
class Batches(SyncAPIResource):
    def create(self, *, param1: str, extra_headers: Headers | None = None) -> Batch:
        return self._post("/v1/batches", body=maybe_transform(...), cast_to=Batch)

class AsyncBatches(AsyncAPIResource):
    async def create(self, ...) -> Batch:
        return await self._post(...)
```

- Both sync and async versions are required
- Use `self._post()`, `self._get()`, etc. (inherited from `SyncAPIResource`/`AsyncAPIResource`)
- Transform params via `maybe_transform()`/`async_maybe_transform()`
- Cast responses using the `cast_to=` parameter
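The sync/async mirroring above can be illustrated with a dependency-free sketch — the base classes, `_post()`, and the `Batch` type here are simplified stand-ins, not the SDK's real implementations:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Batch:
    id: str
    status: str


class SyncAPIResource:
    """Stand-in base: real SDK resources inherit HTTP verbs from a shared client."""

    def _post(self, path: str, *, body: dict, cast_to: type) -> Batch:
        # A real implementation would issue an HTTP request; here we echo a fixture.
        return cast_to(id="batch_123", status="created")


class AsyncAPIResource:
    async def _post(self, path: str, *, body: dict, cast_to: type) -> Batch:
        return cast_to(id="batch_123", status="created")


class Batches(SyncAPIResource):
    def create(self, *, input_file_id: str) -> Batch:
        return self._post("/v1/batches", body={"input_file_id": input_file_id}, cast_to=Batch)


class AsyncBatches(AsyncAPIResource):
    async def create(self, *, input_file_id: str) -> Batch:
        return await self._post("/v1/batches", body={"input_file_id": input_file_id}, cast_to=Batch)


batch = Batches().create(input_file_id="file_xyz")
async_batch = asyncio.run(AsyncBatches().create(input_file_id="file_xyz"))
```

The point of the pattern is that each resource exists twice with identical signatures, differing only in `async`/`await` plumbing.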
- Raw response: `APIResponse[T]` with a `.parse()` method
- Streaming: `Stream[T]`/`AsyncStream[T]` for server-sent events
- Use the `.with_raw_response` property to access raw HTTP data (headers, status)
- Use `.with_streaming_response` for non-eager body reads
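The raw-response wrapper pattern can be sketched in plain Python — a simplified `APIResponse` with a lazy `.parse()`; the SDK's real class is considerably richer:

```python
import json
from typing import Generic, TypeVar

T = TypeVar("T")


class APIResponse(Generic[T]):
    """Simplified stand-in: wraps raw HTTP data and parses the body on demand."""

    def __init__(self, status_code: int, headers: dict, body: str, cast_to: type) -> None:
        self.status_code = status_code
        self.headers = headers
        self._body = body
        self._cast_to = cast_to

    def parse(self) -> T:
        # Decode the JSON body into the target type only when asked.
        return self._cast_to(**json.loads(self._body))


class Batch:
    def __init__(self, id: str, status: str) -> None:
        self.id = id
        self.status = status


raw = APIResponse(200, {"x-request-id": "req_1"}, '{"id": "batch_1", "status": "done"}', Batch)
batch = raw.parse()
```

This mirrors why `.with_raw_response` is useful: headers and status are available immediately, while typed parsing is an explicit, separate step.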
- Request params live in `types/*_params.py` files (e.g., `batch_create_params.py`)
- Response types live in `types/*.py` (e.g., `batch.py` → `Batch` class)
- All models inherit from `BaseModel` (Pydantic v1/v2 compatible via `_models.py`)
- Use `Omit`/`NOT_GIVEN` for the optional-but-not-provided distinction
- Type unions: `str | Literal["custom"]` for constrained values
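The params/`Literal` conventions can be sketched with stdlib typing — the names below mirror the style of generated `*_params.py` files but are illustrative, not the SDK's actual definitions:

```python
from typing import Literal, TypedDict


class BatchCreateParams(TypedDict, total=False):
    """Illustrative request-params shape in the style of types/*_params.py."""

    # Literal unions constrain string fields to known values at type-check time.
    endpoint: Literal["/v1/chat/completions", "/v1/embeddings"]
    completion_window: Literal["24h"]
    input_file_id: str


# total=False means every key is optional: omitted keys are simply not sent.
params: BatchCreateParams = {
    "endpoint": "/v1/chat/completions",
    "completion_window": "24h",
    "input_file_id": "file_xyz",
}
```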
```bash
# With Rye (preferred)
./scripts/bootstrap       # Auto-provisions Python + venv
rye sync --all-features   # Install dependencies

# Without Rye
pip install -r requirements-dev.lock
```

```bash
# Requires the Prism mock server
npx prism mock openapi.yml &
./scripts/test            # Runs pytest with respx mocking

# Or test against a custom endpoint
TEST_API_BASE_URL=https://api.example.com ./scripts/test
```

```bash
rye run format       # Ruff + docs formatting
rye run lint         # Type checking (Pyright/mypy) + lints
./scripts/lint --fix
```

Non-generated, always safe to edit:

```bash
chmod +x examples/my_example.py
./examples/my_example.py   # Runs directly with the rye shebang
```

Located in `tests/` and `tests/api_resources/`:
- Fixtures (`conftest.py`): `client` (sync) and `async_client` (session-scoped)
- Response mocking: uses `respx_mock` to mock HTTP responses
- Strict validation: the `_strict_response_validation=True` flag validates that responses match schemas
- Test organization: mirror the resource structure (e.g., `tests/api_resources/test_chat.py` tests `resources/chat/`)
Example test pattern:

```python
from openai import OpenAI
from openai.types import Batch


def test_create_batch(client: OpenAI) -> None:
    batch = client.batches.create(
        endpoint="/v1/chat/completions",
        input_file_id="file_xyz",
        completion_window="24h",
    )
    assert isinstance(batch, Batch)
```

- Create `resources/my_resource.py` with `MyResource(SyncAPIResource)` and `AsyncMyResource(AsyncAPIResource)`
- Add type definitions in `types/my_resource.py` and `types/my_resource_params.py`
- Update `_client.py` to expose the resource: `self.my_resource = MyResource(self)`
- Export in `__init__.py`
- Add tests mirroring the structure
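The client-wiring step above can be pictured with a toy client — `MyResource` is hypothetical, and the real `_client.py` also configures auth, base URL, and the HTTP transport:

```python
class SyncAPIResource:
    def __init__(self, client: "Client") -> None:
        self._client = client  # Resources reach HTTP verbs through the shared client.


class MyResource(SyncAPIResource):
    def ping(self) -> str:
        # Stand-in for a real endpoint method; it would call self._client._get(...).
        return f"would call {self._client.base_url}/v1/my_resource"


class Client:
    def __init__(self, base_url: str = "https://api.example.com") -> None:
        self.base_url = base_url
        # Each resource is exposed as a client attribute, as in _client.py.
        self.my_resource = MyResource(self)


client = Client()
```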
- Toggle `_strict_response_validation=False` in test fixtures to understand field mismatches
- Inspect response models in `types/` to match API responses
- Use `_utils/_transform.py` for custom coercion logic
- Use the `Omit` type for "don't send this field" vs `None` for "send null"
- Example: `metadata: Metadata | Omit = omit` means optional and not sent by default
- Compare with `optional_field: str | None`, which allows sending None
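The three-way distinction (omitted vs `None` vs a value) can be demonstrated with a homemade sentinel — the SDK's real `Omit`/`NOT_GIVEN` live elsewhere; this is only a sketch of the idea:

```python
class Omit:
    """Sentinel: 'do not include this field in the request body at all'."""

    def __repr__(self) -> str:
        return "omit"


omit = Omit()


def build_body(**fields: object) -> dict:
    # Drop omitted fields entirely; keep explicit None so it serializes as null.
    return {k: v for k, v in fields.items() if not isinstance(v, Omit)}


body_default = build_body(name="batch", metadata=omit)   # metadata not sent
body_null = build_body(name="batch", metadata=None)      # metadata sent as null
```

A plain `None` default cannot express "don't send"; the sentinel is what makes the wire-level distinction possible.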
- Min Python: 3.9+ (see `pyproject.toml`)
- Key deps: `httpx>=0.23.0`, `pydantic>=1.9.0`, `typing-extensions>=4.10`, `anyio>=3.5.0`
- Optional: `aiohttp` (for `DefaultAioHttpClient`), `websockets>=13` (for realtime)
- Dev: Rye, Ruff, Pyright, pytest, respx for mocking
- Import organization: absolute imports from the package root (not relative)
- Error handling: use the custom exception hierarchy in `_exceptions.py` (e.g., `APIStatusError`, `RateLimitError`)
- Streaming: check `types/completion.py` for streaming event unions; use the `stream=True` param
- Pagination: `SyncCursorPage`/`AsyncCursorPage` with an `.auto_paginate_iter()` method
- Breaking changes: detected via `scripts/detect-breaking-changes.py`
- Files/uploads: use `_files.py` and the `file_from_path()` helper for binary handling
- Async context managers: both clients support `async with AsyncOpenAI(...) as client:`
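Cursor pagination can be sketched as an iterator that keeps fetching while the server returns a cursor — a stand-in for the `SyncCursorPage` machinery, with a fake in-memory page fetcher:

```python
from typing import Iterator, List, Optional, Tuple

# Fake paged dataset: cursor -> (items, next_cursor); None cursor = first page.
_PAGES = {
    None: (["a", "b"], "cur_1"),
    "cur_1": (["c"], None),
}


def fetch_page(cursor: Optional[str]) -> Tuple[List[str], Optional[str]]:
    """Stand-in for one GET with an 'after' cursor param."""
    return _PAGES[cursor]


def auto_paginate_iter() -> Iterator[str]:
    """Yield items across pages, following cursors until exhausted."""
    cursor: Optional[str] = None
    while True:
        items, cursor = fetch_page(cursor)
        yield from items
        if cursor is None:  # No next cursor: last page reached.
            break


all_items = list(auto_paginate_iter())
```

The real page classes work the same way conceptually: iterating a list result transparently issues follow-up requests until the cursor runs out.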
- Enable debug logging: `openai.set_debug_logging(True)` or `logging.getLogger("openai").setLevel(logging.DEBUG)`
- Inspect raw responses: use the `.with_raw_response` property for headers and status codes
- Mock server issues: test against the real API with `TEST_API_BASE_URL=https://api.openai.com` (requires a valid key)
- Type checking: run `rye run pyright` to validate all type hints
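A minimal logging setup following the debug-logging bullet above — this uses only the standard `logging` module; that the SDK emits under the `"openai"` logger name is taken from the bullet, not verified here:

```python
import logging

# Route debug output to stderr with timestamps and logger names.
logging.basicConfig(format="%(asctime)s %(name)s %(levelname)s %(message)s")

# Raise verbosity for the SDK's logger only, leaving other libraries quiet.
logger = logging.getLogger("openai")
logger.setLevel(logging.DEBUG)
```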