Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
{
".": "0.3.0"
".": "0.4.0"
}
2 changes: 1 addition & 1 deletion .stats.yml
@@ -1,2 +1,2 @@
configured_endpoints: 46
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/contextual-ai%2Fsunrise-5298551c424bb999f258bdd6c311e96c80c70701ad59bbce19b46c788ee13bd4.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/contextual-ai%2Fsunrise-f43814080090927ee22816c5c7f517d8a7eb7f346329ada67915608e32124321.yml
19 changes: 19 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,24 @@
# Changelog

## 0.4.0 (2025-03-03)

Full Changelog: [v0.3.0...v0.4.0](https://github.com/ContextualAI/contextual-client-python/compare/v0.3.0...v0.4.0)

### Features

* Add special snowflake path for internal dns usage ([#52](https://github.com/ContextualAI/contextual-client-python/issues/52)) ([dd0ea41](https://github.com/ContextualAI/contextual-client-python/commit/dd0ea4117c37eb53620304a30f736747f30f6ce6))
* **api:** update via SDK Studio ([#59](https://github.com/ContextualAI/contextual-client-python/issues/59)) ([9b116a4](https://github.com/ContextualAI/contextual-client-python/commit/9b116a4e1d935a32ab8a44a36042891edf4d2125))


### Chores

* **docs:** update client docstring ([#55](https://github.com/ContextualAI/contextual-client-python/issues/55)) ([ef1ee6e](https://github.com/ContextualAI/contextual-client-python/commit/ef1ee6e351e2c1a84af871f70742045df23fbe7f))


### Documentation

* update URLs from stainlessapi.com to stainless.com ([#53](https://github.com/ContextualAI/contextual-client-python/issues/53)) ([4162888](https://github.com/ContextualAI/contextual-client-python/commit/41628880bfb7d72cb3759ea06f1c09c11bb60e1a))

## 0.3.0 (2025-02-26)

Full Changelog: [v0.2.0...v0.3.0](https://github.com/ContextualAI/contextual-client-python/compare/v0.2.0...v0.3.0)
4 changes: 2 additions & 2 deletions SECURITY.md
@@ -2,9 +2,9 @@

## Reporting Security Issues

-This SDK is generated by [Stainless Software Inc](http://stainlessapi.com). Stainless takes security seriously, and encourages you to report any security vulnerability promptly so that appropriate action can be taken.
+This SDK is generated by [Stainless Software Inc](http://stainless.com). Stainless takes security seriously, and encourages you to report any security vulnerability promptly so that appropriate action can be taken.

-To report a security issue, please contact the Stainless team at security@stainlessapi.com.
+To report a security issue, please contact the Stainless team at security@stainless.com.

## Responsible Disclosure

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "contextual-client"
version = "0.3.0"
version = "0.4.0"
description = "The official Python library for the Contextual AI API"
dynamic = ["readme"]
license = "Apache-2.0"
4 changes: 2 additions & 2 deletions src/contextual/_client.py
@@ -85,7 +85,7 @@ def __init__(
# part of our public interface in the future.
_strict_response_validation: bool = False,
) -> None:
"""Construct a new synchronous Contextual AI client instance.
"""Construct a new synchronous ContextualAI client instance.

This automatically infers the `api_key` argument from the `CONTEXTUAL_API_KEY` environment variable if it is not provided.
"""
@@ -276,7 +276,7 @@ def __init__(
# part of our public interface in the future.
_strict_response_validation: bool = False,
) -> None:
"""Construct a new async Contextual AI client instance.
"""Construct a new async AsyncContextualAI client instance.

This automatically infers the `api_key` argument from the `CONTEXTUAL_API_KEY` environment variable if it is not provided.
"""
2 changes: 1 addition & 1 deletion src/contextual/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "contextual"
__version__ = "0.3.0" # x-release-please-version
__version__ = "0.4.0" # x-release-please-version
30 changes: 30 additions & 0 deletions src/contextual/resources/generate.py
@@ -53,7 +53,10 @@ def create(
messages: Iterable[generate_create_params.Message],
model: str,
avoid_commentary: bool | NotGiven = NOT_GIVEN,
max_new_tokens: int | NotGiven = NOT_GIVEN,
system_prompt: str | NotGiven = NOT_GIVEN,
temperature: float | NotGiven = NOT_GIVEN,
top_p: float | NotGiven = NOT_GIVEN,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
# The extra values given here take precedence over values defined on the client or passed to this method.
extra_headers: Headers | None = None,
@@ -83,9 +86,18 @@ def create(
context. However, commentary may provide useful context which improves the
helpfulness of responses.

max_new_tokens: The maximum number of tokens that the model can generate in the response.

system_prompt: Instructions that the model follows when generating responses. Note that we do
not guarantee that the model follows these instructions exactly.

temperature: The sampling temperature, which affects the randomness in the response. Note
that higher temperature values can reduce groundedness

top_p: A parameter for nucleus sampling, an alternative to temperature which also
affects the randomness of the response. Note that higher top_p values can reduce
groundedness

extra_headers: Send extra headers

extra_query: Add additional query parameters to the request
@@ -102,7 +114,10 @@
"messages": messages,
"model": model,
"avoid_commentary": avoid_commentary,
"max_new_tokens": max_new_tokens,
"system_prompt": system_prompt,
"temperature": temperature,
"top_p": top_p,
},
generate_create_params.GenerateCreateParams,
),
@@ -140,7 +155,10 @@ def create(
messages: Iterable[generate_create_params.Message],
model: str,
avoid_commentary: bool | NotGiven = NOT_GIVEN,
max_new_tokens: int | NotGiven = NOT_GIVEN,
system_prompt: str | NotGiven = NOT_GIVEN,
temperature: float | NotGiven = NOT_GIVEN,
top_p: float | NotGiven = NOT_GIVEN,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
# The extra values given here take precedence over values defined on the client or passed to this method.
extra_headers: Headers | None = None,
@@ -170,9 +188,18 @@ async def create(
context. However, commentary may provide useful context which improves the
helpfulness of responses.

max_new_tokens: The maximum number of tokens that the model can generate in the response.

system_prompt: Instructions that the model follows when generating responses. Note that we do
not guarantee that the model follows these instructions exactly.

temperature: The sampling temperature, which affects the randomness in the response. Note
that higher temperature values can reduce groundedness

top_p: A parameter for nucleus sampling, an alternative to temperature which also
affects the randomness of the response. Note that higher top_p values can reduce
groundedness

extra_headers: Send extra headers

extra_query: Add additional query parameters to the request
@@ -189,7 +216,10 @@
"messages": messages,
"model": model,
"avoid_commentary": avoid_commentary,
"max_new_tokens": max_new_tokens,
"system_prompt": system_prompt,
"temperature": temperature,
"top_p": top_p,
},
generate_create_params.GenerateCreateParams,
),
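The new `max_new_tokens`, `temperature`, and `top_p` arguments default to `NOT_GIVEN` and are stripped before the request body is serialized, so only fields the caller actually set reach the wire. A minimal sketch of that sentinel pattern (names here are illustrative, not the SDK's internals):

```python
class NotGiven:
    """Sentinel distinguishing 'argument omitted' from an explicit None."""
    def __repr__(self) -> str:
        return "NOT_GIVEN"


NOT_GIVEN = NotGiven()


def build_body(**params):
    # Drop every parameter the caller never supplied, so optional
    # fields like temperature/top_p are absent rather than null.
    return {k: v for k, v in params.items() if not isinstance(v, NotGiven)}


body = build_body(
    model="model",
    avoid_commentary=True,
    max_new_tokens=256,
    temperature=NOT_GIVEN,  # omitted -> excluded from the payload
    top_p=0.9,
)
print(sorted(body))  # ['avoid_commentary', 'max_new_tokens', 'model', 'top_p']
```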
@@ -31,6 +31,9 @@ class EvaluationRound(BaseModel):
num_predictions: Optional[int] = None
"""Total number of predictions made during the evaluation round"""

num_processed_predictions: Optional[int] = None
"""Number of predictions that have been processed during the evaluation round"""

num_successful_predictions: Optional[int] = None
"""Number of predictions that were successful during the evaluation round"""

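The added `num_processed_predictions` counter lets callers poll an evaluation round for progress alongside the existing totals. A small sketch of how a consumer might compute a progress ratio from these Optional fields (the helper name is ours, not the SDK's):

```python
from typing import Optional


def evaluation_progress(num_predictions: Optional[int],
                        num_processed_predictions: Optional[int]) -> float:
    # Both counters are Optional on the model; treat missing values
    # as zero and guard against division by zero.
    total = num_predictions or 0
    done = num_processed_predictions or 0
    return done / total if total else 0.0


print(evaluation_progress(200, 50))     # 0.25
print(evaluation_progress(None, None))  # 0.0
```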
18 changes: 17 additions & 1 deletion src/contextual/types/generate_create_params.py
@@ -30,16 +30,32 @@ class GenerateCreateParams(TypedDict, total=False):
helpfulness of responses.
"""

max_new_tokens: int
"""The maximum number of tokens that the model can generate in the response."""

system_prompt: str
"""Instructions that the model follows when generating responses.

Note that we do not guarantee that the model follows these instructions exactly.
"""

temperature: float
"""The sampling temperature, which affects the randomness in the response.

Note that higher temperature values can reduce groundedness
"""

top_p: float
"""
A parameter for nucleus sampling, an alternative to temperature which also
affects the randomness of the response. Note that higher top_p values can reduce
groundedness
"""


class Message(TypedDict, total=False):
content: Required[str]
"""Content of the message"""

-role: Required[Literal["user", "system", "assistant", "knowledge"]]
+role: Required[Literal["user", "assistant"]]
"""Role of the sender"""
4 changes: 0 additions & 4 deletions src/contextual_sdk/lib/.keep

This file was deleted.

4 changes: 0 additions & 4 deletions src/sunrise/lib/.keep

This file was deleted.

6 changes: 6 additions & 0 deletions tests/api_resources/test_generate.py
@@ -43,7 +43,10 @@ def test_method_create_with_all_params(self, client: ContextualAI) -> None:
],
model="model",
avoid_commentary=True,
max_new_tokens=1,
system_prompt="system_prompt",
temperature=0,
top_p=1,
)
assert_matches_type(GenerateCreateResponse, generate, path=["response"])

@@ -115,7 +118,10 @@ async def test_method_create_with_all_params(self, async_client: AsyncContextual
],
model="model",
avoid_commentary=True,
max_new_tokens=1,
system_prompt="system_prompt",
temperature=0,
top_p=1,
)
assert_matches_type(GenerateCreateResponse, generate, path=["response"])
