# Chat (client.chat)

## create

Have a conversation with Glean AI.

```python
from glean.api_client import Glean, models
import os

with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    res = glean.client.chat.create(messages=[
        {
            "fragments": [
                models.ChatMessageFragment(
                    text="What are the company holidays this year?",
                ),
            ],
        },
    ], timeout_millis=30000)

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| `messages` | `List[models.ChatMessage]` | ✔️ | A list of chat messages, from most recent to least recent. At least one message must specify a USER author. | |
| `timezone_offset` | `Optional[int]` | ➖ | The offset of the client's timezone in minutes from UTC, e.g. PDT is -420 because it is 7 hours behind UTC. | |
| `session_info` | `Optional[models.SessionInfo]` | ➖ | N/A | |
| `save_chat` | `Optional[bool]` | ➖ | Save the current interaction as a Chat for the user to access and potentially continue later. | |
| `chat_id` | `Optional[str]` | ➖ | The id of the Chat that context should be retrieved from and messages added to. An empty id starts a new Chat, and the Chat is saved if saveChat is true. | |
| `agent_config` | `Optional[models.AgentConfig]` | ➖ | Describes the agent that executes the request. | |
| `inclusions` | `Optional[models.ChatRestrictionFilters]` | ➖ | N/A | |
| `exclusions` | `Optional[models.ChatRestrictionFilters]` | ➖ | N/A | |
| `timeout_millis` | `Optional[int]` | ➖ | Timeout in milliseconds for the request. A 408 error will be returned if handling the request takes longer. | 30000 |
| `application_id` | `Optional[str]` | ➖ | The ID of the application this request originates from, used to determine the configuration of underlying chat processes. This should correspond to the ID set during admin setup. If not specified, the default chat experience will be used. | |
| `agent_id` | `Optional[str]` | ➖ | The ID of the Agent that should process this chat request. Only Agents with trigger set to 'User chat message' are invokable through this API. If not specified, the default chat experience will be used. | |
| `stream` | `Optional[bool]` | ➖ | If set, response lines will be streamed one-by-one as they become available. Each will be a ChatResponse, formatted as JSON, and separated by a new line. If false, the entire response will be returned at once. Note that if this is set and the model being used does not support streaming, the model's response will not be streamed, but other messages from the endpoint still will be. | |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. | |
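The `timezone_offset` parameter is the client's UTC offset expressed in minutes. A minimal sketch of computing it with the standard library (the helper name `local_utc_offset_minutes` is ours, not part of the SDK):

```python
from datetime import datetime, timedelta, timezone

def local_utc_offset_minutes(dt=None):
    """UTC offset of dt (default: local time now) in minutes, e.g. -420 for PDT."""
    dt = dt if dt is not None else datetime.now().astimezone()
    return int(dt.utcoffset().total_seconds() // 60)

# PDT is UTC-7, i.e. 7 * 60 = 420 minutes behind UTC:
print(local_utc_offset_minutes(datetime(2024, 7, 1, tzinfo=timezone(timedelta(hours=-7)))))  # -420
```

The result can then be passed as `timezone_offset=local_utc_offset_minutes()`.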
**Response:** `models.ChatResponse`

| Error Type | Status Code | Content Type |
| --- | --- | --- |
| errors.GleanError | 4XX, 5XX | \*/\* |
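Note the ordering rule for `messages`: most recent first, and at least one message must carry a USER author. A hypothetical multi-turn history might look like the sketch below; the `author` values are illustrative assumptions, as the description above only confirms that a USER author exists:

```python
# Most recent turn first; at least one message has a USER author.
# The "author" values here are illustrative assumptions.
messages = [
    {
        "author": "USER",
        "fragments": [{"text": "And what about next year?"}],  # newest turn
    },
    {
        "author": "GLEAN_AI",
        "fragments": [{"text": "Here are this year's company holidays..."}],
    },
    {
        "author": "USER",
        "fragments": [{"text": "What are the company holidays this year?"}],  # oldest turn
    },
]

print(sum(1 for m in messages if m["author"] == "USER"))
```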
## delete_all

Deletes all saved Chats a user has had and all their contained conversational content.

```python
from glean.api_client import Glean
import os

with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    glean.client.chat.delete_all()

    # Use the SDK ...
```
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `timezone_offset` | `Optional[int]` | ➖ | The offset of the client's timezone in minutes from UTC, e.g. PDT is -420 because it is 7 hours behind UTC. |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
| Error Type | Status Code | Content Type |
| --- | --- | --- |
| errors.GleanError | 4XX, 5XX | \*/\* |
## delete

Deletes saved Chats and all their contained conversational content.

```python
from glean.api_client import Glean
import os

with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    glean.client.chat.delete(ids=[
        "<value 1>",
    ])

    # Use the SDK ...
```
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `ids` | `List[str]` | ✔️ | A non-empty list of ids of the Chats to be deleted. |
| `timezone_offset` | `Optional[int]` | ➖ | The offset of the client's timezone in minutes from UTC, e.g. PDT is -420 because it is 7 hours behind UTC. |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
| Error Type | Status Code | Content Type |
| --- | --- | --- |
| errors.GleanError | 4XX, 5XX | \*/\* |
## retrieve

Retrieves the chat history between Glean Assistant and the user for a given Chat.

```python
from glean.api_client import Glean
import os

with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    res = glean.client.chat.retrieve(id="<id>")

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `id` | `str` | ✔️ | The id of the Chat to be retrieved. |
| `timezone_offset` | `Optional[int]` | ➖ | The offset of the client's timezone in minutes from UTC, e.g. PDT is -420 because it is 7 hours behind UTC. |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
**Response:** `models.GetChatResponse`

| Error Type | Status Code | Content Type |
| --- | --- | --- |
| errors.GleanError | 4XX, 5XX | \*/\* |
## list

Retrieves all the saved Chats between Glean Assistant and the user. The returned Chats contain only metadata and no conversational content.

```python
from glean.api_client import Glean
import os

with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    res = glean.client.chat.list()

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `timezone_offset` | `Optional[int]` | ➖ | The offset of the client's timezone in minutes from UTC, e.g. PDT is -420 because it is 7 hours behind UTC. |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
**Response:** `models.ListChatsResponse`

| Error Type | Status Code | Content Type |
| --- | --- | --- |
| errors.GleanError | 4XX, 5XX | \*/\* |
## retrieve_application

Gets the Chat application details for the specified application ID.

```python
from glean.api_client import Glean
import os

with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    res = glean.client.chat.retrieve_application(id="<id>")

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `id` | `str` | ✔️ | The id of the Chat application to be retrieved. |
| `timezone_offset` | `Optional[int]` | ➖ | The offset of the client's timezone in minutes from UTC, e.g. PDT is -420 because it is 7 hours behind UTC. |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
**Response:** `models.GetChatApplicationResponse`

| Error Type | Status Code | Content Type |
| --- | --- | --- |
| errors.GleanError | 4XX, 5XX | \*/\* |
## upload_files

Upload files for Chat.

```python
from glean.api_client import Glean
import os

with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    res = glean.client.chat.upload_files(files=[])

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `files` | `List[models.File]` | ✔️ | Raw files to be uploaded for chat in binary format. |
| `timezone_offset` | `Optional[int]` | ➖ | The offset of the client's timezone in minutes from UTC, e.g. PDT is -420 because it is 7 hours behind UTC. |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
**Response:** `models.UploadChatFilesResponse`

| Error Type | Status Code | Content Type |
| --- | --- | --- |
| errors.GleanError | 4XX, 5XX | \*/\* |
## retrieve_files

Get files uploaded by a user for Chat.

```python
from glean.api_client import Glean
import os

with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    res = glean.client.chat.retrieve_files(file_ids=[
        "<value 1>",
    ])

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `file_ids` | `List[str]` | ✔️ | IDs of files to fetch. |
| `timezone_offset` | `Optional[int]` | ➖ | The offset of the client's timezone in minutes from UTC, e.g. PDT is -420 because it is 7 hours behind UTC. |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
**Response:** `models.GetChatFilesResponse`

| Error Type | Status Code | Content Type |
| --- | --- | --- |
| errors.GleanError | 4XX, 5XX | \*/\* |
## delete_files

Delete files uploaded by a user for Chat.

```python
from glean.api_client import Glean
import os

with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    glean.client.chat.delete_files(file_ids=[
        "<value 1>",
        "<value 2>",
        "<value 3>",
    ])

    # Use the SDK ...
```
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `file_ids` | `List[str]` | ✔️ | IDs of files to delete. |
| `timezone_offset` | `Optional[int]` | ➖ | The offset of the client's timezone in minutes from UTC, e.g. PDT is -420 because it is 7 hours behind UTC. |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. |
| Error Type | Status Code | Content Type |
| --- | --- | --- |
| errors.GleanError | 4XX, 5XX | \*/\* |
## create_stream

Have a conversation with Glean AI.

```python
from glean.api_client import Glean, models
import os

with Glean(
    api_token=os.getenv("GLEAN_API_TOKEN", ""),
) as glean:
    res = glean.client.chat.create_stream(messages=[
        {
            "fragments": [
                models.ChatMessageFragment(
                    text="What are the company holidays this year?",
                ),
            ],
        },
    ], timeout_millis=30000)

    # Handle response
    print(res)
```
| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| `messages` | `List[models.ChatMessage]` | ✔️ | A list of chat messages, from most recent to least recent. At least one message must specify a USER author. | |
| `timezone_offset` | `Optional[int]` | ➖ | The offset of the client's timezone in minutes from UTC, e.g. PDT is -420 because it is 7 hours behind UTC. | |
| `session_info` | `Optional[models.SessionInfo]` | ➖ | N/A | |
| `save_chat` | `Optional[bool]` | ➖ | Save the current interaction as a Chat for the user to access and potentially continue later. | |
| `chat_id` | `Optional[str]` | ➖ | The id of the Chat that context should be retrieved from and messages added to. An empty id starts a new Chat, and the Chat is saved if saveChat is true. | |
| `agent_config` | `Optional[models.AgentConfig]` | ➖ | Describes the agent that executes the request. | |
| `inclusions` | `Optional[models.ChatRestrictionFilters]` | ➖ | N/A | |
| `exclusions` | `Optional[models.ChatRestrictionFilters]` | ➖ | N/A | |
| `timeout_millis` | `Optional[int]` | ➖ | Timeout in milliseconds for the request. A 408 error will be returned if handling the request takes longer. | 30000 |
| `application_id` | `Optional[str]` | ➖ | The ID of the application this request originates from, used to determine the configuration of underlying chat processes. This should correspond to the ID set during admin setup. If not specified, the default chat experience will be used. | |
| `agent_id` | `Optional[str]` | ➖ | The ID of the Agent that should process this chat request. Only Agents with trigger set to 'User chat message' are invokable through this API. If not specified, the default chat experience will be used. | |
| `stream` | `Optional[bool]` | ➖ | If set, response lines will be streamed one-by-one as they become available. Each will be a ChatResponse, formatted as JSON, and separated by a new line. If false, the entire response will be returned at once. Note that if this is set and the model being used does not support streaming, the model's response will not be streamed, but other messages from the endpoint still will be. | |
| `retries` | `Optional[utils.RetryConfig]` | ➖ | Configuration to override the default retry behavior of the client. | |
**Response:** `str`

| Error Type | Status Code | Content Type |
| --- | --- | --- |
| errors.GleanError | 4XX, 5XX | \*/\* |
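Per the `stream` description above, a streamed body is a sequence of JSON-encoded ChatResponse objects separated by newlines. A sketch of splitting and decoding such a body; the payload below is fabricated for illustration and only mirrors the request-side field names, not the real response schema:

```python
import json

# Simulated newline-delimited stream body: each line is one JSON object.
raw = (
    '{"messages": [{"fragments": [{"text": "Company holidays this year"}]}]}\n'
    '{"messages": [{"fragments": [{"text": " include New Year\'s Day."}]}]}\n'
)

# One json.loads per non-empty line, then stitch the text fragments together.
chunks = [json.loads(line) for line in raw.splitlines() if line.strip()]
text = "".join(
    fragment.get("text", "")
    for chunk in chunks
    for message in chunk.get("messages", [])
    for fragment in message.get("fragments", [])
)
print(text)
```

In practice the same line-by-line `json.loads` would be applied to the string returned by `create_stream`.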