29 changes: 29 additions & 0 deletions README.md
@@ -44,6 +44,35 @@ response = client.responses.create(
print(response.output_text)
```

### Conversation state

For multi-turn conversations with the Responses API, use `previous_response_id`
to have the API retain context between turns.

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
model="gpt-5.2",
input="Write a haiku about recursion in programming.",
)
print(response.output_text)

response = client.responses.create(
model="gpt-5.2",
input="Now explain it in plain English.",
previous_response_id=response.id,
)
print(response.output_text)
```

If you manually manage conversation history instead, preserve all items from
`response.output` in their original order. Reasoning models may return reasoning
items together with assistant messages, and filtering those items down to only
messages can break subsequent requests.
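The manual approach can be sketched roughly as below, keeping a running `history` list and extending it with every item from `response.output` rather than only the assistant messages (the model name and prompts here are illustrative, and the call requires an `OPENAI_API_KEY`):

```python
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "user", "content": "Write a haiku about recursion in programming."}
]

response = client.responses.create(model="gpt-5.2", input=history)
print(response.output_text)

# Carry over every output item, including any reasoning items, in order.
history += response.output
history.append({"role": "user", "content": "Now explain it in plain English."})

response = client.responses.create(model="gpt-5.2", input=history)
print(response.output_text)
```

Extending `history` with the full `response.output` list is what keeps reasoning items paired with the assistant messages they produced.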

The previous standard (supported indefinitely) for generating text is the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat). You can use that API to generate text from the model with the code below.

```python
# ...
```
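The collapsed Chat Completions block above is not shown in this diff; a typical call looks roughly like the following sketch (model name and prompt are illustrative):

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "user", "content": "Write a haiku about recursion in programming."}
    ],
)
print(completion.choices[0].message.content)
```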
22 changes: 22 additions & 0 deletions examples/responses/conversation_state.py
@@ -0,0 +1,22 @@
from openai import OpenAI


client = OpenAI()

response = client.responses.create(
model="gpt-5.2",
input="Write a haiku about recursion in programming.",
)
print(response.output_text)

response = client.responses.create(
model="gpt-5.2",
input="Now explain it in plain English.",
previous_response_id=response.id,
)
print(response.output_text)

# If you manually manage conversation history instead of using
# previous_response_id, append response.output items in order. Reasoning models
# may return reasoning items together with assistant messages, and filtering
# those items down to only messages can break the next request.