
Commit de2bf59

fix: Fix Python code and add multi-language support to Anthropic message-per-token guide
- Add multi-language support (Python, Java, Swift) to the message-per-token guide, matching the structure of message-per-response
- Fix Python code in both guides: use AsyncAnthropic with async/await, use Message objects for publish with extras and append_message, and use transport_params for echo suppression
- Scope the "without await" asides to JavaScript only
1 parent a7c6c51 commit de2bf59

2 files changed: 436 additions & 52 deletions

File tree

src/pages/docs/guides/ai-transport/anthropic-message-per-response.mdx

Lines changed: 17 additions & 10 deletions
@@ -229,28 +229,29 @@ streamAnthropicResponse("Tell me a short joke");
 ```
 
 ```agent_python
+import asyncio
 import anthropic
 
 # Initialize Anthropic client
-client = anthropic.Anthropic()
+client = anthropic.AsyncAnthropic()
 
 # Process each streaming event
-def process_event(event):
+async def process_event(event):
     print(event)
     # This function is updated in the next sections
 
 # Create streaming response from Anthropic
-def stream_anthropic_response(prompt: str):
-    with client.messages.stream(
+async def stream_anthropic_response(prompt: str):
+    async with client.messages.stream(
         model="claude-sonnet-4-5",
         max_tokens=1024,
         messages=[{"role": "user", "content": prompt}],
     ) as stream:
-        for event in stream:
-            process_event(event)
+        async for event in stream:
+            await process_event(event)
 
 # Usage example
-stream_anthropic_response("Tell me a short joke")
+asyncio.run(stream_anthropic_response("Tell me a short joke"))
 ```
 
 ```agent_java
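As a sanity check on the async pattern introduced in this hunk, the post-change control flow (`async with`, `async for`, `await` per event) can be exercised without network access by stubbing the stream. `FakeStream` below is a hypothetical test double, not part of the anthropic SDK:

```python
import asyncio

# Hypothetical stand-in for the SDK's async stream context manager.
class FakeStream:
    def __init__(self, events):
        self._events = list(events)

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False

    def __aiter__(self):
        return self

    async def __anext__(self):
        if not self._events:
            raise StopAsyncIteration
        return self._events.pop(0)

collected = []

async def process_event(event):
    # Stand-in for the guide's handler; just records each event.
    collected.append(event)

async def stream_fake_response():
    # Mirrors the diff: async with / async for / await per event.
    async with FakeStream(["message_start", "delta", "message_stop"]) as stream:
        async for event in stream:
            await process_event(event)

asyncio.run(stream_fake_response())
print(collected)  # ['message_start', 'delta', 'message_stop']
```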
@@ -364,7 +365,7 @@ const channel = realtime.channels.get('ai:{{RANDOM_CHANNEL_NAME}}');
 from ably import AblyRealtime
 
 # Initialize Ably Realtime client
-realtime = AblyRealtime(key='{{API_KEY}}', echo_messages=False)
+realtime = AblyRealtime(key='{{API_KEY}}', transport_params={'echo': 'false'})
 
 # Create a channel for publishing streamed AI responses
 channel = realtime.channels.get('ai:{{RANDOM_CHANNEL_NAME}}')
@@ -446,6 +447,8 @@ async function processEvent(event) {
 ```
 
 ```agent_python
+from ably.types.message import Message
+
 # Track state across events
 msg_serial = None
 text_block_index = None
@@ -456,7 +459,7 @@ async def process_event(event):
 
     if event.type == 'message_start':
         # Publish initial empty message when response starts
-        result = await channel.publish('response', data='')
+        result = await channel.publish('response', '')
 
         # Capture the message serial for appending tokens
         msg_serial = result.serials[0]
@@ -471,7 +474,9 @@ async def process_event(event):
         if (event.index == text_block_index and
                 hasattr(event.delta, 'text') and
                 msg_serial):
-            channel.append_message(serial=msg_serial, data=event.delta.text)
+            await channel.append_message(
+                Message(serial=msg_serial, data=event.delta.text)
+            )
 
     elif event.type == 'message_stop':
         print('Stream completed!')
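The corrected handler's state machine (capture a serial on `message_start`, await each `append_message` with a `Message` object) can be checked in isolation with stubbed events and a fake channel. The `Fake*` classes below are hypothetical test doubles; only the call shapes follow the diff:

```python
import asyncio
from types import SimpleNamespace

# Hypothetical test doubles standing in for the Ably channel, publish
# result, and SDK Message type.
class FakeResult:
    serials = ["serial-1"]

class FakeMessage:
    def __init__(self, serial=None, data=None):
        self.serial, self.data = serial, data

class FakeChannel:
    def __init__(self):
        self.appended = []

    async def publish(self, name, data):
        return FakeResult()

    async def append_message(self, message):
        self.appended.append((message.serial, message.data))

channel = FakeChannel()
msg_serial = None
text_block_index = 0  # assume the text block is at index 0

async def process_event(event):
    global msg_serial
    if event.type == "message_start":
        # Publish initial empty message, capture serial for appends
        result = await channel.publish("response", "")
        msg_serial = result.serials[0]
    elif event.type == "content_block_delta":
        if (event.index == text_block_index and
                hasattr(event.delta, "text") and
                msg_serial):
            await channel.append_message(
                FakeMessage(serial=msg_serial, data=event.delta.text)
            )

async def main():
    events = [
        SimpleNamespace(type="message_start", index=None, delta=None),
        SimpleNamespace(type="content_block_delta", index=0,
                        delta=SimpleNamespace(text="Hello")),
        SimpleNamespace(type="content_block_delta", index=0,
                        delta=SimpleNamespace(text=" world")),
    ]
    for e in events:
        await process_event(e)

asyncio.run(main())
print(channel.appended)  # [('serial-1', 'Hello'), ('serial-1', ' world')]
```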
@@ -529,9 +534,11 @@ This implementation:
 - Filters for `content_block_delta` events with `text_delta` type from text content blocks
 - Appends each token to the original message
 
+<If agent_lang="javascript">
 <Aside data-type="note">
 Append operations are published without `await` to maximize throughput. Ably maintains message ordering even without awaiting each append. For more information, see [Publishing tokens](/docs/ai-transport/token-streaming/message-per-response#publishing).
 </Aside>
+</If>
 
 <Aside data-type="important">
 Standard Ably message [size limits](/docs/platform/pricing/limits#message) apply to the complete concatenated message. If appending a token would exceed the maximum message size, the append is rejected.
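Since an over-size append is rejected, a publisher can pre-check the running size before attempting one. A minimal sketch, assuming a byte limit; the `MAX_MESSAGE_BYTES` value below is illustrative only, not Ably's actual limit (see the linked limits page):

```python
# Illustrative client-side guard: skip appends that would push the
# concatenated message past a configured limit.
MAX_MESSAGE_BYTES = 65536  # assumption for the sketch, not Ably's real limit

def can_append(current_size: int, token: str) -> bool:
    return current_size + len(token.encode("utf-8")) <= MAX_MESSAGE_BYTES

size = 0
accepted = []
for token in ["Hello", ", ", "world"]:
    if can_append(size, token):
        accepted.append(token)
        size += len(token.encode("utf-8"))

print("".join(accepted))  # Hello, world
```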
