
Commit 9fdfa49

fix: Fix Python code and add multi-language support to Anthropic message-per-token guide

- Add multi-language support (Python, Java, Swift) to the message-per-token guide, matching the structure of message-per-response
- Fix Python code in both guides: use AsyncAnthropic with async/await, use Message objects for publish with extras and append_message, use transport_params for echo suppression
- Scope "without await" asides to JavaScript only
1 parent 2c8db4d commit 9fdfa49
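The central Python change in this commit is swapping the synchronous Anthropic client for `AsyncAnthropic` and consuming the stream with `async with` / `async for` / `asyncio.run`. A minimal, stdlib-only sketch of that pattern (the `FakeStream` class and event strings are illustrative stand-ins, not the real Anthropic SDK):

```python
import asyncio

# Illustrative stand-in for an SDK stream object (NOT the real
# Anthropic client): an async context manager that is also an
# async iterable, matching the shape that
# `async with ... as stream` / `async for event in stream` expects.
class FakeStream:
    def __init__(self, events):
        self._events = events

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        return False

    def __aiter__(self):
        return self._generate()

    async def _generate(self):
        for event in self._events:
            await asyncio.sleep(0)  # yield control, as real network I/O would
            yield event

received = []

async def process_event(event):
    # Handlers become coroutines and must be awaited
    # once the stream is consumed asynchronously
    received.append(event)

async def stream_response(events):
    async with FakeStream(events) as stream:
        async for event in stream:
            await process_event(event)

# Top-level entry point: asyncio.run replaces the plain function call
asyncio.run(stream_response(["message_start", "content_block_delta", "message_stop"]))
print(received)  # ['message_start', 'content_block_delta', 'message_stop']
```

This mirrors why every call site in the diff gains an `async`/`await` keyword: once the client is asynchronous, the context manager, the iteration, and the event handler must all be awaited.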

2 files changed: 454 additions & 52 deletions

File tree:

src/pages/docs/guides/ai-transport/anthropic-message-per-response.mdx (26 additions & 10 deletions)
@@ -229,28 +229,29 @@ streamAnthropicResponse("Tell me a short joke");
 ```
 
 ```agent_python
+import asyncio
 import anthropic
 
 # Initialize Anthropic client
-client = anthropic.Anthropic()
+client = anthropic.AsyncAnthropic()
 
 # Process each streaming event
-def process_event(event):
+async def process_event(event):
     print(event)
     # This function is updated in the next sections
 
 # Create streaming response from Anthropic
-def stream_anthropic_response(prompt: str):
-    with client.messages.stream(
+async def stream_anthropic_response(prompt: str):
+    async with client.messages.stream(
         model="claude-sonnet-4-5",
         max_tokens=1024,
         messages=[{"role": "user", "content": prompt}],
     ) as stream:
-        for event in stream:
-            process_event(event)
+        async for event in stream:
+            await process_event(event)
 
 # Usage example
-stream_anthropic_response("Tell me a short joke")
+asyncio.run(stream_anthropic_response("Tell me a short joke"))
 ```
 
 ```agent_java
@@ -364,7 +365,7 @@ const channel = realtime.channels.get('ai:{{RANDOM_CHANNEL_NAME}}');
 from ably import AblyRealtime
 
 # Initialize Ably Realtime client
-realtime = AblyRealtime(key='{{API_KEY}}', echo_messages=False)
+realtime = AblyRealtime(key='{{API_KEY}}', transport_params={'echo': 'false'})
 
 # Create a channel for publishing streamed AI responses
 channel = realtime.channels.get('ai:{{RANDOM_CHANNEL_NAME}}')
@@ -446,6 +447,8 @@ async function processEvent(event) {
 ```
 
 ```agent_python
+from ably.types.message import Message
+
 # Track state across events
 msg_serial = None
 text_block_index = None
@@ -456,7 +459,7 @@ async def process_event(event):
 
     if event.type == 'message_start':
         # Publish initial empty message when response starts
-        result = await channel.publish('response', data='')
+        result = await channel.publish('response', '')
 
         # Capture the message serial for appending tokens
         msg_serial = result.serials[0]
@@ -471,7 +474,9 @@ async def process_event(event):
         if (event.index == text_block_index and
                 hasattr(event.delta, 'text') and
                 msg_serial):
-            channel.append_message(serial=msg_serial, data=event.delta.text)
+            await channel.append_message(
+                Message(serial=msg_serial, data=event.delta.text)
+            )
 
     elif event.type == 'message_stop':
         print('Stream completed!')
@@ -529,9 +534,11 @@ This implementation:
 - Filters for `content_block_delta` events with `text_delta` type from text content blocks
 - Appends each token to the original message
 
+<If agent_lang="javascript">
 <Aside data-type="note">
 Append operations are published without `await` to maximize throughput. Ably maintains message ordering even without awaiting each append. For more information, see [Publishing tokens](/docs/ai-transport/token-streaming/message-per-response#publishing).
 </Aside>
+</If>
 
 <Aside data-type="important">
 Standard Ably message [size limits](/docs/platform/pricing/limits#message) apply to the complete concatenated message. If appending a token would exceed the maximum message size, the append is rejected.
@@ -732,7 +739,9 @@ Subscribers receive different message actions depending on when they join and ho
 
 - `message.update`: Contains the whole response up to that point. The message `data` contains the full concatenated text so far. Replace the entire response content with this data for the message identified by `serial`. This action occurs when the channel needs to resynchronize the full message state, such as after a client [resumes](/docs/connect/states#resume) from a transient disconnection.
 
+<If client_lang="javascript,java">
 Run the subscriber in a separate terminal:
+</If>
 
 <If client_lang="javascript">
 <Code>
@@ -755,7 +764,12 @@ mvn compile exec:java -Dexec.mainClass="Subscriber"
 </Code>
 </If>
 
+<If client_lang="javascript,java">
 With the subscriber running, run the publisher in another terminal. The tokens stream in realtime as the Anthropic model generates them.
+</If>
+<If client_lang="swift">
+With the subscriber running, run the publisher in a terminal. The tokens stream in realtime as the Anthropic model generates them.
+</If>
 
 ## Step 5: Stream with multiple publishers and subscribers <a id="step-5"/>
 
@@ -765,7 +779,9 @@ Ably's [channel-oriented sessions](/docs/ai-transport/sessions-identity#connecti
 
 Each subscriber receives the complete stream of tokens independently, enabling you to build collaborative experiences or multi-device applications.
 
+<If client_lang="javascript,java">
 Run a subscriber in multiple separate terminals:
+</If>
 
 <If client_lang="javascript">
 <Code>
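The publish-then-append state machine that the hunks above patch can be exercised end to end with stand-ins. A sketch under stated assumptions: `FakeChannel`, its `Result`-style return value, and the `Message` dataclass here are illustrative mocks of the shapes the guide uses, not the real Ably SDK.

```python
import asyncio
from dataclasses import dataclass
from types import SimpleNamespace

# Illustrative Message shape: a serial identifying the message
# to append to, plus the token text to append (mock, not the SDK type).
@dataclass
class Message:
    serial: str
    data: str

# Mock channel (NOT the Ably SDK): publish returns a result carrying
# message serials; append_message concatenates onto the stored message.
class FakeChannel:
    def __init__(self):
        self.messages = {}

    async def publish(self, name, data):
        serial = f"serial-{len(self.messages)}"
        self.messages[serial] = data
        return SimpleNamespace(serials=[serial])

    async def append_message(self, message):
        self.messages[message.serial] += message.data

channel = FakeChannel()
msg_serial = None

async def process_event(event):
    global msg_serial
    etype, text = event
    if etype == "message_start":
        # Publish an initial empty message and capture its serial
        result = await channel.publish("response", "")
        msg_serial = result.serials[0]
    elif etype == "content_block_delta" and msg_serial:
        # Append each token to the message identified by the serial
        await channel.append_message(Message(serial=msg_serial, data=text))

async def main():
    for event in [("message_start", None),
                  ("content_block_delta", "Hello"),
                  ("content_block_delta", " world")]:
        await process_event(event)

asyncio.run(main())
print(channel.messages)  # {'serial-0': 'Hello world'}
```

The mock shows why the Python fix awaits `append_message` and passes a `Message` object: the serial captured from the initial publish is what ties every subsequent token back to the same concatenated response.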
