Commit 20f161a

Fixup: text tweak from review
1 parent 0234d73 commit 20f161a

1 file changed

Lines changed: 1 addition & 1 deletion

File tree

src/pages/docs/ai-transport/index.mdx
@@ -153,4 +153,4 @@ The cost of streaming token responses over Ably depends on:
 - the number of subscribers receiving the response.
 - the [token streaming pattern](/docs/ai-transport/token-streaming#token-streaming-patterns) you choose.
 
-For example, suppose an AI support chatbot sends a response of 300 tokens, each as a discrete update, using the [message-per-response](/docs/ai-transport/token-streaming/message-per-response) pattern, and with a single client subscribed to the channel. With AI Transport's [append rollup](/docs/ai-transport/messaging/token-rate-limits#per-response), this will result in usage of 100 inbound messages, 100 outbound messages and 100 persisted messages. See the [AI support chatbot pricing example](/docs/platform/pricing/examples/ai-chatbot) for a full breakdown of the costs in this scenario.
+For example, suppose an AI support chatbot sends a response of 300 tokens, each as a discrete update, using the [message-per-response](/docs/ai-transport/token-streaming/message-per-response) pattern, and with a single client subscribed to the channel. With AI Transport's [append rollup](/docs/ai-transport/messaging/token-rate-limits#per-response), those 300 input tokens will be conflated to 100 discrete inbound messages, resulting in 100 outbound messages and 100 persisted messages. See the [AI support chatbot pricing example](/docs/platform/pricing/examples/ai-chatbot) for a full breakdown of the costs in this scenario.
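The arithmetic behind the new wording can be sketched as a small estimator. This is a hypothetical helper, not part of any Ably SDK: `estimate_usage` and its `rollup_factor` parameter are assumptions for illustration, with the factor of 3 inferred from the example's 300 tokens conflating to 100 inbound messages.

```python
def estimate_usage(tokens: int, rollup_factor: int, subscribers: int) -> dict:
    """Roughly estimate billed message counts when token updates are
    conflated by an append rollup before metering.

    `rollup_factor` is the assumed number of token updates merged into
    one billed message (3 in the doc's example: 300 tokens -> 100
    inbound messages). Hypothetical sketch, not an Ably API.
    """
    inbound = -(-tokens // rollup_factor)  # ceiling division
    return {
        "inbound": inbound,
        "outbound": inbound * subscribers,  # one copy per subscriber
        "persisted": inbound,
    }

# The scenario from the diff: 300 tokens, one subscriber.
usage = estimate_usage(tokens=300, rollup_factor=3, subscribers=1)
# usage == {"inbound": 100, "outbound": 100, "persisted": 100}
```

With more subscribers, only the outbound count scales; inbound and persisted usage depend solely on the conflated message count.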
