Update 94 broken link references across 66 files including typos
in paths, doubled /docs/docs/ URLs, legacy page references, and
links relying on redirects instead of pointing to canonical paths.
AI agents often draw information from external sources such as documents, web pages, or databases. Citations to those sources enable users to verify information, explore sources in detail, and understand where responses came from. Ably's [message annotations](/docs/messages/annotations) provide a model-agnostic, structured way to attach source citations to AI responses without modifying the response content. It enables clients to append information to existing messages on a channel.
- This pattern works when publishing complete responses as messages on a channel or when streaming responses using the [message-per-response](/docs/ai-transport/message-per-response) pattern.
+ This pattern works when publishing complete responses as messages on a channel or when streaming responses using the [message-per-response](/docs/ai-transport/token-streaming/message-per-response) pattern.
## Why citations matter <a id="why"/>

@@ -21,7 +21,7 @@ Including citations on AI responses provides:
Use [message annotations](/docs/messages/annotations) to attach source metadata to AI response messages without modifying the response content:
- 1. The agent publishes an AI response as a single message, or builds it incrementally using [message appends](/docs/ai-transport/message-per-response).
+ 1. The agent publishes an AI response as a single message, or builds it incrementally using [message appends](/docs/ai-transport/token-streaming/message-per-response).
2. The agent publishes one or more annotations to attach citations to the response message, each referencing the response message [`serial`](/docs/messages#properties).
3. Ably automatically aggregates annotations and generates summaries showing total counts and groupings (for example, by source domain name).
4. Clients receive citation summaries automatically and can optionally subscribe to individual annotation events for detailed citation data as part of the realtime stream. Alternatively, clients can obtain annotations for a given message via the REST API.
- When streaming response tokens using the [message-per-response](/docs/ai-transport/message-per-response) pattern, you can publish citations while the response is still streaming since the `serial` of the response message becomes known after you [publish the initial message](/docs/ai-transport/token-streaming/message-per-response#publishing).
+ When streaming response tokens using the [message-per-response](/docs/ai-transport/token-streaming/message-per-response) pattern, you can publish citations while the response is still streaming since the `serial` of the response message becomes known after you [publish the initial message](/docs/ai-transport/token-streaming/message-per-response#publishing).
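The citation workflow described in this file can be sketched with a minimal annotation payload. This is a rough illustration only: the serial value, annotation type string, and field names below are assumptions for the sketch, not Ably's canonical schema.

```javascript
// Hypothetical citation annotation payload. The serial value, annotation
// type, and field names are illustrative assumptions.
const responseSerial = "01826232498871-001@abcdefghij:001"; // serial from publishing the response message

const citation = {
  messageSerial: responseSerial,              // the response message this annotation attaches to
  type: "citation:distinct.v1",               // assumed annotation type used for aggregation
  name: "https://example.com/source-article", // the cited source; grouping key for summaries
};

// A client could derive the source domain for domain-level grouping:
const domain = new URL(citation.name).hostname;
console.log(domain); // "example.com"
```

Because the annotation only references the response message's `serial`, the response content itself is never modified, which is the property the pattern relies on.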
src/pages/docs/ai-transport/messaging/human-in-the-loop.mdx (+1 -1)
@@ -223,7 +223,7 @@ Set [`echoMessages`](/docs/api/realtime-sdk/types#client-options) to `false` in
The agent listens for human decisions and acts accordingly. When a response arrives, the agent retrieves the pending request using the `toolCallId`, verifies that the user is permitted to approve that specific action, and either executes the action or handles the rejection.
<Aside data-type="note">
- For audit trails, use [integration rules](/docs/integrations) to stream approval messages to external systems.
+ For audit trails, use [integration rules](/docs/platform/integrations) to stream approval messages to external systems.
</Aside>
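The decision-handling flow described above (retrieve by `toolCallId`, verify the approver, then execute or reject) can be sketched as plain application logic. Everything here is a hypothetical shape: the `pendingRequests` map, the message fields, and the permission rule are app-level assumptions, not an Ably API.

```javascript
// Hypothetical pending-request store, keyed by toolCallId.
const pendingRequests = new Map([
  ["tool-call-1", { action: "send_email", requestedBy: "user-123" }],
]);

function canApprove(clientId, request) {
  // App-specific rule (assumption): only the requesting user may approve.
  return clientId === request.requestedBy;
}

function handleDecision(message) {
  const { toolCallId, approved } = message.data;
  const request = pendingRequests.get(toolCallId);
  if (!request) return "unknown";                            // no pending request for this id
  if (!canApprove(message.clientId, request)) return "forbidden"; // approver not permitted
  pendingRequests.delete(toolCallId);                        // consume the request
  return approved ? "executed" : "rejected";
}

console.log(
  handleDecision({ clientId: "user-123", data: { toolCallId: "tool-call-1", approved: true } })
); // "executed"
```

Consuming the request on first decision also makes duplicate approvals harmless: a repeated decision for the same `toolCallId` falls through to the "unknown" branch.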
### Verify by user identity <a id="verify-identity"/>
- When using API key authentication, provision API keys through the [Ably dashboard](https://ably.com/dashboard) or [Control API](/docs/account/control-api) with only the capabilities required by the agent.
+ When using API key authentication, provision API keys through the [Ably dashboard](https://ably.com/dashboard) or [Control API](/docs/platform/account/control-api) with only the capabilities required by the agent.
The following example uses the Control API to create an API key with specific capabilities for a weather agent:
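The diff elides the example itself; as a rough sketch of what such a Control API request can look like, the snippet below creates a key scoped to a single channel namespace. The key name and the `weather:*` namespace are illustrative assumptions, and field shapes should be checked against the Control API reference.

```javascript
// Sketch: creating a narrowly-scoped API key via the Ably Control API.
// The key name and channel namespace ("weather:*") are assumptions.
const keyRequest = {
  name: "weather-agent-key",
  capability: {
    "weather:*": ["publish", "subscribe"], // only what the agent needs
  },
};

async function createAgentKey(appId, accessToken) {
  // Requires a Control API access token created in your Ably account.
  const res = await fetch(`https://control.ably.net/v1/apps/${appId}/keys`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(keyRequest),
  });
  if (!res.ok) throw new Error(`Control API error: ${res.status}`);
  return res.json(); // created key details, including the key secret
}
```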
src/pages/docs/ai-transport/sessions-identity/resuming-sessions.mdx (+1 -1)
@@ -16,7 +16,7 @@ An agent or user might resume an existing session when:
When you attach to a channel, Ably automatically syncs the complete current presence set to your client. You can then query the presence set or subscribe to presence events without any additional hydration steps. This works the same way for both users and agents.
- For details on obtaining the synced presence set, see [Viewing who is online](/docs/ai-transport/sessions-and-identity/online-status#viewing-presence).
+ For details on obtaining the synced presence set, see [Viewing who is online](/docs/ai-transport/sessions-identity/online-status#viewing-presence).
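Once the presence set has synced, a client can simply filter it, with no extra hydration step. The members below are mocked plain objects; the `role` field inside `data` is an app-level assumption about how agents identify themselves in their presence data.

```javascript
// Mocked presence members (clientId plus an app-defined data payload).
// The `role` field is an assumption; your app chooses how agents self-identify.
const presenceSet = [
  { clientId: "user-123", data: { role: "user" } },
  { clientId: "agent-weather", data: { role: "agent" } },
  { clientId: "user-456", data: { role: "user" } },
];

// Distinguish agents from users directly from the synced set.
const agents = presenceSet
  .filter((m) => m.data.role === "agent")
  .map((m) => m.clientId);
console.log(agents); // ["agent-weather"]
```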
src/pages/docs/ai-transport/token-streaming/message-per-response.mdx (+2 -2)
@@ -7,7 +7,7 @@ Token streaming with message-per-response is a pattern where every token generat
This pattern is useful for chat-style applications where you want each complete AI response stored as a single message in history, making it easy to retrieve and display multi-response conversation history. Each agent response becomes a single message that grows as tokens are appended, allowing clients joining mid-stream to catch up efficiently without processing thousands of individual tokens.
- The message-per-response pattern includes [automatic rate limit protection](/docs/ai-transport/token-rate-limits#per-response) through rollups, making it the recommended approach for most token streaming use cases.
+ The message-per-response pattern includes [automatic rate limit protection](/docs/ai-transport/token-streaming/token-rate-limits#per-response) through rollups, making it the recommended approach for most token streaming use cases.
## How it works <a id="how-it-works"/>
@@ -215,7 +215,7 @@ The `appendRollupWindow` parameter controls how many tokens are combined into ea
The default 40ms window strikes a balance, delivering tokens at 25 messages per second - smooth enough for a great user experience while allowing you to run two simultaneous response streams on a single connection. If you need to support more concurrent streams, increase the rollup window (up to 500ms), accepting that tokens will arrive in more noticeable batches. Alternatively, instantiate a separate Ably client which uses its own connection, giving you access to additional message rate capacity.
<Aside data-type="further-reading">
- For more details on rate limits and rollup behavior, see [Token streaming limits](/docs/ai-transport/token-rate-limits#rollup).
+ For more details on rate limits and rollup behavior, see [Token streaming limits](/docs/ai-transport/token-streaming/token-rate-limits#rollup).
</Aside>
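The arithmetic behind the `appendRollupWindow` figures quoted above can be made explicit. The 60 tokens/second model output rate below is an assumed figure for illustration.

```javascript
// A rollup window of W ms yields at most 1000 / W messages per second per stream.
function messagesPerSecond(rollupWindowMs) {
  return 1000 / rollupWindowMs;
}

console.log(messagesPerSecond(40));  // 25 (the default window)
console.log(messagesPerSecond(500)); // 2  (the maximum window)

// With an assumed model output of 60 tokens/second, the default window
// batches roughly 2-3 tokens into each appended message:
const tokensPerRollup = 60 / messagesPerSecond(40);
console.log(tokensPerRollup); // 2.4
```

This is why widening the window trades smoothness for capacity: the per-stream message rate drops, but each delivered batch carries more tokens.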
## Subscribing to token streams <a id="subscribing"/>
src/pages/docs/ai-transport/token-streaming/message-per-token.mdx (+1 -1)
@@ -83,7 +83,7 @@ for (Event event : stream) {
This approach maximizes throughput while maintaining ordering guarantees, allowing you to stream tokens as fast as your AI model generates them.
<Aside data-type="important">
- Unlike the [message-per-response](/docs/ai-transport/token-streaming/message-per-response) pattern, the message-per-token pattern requires you to [manage rate limits directly](/docs/ai-transport/token-rate-limits#per-token).
+ Unlike the [message-per-response](/docs/ai-transport/token-streaming/message-per-response) pattern, the message-per-token pattern requires you to [manage rate limits directly](/docs/ai-transport/token-streaming/token-rate-limits#per-token).
src/pages/docs/api/index.mdx (+5 -5)
@@ -60,9 +60,9 @@ The API reference for the [Chat SDK](https://sdk.ably.com/builds/ably/ably-chat-
In addition to the API references listed previously, our developer documentation also provides information on how these interfaces are used, and this covers key concepts such as connections, channels, messages and the pub/sub pattern. You can find that information on the following pages:
- [Channels](/docs/api/realtime-sdk/channels#channels-object) is a reference to the [Channel](/docs/channels) collection instance for this library indexed by the channel name. You can use the [Get](/docs/api/realtime-sdk/channels#get) method of this to get a `Channel` instance. See [channels](/docs/channels) and [messages](/docs/channels/messages) for more information.
+ [Channels](/docs/api/realtime-sdk/channels#channels-object) is a reference to the [Channel](/docs/channels) collection instance for this library indexed by the channel name. You can use the [Get](/docs/api/realtime-sdk/channels#get) method of this to get a `Channel` instance. See [channels](/docs/channels) and [messages](/docs/messages) for more information.
| <Iflang="javascript,nodejs,java,objc,swift,ruby">data</If><Iflang="csharp">Data</If> | The presence update payload, if provided | <Iflang="java">`String`, `ByteArray`, `JSONObject`, `JSONArray`</If><Iflang="csharp">`String`, `byte[]`, plain C# object that can be converted to Json</If><Iflang="javascript,nodejs">`String`, `StringBuffer`, `JSON Object`</If><Iflang="ruby">`String`, `Binary` (ASCII-8BIT String), `Hash`, `Array`</If><Iflang="swift">`String`, `NSData`, `Dictionary`, `Array`</If><Iflang="objc">`NSString *`, `NSData *`, `NSDictionary *`, `NSArray *`</If> |
- | <Iflang="javascript,nodejs,java,objc,swift,ruby">extras</If><Iflang="csharp">Extras</If> | Metadata and/or ancillary payloads, if provided. The only currently valid payloads for extras are the [`push`](/docs/push/publish#sub-channels), [`ref`](/docs/channels/messages#interactions) and [`privileged`](/docs/platform/integrations/webhooks#skipping) objects. | <Iflang="java">`JSONObject`, `JSONArray`</If><Iflang="csharp">plain C# object that can be converted to Json</If><Iflang="javascript,nodejs">`JSON Object`</If><Iflang="ruby">`Hash`, `Array`</If><Iflang="swift">`Dictionary`, `Array`</If><Iflang="objc">`NSDictionary *`, `NSArray *`</If> |
+ | <Iflang="javascript,nodejs,java,objc,swift,ruby">extras</If><Iflang="csharp">Extras</If> | Metadata and/or ancillary payloads, if provided. The only currently valid payloads for extras are the [`push`](/docs/push/publish#sub-channels), [`ref`](/docs/messages#interactions) and [`privileged`](/docs/platform/integrations/webhooks#skipping) objects. | <Iflang="java">`JSONObject`, `JSONArray`</If><Iflang="csharp">plain C# object that can be converted to Json</If><Iflang="javascript,nodejs">`JSON Object`</If><Iflang="ruby">`Hash`, `Array`</If><Iflang="swift">`Dictionary`, `Array`</If><Iflang="objc">`NSDictionary *`, `NSArray *`</If> |
| <Iflang="javascript,nodejs,java,objc,swift,ruby">id</If><Iflang="csharp">Id</If> | Unique ID assigned by Ably to this presence update |`String`|
| <Iflang="javascript,nodejs">clientId</If><Iflang="ruby">client_id</If><Iflang="csharp">ClientId</If><Iflang="java,objc,swift">clientId</If> | The client ID of the publisher of this presence update |`String`|
| <Iflang="javascript,nodejs">connectionId</If><Iflang="ruby">connection_id</If><Iflang="csharp">ConnectionId</If><Iflang="java,objc,swift">connectionId</If> | The connection ID of the publisher of this presence update |`String`|
@@ -701,7 +701,7 @@ Returns a new `PaginatedResult` loaded with the next page of results. If there a
`Task<PaginatedResult<T>> NextAsync()`
- Returns a new [`PaginatedResult`](/docs/api/realtime-sdk/historypaginated-result) loaded with the next page of results. If there are no further pages, then a blank PaginatedResult will be returned. The method is asynchronous and return a Task which needs to be awaited to get the `PaginatedResult`
+ Returns a new [`PaginatedResult`](/docs/api/realtime-sdk/history#paginated-result) loaded with the next page of results. If there are no further pages, then a blank PaginatedResult will be returned. The method is asynchronous and return a Task which needs to be awaited to get the `PaginatedResult`