| `clientData` | Typed by `clientDataSchema` | Custom data from the frontend |
| `writer` | [`ChatWriter`](/ai-chat/reference#chatwriter) | Stream writer for custom chunks |

Every lifecycle callback receives a `writer` — a lazy stream writer that lets you send custom `UIMessageChunk` parts (like `data-*` parts) to the frontend. Non-transient `data-*` chunks written via the `writer` are automatically added to the response message and available in `onTurnComplete`. Add `transient: true` for ephemeral chunks (progress indicators, etc.) that should not persist. See [Custom data parts](/ai-chat/features#custom-data-parts).
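As a mental model for what "lazy" means here: the underlying stream is only opened once something is actually written, so a callback that never writes costs nothing. A minimal stand-in sketch (not the library's implementation — `makeLazyWriter` and the in-memory sink are hypothetical):

```typescript
// Hypothetical sketch of a lazy writer: `open` is deferred until first write.
function makeLazyWriter(open: () => (chunk: object) => void) {
  let sink: ((chunk: object) => void) | null = null;
  return {
    write(chunk: object) {
      if (!sink) sink = open(); // the stream is opened lazily, on first use
      sink(chunk);
    },
  };
}

// Demo with an in-memory sink standing in for the realtime stream.
let opened = 0;
const written: object[] = [];
const writer = makeLazyWriter(() => {
  opened += 1;
  return (chunk) => void written.push(chunk);
});

writer.write({ type: "data-status", id: "s1", data: { message: "Searching..." } });
writer.write({ type: "data-status", id: "s1", data: { message: "Done" } });
// The stream was opened exactly once, on the first write.
```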
You can add custom data parts to the assistant's response message. These appear on the frontend in `message.parts` and are included in `onTurnComplete`'s `responseMessage` and `uiMessages` for persistence.

### Writing persistent data parts
Use `chat.response.write()` or the `writer` in lifecycle hooks. Non-transient `data-*` chunks are automatically added to the response message:

```ts
chat.agent({
  onTurnComplete: async ({ responseMessage }) => {
    // responseMessage.parts includes the data-metadata part
    await db.messages.save(responseMessage);
  },
  run: async ({ messages, signal }) => {
    // Also works from run() via chat.response
    chat.response.write({
      type: "data-context",
      data: { searchResults: results },
    });

    return streamText({ model: openai("gpt-4o"), messages, abortSignal: signal });
  },
});
```
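On the frontend, the persisted part shows up alongside the text parts in `message.parts`. A hedged sketch of pulling the custom parts out for rendering — the part shapes and the `dataParts` helper here are illustrative, not the SDK's exact types:

```typescript
type UIMessagePart = { type: string; id?: string; data?: unknown; text?: string };
type UIMessage = { role: "user" | "assistant"; parts: UIMessagePart[] };

// Select the custom data-* parts; render them however you like.
function dataParts(message: UIMessage): UIMessagePart[] {
  return message.parts.filter((part) => part.type.startsWith("data-"));
}

const message: UIMessage = {
  role: "assistant",
  parts: [
    { type: "text", text: "Here is what I found." },
    { type: "data-context", data: { searchResults: [] } },
  ],
};

const custom = dataParts(message);
// custom contains only the data-context part
```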
### Transient data parts (ephemeral)

Add `transient: true` to data chunks that should stream to the frontend but NOT persist in the response message. Use this for progress indicators, loading states, and other temporary UI:

```ts
// Transient — frontend sees it, but NOT in onTurnComplete's responseMessage
writer.write({
  type: "data-progress",
  id: "search",
  data: { percent: 50 },
  transient: true,
});
```
<Info>
This matches the AI SDK's semantics: `data-*` chunks persist to `message.parts` by default. Only `transient: true` chunks are ephemeral. Non-data chunks (`text-delta`, `tool-*`, etc.) are handled by `streamText` and captured via `onFinish` — they don't need `chat.response`.
</Info>
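The rule above can be pictured as a simple split: everything streams, and only non-transient `data-*` chunks persist. This is an illustrative sketch, not the library's actual accumulator:

```typescript
type DataChunk = {
  type: `data-${string}`;
  id?: string;
  data: unknown;
  transient?: boolean;
};

// Every chunk reaches the frontend; only non-transient ones persist
// into the response message (sketch of the documented rule).
function route(chunks: DataChunk[]) {
  return {
    streamed: chunks,
    persisted: chunks.filter((chunk) => !chunk.transient),
  };
}

const { streamed, persisted } = route([
  { type: "data-context", data: { searchResults: [] } },
  { type: "data-progress", id: "search", data: { percent: 50 }, transient: true },
]);
// streamed has both chunks; persisted has only data-context
```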
<Note>
`chat.response` and the `writer` accumulation behavior work with `chat.agent` and `chat.createSession`. If you're using `chat.customAgent`, manage data part accumulation manually via your own message accumulator.
</Note>
### Raw streaming with chat.stream

For low-level stream access (piping from subtasks, reading streams by run ID), use `chat.stream`. Chunks written via `chat.stream` go directly to the realtime output — they are NOT accumulated into the response message regardless of the `transient` flag.

```ts
// Raw stream — always ephemeral, never in responseMessage
const { waitUntilComplete } = chat.stream.writer({
  execute: ({ write }) => {
    write({
      type: "data-status",
      id: "search-progress",
      data: { message: "Searching the web...", progress: 0.5 },
    });
  },
});
await waitUntilComplete();
```
Inside lifecycle callbacks (`onPreload`, `onChatStart`, `onTurnStart`, `onBeforeTurnComplete`, `onCompacted`), you can use the `writer` parameter instead of `chat.stream.writer()` — it's simpler and avoids the `execute` + `waitUntilComplete` boilerplate. See [ChatWriter](/ai-chat/reference#chatwriter).
Use `data-*` chunk types (e.g. `data-status`, `data-progress`) for custom data. The AI SDK processes these into `DataUIPart` objects in `message.parts` on the frontend. Writing the same `type` + `id` again updates the existing part instead of creating a new one — useful for live progress.
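The update-in-place behavior amounts to an upsert keyed on `type` + `id`. A sketch of that reconciliation (illustrative only — `applyDataChunk` is not the AI SDK's actual implementation):

```typescript
type DataUIPart = { type: string; id?: string; data: unknown };

// Same type + id replaces the existing part; otherwise append a new one.
function applyDataChunk(parts: DataUIPart[], chunk: DataUIPart): DataUIPart[] {
  const index = parts.findIndex(
    (part) => chunk.id !== undefined && part.type === chunk.type && part.id === chunk.id,
  );
  if (index === -1) return [...parts, chunk];
  const next = parts.slice();
  next[index] = chunk; // update in place — useful for live progress
  return next;
}

let parts: DataUIPart[] = [];
parts = applyDataChunk(parts, {
  type: "data-status",
  id: "search-progress",
  data: { progress: 0.5 },
});
parts = applyDataChunk(parts, {
  type: "data-status",
  id: "search-progress",
  data: { progress: 1 },
});
// One part remains, holding the latest data
```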
| `chat.response.write(chunk)` | Write a data part that streams to the frontend AND persists in `onTurnComplete`'s `responseMessage` |
| `chat.stream` | Raw chat output stream — use `.writer()`, `.pipe()`, `.append()`, `.read()`. Chunks are NOT accumulated into the response. |
| `chat.MessageAccumulator` | Class that accumulates conversation messages across turns |
| `chat.withUIMessage(config?)` | Returns a [ChatBuilder](/ai-chat/types#chatbuilder) with a fixed `UIMessage` subtype. See [Types](/ai-chat/types) |
| `chat.withClientData({ schema })` | Returns a [ChatBuilder](/ai-chat/types#chatbuilder) with a fixed client data schema. See [Types](/ai-chat/types#typed-client-data-with-chatwithclientdata) |