Commit 55ec385

docs: add chat.response API, persistent data parts, transient flag, tool approvals wire protocol
1 parent: de4f93a

File tree

3 files changed: +64 −21 lines

docs/ai-chat/backend.mdx

Lines changed: 1 addition & 1 deletion

@@ -116,7 +116,7 @@ export const myChat = chat.agent({
 | `clientData` | Typed by `clientDataSchema` | Custom data from the frontend |
 | `writer` | [`ChatWriter`](/ai-chat/reference#chatwriter) | Stream writer for custom chunks |
 
-Every lifecycle callback receives a `writer` — a lazy stream writer that lets you send custom `UIMessageChunk` parts (like `data-*` parts) to the frontend without the ceremony of `chat.stream.writer()`. See [ChatWriter](/ai-chat/reference#chatwriter).
+Every lifecycle callback receives a `writer` — a lazy stream writer that lets you send custom `UIMessageChunk` parts (like `data-*` parts) to the frontend. Non-transient `data-*` chunks written via the `writer` are automatically added to the response message and available in `onTurnComplete`. Add `transient: true` for ephemeral chunks (progress indicators, etc.) that should not persist. See [Custom data parts](/ai-chat/features#custom-data-parts).
 
 #### onChatStart
 

docs/ai-chat/features.mdx

Lines changed: 61 additions & 19 deletions

@@ -181,42 +181,84 @@ export const myChat = chat.agent({
 
 ---
 
-## Custom streaming with chat.stream
+## Custom data parts
 
-`chat.stream` is a typed stream bound to the chat output. Use it to write custom `UIMessageChunk` data alongside the AI-generated response — for example, status updates or progress indicators.
+You can add custom data parts to the assistant's response message. These appear on the frontend in `message.parts` and are included in `onTurnComplete`'s `responseMessage` and `uiMessages` for persistence.
+
+### Writing persistent data parts
+
+Use `chat.response.write()` or the `writer` in lifecycle hooks. Non-transient `data-*` chunks are automatically added to the response message:
 
 ```ts
 import { chat } from "@trigger.dev/sdk/ai";
 
 export const myChat = chat.agent({
   id: "my-chat",
+  onBeforeTurnComplete: async ({ writer, turn }) => {
+    // This data part will be in responseMessage.parts in onTurnComplete
+    writer.write({
+      type: "data-metadata",
+      data: { turn, model: "gpt-4o", timestamp: Date.now() },
+    });
+  },
+  onTurnComplete: async ({ responseMessage }) => {
+    // responseMessage.parts includes the data-metadata part
+    await db.messages.save(responseMessage);
+  },
   run: async ({ messages, signal }) => {
-    // Write a custom data part to the chat stream.
-    // The AI SDK's data-* chunk protocol adds this to message.parts
-    // on the frontend, where you can render it however you like.
-    const { waitUntilComplete } = chat.stream.writer({
-      execute: ({ write }) => {
-        write({
-          type: "data-status",
-          id: "search-progress",
-          data: { message: "Searching the web...", progress: 0.5 },
-        });
-      },
+    // Also works from run() via chat.response
+    chat.response.write({
+      type: "data-context",
+      data: { searchResults: results },
     });
-    await waitUntilComplete();
 
-    // Then stream the AI response
     return streamText({ model: openai("gpt-4o"), messages, abortSignal: signal });
   },
 });
 ```
 
-<Tip>
-Use `data-*` chunk types (e.g. `data-status`, `data-progress`) for custom data. The AI SDK processes these into `DataUIPart` objects in `message.parts` on the frontend. Writing the same `type` + `id` again updates the existing part instead of creating a new one — useful for live progress.
-</Tip>
+### Transient data parts (ephemeral)
+
+Add `transient: true` to data chunks that should stream to the frontend but NOT persist in the response message. Use this for progress indicators, loading states, and other temporary UI:
+
+```ts
+// Transient — frontend sees it, but NOT in onTurnComplete's responseMessage
+writer.write({
+  type: "data-progress",
+  id: "search",
+  data: { percent: 50 },
+  transient: true,
+});
+```
+
+<Info>
+This matches the AI SDK's semantics: `data-*` chunks persist to `message.parts` by default.
+Only `transient: true` chunks are ephemeral. Non-data chunks (`text-delta`, `tool-*`, etc.)
+are handled by `streamText` and captured via `onFinish` — they don't need `chat.response`.
+</Info>
+
+<Note>
+`chat.response` and the `writer` accumulation behavior work with `chat.agent` and
+`chat.createSession`. If you're using `chat.customAgent`, manage data part accumulation
+manually via your own message accumulator.
+</Note>
+
+### Raw streaming with chat.stream
+
+For low-level stream access (piping from subtasks, reading streams by run ID), use `chat.stream`. Chunks written via `chat.stream` go directly to the realtime output — they are NOT accumulated into the response message regardless of the `transient` flag.
+
+```ts
+// Raw stream — always ephemeral, never in responseMessage
+const { waitUntilComplete } = chat.stream.writer({
+  execute: ({ write }) => {
+    write({ type: "data-status", data: { message: "Processing..." } });
+  },
+});
+await waitUntilComplete();
+```
 
 <Tip>
-Inside lifecycle callbacks (`onPreload`, `onChatStart`, `onTurnStart`, `onBeforeTurnComplete`, `onCompacted`), you can use the `writer` parameter instead of `chat.stream.writer()` — it's simpler and avoids the `execute` + `waitUntilComplete` boilerplate. See [ChatWriter](/ai-chat/reference#chatwriter).
+Use `data-*` chunk types (e.g. `data-status`, `data-progress`) for custom data. The AI SDK processes these into `DataUIPart` objects in `message.parts` on the frontend. Writing the same `type` + `id` again updates the existing part instead of creating a new one — useful for live progress.
 </Tip>
 
 `chat.stream` exposes the full stream API:

docs/ai-chat/reference.mdx

Lines changed: 2 additions & 1 deletion

@@ -369,7 +369,8 @@ All methods available on the `chat` object from `@trigger.dev/sdk/ai`.
 | `chat.defer(promise)` | Run background work in parallel with streaming, awaited before `onTurnComplete` |
 | `chat.isStopped()` | Check if the current turn was stopped by the user |
 | `chat.cleanupAbortedParts(message)` | Remove incomplete parts from a stopped response message |
-| `chat.stream` | Typed chat output stream — use `.writer()`, `.pipe()`, `.append()`, `.read()` |
+| `chat.response.write(chunk)` | Write a data part that streams to the frontend AND persists in `onTurnComplete`'s `responseMessage` |
+| `chat.stream` | Raw chat output stream — use `.writer()`, `.pipe()`, `.append()`, `.read()`. Chunks are NOT accumulated into the response. |
 | `chat.MessageAccumulator` | Class that accumulates conversation messages across turns |
 | `chat.withUIMessage(config?)` | Returns a [ChatBuilder](/ai-chat/types#chatbuilder) with a fixed `UIMessage` subtype. See [Types](/ai-chat/types) |
 | `chat.withClientData({ schema })` | Returns a [ChatBuilder](/ai-chat/types#chatbuilder) with a fixed client data schema. See [Types](/ai-chat/types#typed-client-data-with-chatwithclientdata) |
