
Commit de4f93a

Cover passing a custom message ID generator

1 parent 2e652fd commit de4f93a

File tree

2 files changed (+56, -10 lines)


docs/ai-chat/backend.mdx

Lines changed: 45 additions & 0 deletions
@@ -917,6 +917,51 @@ export const myChat = chat.agent({
});
```

##### Custom message IDs

By default, response message IDs are generated using the AI SDK's built-in `generateId`. Pass a custom `generateMessageId` function to use your own ID format (e.g. UUID-v7):
```ts
import { v7 as uuidv7 } from "uuid";

export const myChat = chat.agent({
  id: "my-chat",
  uiMessageStreamOptions: {
    generateMessageId: () => uuidv7(),
  },
  run: async ({ messages, signal }) => {
    return streamText({ model: openai("gpt-4o"), messages, abortSignal: signal });
  },
});
```

With the `.withUIMessage()` builder, set it under `streamOptions`:

```ts
import { v7 as uuidv7 } from "uuid";

export const myChat = chat
  .withUIMessage<MyChatUIMessage>({
    streamOptions: {
      generateMessageId: () => uuidv7(),
      sendReasoning: true,
    },
  })
  .agent({
    id: "my-chat",
    run: async ({ messages, signal }) => {
      return streamText({ model: openai("gpt-4o"), messages, abortSignal: signal });
    },
  });
```

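Why reach for UUID-v7? It front-loads a 48-bit Unix-millisecond timestamp, so IDs sort by creation time, which is convenient when message IDs double as a storage sort key. A minimal sketch of that layout for illustration only (the `uuidv7Sketch` name is ours; in real code use the `uuid` package's `v7` as shown above):

```typescript
// Hypothetical sketch of a UUID-v7-style generator, shown only to illustrate
// why such IDs sort by creation time. Not a spec-complete implementation.
function uuidv7Sketch(now: number = Date.now()): string {
  const hex = (n: number, width: number) => n.toString(16).padStart(width, "0");
  const rand = (bits: number) => Math.floor(Math.random() * 2 ** bits);
  const ts = hex(now, 12); // 48-bit Unix-millisecond timestamp, high bits first
  return [
    ts.slice(0, 8),            // timestamp, high 32 bits
    ts.slice(8, 12),           // timestamp, low 16 bits
    "7" + hex(rand(12), 3),    // version nibble 7 + 12 random bits
    hex(0x8000 | rand(14), 4), // variant bits 10 + 14 random bits
    hex(rand(48), 12),         // 48 random bits
  ].join("-");
}
```

Because the timestamp occupies the highest bits, IDs generated at later times compare lexicographically greater, unlike random UUID-v4 values.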
<Info>
  The generated ID is sent to the frontend in the stream's `start` chunk, so frontend and backend
  always reference the same ID for each message. This is important for features like tool
  approvals, where the frontend resends an assistant message and the backend needs to match it
  by ID in the conversation accumulator.
</Info>

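The matching behavior described above can be pictured as an upsert keyed by message ID. The following is a hypothetical sketch (the `upsertMessage` helper and this `UIMessage` shape are ours, not the library's accumulator):

```typescript
type UIMessage = { id: string; role: "user" | "assistant"; parts: unknown[] };

// Hypothetical sketch: because frontend and backend share the same message ID,
// a resent assistant message replaces its earlier copy instead of duplicating it.
function upsertMessage(conversation: UIMessage[], incoming: UIMessage): UIMessage[] {
  const i = conversation.findIndex((m) => m.id === incoming.id);
  if (i === -1) return [...conversation, incoming]; // unknown ID: append
  const next = [...conversation];
  next[i] = incoming; // same ID: replace in place
  return next;
}
```

If IDs diverged between frontend and backend, the resent message would append instead of replacing, duplicating the assistant turn.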
##### Per-turn overrides

Override per-turn with `chat.setUIMessageStreamOptions()` — per-turn values merge with the static config (per-turn wins on conflicts). The override is cleared automatically after each turn.
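The merge-and-clear behavior can be sketched as a shallow spread where per-turn values win and the override is dropped once read. A hypothetical illustration of the documented semantics, not the library's implementation:

```typescript
type StreamOptions = { sendReasoning?: boolean; sendSources?: boolean };

// Hypothetical sketch of the documented semantics: per-turn values merge over
// the static defaults and are cleared once the turn consumes them.
class UIMessageStreamOptionsHolder {
  private perTurn?: StreamOptions;

  constructor(private readonly defaults: StreamOptions) {}

  set(override: StreamOptions): void {
    this.perTurn = override;
  }

  /** Resolve options for the current turn, then clear the override. */
  take(): StreamOptions {
    const merged = { ...this.defaults, ...this.perTurn }; // per-turn wins on conflicts
    this.perTurn = undefined; // cleared automatically after each turn
    return merged;
  }
}
```

A shallow spread means only top-level keys are overridden; untouched defaults carry through to every turn.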

docs/ai-chat/reference.mdx

Lines changed: 11 additions & 10 deletions
@@ -433,16 +433,17 @@ Use with `useChat<Msg>({ transport })` when using [`chat.withUIMessage`](/ai-cha

Options for customizing `toUIMessageStream()`. Set as static defaults via `uiMessageStreamOptions` on `chat.agent()`, or override per-turn via `chat.setUIMessageStreamOptions()`. See [Stream options](/ai-chat/backend#stream-options) for usage examples.

- Derived from the AI SDK's `UIMessageStreamOptions` with `onFinish`, `originalMessages`, and `generateMessageId` omitted (managed internally).
-
- | Option            | Type                              | Default           | Description |
- | ----------------- | --------------------------------- | ----------------- | ----------- |
- | `onError`         | `(error: unknown) => string`      | Raw error message | Called on LLM errors and tool execution errors. Return a sanitized string — sent as `{ type: "error", errorText }` to the frontend. |
- | `sendReasoning`   | `boolean`                         | `true`            | Send reasoning parts to the client |
- | `sendSources`     | `boolean`                         | `false`           | Send source parts to the client |
- | `sendFinish`      | `boolean`                         | `true`            | Send the finish event. Set to `false` when chaining multiple `streamText` calls. |
- | `sendStart`       | `boolean`                         | `true`            | Send the message start event. Set to `false` when chaining. |
- | `messageMetadata` | `(options: { part }) => metadata` |                   | Extract message metadata to send to the client. Called on `start` and `finish` events. |
+ Derived from the AI SDK's `UIMessageStreamOptions` with `onFinish` and `originalMessages` omitted (managed internally — `onFinish` for response capture, `originalMessages` for cross-turn message ID reuse).
+
+ | Option              | Type                              | Default               | Description |
+ | ------------------- | --------------------------------- | --------------------- | ----------- |
+ | `onError`           | `(error: unknown) => string`      | Raw error message     | Called on LLM errors and tool execution errors. Return a sanitized string — sent as `{ type: "error", errorText }` to the frontend. |
+ | `sendReasoning`     | `boolean`                         | `true`                | Send reasoning parts to the client |
+ | `sendSources`       | `boolean`                         | `false`               | Send source parts to the client |
+ | `sendFinish`        | `boolean`                         | `true`                | Send the finish event. Set to `false` when chaining multiple `streamText` calls. |
+ | `sendStart`         | `boolean`                         | `true`                | Send the message start event. Set to `false` when chaining. |
+ | `messageMetadata`   | `(options: { part }) => metadata` |                       | Extract message metadata to send to the client. Called on `start` and `finish` events. |
+ | `generateMessageId` | `() => string`                    | AI SDK's `generateId` | Custom message ID generator for response messages (e.g. UUID-v7). IDs are shared between frontend and backend via the stream's `start` chunk. |

## TriggerChatTransport options
