4 changes: 4 additions & 0 deletions docs/config.json
@@ -13,6 +13,10 @@
"label": "Overview",
"to": "getting-started/overview"
},
{
"label": "Installation",
"to": "getting-started/installation"
},
{
"label": "Quick Start",
"to": "getting-started/quick-start"
132 changes: 132 additions & 0 deletions docs/getting-started/installation.md
@@ -0,0 +1,132 @@
---
title: Installation
id: installation
order: 2
---

Install TanStack AI along with a framework integration and an adapter for your preferred LLM provider.

## Core

Every project needs the core package:

```bash
npm install @tanstack/ai
# or
pnpm add @tanstack/ai
# or
yarn add @tanstack/ai
```

## React

```bash
npm install @tanstack/ai-react
```

The React integration provides the `useChat` hook for managing chat state. See the [@tanstack/ai-react API docs](../api/ai-react) for full details.

```typescript
import { useChat, fetchServerSentEvents } from "@tanstack/ai-react";

function Chat() {
const { messages, sendMessage } = useChat({
connection: fetchServerSentEvents("/api/chat"),
});
// ...
}
```

## Solid

```bash
npm install @tanstack/ai-solid
```

The Solid integration provides the `useChat` primitive for managing chat state. See the [@tanstack/ai-solid API docs](../api/ai-solid) for full details.

```typescript
import { useChat, fetchServerSentEvents } from "@tanstack/ai-solid";

function Chat() {
const { messages, sendMessage } = useChat({
connection: fetchServerSentEvents("/api/chat"),
});
// ...
}
```

## Preact

```bash
npm install @tanstack/ai-preact
```

The Preact integration provides the `useChat` hook for managing chat state. See the [@tanstack/ai-preact API docs](../api/ai-preact) for full details.

```typescript
import { useChat, fetchServerSentEvents } from "@tanstack/ai-preact";

function Chat() {
const { messages, sendMessage } = useChat({
connection: fetchServerSentEvents("/api/chat"),
});
// ...
}
```

## Vue

```bash
npm install @tanstack/ai-vue
```

## Svelte

```bash
npm install @tanstack/ai-svelte
```

## Headless (Framework-Agnostic)

If you're using a framework without a dedicated integration, or building a custom solution, use the headless client directly:

```bash
npm install @tanstack/ai-client
```

See the [@tanstack/ai-client API docs](../api/ai-client) for full details.

## Adapters

You also need an adapter for your LLM provider. Install one (or more) of the following:

```bash
# OpenRouter (recommended — 300+ models with one API key)
npm install @tanstack/ai-openrouter

# OpenAI
npm install @tanstack/ai-openai

# Anthropic
npm install @tanstack/ai-anthropic

# Google Gemini
npm install @tanstack/ai-gemini

# Ollama (local models)
npm install @tanstack/ai-ollama

# Groq
npm install @tanstack/ai-groq

# Grok (xAI)
npm install @tanstack/ai-grok
```

See the [Adapters section](../adapters/openai) for provider-specific setup guides.
Comment on lines +100 to +127
⚠️ Potential issue | 🟡 Minor

Missing Fal adapter from the adapters list.

The Fal adapter is listed in the navigation config (docs/config.json line 176-178) under Adapters, but is not included in the installation instructions here.

📝 Suggested addition

```diff
 # Grok (xAI)
 npm install @tanstack/ai-grok
+
+# Fal (image/video generation)
+npm install @tanstack/ai-fal
```

<details>
<summary>🤖 Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @docs/getting-started/installation.md around lines 100 - 127, The Adapters
installation list is missing the Fal adapter referenced in the docs navigation;
update the "Adapters" section in installation.md to include the Fal adapter by
adding an entry for the Fal package (npm install @tanstack/ai-fal) alongside the
other providers and ensure the text still points readers to the Adapters section
for provider-specific setup; keep naming consistent with existing entries (e.g.,
"Fal") so it matches the navigation config.


</details>

## Next Steps

- [Quick Start Guide](./quick-start) - Build a chat app in minutes
- [Tools Guide](../guides/tools) - Learn about the isomorphic tool system
2 changes: 1 addition & 1 deletion docs/getting-started/quick-start.md
@@ -1,7 +1,7 @@
---
title: Quick Start
id: quick-start
order: 2
order: 3
---

Get started with TanStack AI in minutes. This guide will walk you through creating a simple chat application using the React integration and OpenAI adapter.
3 changes: 2 additions & 1 deletion packages/typescript/ai-client/src/types.ts
@@ -143,7 +143,8 @@ export type ToolCallPart<TTools extends ReadonlyArray<AnyClientTool> = any> =
export interface ToolResultPart {
type: 'tool-result'
toolCallId: string
content: string
/** Tool result content. String for text results, or an array for multimodal results. */
content: string | Array<any>
Comment on lines +146 to +147
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed (run against TanStack/ai):

```bash
# Search for direct string method calls on ToolResultPart.content without type guards
rg -n --type=ts 'content\.(trim|split|substring|slice|replace|match|toLowerCase|toUpperCase)' packages/typescript/ai-client/

# Read types.ts around lines 146-147 to see the actual definition
sed -n '140,155p' packages/typescript/ai-client/src/types.ts

# Check ai/src/types.ts for the ToolResultPart definition
grep -n -A 3 "ToolResultPart" packages/typescript/ai/src/types.ts | head -20

# Search for other .content usage in ai-client
rg -n '\.content' packages/typescript/ai-client/src/ -A 2 -B 2 --type=ts | head -100

# Get the complete ToolResultPart definition from ai/src/types.ts
sed -n '298,310p' packages/typescript/ai/src/types.ts

# Find the ContentPart type definition to understand the type hierarchy
rg -n "type ContentPart|interface ContentPart" packages/typescript/ai/src/types.ts -A 5

# Check the context around chat-client.ts:352 to see the full usage
sed -n '345,360p' packages/typescript/ai-client/src/chat-client.ts

# Check whether ContentPart is exported/imported in ai-client
rg -n "ContentPart|import.*types" packages/typescript/ai-client/src/types.ts | head -20
```

Use Array<ContentPart> for type consistency with the main ai package.

The ToolResultPart.content type in ai-client uses `Array<any>`, while packages/typescript/ai/src/types.ts defines it as `string | Array<ContentPart>`. Since ContentPart is already imported in this file, use the properly-typed alternative to maintain consistency and provide better type safety across the codebase.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-client/src/types.ts` around lines 146 - 147, The
ToolResultPart.content property currently uses a loose Array<any>; update it to
use the specific ContentPart type for consistency and stronger typing by
changing the union from string | Array<any> to string | Array<ContentPart>
(modify the ToolResultPart.content declaration to reference ContentPart which is
already imported).

state: ToolResultState
error?: string // Error message if state is "error"
}
4 changes: 3 additions & 1 deletion packages/typescript/ai-devtools/src/store/ai-context.tsx
@@ -805,7 +805,9 @@ export const AIProvider: ParentComponent = (props) => {
return {
type: 'tool-result',
toolCallId: part.toolCallId,
content: part.content,
content: Array.isArray(part.content)
? JSON.stringify(part.content)
: part.content,
state: part.state,
error: part.error,
}
packages/typescript/ai-event-client/src/devtools-middleware.ts
@@ -195,7 +195,10 @@ export function devtoolsMiddleware(): ChatMiddleware {
...base,
messageId: localMessageId || undefined,
toolCallId: chunk.toolCallId,
result: chunk.result || '',
result:
typeof chunk.result === 'string'
? chunk.result
: JSON.stringify(chunk.result ?? ''),
Comment on lines +198 to +201
⚠️ Potential issue | 🟡 Minor

Bug: Undefined results produce '""' instead of empty string.

When chunk.result is undefined or null, this evaluates to:

JSON.stringify('' /* from ?? '' */)  // produces '""' (a 2-character string)

This changes the behavior from producing '' (falsy, empty) to '""' (truthy, parseable as empty string). Downstream code checking truthiness or parsing will behave differently:

  • if (chunk.result) will now be true for '""'
  • JSON.parse('""') yields "" instead of failing

Consider preserving the original empty string behavior for undefined/null:

🐛 Proposed fix

```diff
           result:
             typeof chunk.result === 'string'
               ? chunk.result
-              : JSON.stringify(chunk.result ?? ''),
+              : chunk.result != null
+                ? JSON.stringify(chunk.result)
+                : '',
```
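The difference is easy to reproduce in plain JavaScript; a minimal sketch, with variable names mirroring the `chunk.result` field from the diff:

```javascript
// When result is undefined, the old expression yields '' but the new one yields '""'.
const result = undefined;

const before = result || '';                // '' (empty and falsy)
const after = JSON.stringify(result ?? ''); // '""' (two characters, truthy)

console.log(before.length, Boolean(before)); // 0 false
console.log(after.length, Boolean(after));   // 2 true
console.log(JSON.parse(after) === '');       // true: parses instead of throwing
```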
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-event-client/src/devtools-middleware.ts` around lines
198 - 201, The current serialization for the result field uses
JSON.stringify(chunk.result ?? ''), which turns null/undefined into the
two-character string '""' instead of an empty string; update the logic around
the result key (the expression using typeof chunk.result === 'string' ? ... :
...) to short-circuit null/undefined explicitly so that when chunk.result is
null or undefined you return '' (empty string), otherwise call JSON.stringify on
the non-string value; reference the existing chunk.result check and the result
property so the branch becomes: if it's a string return it, else if chunk.result
== null return '' else return JSON.stringify(chunk.result).

timestamp: Date.now(),
})
break
16 changes: 15 additions & 1 deletion packages/typescript/ai-event-client/src/index.ts
@@ -66,7 +66,7 @@ export interface ToolCallPart {
export interface ToolResultPart {
type: 'tool-result'
toolCallId: string
content: string
content: string | Array<any>
state: ToolResultState
error?: string
}
@@ -646,6 +646,20 @@ export interface AIDevtoolsEventMap {
'client:stopped': ClientStoppedEvent
}

// Ensure a shared EventTarget exists on server environments (Node, Bun,
// Cloudflare Workers, etc.) so that emit() and on() operate on the same
// target. In browsers, `window` is used automatically.
// See https://github.com/TanStack/ai/issues/341
if (
typeof globalThis !== 'undefined' &&
!globalThis.__TANSTACK_EVENT_TARGET__ &&
typeof window === 'undefined'
) {
if (typeof EventTarget !== 'undefined') {
globalThis.__TANSTACK_EVENT_TARGET__ = new EventTarget()
}
}
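`EventTarget` has been a global in Node.js since v15, so the shared-target approach above can be exercised directly; a minimal demonstration (the event name is taken from the `AIDevtoolsEventMap` shown in this diff):

```javascript
// A single shared EventTarget lets independent emit() and on() call sites
// communicate when no `window` global exists (Node, Bun, Cloudflare Workers).
const sharedTarget = new EventTarget();

let received = null;
sharedTarget.addEventListener('client:stopped', (event) => {
  received = event.type;
});

sharedTarget.dispatchEvent(new Event('client:stopped'));
console.log(received); // 'client:stopped'
```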

class AiEventClient extends EventClient<AIDevtoolsEventMap> {
constructor() {
super({
25 changes: 23 additions & 2 deletions packages/typescript/ai-gemini/src/adapters/image.ts
@@ -168,7 +168,13 @@
id: generateId(this.name),
model,
images,
usage: undefined,
usage: response.usageMetadata
? {
inputTokens: response.usageMetadata.promptTokenCount ?? 0,
outputTokens: response.usageMetadata.candidatesTokenCount ?? 0,
totalTokens: response.usageMetadata.totalTokenCount ?? 0,
}
: undefined,
}
}

@@ -196,11 +202,26 @@
}),
)

// GenerateImagesResponse may include usageMetadata in newer SDK versions
const usageMeta = (response as any).usageMetadata as
| {
promptTokenCount?: number
candidatesTokenCount?: number
totalTokenCount?: number
}
| undefined

return {
id: generateId(this.name),
model,
images,
usage: undefined,
usage: usageMeta
? {
inputTokens: usageMeta.promptTokenCount ?? 0,
outputTokens: usageMeta.candidatesTokenCount ?? 0,
totalTokens: usageMeta.totalTokenCount ?? 0,
}
: undefined,
}
}
}
7 changes: 5 additions & 2 deletions packages/typescript/ai-openai/src/adapters/text.ts
@@ -706,11 +706,14 @@
result.push({
type: 'function_call_output',
call_id: message.toolCallId || '',
// Support multimodal tool outputs (OpenAI Responses API accepts
// string or array of content parts for function_call_output).
output:
typeof message.content === 'string'
typeof message.content === 'string' ||
Array.isArray(message.content)
? message.content
: JSON.stringify(message.content),
})
} as any)
Comment on lines +709 to +716
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Does OpenAI Responses API function_call_output support array content multimodal?

💡 Result:

No. In the Responses API, a function_call_output item’s payload is the output string (free text—often a JSON-encoded string). It does not support the “array of content parts” multimodal content: [...] structure used by message items. [1], [2]

If you need multimodal data downstream, return structured JSON in output (e.g., with file/image URLs or IDs) and then include those as proper multimodal inputs via a subsequent message item with content parts. [1], [2]

Sources: [1], [2]


The multimodal tool output logic is incorrect and will fail at runtime.

The OpenAI Responses API function_call_output.output field only accepts strings, not arrays. The current code at lines 712–716 passes message.content directly if it's an array, which violates the API contract:

```typescript
output:
  typeof message.content === 'string' ||
  Array.isArray(message.content)
    ? message.content // ❌ arrays not supported by the output field
    : JSON.stringify(message.content),
```

Fix: Always stringify the output, including arrays:

```typescript
output: typeof message.content === 'string'
  ? message.content
  : JSON.stringify(message.content),
```

If multimodal data is needed, encode it as structured JSON (e.g., with URLs or references) in the string and let downstream code parse it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-openai/src/adapters/text.ts` around lines 709 - 716,
The function_call_output.output assignment incorrectly allows arrays for
message.content which the OpenAI Responses API does not accept; update the logic
in the adapter where function_call_output.output is built (referencing
message.content and function_call_output.output) to always pass a string: if
message.content is already a string use it, otherwise
JSON.stringify(message.content) so arrays and objects become valid string
payloads; ensure any downstream consumers parse the JSON string if they expect
structured multimodal data.

continue
}

31 changes: 29 additions & 2 deletions packages/typescript/ai-openrouter/src/adapters/text.ts
@@ -39,6 +39,25 @@ import type {
Message,
} from '@openrouter/sdk/models'

/**
* Convert snake_case keys to camelCase.
* The OpenRouter SDK's Zod transformer expects camelCase input and silently
* discards any snake_case fields. This helper normalises user-supplied
* modelOptions so that common snake_case variants (e.g. `tool_choice`)
* are accepted as well.
* See https://github.com/TanStack/ai/issues/314
*/
function snakeToCamelKeys<T extends Record<string, unknown>>(
obj: T,
): Record<string, unknown> {
const result: Record<string, unknown> = {}
for (const key of Object.keys(obj)) {
const camelKey = key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase())
result[camelKey] = obj[key]
}
return result
}

export interface OpenRouterConfig extends SDKOptions {}
export type OpenRouterTextModels = (typeof OPENROUTER_CHAT_MODELS)[number]

@@ -521,15 +540,23 @@
})
}

// Normalise snake_case keys to camelCase so the SDK's Zod transformer
// does not silently discard them (see #314).
const normalizedModelOptions = modelOptions
? snakeToCamelKeys(modelOptions as Record<string, unknown>)
: undefined

const request: ChatGenerationParams = {
model:
options.model +
(modelOptions?.variant ? `:${modelOptions.variant}` : ''),
(normalizedModelOptions?.variant
? `:${normalizedModelOptions.variant}`
: ''),
messages,
temperature: options.temperature,
maxTokens: options.maxTokens,
topP: options.topP,
...modelOptions,
...normalizedModelOptions,
tools: options.tools
? convertToolsToProviderFormat(options.tools)
: undefined,
7 changes: 6 additions & 1 deletion packages/typescript/ai/src/activities/chat/index.ts
@@ -1001,7 +1001,12 @@ class TextEngine<
const chunks: Array<StreamChunk> = []

for (const result of results) {
const content = JSON.stringify(result.result)
// Preserve arrays (e.g. multimodal content parts) and strings;
// stringify other values.
const content =
typeof result.result === 'string' || Array.isArray(result.result)
? result.result
: JSON.stringify(result.result)

chunks.push({
type: 'TOOL_CALL_END',
4 changes: 3 additions & 1 deletion packages/typescript/ai/src/activities/chat/messages.ts
@@ -248,7 +248,9 @@ function buildAssistantMessages(uiMessage: UIMessage): Array<ModelMessage> {
if (part.output !== undefined && !emittedToolResultIds.has(part.id)) {
messageList.push({
role: 'tool',
content: JSON.stringify(part.output),
content: Array.isArray(part.output)
? part.output
: JSON.stringify(part.output),
Comment on lines +251 to +253
⚠️ Potential issue | 🟠 Major

Preserve string tool outputs here as well (not only arrays).

This path still JSON.stringifys string outputs, so client tool results become quoted strings while server tool results stay raw strings. That mismatch can change model context interpretation.

🐛 Proposed fix

```diff
         role: 'tool',
-        content: Array.isArray(part.output)
-          ? part.output
-          : JSON.stringify(part.output),
+        content:
+          typeof part.output === 'string' || Array.isArray(part.output)
+            ? part.output
+            : JSON.stringify(part.output),
         toolCallId: part.id,
       })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/src/activities/chat/messages.ts` around lines 251 -
253, The code currently JSON.stringify's non-array outputs which turns string
tool outputs into quoted strings; change the content assignment for part.output
so strings are preserved: replace the ternary that sets content using
Array.isArray(part.output) ? part.output : JSON.stringify(part.output) with
logic that returns part.output when it's an array, returns part.output as-is
when typeof part.output === 'string', and only calls JSON.stringify for other
types (e.g., objects), referencing the same part.output expression in
messages.ts.

toolCallId: part.id,
})
emittedToolResultIds.add(part.id)
@@ -97,7 +97,7 @@ export function updateToolResultPart(
messages: Array<UIMessage>,
messageId: string,
toolCallId: string,
content: string,
content: string | Array<any>,
state: ToolResultState,
error?: string,
): Array<UIMessage> {
16 changes: 12 additions & 4 deletions packages/typescript/ai/src/activities/chat/stream/processor.ts
@@ -314,7 +314,11 @@
)

// Step 2: Create a tool-result part (for LLM conversation history)
const content = typeof output === 'string' ? output : JSON.stringify(output)
// Preserve arrays (e.g. multimodal content parts) as-is.
const content =
typeof output === 'string' || Array.isArray(output)
? output
: JSON.stringify(output)
const toolResultState: ToolResultState = error ? 'error' : 'complete'

updatedMessages = updateToolResultPart(
@@ -715,10 +719,14 @@
// Step 1: Update the tool-call part's output field (for UI consistency
// with client tools — see GitHub issue #176)
let output: unknown
try {
output = JSON.parse(chunk.result)
} catch {
if (Array.isArray(chunk.result)) {
output = chunk.result
} else {
try {
output = JSON.parse(chunk.result)
} catch {
output = chunk.result
}
}
this.messages = updateToolCallWithOutput(
this.messages,