# fix: address issues #313, #314, #330, #341, #363 (#380)
`docs/getting-started/installation.md` (new file, +132 lines):

---
title: Installation
id: installation
order: 2
---

Install TanStack AI along with a framework integration and an adapter for your preferred LLM provider.
## Core

Every project needs the core package:

```bash
npm install @tanstack/ai
# or
pnpm add @tanstack/ai
# or
yarn add @tanstack/ai
```
## React

```bash
npm install @tanstack/ai-react
```

The React integration provides the `useChat` hook for managing chat state. See the [@tanstack/ai-react API docs](../api/ai-react) for full details.

```typescript
import { useChat, fetchServerSentEvents } from "@tanstack/ai-react";

function Chat() {
  const { messages, sendMessage } = useChat({
    connection: fetchServerSentEvents("/api/chat"),
  });
  // ...
}
```
## Solid

```bash
npm install @tanstack/ai-solid
```

The Solid integration provides the `useChat` primitive for managing chat state. See the [@tanstack/ai-solid API docs](../api/ai-solid) for full details.

```typescript
import { useChat, fetchServerSentEvents } from "@tanstack/ai-solid";

function Chat() {
  const { messages, sendMessage } = useChat({
    connection: fetchServerSentEvents("/api/chat"),
  });
  // ...
}
```
## Preact

```bash
npm install @tanstack/ai-preact
```

The Preact integration provides the `useChat` hook for managing chat state. See the [@tanstack/ai-preact API docs](../api/ai-preact) for full details.

```typescript
import { useChat, fetchServerSentEvents } from "@tanstack/ai-preact";

function Chat() {
  const { messages, sendMessage } = useChat({
    connection: fetchServerSentEvents("/api/chat"),
  });
  // ...
}
```
## Vue

```bash
npm install @tanstack/ai-vue
```

## Svelte

```bash
npm install @tanstack/ai-svelte
```
## Headless (Framework-Agnostic)

If you're using a framework without a dedicated integration, or building a custom solution, use the headless client directly:

```bash
npm install @tanstack/ai-client
```

See the [@tanstack/ai-client API docs](../api/ai-client) for full details.
## Adapters

You also need an adapter for your LLM provider. Install one (or more) of the following:

```bash
# OpenRouter (recommended — 300+ models with one API key)
npm install @tanstack/ai-openrouter

# OpenAI
npm install @tanstack/ai-openai

# Anthropic
npm install @tanstack/ai-anthropic

# Google Gemini
npm install @tanstack/ai-gemini

# Ollama (local models)
npm install @tanstack/ai-ollama

# Groq
npm install @tanstack/ai-groq

# Grok (xAI)
npm install @tanstack/ai-grok
```

See the [Adapters section](../adapters/openai) for provider-specific setup guides.
## Next Steps

- [Quick Start Guide](./quick-start) - Build a chat app in minutes
- [Tools Guide](../guides/tools) - Learn about the isomorphic tool system
`packages/typescript/ai-client/src/types.ts`:

```diff
@@ -143,7 +143,8 @@ export type ToolCallPart<TTools extends ReadonlyArray<AnyClientTool> = any> =
 export interface ToolResultPart {
   type: 'tool-result'
   toolCallId: string
-  content: string
+  /** Tool result content. String for text results, or an array for multimodal results. */
+  content: string | Array<any>
```
> **Contributor review** (on lines +146 to +147): Searched `packages/typescript/ai-client` for direct string-method calls on `content` (`trim`, `split`, `substring`, `slice`, `replace`, `match`, and case conversions) made without type guards, and compared this definition against `ToolResultPart` and `ContentPart` in `packages/typescript/ai/src/types.ts`, including the usage around `chat-client.ts:352`.
```diff
   state: ToolResultState
   error?: string // Error message if state is "error"
 }
```
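With `content` widened to `string | Array<any>`, consumers must narrow before calling string methods. A minimal sketch of such a guard (the local `ToolResultPart` copy and the `contentText` helper are illustrative, not part of the package):

```typescript
// Illustrative local copy of the widened shape.
interface ToolResultPart {
  type: 'tool-result'
  toolCallId: string
  content: string | Array<any>
}

// Narrow before using string methods; array entries fall back to JSON.
function contentText(part: ToolResultPart): string {
  if (typeof part.content === 'string') {
    return part.content.trim()
  }
  return part.content
    .map((p) => (typeof p === 'string' ? p : JSON.stringify(p)))
    .join('\n')
}

console.log(contentText({ type: 'tool-result', toolCallId: '1', content: '  hi  ' })) // hi
```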
```diff
@@ -195,7 +195,10 @@ export function devtoolsMiddleware(): ChatMiddleware {
         ...base,
         messageId: localMessageId || undefined,
         toolCallId: chunk.toolCallId,
-        result: chunk.result || '',
+        result:
+          typeof chunk.result === 'string'
+            ? chunk.result
+            : JSON.stringify(chunk.result ?? ''),
```
> **Contributor review** (on lines +198 to +201): Bug: undefined results produce `'""'`. When `chunk.result` is `undefined` or `null`, `JSON.stringify(chunk.result ?? '')` evaluates `JSON.stringify('')`, which produces `'""'` (a two-character string). This changes the behavior from the old `chunk.result || ''` path, which produced `''`. Consider preserving the original empty-string behavior for undefined/null:
>
> ```diff
>          result:
>            typeof chunk.result === 'string'
>              ? chunk.result
> -            : JSON.stringify(chunk.result ?? ''),
> +            : chunk.result != null
> +              ? JSON.stringify(chunk.result)
> +              : '',
> ```
```diff
         timestamp: Date.now(),
       })
       break
```
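The reviewer's point can be reproduced in isolation; `bad` mirrors the patched expression and `good` the proposed fix (both helper names are hypothetical):

```typescript
// Mirrors the patched expression: null/undefined become JSON.stringify('') === '""'.
const bad = (result: unknown): string =>
  typeof result === 'string' ? result : JSON.stringify(result ?? '')

// Proposed fix: null/undefined map back to the empty string.
const good = (result: unknown): string =>
  typeof result === 'string'
    ? result
    : result != null
      ? JSON.stringify(result)
      : ''

console.log(bad(undefined))     // '""' (two characters)
console.log(good(undefined))    // '' (empty)
console.log(good({ ok: true })) // {"ok":true}
```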
```diff
@@ -706,11 +706,14 @@ export class OpenAITextAdapter<
       result.push({
         type: 'function_call_output',
         call_id: message.toolCallId || '',
+        // Support multimodal tool outputs (OpenAI Responses API accepts
+        // string or array of content parts for function_call_output).
         output:
-          typeof message.content === 'string'
+          typeof message.content === 'string' ||
+          Array.isArray(message.content)
             ? message.content
             : JSON.stringify(message.content),
-      })
+      } as any)
```
Comment on lines
+709
to
+716
Contributor
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. 🧩 Analysis chain🌐 Web query:
💡 Result: No. In the Responses API, a If you need multimodal data downstream, return structured JSON in Sources: [1], [2] The multimodal tool output logic is incorrect and will fail at runtime. The OpenAI Responses API Fix: Always stringify the output, including arrays: If multimodal data is needed, encode it as structured JSON (e.g., with URLs or references) in the string and let downstream code parse it. 🤖 Prompt for AI Agents |
```diff
       continue
     }
```
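A sketch of the stringify-everything approach the review suggests; the part shapes, the `toOutput` helper, and the URL are hypothetical:

```typescript
// Hypothetical multimodal tool result, flattened to the single string that
// function_call_output.output requires; downstream code can JSON.parse it.
const parts = [
  { type: 'text', text: 'Chart generated' },
  { type: 'image_ref', url: 'https://example.com/chart.png' },
]

// Strings pass through; everything else (including arrays) is stringified.
const toOutput = (content: unknown): string =>
  typeof content === 'string' ? content : JSON.stringify(content)

console.log(toOutput(parts))
```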
```diff
@@ -248,7 +248,9 @@ function buildAssistantMessages(uiMessage: UIMessage): Array<ModelMessage> {
     if (part.output !== undefined && !emittedToolResultIds.has(part.id)) {
       messageList.push({
         role: 'tool',
-        content: JSON.stringify(part.output),
+        content: Array.isArray(part.output)
+          ? part.output
+          : JSON.stringify(part.output),
```
> **Contributor review** (on lines +251 to +253): Preserve string tool outputs here as well (not only arrays). This path still passes plain-string outputs through `JSON.stringify`, double-encoding them. Proposed fix:
>
> ```diff
>          role: 'tool',
> -        content: Array.isArray(part.output)
> -          ? part.output
> -          : JSON.stringify(part.output),
> +        content:
> +          typeof part.output === 'string' || Array.isArray(part.output)
> +            ? part.output
> +            : JSON.stringify(part.output),
>          toolCallId: part.id,
>        })
> ```
```diff
         toolCallId: part.id,
       })
       emittedToolResultIds.add(part.id)
```
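The double-encoding the review flags is easy to demonstrate in isolation (`toToolContent` is a hypothetical helper mirroring the proposed guard):

```typescript
// A plain string passed through JSON.stringify gains literal quotes.
const doubleEncoded = JSON.stringify('done') // '"done"', six characters

// Guard mirroring the proposed fix: strings and arrays pass through untouched.
const toToolContent = (out: unknown): unknown =>
  typeof out === 'string' || Array.isArray(out) ? out : JSON.stringify(out)

console.log(doubleEncoded)           // "done"
console.log(toToolContent('done'))   // done
console.log(toToolContent({ n: 1 })) // {"n":1}
```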
> **Contributor review:** Missing Fal adapter from the adapters list. The Fal adapter is listed in the navigation config (`docs/config.json`, lines 176-178) under Adapters, but is not included in the installation instructions in `docs/getting-started/installation.md`. Add an entry for the Fal package (`npm install @tanstack/ai-fal`) alongside the other providers, keeping the naming consistent with existing entries (e.g., "Fal"), and make sure the text still points readers to the Adapters section for provider-specific setup.