
feat: dual ESM+CJS builds + toJSONResponse/fetchJSON for non-streaming runtimes#478

Open
AlemTuzlak wants to merge 12 commits into main from fix/cjs-output-and-json-response

Conversation

@AlemTuzlak
Contributor

@AlemTuzlak AlemTuzlak commented Apr 20, 2026

Summary

Two related fixes for Expo / Metro / non-streaming runtimes.

#308 — dual ESM + CJS output

`@tanstack/ai`, `@tanstack/ai-client`, and `@tanstack/ai-event-client` were ESM-only (`import` condition only, no `require` / `default`). Metro can't resolve that configuration, even with `unstable_enablePackageExports: true`, so consumers saw `Cannot resolve @tanstack/ai/adapters` etc.

Changes:

  • `vite.config.ts` for all three packages flipped to `cjs: true` — emits `dist/esm/*.js` + `.d.ts` and `dist/cjs/*.cjs` + `.d.cts`.
  • `package.json` `exports` for `.`, `./adapters`, `./middlewares` now use the nested `import`/`require` shape with type-aware conditions.
  • Added `main` pointing at the CJS entry so legacy resolvers still work.
  • `publint --strict` passes.
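
For reference, the resulting `exports` shape for the root entry looks roughly like the sketch below (assembled from the description above and the review excerpt further down; `./adapters` and `./middlewares` follow the same nested pattern):

```json
{
  "main": "./dist/cjs/index.cjs",
  "exports": {
    ".": {
      "import": {
        "types": "./dist/esm/index.d.ts",
        "default": "./dist/esm/index.js"
      },
      "require": {
        "types": "./dist/cjs/index.d.cts",
        "default": "./dist/cjs/index.cjs"
      }
    }
  }
}
```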

#309 — `toJSONResponse` + `fetchJSON`

Expo's `@expo/server` can't emit `ReadableStream` responses, so `toServerSentEventsResponse` / `toHttpResponse` crash with `Cannot read properties of undefined (reading 'statusText')`.

Changes:

  • `@tanstack/ai`: `toJSONResponse(stream, init?)` — drains the stream and returns `new Response(JSON.stringify(chunks), { headers: { 'Content-Type': 'application/json' } })`. Honours caller-provided headers / status; aborts a supplied `abortController` if the upstream throws.
  • `@tanstack/ai-client`: `fetchJSON(url, options?)` — matching connection adapter. POSTs `{ messages, data }`, parses the response as a `StreamChunk[]`, yields each chunk into the normal `ChatClient` pipeline.

Trade-off: you lose incremental rendering — the UI sees everything at once when the request resolves. Docs in both JSDoc blocks call this out and tell users to prefer SSE / HTTP-stream when the runtime supports them.
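
For orientation, a minimal end-to-end sketch of the pairing (only `toJSONResponse`, `fetchJSON`, and `new ChatClient({ connection })` are taken from this PR; the handler shape and the chunk shape below are illustrative, and `/api/chat-json` mirrors the example route added later in this PR):

```ts
// Server sketch: e.g. an Expo API route that cannot return a ReadableStream body.
import { toJSONResponse } from '@tanstack/ai'

// Stand-in for whatever AsyncIterable of StreamChunks your provider adapter produces;
// the chunk shape here is purely illustrative.
async function* demoChunks() {
  yield { type: 'content', delta: 'Hello ' }
  yield { type: 'content', delta: 'world' }
}

export async function POST(_request: Request) {
  // Buffers every chunk in memory, then returns one Response whose body is a JSON array.
  return toJSONResponse(demoChunks())
}
```

```ts
// Client sketch: replays the buffered JSON array through the normal ChatClient pipeline.
import { ChatClient, fetchJSON } from '@tanstack/ai-client'

const connection = fetchJSON('/api/chat-json') // POSTs { messages, data }, expects StreamChunk[]
const client = new ChatClient({ connection })
```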

Test plan

  • Unit tests for `toJSONResponse`: defaults, header passthrough, custom `Content-Type`, abort-on-upstream-error
  • `pnpm test` across all 41 projects
  • `test:build` (publint strict) green on all three packages — dual exports map validated

Summary by CodeRabbit

  • New Features

    • Dual ESM + CommonJS distributions for packages to improve CJS compatibility
    • Added toJSONResponse — server helper returning a chat stream as a single JSON-array Response
    • Added fetchJSON — client adapter that fetches JSON-array chat responses and replays them into the chat pipeline
  • Documentation

    • Guides for non-streaming runtimes (React Native / Expo) and JSON-adapter usage
  • Tests

    • Tests covering fetchJSON behavior, error cases, headers, and abort handling

Fixes #308 and #309.

- @tanstack/ai, @tanstack/ai-client, @tanstack/ai-event-client now emit
  both dist/esm/*.js and dist/cjs/*.cjs with matching .d.cts files.
  package.json exports gained nested import/require conditions plus a
  `main` field so Metro / Expo / other CJS-only resolvers can find
  the subpath exports (`./adapters`, `./middlewares`, etc.).

- New toJSONResponse(stream, init?) on @tanstack/ai: drains the stream
  and returns a JSON-array Response. For runtimes that can't stream
  ReadableStream bodies (Expo's @expo/server, edge proxies).

- New fetchJSON(url, options?) connection adapter on @tanstack/ai-client:
  the client-side counterpart — fetches the JSON array and replays each
  chunk into the normal ChatClient pipeline.

- Trade-off documented in both: you lose incremental rendering; use
  SSE / HTTP-stream responses when the runtime supports them.
@github-actions
Contributor

github-actions Bot commented Apr 20, 2026

🚀 Changeset Version Preview

10 package(s) bumped directly, 23 bumped as dependents.

🟥 Major bumps

| Package | Version | Reason |
| --- | --- | --- |
| @tanstack/ai-anthropic | 0.8.3 → 1.0.0 | Changeset |
| @tanstack/ai-event-client | 0.2.8 → 1.0.0 | Changeset |
| @tanstack/ai-preact | 0.6.20 → 1.0.0 | Changeset |
| @tanstack/ai-react | 0.8.0 → 1.0.0 | Changeset |
| @tanstack/ai-solid | 0.7.0 → 1.0.0 | Changeset |
| @tanstack/ai-svelte | 0.7.0 → 1.0.0 | Changeset |
| @tanstack/ai-vue | 0.7.0 → 1.0.0 | Changeset |
| @tanstack/ai-code-mode | 0.1.8 → 1.0.0 | Dependent |
| @tanstack/ai-code-mode-skills | 0.1.8 → 1.0.0 | Dependent |
| @tanstack/ai-elevenlabs | 0.2.0 → 1.0.0 | Dependent |
| @tanstack/ai-fal | 0.7.0 → 1.0.0 | Dependent |
| @tanstack/ai-gemini | 0.10.0 → 1.0.0 | Dependent |
| @tanstack/ai-grok | 0.7.0 → 1.0.0 | Dependent |
| @tanstack/ai-groq | 0.1.8 → 1.0.0 | Dependent |
| @tanstack/ai-isolate-node | 0.1.8 → 1.0.0 | Dependent |
| @tanstack/ai-isolate-quickjs | 0.1.8 → 1.0.0 | Dependent |
| @tanstack/ai-ollama | 0.6.10 → 1.0.0 | Dependent |
| @tanstack/ai-openai | 0.8.2 → 1.0.0 | Dependent |
| @tanstack/ai-openrouter | 0.8.2 → 1.0.0 | Dependent |
| @tanstack/ai-react-ui | 0.6.2 → 1.0.0 | Dependent |
| @tanstack/ai-solid-ui | 0.6.2 → 1.0.0 | Dependent |

🟨 Minor bumps

| Package | Version | Reason |
| --- | --- | --- |
| @tanstack/ai | 0.14.0 → 0.15.0 | Changeset |
| @tanstack/ai-client | 0.8.0 → 0.9.0 | Changeset |
| @tanstack/ai-isolate-cloudflare | 0.1.8 → 0.2.0 | Changeset |

🟩 Patch bumps

| Package | Version | Reason |
| --- | --- | --- |
| @tanstack/ai-code-mode-models-eval | 0.0.12 → 0.0.13 | Dependent |
| @tanstack/ai-devtools-core | 0.3.25 → 0.3.26 | Dependent |
| @tanstack/ai-vue-ui | 0.1.31 → 0.1.32 | Dependent |
| @tanstack/preact-ai-devtools | 0.1.29 → 0.1.30 | Dependent |
| @tanstack/react-ai-devtools | 0.2.29 → 0.2.30 | Dependent |
| @tanstack/solid-ai-devtools | 0.2.29 → 0.2.30 | Dependent |
| ts-svelte-chat | 0.1.38 → 0.1.39 | Dependent |
| ts-vue-chat | 0.1.38 → 0.1.39 | Dependent |
| vanilla-chat | 0.0.35 → 0.0.36 | Dependent |

@nx-cloud

nx-cloud Bot commented Apr 20, 2026

View your CI Pipeline Execution ↗ for commit 1a26a98

| Command | Status | Duration | Result |
| --- | --- | --- | --- |
| nx run-many --targets=build --exclude=examples/** | ✅ Succeeded | 1s | View ↗ |

☁️ Nx Cloud last updated this comment at 2026-05-08 10:12:46 UTC

@coderabbitai
Contributor

coderabbitai Bot commented Apr 20, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds dual ESM/CJS package entrypoints for several TanStack AI packages and introduces server-side toJSONResponse(stream, init?) (drains a stream → single JSON Response) and client-side fetchJSON(url, options?) (POST → JSON array → replayed StreamChunks) with accompanying tests and docs.

Changes

Single Cohort: Dual-package outputs + non-streaming fallback

| Layer | File(s) | Summary |
| --- | --- | --- |
| Package metadata / Exports | `packages/typescript/ai/package.json`, `packages/typescript/ai-client/package.json`, `packages/typescript/ai-event-client/package.json` | Add top-level `"main": "./dist/cjs/index.cjs"` and change exports to conditional `"import"` (ESM) / `"require"` (CJS) targets with corresponding `.d.ts` / `.d.cts` typings. |
| Build config | `packages/typescript/ai/vite.config.ts`, `packages/typescript/ai-client/vite.config.ts`, `packages/typescript/ai-event-client/vite.config.ts` | Enable CJS output by setting `cjs: true` in the tanstack/Vite build config. |
| Core server helper | `packages/typescript/ai/src/stream-to-response.ts`, `packages/typescript/ai/src/index.ts` | Add `toJSONResponse(stream, init?)`: drains an AsyncIterable into an in-memory array, returns a single Response with a JSON body, handles aborts/errors, preserves init (sets `Content-Type: application/json` when absent), and re-export it at the package top level. |
| Client adapter | `packages/typescript/ai-client/src/connection-adapters.ts`, `packages/typescript/ai-client/src/index.ts` | Add `fetchJSON(url, options?)` adapter: POSTs `{ messages, data }`, parses the full JSON body expected as `StreamChunk[]`, validates the array shape, yields each element as a `StreamChunk` via an async generator; re-export at the package top level. |
| Tests | `packages/typescript/ai/tests/stream-to-response.test.ts`, `packages/typescript/ai-client/tests/connection-adapters.test.ts` | New tests for `toJSONResponse` (draining, headers, abort/error semantics) and `fetchJSON` (successful drain, non-2xx handling, non-array body error, fn/async resolution of url/options, body merging, custom fetch client, signal forwarding). |
| Docs & Nav | `docs/api/ai-client.md`, `docs/api/ai.md`, `docs/chat/connection-adapters.md`, `docs/chat/non-streaming-runtimes.md`, `docs/chat/streaming.md`, `docs/config.json` | Document `fetchJSON` and `toJSONResponse`, add a non-streaming runtimes guide and navigation entry, add examples and runtime guidance (non-incremental rendering trade-off). |
| Changeset | `.changeset/cjs-output-and-json-response.md` | Release note summarizing packaging changes and the new `toJSONResponse()` / `fetchJSON()` capabilities. |

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client
  participant ChatClient as ChatClient
  participant Server as Server
  participant StreamProducer as StreamProducer

  Client->>ChatClient: call fetchJSON(url, options)
  ChatClient->>Server: POST { messages, data }
  Server->>StreamProducer: start async chat stream
  StreamProducer-->>Server: yield StreamChunk...
  Server->>Server: toJSONResponse(stream) drains all chunks -> JSON array
  Server-->>ChatClient: HTTP 200 body: [StreamChunk, ...]
  ChatClient->>ChatClient: parse array, replay chunks into pipeline
  ChatClient->>Client: deliver reconstructed events (non-incremental)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 I nibbled bytes and gathered the stream,
Piled every chunk into one bright dream.
CJS and ESM now hop side by side,
JSON returns where streams can't ride.
Fetch, replay — a tidy, cozy stride.

🚥 Pre-merge checks: ✅ 5 passed
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately summarizes the two main changes: dual ESM+CJS builds and new toJSONResponse/fetchJSON utilities for non-streaming runtimes. |
| Description check | ✅ Passed | The PR description comprehensively covers the changes, motivation, and test plan. However, the author did not fill in the template's required checklist items and release impact section. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@pkg-pr-new

pkg-pr-new Bot commented Apr 20, 2026

Open in StackBlitz

@tanstack/ai

npm i https://pkg.pr.new/@tanstack/ai@478

@tanstack/ai-anthropic

npm i https://pkg.pr.new/@tanstack/ai-anthropic@478

@tanstack/ai-client

npm i https://pkg.pr.new/@tanstack/ai-client@478

@tanstack/ai-code-mode

npm i https://pkg.pr.new/@tanstack/ai-code-mode@478

@tanstack/ai-code-mode-skills

npm i https://pkg.pr.new/@tanstack/ai-code-mode-skills@478

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/@tanstack/ai-devtools-core@478

@tanstack/ai-elevenlabs

npm i https://pkg.pr.new/@tanstack/ai-elevenlabs@478

@tanstack/ai-event-client

npm i https://pkg.pr.new/@tanstack/ai-event-client@478

@tanstack/ai-fal

npm i https://pkg.pr.new/@tanstack/ai-fal@478

@tanstack/ai-gemini

npm i https://pkg.pr.new/@tanstack/ai-gemini@478

@tanstack/ai-grok

npm i https://pkg.pr.new/@tanstack/ai-grok@478

@tanstack/ai-groq

npm i https://pkg.pr.new/@tanstack/ai-groq@478

@tanstack/ai-isolate-cloudflare

npm i https://pkg.pr.new/@tanstack/ai-isolate-cloudflare@478

@tanstack/ai-isolate-node

npm i https://pkg.pr.new/@tanstack/ai-isolate-node@478

@tanstack/ai-isolate-quickjs

npm i https://pkg.pr.new/@tanstack/ai-isolate-quickjs@478

@tanstack/ai-ollama

npm i https://pkg.pr.new/@tanstack/ai-ollama@478

@tanstack/ai-openai

npm i https://pkg.pr.new/@tanstack/ai-openai@478

@tanstack/ai-openrouter

npm i https://pkg.pr.new/@tanstack/ai-openrouter@478

@tanstack/ai-preact

npm i https://pkg.pr.new/@tanstack/ai-preact@478

@tanstack/ai-react

npm i https://pkg.pr.new/@tanstack/ai-react@478

@tanstack/ai-react-ui

npm i https://pkg.pr.new/@tanstack/ai-react-ui@478

@tanstack/ai-solid

npm i https://pkg.pr.new/@tanstack/ai-solid@478

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/@tanstack/ai-solid-ui@478

@tanstack/ai-svelte

npm i https://pkg.pr.new/@tanstack/ai-svelte@478

@tanstack/ai-vue

npm i https://pkg.pr.new/@tanstack/ai-vue@478

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/@tanstack/ai-vue-ui@478

@tanstack/preact-ai-devtools

npm i https://pkg.pr.new/@tanstack/preact-ai-devtools@478

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/@tanstack/react-ai-devtools@478

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/@tanstack/solid-ai-devtools@478

commit: 9e42324

Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (3)
packages/typescript/ai-client/src/connection-adapters.ts (1)

495-497: Optional: honor abortSignal while replaying chunks.

Since the whole payload is already in memory, the loop ignores abortSignal during replay. If the consumer aborts late (e.g., user navigates away before chunks are drained by the pipeline), chunks will continue to flow. Consider a cheap check to bail out early:

♻️ Suggested tweak
       for (const chunk of payload) {
+        if (abortSignal?.aborted) break
         yield chunk as StreamChunk
       }

Not a blocker given the buffered/non-streaming nature of this adapter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-client/src/connection-adapters.ts` around lines 495 -
497, The replay loop that yields chunks from the in-memory payload currently
ignores abortSignal and continues pushing chunks even after cancellation; inside
the loop that iterates over payload and yields each item (the block yielding
chunk as StreamChunk), check the provided abortSignal (e.g.,
abortSignal?.aborted) on each iteration and bail out immediately (return/stop
iteration) when aborted so the generator stops producing further StreamChunk
values.
packages/typescript/ai-client/package.json (1)

21-35: Dual exports look correct; consider exposing ./package.json.

The nested import/require conditions with type-aware types keys are the recommended Node resolution shape, and main.cjs pairs correctly with "type": "module" (Node uses the extension to disambiguate). publint strict passing is a good signal.

Optional: add "./package.json": "./package.json" to exports so tools that probe the manifest (some bundlers, version resolvers) don't get blocked by the closed export map. Same applies to packages/typescript/ai/package.json and packages/typescript/ai-event-client/package.json.

Proposed addition
   "exports": {
     ".": {
       "import": {
         "types": "./dist/esm/index.d.ts",
         "default": "./dist/esm/index.js"
       },
       "require": {
         "types": "./dist/cjs/index.d.cts",
         "default": "./dist/cjs/index.cjs"
       }
-    }
+    },
+    "./package.json": "./package.json"
   },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-client/package.json` around lines 21 - 35, Add an
explicit export entry for the package manifest so consumers and tooling can read
it: update the package.json "exports" object to include the key "./package.json"
mapping to "./package.json" (mirror this change in
packages/typescript/ai/package.json and
packages/typescript/ai-event-client/package.json as well); locate the "exports"
block that currently defines "." with "import"/"require" and add the
"./package.json": "./package.json" mapping alongside those entries.
packages/typescript/ai/tests/stream-to-response.test.ts (1)

875-945: LGTM — good coverage for toJSONResponse.

Tests cover the four meaningful branches (defaults, custom init/headers, explicit Content-Type passthrough, and abort-on-upstream-error with rethrow). Nice use of toHaveBeenCalledOnce() to assert abort happens exactly once.

One optional addition worth considering: a test that asserts the controller is not aborted when the stream drains successfully, to lock in that behavior against regressions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/tests/stream-to-response.test.ts` around lines 875 -
945, Add a test to ensure the provided AbortController is NOT aborted when the
stream drains successfully: create an AbortController, spy on its abort method
(vi.spyOn(abortController, 'abort')), call toJSONResponse with
createMockStream([...successful chunks...]) and the abortController in options,
await the response.json() (or response completion), then assert abortSpy was not
called (toHaveBeenCalledTimes(0) / not.toHaveBeenCalled()). Reference
toJSONResponse, createMockStream, and AbortController/abort in the test so
behavior is covered alongside the existing abort-on-error test.
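
A sketch of that optional test (it assumes the file's existing `createMockStream` helper and that the options object accepts an `abortController`, as the PR summary describes; the chunk shape is illustrative):

```ts
import { expect, it, vi } from 'vitest'

// Locks in that a successful drain never aborts the supplied controller.
it('does not abort the controller when the stream drains successfully', async () => {
  const abortController = new AbortController()
  const abortSpy = vi.spyOn(abortController, 'abort')

  const response = await toJSONResponse(
    createMockStream([{ type: 'content', delta: 'hi' }]),
    { abortController },
  )
  await response.json() // fully consume the buffered body as well

  expect(abortSpy).not.toHaveBeenCalled()
})
```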

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d831b837-151e-456f-8d68-13b77d844f5a

📥 Commits

Reviewing files that changed from the base of the PR and between 1d6f3be and 1bbc932.

📒 Files selected for processing (12)
  • .changeset/cjs-output-and-json-response.md
  • packages/typescript/ai-client/package.json
  • packages/typescript/ai-client/src/connection-adapters.ts
  • packages/typescript/ai-client/src/index.ts
  • packages/typescript/ai-client/vite.config.ts
  • packages/typescript/ai-event-client/package.json
  • packages/typescript/ai-event-client/vite.config.ts
  • packages/typescript/ai/package.json
  • packages/typescript/ai/src/index.ts
  • packages/typescript/ai/src/stream-to-response.ts
  • packages/typescript/ai/tests/stream-to-response.test.ts
  • packages/typescript/ai/vite.config.ts

…rences

Serves three personas: Expo/RN builders hitting streaming-response crashes,
builders on other non-streaming runtimes (edge proxies, legacy serverless),
and evaluators checking whether TanStack AI supports RN/Expo.

- New journey page at docs/chat/non-streaming-runtimes.md titled 'React
  Native & Expo'. A → B: Expo API route crashing on streaming response →
  working chat via toJSONResponse + fetchJSON.
- Cross-linked from chat/streaming.md (callout near
  toServerSentEventsResponse) and chat/connection-adapters.md (new
  'JSON Array (non-streaming runtimes)' subsection).
- Added the new entries to the API references: toJSONResponse in
  docs/api/ai.md and fetchJSON in docs/api/ai-client.md, each pointing
  back to the walkthrough.
- Registered the new page in docs/config.json under 'Chat & Streaming',
  sequenced right after Connection Adapters.
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
docs/api/ai-client.md (1)

145-184: Consider varying sentence structure to improve readability.

Three connection adapter sections in succession begin with "Creates," making the documentation slightly repetitive. Consider varying the opening phrase for better flow.

✍️ Suggested rewording
 ### `fetchServerSentEvents(url, options?)`
 
-Creates an SSE connection adapter.
+Establishes an SSE connection adapter for server-sent events streaming.

or

 ### `fetchJSON(url, options?)`
 
-Creates a connection adapter for non-streaming runtimes — pair with [`toJSONResponse`](./ai#tojsonresponsestream-init) on the server.
+Provides a connection adapter for non-streaming runtimes — pair with [`toJSONResponse`](./ai#tojsonresponsestream-init) on the server.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/api/ai-client.md` around lines 145 - 184, The three adapter descriptions
(fetchServerSentEvents, fetchHttpStream, fetchJSON) all start with the same verb
"Creates," making the copy repetitive; update the lead sentence for one or two
of these functions to vary phrasing (e.g., "Opens an SSE connection adapter
for...", "Provides an HTTP stream adapter that...", or "Returns a JSON-based
adapter for non-streaming runtimes...") while keeping the technical details
intact (include options example and the note about POSTing { messages, data }
for fetchJSON and the trade-off about no incremental rendering), and ensure the
function names fetchServerSentEvents, fetchHttpStream, and fetchJSON remain
present so readers can locate the API.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c65a6675-dcd6-4072-9b42-d6a27f6233ca

📥 Commits

Reviewing files that changed from the base of the PR and between 1bbc932 and ee3f393.

📒 Files selected for processing (6)
  • docs/api/ai-client.md
  • docs/api/ai.md
  • docs/chat/connection-adapters.md
  • docs/chat/non-streaming-runtimes.md
  • docs/chat/streaming.md
  • docs/config.json
✅ Files skipped from review due to trivial changes (4)
  • docs/config.json
  • docs/chat/non-streaming-runtimes.md
  • docs/chat/streaming.md
  • docs/chat/connection-adapters.md

AlemTuzlak and others added 4 commits April 23, 2026 14:02
…on-response

# Conflicts:
#	packages/typescript/ai/package.json
…JSON

Address CR findings:

- toJSONResponse now checks `abortController.signal.aborted` on entry
  (throws the signal's reason without draining the upstream) and inside
  the drain loop (breaks early if aborted mid-stream), matching the
  semantics of toServerSentEventsStream and toHttpStream. Previously the
  signal was only consulted from the error-path catch handler, so a
  pre-aborted controller drained the full stream anyway and a mid-drain
  abort was silently ignored.
- Add two new tests covering pre-abort (infinite stream never pulled)
  and mid-drain abort (bounded pulls after abort fires).
- Add 8 fetchJSON tests covering happy path, non-2xx, non-array body
  with descriptive error, url-as-function, options-as-async-function,
  options.body merging, custom fetchClient override, and AbortSignal
  propagation — the adapter previously had zero direct test coverage.
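
For readers following along without the diff, a hedged reconstruction of the drain behaviour described above (a sketch, not the actual source; in particular the `init` shape carrying `abortController` is inferred from the PR summary):

```ts
// Sketch: drain an AsyncIterable into memory, then answer with one JSON Response.
export async function toJSONResponse(
  stream: AsyncIterable<unknown>,
  init?: ResponseInit & { abortController?: AbortController },
): Promise<Response> {
  const signal = init?.abortController?.signal
  if (signal?.aborted) throw signal.reason // pre-aborted: never pull the upstream

  const chunks: Array<unknown> = []
  try {
    for await (const chunk of stream) {
      if (signal?.aborted) break // mid-drain abort: stop pulling further chunks
      chunks.push(chunk)
    }
  } catch (error) {
    init?.abortController?.abort() // upstream error: abort the controller, then rethrow
    throw error
  }

  const headers = new Headers(init?.headers)
  if (!headers.has('Content-Type')) headers.set('Content-Type', 'application/json')
  return new Response(JSON.stringify(chunks), { ...init, headers })
}
```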
@tombeckenham tombeckenham self-requested a review May 7, 2026 08:16
Contributor

@tombeckenham tombeckenham left a comment


A couple of small changes would make this more solid

)
}
for (const chunk of payload) {
yield chunk as StreamChunk
Contributor


You never check for the abort signal in this yield loop despite adding it in toJSONResponse

)
}

const payload = (await response.json()) as unknown
Contributor


You need to wrap this in a try/catch, otherwise the user will get an annoying "Unexpected token < in JSON at position 0" error instead of what actually happened, e.g. a gateway error or something that returned HTML

Suggested change
const payload = (await response.json()) as unknown
let payload: unknown
try {
  payload = await response.json()
} catch (err) {
  const cause = err instanceof Error ? err.message : String(err)
  throw new Error(
    `fetchJSON: failed to parse response body as JSON from ${resolvedUrl} (status ${response.status}): ${cause}`,
    { cause: err },
  )
}

signal: abortSignal || resolvedOptions.signal,
})

if (!response.ok) {
Contributor


Servers usually put the actual diagnostic information in the body, e.g. an OpenAI/Anthropic upstream rate limit: {"error":{"type":"rate_limit_error","message":"Rate limit exceeded for...","retryAfter":42}}

Suggested change
if (!response.ok) {
if (!response.ok) {
  let bodySnippet = ''
  try {
    const text = await response.text()
    bodySnippet = text.length > 500 ? `${text.slice(0, 500)}…` : text
  } catch {
    // body unreadable, fall through with status only
  }
  throw new Error(
    `HTTP error! status: ${response.status} ${response.statusText}${
      bodySnippet ? ` — ${bodySnippet}` : ''
    }`,
  )
}

* const client = new ChatClient({ connection })
* ```
*/
export function fetchJSON(
Contributor


You should add an e2e test for the roundtrip toJSONResponse → fetchJSON

AlemTuzlak and others added 5 commits May 8, 2026 11:05
…ort, e2e

Address review feedback on the toJSONResponse / fetchJSON pair:

- fetchJSON now bails out of the chunk replay loop when abortSignal fires
  mid-replay so a late consumer abort stops the buffered payload from being
  pushed into an abandoned pipeline.
- Wrap response.json() in try/catch so non-JSON bodies (HTML gateway error
  pages, etc.) produce an actionable error citing URL, status, and cause
  instead of "Unexpected token < in JSON at position 0".
- On non-2xx responses, read the body (truncated to 500 chars) and include
  it in the thrown error so upstream diagnostic info (OpenAI/Anthropic
  rate-limit JSON, gateway snippets) survives.
- Add Playwright e2e spec covering the toJSONResponse → fetchJSON
  roundtrip via a new mode=json on the chat feature backed by /api/chat-json.
- Re-export fetchJSON from ai-react / ai-preact / ai-solid / ai-vue /
  ai-svelte for parity with fetchHttpStream / fetchServerSentEvents.
- Expose ./package.json in exports for ai, ai-client, ai-event-client so
  tooling that probes the manifest isn't blocked by a closed export map.
- Lock in toJSONResponse not aborting the controller on a successful drain.
Add a sibling /tanchat-json route demonstrating the non-streaming chat
transport for runtimes that can't emit ReadableStream responses (Expo's
@expo/server, certain edge proxies). The server route mirrors
api.tanchat.ts exactly except it returns toJSONResponse(stream) instead
of toServerSentEventsResponse(stream); the page uses fetchJSON instead
of fetchServerSentEvents and is otherwise a slim chat UI with a banner
calling out the trade-off (no incremental rendering — the UI sees every
chunk only after the request resolves).

A "JSON mode" link in the existing chat header makes the showcase
discoverable from the main demo.
The non-streaming-runtimes guide now serves two extra personas:

- Devs evaluating the JSON-array path before swapping their own app —
  add a contextual callout near the top pointing at the new
  /tanchat-json route in examples/ts-react-chat (also linked from the
  JSON Array section in connection-adapters).
- Devs debugging a failed JSON request — add a Troubleshooting section
  that surfaces the exact error strings fetchJSON now produces (HTML
  gateway pages, upstream rate-limit JSON, missing toJSONResponse on
  the server, indefinite buffering) so a Cmd-F search lands on the
  fix instead of a debugger trip.

No structural changes — meta.json / config.json untouched. Runs clean
through pnpm test:docs.