
fix: preserve reasoning_content for DeepSeek chat completions#3290

Open
lizzielizzielizzie wants to merge 2 commits into wavetermdev:main from
lizzielizzielizzie:copilot/fix-deepseek-reasoning-content

Conversation

@lizzielizzielizzie

@lizzielizzielizzie lizzielizzielizzie commented May 5, 2026

DeepSeek V4 enables thinking mode by default and requires that the reasoning_content field be passed back unchanged in assistant messages during multi-turn conversations. The openai-chat backend, however, does not support this. This PR adds that support, roughly following the pattern employed by the anthropic backend.

Fixes #3266

Flow

reasoning-start, delta, and end SSE events are captured, sent to the frontend, and included in subsequent API calls

When a stream chunk contains reasoning_content in its delta, we now:

  1. Emit reasoning-start / reasoning-delta / reasoning-end SSE events to the frontend
  2. Capture the full reasoning string on the stored assistant message
  3. Round-trip it in subsequent API calls via reasoning_content on the wire format

Non-reasoning providers are unaffected — the stream field is absent, the Go field defaults to "", and the omitempty JSON tag keeps it off the wire.

Automated Testing

  • tests covering JSON round-trip, omitempty behavior, stream chunk parsing, clean(), and partial message extraction

Manual Testing

  • I manually validated that the issue was resolved
  • I validated that the three other chat modes that come with Wave still work as expected
  • I validated that tool calls work
  • I did not validate openai-style custom chat providers other than DeepSeek
    • Sorry, I don't have an API key for this 😔
    • I believe the changes are well-guarded, but it seems worth calling out as an area for extra attention

Is there anything else I should cover?

Screenshots

Before screenshot, demonstrating issue reproduction
Screenshot 2026-05-04 003906
After screenshot, demonstrating issue resolution
Screenshot 2026-05-04 003935

@CLAassistant

CLAassistant commented May 5, 2026

CLA assistant check
All committers have signed the CLA.

@lizzielizzielizzie lizzielizzielizzie marked this pull request as draft May 5, 2026 01:54
@coderabbitai
Contributor

coderabbitai Bot commented May 5, 2026

Walkthrough

The diff adds separate tracking for streaming "reasoning" deltas: ChatRequestMessage and ContentDelta gain a ReasoningContent field (with JSON wire mapping). processChatStream accumulates reasoning, emits SSE lifecycle events (AiMsgReasoningStart/AiMsgReasoningDelta/AiMsgReasoningEnd), ensures reasoning is ended if necessary, and includes reasoning in the final assistant message. extractPartialTextMessage now accepts both text and reasoning and returns nil only when both are empty. Tests were added to validate JSON round-trips, streaming chunk parsing, clean() behavior, and partial-message extraction involving reasoning.
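The extractPartialTextMessage change described in the walkthrough can be sketched as follows; the PartialMessage type and the function signature are simplified stand-ins for the real message types, not the actual Wave code:

```go
package main

import "fmt"

// Simplified stand-in for the stored partial assistant message.
type PartialMessage struct {
	Text      string
	Reasoning string
}

// extractPartialTextMessage now accepts both accumulated text and
// accumulated reasoning, and returns nil only when both are empty —
// so a reasoning-only partial (e.g. a stream cut off mid-thought)
// is still preserved.
func extractPartialTextMessage(text, reasoning string) *PartialMessage {
	if text == "" && reasoning == "" {
		return nil
	}
	return &PartialMessage{Text: text, Reasoning: reasoning}
}

func main() {
	fmt.Println(extractPartialTextMessage("", "") == nil)
	fmt.Println(extractPartialTextMessage("", "thinking") != nil)
}
```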

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Title check — ✅ Passed: the title clearly and accurately summarizes the main change: adding support for preserving reasoning_content in DeepSeek chat completions.
  • Linked Issues check — ✅ Passed: the PR fully addresses issue #3266 by capturing reasoning-start/delta/end events, storing reasoning on assistant messages, and round-tripping reasoning_content in API calls.
  • Out of Scope Changes check — ✅ Passed: all changes are scoped to adding reasoning_content support in the openai-chat backend; no extraneous modifications were introduced outside the stated objectives.
  • Description check — ✅ Passed: the PR description clearly relates to the changeset, detailing how reasoning_content support was added to the openai-chat backend to fix DeepSeek V4 multi-turn conversations.




Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1


Inline comments:
In `@pkg/aiusechat/openaichat/openaichat-backend.go`:
- Around lines 264-269: the code currently waits until after the generation loop to call sseHandler.AiMsgReasoningEnd(msgID), so reasoning-end is emitted after AiMsgTextEnd. Instead, detect the reasoning→text transition inside the loop, call sseHandler.AiMsgReasoningEnd(msgID) immediately at that point, and set reasoningStarted = false so the post-loop block won't emit it again. Keep the existing post-loop branch to cover the reasoning-only case. Relevant identifiers: reasoningStarted, textStarted, sseHandler.AiMsgReasoningEnd(msgID), sseHandler.AiMsgTextEnd(textID).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: b1e3572d-e024-4956-9e24-e0184e214682

📥 Commits

Reviewing files that changed from the base of the PR and between 021db67 and 77d0cf1.

📒 Files selected for processing (3)
  • pkg/aiusechat/openaichat/openaichat-backend.go
  • pkg/aiusechat/openaichat/openaichat-backend_test.go
  • pkg/aiusechat/openaichat/openaichat-types.go

Comment thread pkg/aiusechat/openaichat/openaichat-backend.go Outdated
DeepSeek V4 requires reasoning_content to be passed back unchanged in
assistant messages on multi-turn conversations. Previously it was dropped
at every stage: not captured from the stream delta, not stored, and not
re-serialized in follow-up requests, causing HTTP 400 on the second API call.

Close AiMsgReasoningEnd inline on first content delta instead of
deferring to post-loop cleanup. This ensures SSE event order matches
the Anthropic backend: reasoning-start → delta×N → reasoning-end →
text-start → delta×M → text-end.

Fallback reasoning-end kept for the edge case where the stream ends
during reasoning with no content ever arriving (e.g. max_tokens).
@lizzielizzielizzie lizzielizzielizzie force-pushed the copilot/fix-deepseek-reasoning-content branch from 5289304 to d8fed0d on May 5, 2026 03:14
@lizzielizzielizzie lizzielizzielizzie marked this pull request as ready for review May 5, 2026 03:20

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug]: DeepSeek V4 fails with "reasoning_content must be passed back to the API" in multi-turn conversations

2 participants