Commit 6b1b06f

injection guide comments
1 parent df0e534 commit 6b1b06f

1 file changed: +7 -3 lines changed


lib/messages/inject.ts

Lines changed: 7 additions & 3 deletions
@@ -193,14 +193,18 @@ export const insertPruneToolContext = (
     (msg) => !(msg.info.role === "user" && isIgnoredUserMessage(msg)),
   )

-  // It's not safe to inject assistant role messages following a user message, as models such
+  // It's not safe to inject assistant role messages following a user message as models such
   // as Claude expect the assistant "turn" to start with reasoning parts. Reasoning parts in many
   // cases also cannot be faked, as they may be encrypted by the model.
+  // Gemini only accepts synthetic reasoning text if it is "skip_thought_signature_validator".
   if (!lastNonIgnoredMessage || lastNonIgnoredMessage.info.role === "user") {
     messages.push(createSyntheticUserMessage(lastUserMessage, combinedContent, variant))
   } else {
-    // For DeepSeek and Kimi, append the tool part to the existing message; it seems they only
-    // allow assistant messages without reasoning parts when those messages contain only tool parts.
+    // For DeepSeek and Kimi, append the tool part to the last assistant message. They do not
+    // emit reasoning parts after an assistant injection containing either just a text part,
+    // or a text part with synthetic reasoning, and there is no documentation I can find on
+    // how their reasoning encryption works. The only injection approaches that appear to work
+    // for them are a user role message, or a tool part appended to the last assistant message.
     const providerID = userInfo.model?.providerID || ""
     const modelID = userInfo.model?.modelID || ""
     if (isDeepSeekOrKimi(providerID, modelID)) {
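The injection policy these comments describe amounts to a small per-provider dispatch. The sketch below illustrates it in isolation: `isDeepSeekOrKimi` mirrors the identifier from the diff (its body here is a guess), while `Role`, `InjectionStrategy`, and `chooseInjectionStrategy` are hypothetical names invented for this example, not part of the real module.

```typescript
// Hypothetical sketch of the injection policy in the diff comments.
// Only isDeepSeekOrKimi echoes a real identifier; everything else is assumed.

type Role = "user" | "assistant";

type InjectionStrategy =
  | "synthetic-user-message" // safe default when the last turn is a user turn
  | "append-tool-part"       // DeepSeek/Kimi: tool part on the last assistant message
  | "synthetic-user-fallback"; // other providers after an assistant turn

// Assumed implementation of the helper named in the snippet.
function isDeepSeekOrKimi(providerID: string, modelID: string): boolean {
  const id = `${providerID}/${modelID}`.toLowerCase();
  return id.includes("deepseek") || id.includes("kimi");
}

function chooseInjectionStrategy(
  lastRole: Role | undefined,
  providerID: string,
  modelID: string,
): InjectionStrategy {
  // Injecting an assistant message right after a user message is unsafe:
  // models like Claude expect the assistant turn to open with reasoning
  // parts, which may be encrypted and so cannot be faked.
  if (!lastRole || lastRole === "user") return "synthetic-user-message";
  // DeepSeek/Kimi stop emitting reasoning if an injected assistant message
  // carries a text part (even with synthetic reasoning), so append a tool
  // part to the last assistant message instead.
  if (isDeepSeekOrKimi(providerID, modelID)) return "append-tool-part";
  return "synthetic-user-fallback";
}

console.log(chooseInjectionStrategy("assistant", "moonshot", "kimi-k2"));
// → "append-tool-part"
```

The point of routing on the last non-ignored role first is that the user-turn case is provider-independent, so the provider-specific quirks only matter once an assistant turn is already open.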
