Use Anthropic's native structured output API instead of the tool-use workaround
Upgrades `@anthropic-ai/sdk` from ^0.71.2 to ^0.74.0 and migrates structured output to use the GA `output_config.format` with `json_schema` type. Previously, structured output was emulated by forcing a tool call and extracting the input — this now uses Anthropic's first-class structured output support for more reliable schema-constrained responses.
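The shape of the change can be sketched as request parameters. This is illustrative only: field names follow the PR description (`output_config.format` with a `json_schema` type), and the SDK's actual request types may differ, so treat these literals as a sketch rather than the SDK's API.

```typescript
// A minimal JSON schema for the structured response we want.
const schema = {
  type: "object",
  properties: { city: { type: "string" }, population: { type: "number" } },
  required: ["city", "population"],
} as const;

// Before: structured output emulated by forcing a tool call and reading the
// tool's `input` as the "structured" result. (Tool name is hypothetical.)
const toolWorkaroundParams = {
  model: "claude-sonnet-4-5",
  max_tokens: 1024,
  tools: [
    { name: "emit_json", description: "Return the answer as JSON", input_schema: schema },
  ],
  tool_choice: { type: "tool" as const, name: "emit_json" },
  messages: [{ role: "user" as const, content: "Largest city in France?" }],
};

// After: first-class structured output via `output_config.format`, as
// described in this PR. No forced tool call; the model's text output is
// constrained to the schema directly.
const nativeParams = {
  model: "claude-sonnet-4-5",
  max_tokens: 1024,
  output_config: { format: { type: "json_schema" as const, schema } },
  messages: [{ role: "user" as const, content: "Largest city in France?" }],
};
```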
Also migrates streaming and tool types from `client.beta.messages` to the stable `client.messages` API, replacing beta type imports (`BetaToolChoiceAuto`, `BetaToolBash20241022`, `BetaRawMessageStreamEvent`, etc.) with their GA equivalents.
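As a sketch of the type migration (not the PR's actual diff), the beta imports move to the stable namespace by dropping the `Beta` prefix; the exact exported names and subpaths should be verified against the installed SDK version:

```
// Before (beta namespace):
import type {
  BetaToolChoiceAuto,
  BetaRawMessageStreamEvent,
} from "@anthropic-ai/sdk/resources/beta/messages";

// After (GA namespace):
import type {
  ToolChoiceAuto,
  RawMessageStreamEvent,
} from "@anthropic-ai/sdk/resources/messages";
```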
**No breaking changes to the public API**: this is an internal implementation change. Users who pass `providerOptions` with tool-choice types should note that these types are now imported from `@anthropic-ai/sdk/resources/messages` rather than the beta namespace.
```diff
    * Custom text sequences that will cause the model to stop generating.
-
-Anthropic models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn".
-
-If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence.
+    *
+    * Anthropic models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn".
+    *
+    * If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence.
    */
   stop_sequences?: Array<string>
 }
 
 export interface AnthropicThinkingOptions {
   /**
    * Configuration for enabling Claude's extended thinking.
-
-When enabled, responses include thinking content blocks showing Claude's thinking process before the final answer. Requires a minimum budget of 1,024 tokens and counts towards your max_tokens limit.
+    *
+    * When enabled, responses include thinking content blocks showing Claude's thinking process before the final answer. Requires a minimum budget of 1,024 tokens and counts towards your max_tokens limit.
    */
   thinking?:
     | {
         /**
          * Determines how many tokens Claude can use for its internal reasoning process. Larger budgets can enable more thorough analysis for complex problems, improving response quality.
-
-Must be ≥1024 and less than max_tokens
+          *
+          * Must be ≥1024 and less than max_tokens
          */
         budget_tokens: number
@@ -93,17 +91,17 @@ Must be ≥1024 and less than max_tokens
-In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both.
+    *
+    * In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both.
```
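The nucleus-sampling cutoff that the top_p doc comment describes can be sketched in TypeScript. This is illustrative only and not part of the PR; it just shows the "sort by decreasing probability, accumulate until top_p" behavior the comment documents.

```typescript
// Keep the smallest set of tokens whose cumulative probability reaches topP,
// considering tokens in decreasing probability order (nucleus sampling cutoff).
function topPFilter(probs: Record<string, number>, topP: number): string[] {
  const sorted = Object.entries(probs).sort((a, b) => b[1] - a[1]);
  const kept: string[] = [];
  let cumulative = 0;
  for (const [token, p] of sorted) {
    kept.push(token);
    cumulative += p;
    if (cumulative >= topP) break; // cut off once we reach top_p
  }
  return kept;
}

// With top_p = 0.9: "a" and "b" accumulate 0.8, and "c" pushes past 0.9,
// so "d" is cut off.
const kept = topPFilter({ a: 0.5, b: 0.3, c: 0.15, d: 0.05 }, 0.9);
```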