fix: correct prompt_cache_retention literal from "in-memory" to "in_memory"#2923

Open
carlosinfantes wants to merge 2 commits into openai:main from carlosinfantes:fix/prompt-cache-retention-literal
Conversation

@carlosinfantes
Summary

The SDK declares prompt_cache_retention as Literal["in-memory", "24h"] (hyphen), but the API rejects "in-memory" with a 400 and only accepts "in_memory" (underscore). As a result, passing the SDK-typed value as-is always fails.

Changes

Updated all occurrences of Literal["in-memory", "24h"] to Literal["in_memory", "24h"] across 7 files:

  • src/openai/types/responses/response_create_params.py
  • src/openai/types/responses/response.py
  • src/openai/types/responses/responses_client_event.py
  • src/openai/types/responses/responses_client_event_param.py
  • src/openai/types/chat/completion_create_params.py
  • src/openai/resources/responses/responses.py
  • src/openai/resources/chat/completions/completions.py

Fixes #2883
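
The effect of the change can be sketched as follows. The PromptCacheRetention alias is hypothetical (in the SDK the Literal appears inline in the files listed above); this only illustrates which values the corrected type admits:

```python
from typing import Literal, get_args

# Hypothetical alias mirroring the corrected annotation.
PromptCacheRetention = Literal["in_memory", "24h"]

# Only the underscore form survives the fix; the hyphenated value that
# the API rejects with a 400 is no longer part of the type.
allowed = get_args(PromptCacheRetention)
assert "in_memory" in allowed
assert "in-memory" not in allowed
```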

…emory"

The SDK declares prompt_cache_retention with Literal["in-memory", "24h"]
(hyphen), but the API rejects "in-memory" with a 400 and only accepts
"in_memory" (underscore).

Fixes openai#2883
@carlosinfantes carlosinfantes requested a review from a team as a code owner March 3, 2026 19:05
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 16b7359d5b

  prompt: Optional[ResponsePromptParam] | Omit = omit,
  prompt_cache_key: str | Omit = omit,
- prompt_cache_retention: Optional[Literal["in-memory", "24h"]] | Omit = omit,
+ prompt_cache_retention: Optional[Literal["in_memory", "24h"]] | Omit = omit,

P1: Regenerate tests for new prompt_cache_retention literal

After changing the method/type signatures to Literal["in_memory", "24h"], the generated tests still pass "in-memory", which now fails static typing (pyright -p pyproject.toml reports argument-type errors at tests/api_resources/chat/test_completions.py:76,210,519,653 and tests/api_resources/test_responses.py:55,142,464,551). This introduces a typecheck regression unless the test call sites are updated alongside the API surface change.
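
A minimal standalone sketch of why pyright flags the stale test call sites; the create function here is hypothetical and only mimics the narrowed parameter type, not the SDK's real signature:

```python
from typing import Literal, Optional

# Hypothetical stand-in for an SDK method whose parameter was narrowed
# to the corrected Literal.
def create(
    prompt_cache_retention: Optional[Literal["in_memory", "24h"]] = None,
) -> Optional[str]:
    return prompt_cache_retention

# Updated call site: type-checks and matches what the API accepts.
assert create(prompt_cache_retention="in_memory") == "in_memory"
# create(prompt_cache_retention="in-memory")  # pyright: argument-type error
```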

Update test call sites to match the corrected Literal type.

Fixes openai#2883
Development

Successfully merging this pull request may close these issues.

prompt_cache_retention type declares "in-memory" but API expects "in_memory"

1 participant