
fix: auto-normalize array of strings in message content to openai spec#2

Open
lkapadiya-DO wants to merge 2 commits into litellm_internal_staging from lkapadiya/normalize-content-arrays

Conversation

@lkapadiya-DO
Collaborator

Relevant issues

Fixes an observed issue where sending an array of strings as message content (e.g., "content": ["what is the capital of France?"]) raised an internal AttributeError: 'str' object has no attribute 'get'. This previously bubbled up as a 500 Internal Server Error that confusingly attached the extracted llm_provider to the crash response.

Linear ticket

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/test_litellm/ directory; adding at least 1 test is a hard requirement (see details)
  • My PR passes all unit tests via make test-unit
  • My PR's scope is as isolated as possible; it solves only one specific problem
  • I have requested a Greptile review by commenting @greptileai and received a Confidence Score of at least 4/5 before requesting a maintainer review

Delays in PR merge?

If you're seeing a delay in your PR being merged, ping the LiteLLM Team on Slack (#pr-review).

CI (LiteLLM team)

CI status guideline:

  • 50-55 passing tests: main is stable with minor issues.
  • 45-49 passing tests: acceptable but needs attention.
  • <= 40 passing tests: unstable; be careful with your merges and assess the risk.
  • Branch creation CI run
    Link:

  • CI run for the last commit
    Link:

  • Merge / cherry-pick CI run
    Links:

Screenshots / Proof of Fix

Before:
Sending an array of strings (e.g., "content": ["what is the capital of France?"]) resulted in an internal crash that surfaced a 500 error containing internal proxy metadata:

{
  "error": {
    "message": "'str' object has no attribute 'get'",
    "type": "server_error",
    "param": null,
    "code": null,
    "llm_provider": "openai"
  }
}

After:
The array of strings is successfully caught and normalized into the strict OpenAI multi-modal dictionary spec before downstream processing, resulting in a successful 200 response:
[{"type": "text", "text": "what is the capital of France?"}]

(Tests pass successfully locally.)
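The normalization described above can be sketched roughly as follows. This is an illustrative stand-in, not the PR's actual diff: the real change lives in validate_and_fix_openai_messages in litellm/utils.py, and the helper name and shape here are assumptions for demonstration.

```python
def normalize_content(content):
    """Convert string items in a content array into the OpenAI
    multi-modal dict spec, leaving existing dict parts untouched.

    Hypothetical helper illustrating the PR's normalization; the
    actual fix is inside validate_and_fix_openai_messages.
    """
    if isinstance(content, list):
        return [
            {"type": "text", "text": part} if isinstance(part, str) else part
            for part in content
        ]
    # Plain-string or other content shapes pass through unchanged.
    return content
```

With this in place, `["what is the capital of France?"]` becomes `[{"type": "text", "text": "what is the capital of France?"}]` before downstream processing, so provider code that calls `.get()` on each content part no longer crashes.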

Type

🐛 Bug Fix

Changes

  • Updated validate_and_fix_openai_messages in litellm/utils.py to auto-normalize array-of-strings content into standard OpenAI multi-modal dictionaries.
  • Added a pytest function to ensure strings within arrays are properly converted while existing dictionaries are preserved.

