Merged
Changes from all commits
32 commits
All commits are by alexander-alderman-webb.

5cee69e  ref(openai): Separate output handling (Feb 26, 2026)
d94f953  . (Feb 26, 2026)
4ab8e74  ref(openai): Extract input in API-specific functions (Feb 26, 2026)
fb12a44  . (Feb 26, 2026)
2aa4b25  Merge branch 'webb/openai/separate-output-handling-by-api' into webb/… (Feb 26, 2026)
4f84f98  . (Feb 26, 2026)
9d89b90  ref(openai): Only handle streamed results when applicable (Feb 26, 2026)
3fe3499  . (Feb 26, 2026)
ece696a  fix(openai): Attach response model with streamed Responses API (Feb 26, 2026)
6c5f254  . (Feb 26, 2026)
9683ef6  Merge branch 'webb/openai/streaming-response' into webb/openai/add-re… (Feb 26, 2026)
5a36e3d  . (Feb 26, 2026)
0ce3879  Merge branch 'webb/openai/streaming-response' into webb/openai/add-re… (Feb 26, 2026)
731754f  add assertion (Feb 26, 2026)
f1fdaf9  . (Feb 26, 2026)
f93c208  Merge branch 'webb/openai/streaming-response' into webb/openai/add-re… (Feb 26, 2026)
63a9c71  . (Feb 26, 2026)
f7c5356  . (Feb 26, 2026)
1cdc590  Merge branch 'webb/openai/streaming-response' into webb/openai/add-re… (Feb 26, 2026)
edf0a4f  remove unused parameter (Feb 26, 2026)
0bd6503  Merge branch 'webb/openai/streaming-response' into webb/openai/add-re… (Feb 26, 2026)
89baf7d  fix(openai): Attach response model with streamed Completions API (Feb 26, 2026)
85e1791  . (Feb 26, 2026)
071d1f0  Merge branch 'webb/openai/streaming-response' into webb/openai/add-re… (Feb 26, 2026)
f12cf5d  Merge branch 'webb/openai/add-response-model' into webb/openai/add-re… (Feb 26, 2026)
3fb20f0  remove manual token counting for completions (Feb 27, 2026)
4dade58  Merge branch 'webb/openai/streaming-response' into webb/openai/add-re… (Feb 27, 2026)
b07d1df  Merge branch 'webb/openai/add-response-model' into webb/openai/add-re… (Feb 27, 2026)
82843e2  add unconditional manual token counting for completions (Feb 27, 2026)
ed9596e  Merge branch 'webb/openai/streaming-response' into webb/openai/add-re… (Feb 27, 2026)
e7c5c5a  Merge branch 'webb/openai/add-response-model' into webb/openai/add-re… (Feb 27, 2026)
2ed1a94  merge master (Mar 2, 2026)
4 changes: 4 additions & 0 deletions sentry_sdk/integrations/openai.py
@@ -612,6 +612,8 @@
def new_iterator() -> "Iterator[ChatCompletionChunk]":
nonlocal ttft
for x in old_iterator:
+span.set_data(SPANDATA.GEN_AI_RESPONSE_MODEL, x.model)
+
Check warning on line 615 in sentry_sdk/integrations/openai.py (GitHub Actions / warden: code-review)

Access to x.model outside capture_internal_exceptions block can cause runtime errors

The new `span.set_data(SPANDATA.GEN_AI_RESPONSE_MODEL, x.model)` call is placed outside the `capture_internal_exceptions()` block, unlike similar operations in this same function (e.g., accessing `x.choices`). If a streaming chunk lacks the `model` attribute, an `AttributeError` will be raised and propagate to the user's code, breaking the stream iteration. The existing pattern at lines 757-758 (in the Responses API streaming function) places this same operation inside the exception handler.

Check warning on line 615 in sentry_sdk/integrations/openai.py (GitHub Actions / warden: find-bugs)

Unprotected attribute access to x.model may raise exception and break iteration

The new code `span.set_data(SPANDATA.GEN_AI_RESPONSE_MODEL, x.model)` directly accesses `x.model` without checking if the attribute exists, and it's placed OUTSIDE the `capture_internal_exceptions()` block. If `x.model` is missing or raises an exception (e.g., with API changes or non-standard responses), this will propagate to user code and break their stream iteration. This is inconsistent with the existing pattern at line 472, which uses `hasattr(response, "model")` before access, and with the Responses API handler at line 758, which wraps similar access inside `capture_internal_exceptions()`.
Unguarded streamed model access can raise

Medium Severity

x.model is read directly in the streaming iterators before capture_internal_exceptions(). If a streamed chunk does not expose model (or uses a dict-like/event variant), this raises and interrupts iteration, making instrumentation break the caller’s stream instead of failing safely.

Additional Locations (1)
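The guarded pattern all three comments point to can be sketched in isolation. The stub context manager below stands in for sentry_sdk's `capture_internal_exceptions()` (which swallows errors raised by instrumentation so they never reach user code); `Chunk`, `recorded`, and `instrumented` are hypothetical stand-ins for a streamed chunk, the span data, and the wrapped iterator, not the SDK's actual names.

```python
from contextlib import contextmanager

@contextmanager
def capture_internal_exceptions_stub():
    # Stand-in for sentry_sdk's capture_internal_exceptions():
    # exceptions raised by instrumentation are swallowed so they
    # cannot interrupt the caller's stream iteration.
    try:
        yield
    except Exception:
        pass

class Chunk:
    # A streamed chunk with no `model` attribute, like the
    # dict-like/event variants the review warns about.
    choices = []

recorded = {}  # stand-in for span.set_data(...)

def instrumented(chunks):
    for x in chunks:
        # Unguarded (the flagged pattern), `recorded[...] = x.model`
        # right here would raise AttributeError and break the stream.
        # Guarded, combining the hasattr check and the
        # capture_internal_exceptions() wrapper the reviewers cite:
        with capture_internal_exceptions_stub():
            if hasattr(x, "model"):
                recorded["gen_ai.response.model"] = x.model
        yield x

# Iteration completes even though Chunk has no `model` attribute.
chunks_seen = list(instrumented([Chunk(), Chunk()]))
```

With a well-formed chunk that does carry `model`, the same loop records it; with a malformed one, the instrumentation degrades silently instead of breaking the user's stream.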



with capture_internal_exceptions():
if hasattr(x, "choices"):
choice_index = 0
@@ -654,6 +656,8 @@
async def new_iterator_async() -> "AsyncIterator[ChatCompletionChunk]":
nonlocal ttft
async for x in old_iterator:
+span.set_data(SPANDATA.GEN_AI_RESPONSE_MODEL, x.model)
+
with capture_internal_exceptions():
if hasattr(x, "choices"):
choice_index = 0
8 changes: 8 additions & 0 deletions tests/integrations/openai/test_openai.py
@@ -511,6 +511,8 @@ def test_streaming_chat_completion_no_prompts(
span = tx["spans"][0]
assert span["op"] == "gen_ai.chat"

+assert span["data"][SPANDATA.GEN_AI_RESPONSE_MODEL] == "model-id"
+
assert SPANDATA.GEN_AI_SYSTEM_INSTRUCTIONS not in span["data"]
assert SPANDATA.GEN_AI_REQUEST_MESSAGES not in span["data"]
assert SPANDATA.GEN_AI_RESPONSE_TEXT not in span["data"]
@@ -656,6 +658,8 @@ def test_streaming_chat_completion(sentry_init, capture_events, messages, request
},
]

+assert span["data"][SPANDATA.GEN_AI_RESPONSE_MODEL] == "model-id"
+
assert "hello" in span["data"][SPANDATA.GEN_AI_REQUEST_MESSAGES]
assert "hello world" in span["data"][SPANDATA.GEN_AI_RESPONSE_TEXT]

@@ -762,6 +766,8 @@ async def test_streaming_chat_completion_async_no_prompts(
span = tx["spans"][0]
assert span["op"] == "gen_ai.chat"

+assert span["data"][SPANDATA.GEN_AI_RESPONSE_MODEL] == "model-id"
+
assert SPANDATA.GEN_AI_SYSTEM_INSTRUCTIONS not in span["data"]
assert SPANDATA.GEN_AI_REQUEST_MESSAGES not in span["data"]
assert SPANDATA.GEN_AI_RESPONSE_TEXT not in span["data"]
@@ -897,6 +903,8 @@ async def test_streaming_chat_completion_async(
span = tx["spans"][0]
assert span["op"] == "gen_ai.chat"

+assert span["data"][SPANDATA.GEN_AI_RESPONSE_MODEL] == "model-id"
+
param_id = request.node.callspec.id
if "blocks" in param_id:
assert json.loads(span["data"][SPANDATA.GEN_AI_SYSTEM_INSTRUCTIONS]) == [