[do not merge] feat: Span streaming & new span API #6201

Triggered via pull request March 2, 2026 09:34
Status Success
Total duration 3m 47s
Artifacts 2

test-integrations-flags.yml

on: pull_request
Matrix: Flags
All Flags tests passed (2s)

Annotations

9 errors and 30 warnings
API incompatibility: sentry_sdk.traces.start_span has different signature than legacy span functions: sentry_sdk/ai/utils.py#L542
When span streaming is enabled, `get_start_span_function()` returns `sentry_sdk.traces.start_span`, which only accepts `name` (positional), `attributes`, and `parent_span`. However, all callers (e.g., anthropic.py:408, mcp.py:440, google_genai:76, langchain.py:982) invoke the returned function with `op=...`, `name=...`, `origin=...` kwargs, which the new streaming API does not accept. This causes a `TypeError` at runtime whenever span streaming mode is enabled.
[T46-GMD] API incompatibility: sentry_sdk.traces.start_span has different signature than legacy span functions (additional location): sentry_sdk/integrations/anthropic.py#L610
[T46-GMD] API incompatibility: sentry_sdk.traces.start_span has different signature than legacy span functions (additional location): sentry_sdk/integrations/celery/__init__.py#L104
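A minimal sketch of the mismatch, using stand-in functions rather than SDK source (parameter lists follow the description above; the kwarg values are illustrative):
```python
# Stand-in for sentry_sdk.traces.start_span in streaming mode.
def start_span_streaming(name, attributes=None, parent_span=None):
    """Accepts only name/attributes/parent_span."""

# Stand-in for the legacy start_span, which accepts op/name/origin.
def start_span_legacy(op=None, name=None, origin=None, **kwargs):
    """Legacy call shape used by the integrations."""

def get_start_span_function(span_streaming_enabled):
    return start_span_streaming if span_streaming_enabled else start_span_legacy

start_span = get_start_span_function(span_streaming_enabled=True)
try:
    # Call shape used by the integration call sites listed above:
    start_span(op="gen_ai.chat", name="claude-3", origin="auto.ai.anthropic")
except TypeError as exc:
    print(exc)  # start_span_streaming() got an unexpected keyword argument 'op'
```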
UnboundLocalError: 'value' accessed in finally block when exception occurs: sentry_sdk/integrations/redis/_sync_common.py#L148
When `old_execute_command` raises an exception, the `value` variable at line 142 is never assigned. However, the `finally` block at lines 143-150 always executes and attempts to use `value` in `_set_cache_data(cache_span, self, cache_properties, value)` at line 148. This causes an `UnboundLocalError`, masking the original Redis exception and preventing proper error propagation to the caller.
[FM9-74D] UnboundLocalError: 'value' accessed in finally block when exception occurs (additional location): sentry_sdk/integrations/redis/_async_common.py#L135
[FM9-74D] UnboundLocalError: 'value' accessed in finally block when exception occurs (additional location): sentry_sdk/integrations/stdlib.py#L175
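A minimal reproduction of the pattern, with hypothetical helper names standing in for the integration code:
```python
def _set_cache_data(value):
    print("recording cache value:", value)

def _sentry_execute_command(old_execute_command):
    try:
        value = old_execute_command()  # raises -> 'value' is never bound
    finally:
        # Runs unconditionally; the UnboundLocalError raised here replaces
        # the original Redis exception on its way to the caller.
        _set_cache_data(value)
    return value

def old_execute_command():
    raise ConnectionError("redis down")

try:
    _sentry_execute_command(old_execute_command)
except UnboundLocalError as exc:
    print(exc)  # the ConnectionError is masked

# Fix sketch: initialize value = None before the try, or only write the
# cache data when the command actually produced a value.
```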
StreamedSpan created but never started in on_operation - span will not be sent: sentry_sdk/integrations/strawberry.py#L191
The `graphql_span` is created with `sentry_sdk.traces.start_span()`, but `start()` is never called and the span is not used as a context manager before the `yield`. When `end()` is called after the yield, it invokes `__exit__()`, which tries to access `self._context_manager_state`, an attribute that only `__enter__()` sets. The resulting `AttributeError` is silently swallowed by `capture_internal_exceptions()`, so `_end()` is never called and the span is never sent.
[PDP-ELU] StreamedSpan created but never started in on_operation - span will not be sent (additional location): sentry_sdk/integrations/strawberry.py#L243
[PDP-ELU] StreamedSpan created but never started in on_operation - span will not be sent (additional location): sentry_sdk/integrations/strawberry.py#L269
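A simplified stand-in showing the lifecycle bug; the `_context_manager_state` attribute name is taken from the description above:
```python
class StreamedSpan:
    def __enter__(self):
        self._context_manager_state = "previous scope span"  # set only here
        return self

    def __exit__(self, ty, value, tb):
        _ = self._context_manager_state  # AttributeError if __enter__ never ran
        self._end()

    def end(self):
        self.__exit__(None, None, None)

    def _end(self):
        print("span sent")

span = StreamedSpan()  # created via start_span(), but never started/entered
try:
    span.end()
except AttributeError as exc:
    # In the SDK this is swallowed by capture_internal_exceptions(),
    # so _end() never runs and the span is silently dropped.
    print(exc)
```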
Flags (3.9, ubuntu-22.04): ❌ Patch coverage check failed: 11.65% < target 80%
Flags (3.7, ubuntu-22.04): ❌ Patch coverage check failed: 11.65% < target 80%
Flags (3.7, ubuntu-22.04): Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.7, ubuntu-22.04): Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.10, ubuntu-22.04): ❌ Patch coverage check failed: 11.73% < target 80%
Flags (3.10, ubuntu-22.04): Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.10, ubuntu-22.04): Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.8, ubuntu-22.04): ❌ Patch coverage check failed: 11.65% < target 80%
Flags (3.8, ubuntu-22.04): Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.8, ubuntu-22.04): Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.12, ubuntu-22.04): ❌ Patch coverage check failed: 11.73% < target 80%
Flags (3.12, ubuntu-22.04): Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.12, ubuntu-22.04): Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.14t, ubuntu-22.04): ❌ Patch coverage check failed: 11.73% < target 80%
Flags (3.14t, ubuntu-22.04): Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.14t, ubuntu-22.04): Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.14, ubuntu-22.04): ❌ Patch coverage check failed: 11.73% < target 80%
Flags (3.14, ubuntu-22.04): Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.14, ubuntu-22.04): Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.13, ubuntu-22.04): ❌ Patch coverage check failed: 11.73% < target 80%
Flags (3.13, ubuntu-22.04): Failed to upload coverage artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Flags (3.13, ubuntu-22.04): Failed to upload test artifact: Failed to CreateArtifact: Received non-retryable error: Failed request: (409) Conflict: an artifact with this name already exists on the workflow run
Missing try/finally causes span leak when Redis command raises exception: sentry_sdk/integrations/redis/_async_common.py#L137
In `_sentry_execute_command`, the async version does not wrap the `await old_execute_command()` call in a try/finally block, unlike the sync version in `_sync_common.py`. If the Redis command raises an exception, `db_span.__exit__()` and `cache_span.__exit__()` will never be called, causing the spans to remain unclosed. This could lead to resource leaks and corrupted tracing state.
[U65-QZ7] Missing try/finally causes span leak when Redis command raises exception (additional location): sentry_sdk/integrations/redis/_sync_common.py#L148
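A sketch of the leak and of the try/finally shape the sync wrapper uses, with stand-in span objects:
```python
import asyncio

class FakeSpan:
    def __init__(self, name):
        self.name = name
    def __exit__(self, ty, value, tb):
        print(f"closed {self.name}")

async def leaky(old_execute_command, db_span, cache_span):
    value = await old_execute_command()  # if this raises, spans never close
    cache_span.__exit__(None, None, None)
    db_span.__exit__(None, None, None)
    return value

async def fixed(old_execute_command, db_span, cache_span):
    try:
        return await old_execute_command()
    finally:
        # Always close both spans, as the sync version in _sync_common.py does.
        cache_span.__exit__(None, None, None)
        db_span.__exit__(None, None, None)

async def failing_command():
    raise ConnectionError("redis down")

async def main():
    try:
        await fixed(failing_command, FakeSpan("db"), FakeSpan("cache"))
    except ConnectionError:
        pass  # both spans were closed before the exception propagated

asyncio.run(main())
```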
Scope corruption when real_putrequest raises exception in streaming mode: sentry_sdk/integrations/stdlib.py#L127
In the span streaming code path (lines 109-127), `span.start()` is called which sets the span as active on the scope and saves the old span in `_context_manager_state`. If `real_putrequest()` at line 148 raises an exception, `span.end()` in `getresponse` is never called, leaving the scope's `span` attribute pointing to an orphaned span and never restoring the previous span. This corrupts the scope state for subsequent operations in the same request/thread.
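A sketch of the scope corruption with hand-rolled scope and span stand-ins (attribute names are illustrative):
```python
class Scope:
    span = None

scope = Scope()

class Span:
    def start(self):
        self._old_span = scope.span  # analogous to _context_manager_state
        scope.span = self
    def end(self):
        scope.span = self._old_span  # restores the previous span

def real_putrequest():
    raise OSError("connection refused")

span = Span()
span.start()            # scope.span now points at this span
try:
    real_putrequest()   # raises, so the matching span.end() in
except OSError:         # getresponse() never runs ...
    pass
print(scope.span is span)  # True -- the orphaned span still owns the scope

# Defensive shape: end the span before re-raising.
#     span.start()
#     try:
#         real_putrequest()
#     except BaseException:
#         span.end()
#         raise
```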
dynamic_sampling_context() crashes when called on NoOpStreamedSpan: sentry_sdk/traces.py#L509
The `dynamic_sampling_context()` method in `StreamedSpan` (line 509-510) accesses `self.segment.get_baggage()` without null checking. Since `NoOpStreamedSpan` sets `self.segment = None` (line 666) and does not override `dynamic_sampling_context()`, calling this method on a `NoOpStreamedSpan` instance will raise `AttributeError: 'NoneType' object has no attribute 'get_baggage'`. While internal SDK code only calls this on sampled spans, user code could call this method on any span returned from `start_span()`, causing an unexpected crash.
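A simplified reconstruction of the crash, with a possible guard noted at the end:
```python
class Segment:
    def get_baggage(self):
        return {"sentry-trace_id": "..."}

class StreamedSpan:
    def __init__(self, segment):
        self.segment = segment
    def dynamic_sampling_context(self):
        return self.segment.get_baggage()  # no None check on self.segment

class NoOpStreamedSpan(StreamedSpan):
    def __init__(self):
        super().__init__(segment=None)  # segment is None; method not overridden

try:
    NoOpStreamedSpan().dynamic_sampling_context()
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'get_baggage'

# Guard sketch: return an empty context when there is no segment, e.g.
#     if self.segment is None:
#         return {}
```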
Dict rules with unrecognized keys in ignore_spans config silently ignore ALL spans: sentry_sdk/tracing_utils.py#L1498
When `ignore_spans` contains a dict with only unrecognized keys (e.g., a typo like `{"nme": "/health"}` instead of `{"name": "/health"}`), both `name_matches` and `attributes_match` default to `True`, causing the rule to match ALL spans. This could silently drop all trace data due to a simple configuration mistake.
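A simplified matcher illustrating the pitfall (not SDK source; the key names follow the description above):
```python
def rule_matches(rule, span_name, span_attributes):
    name_matches = True        # both checks default to True, so a rule
    attributes_match = True    # with no recognized keys matches everything
    if "name" in rule:
        name_matches = rule["name"] == span_name
    if "attributes" in rule:
        attributes_match = all(
            span_attributes.get(key) == value
            for key, value in rule["attributes"].items()
        )
    return name_matches and attributes_match

# The typo'd key is never inspected, so every span matches and gets dropped:
print(rule_matches({"nme": "/health"}, "POST /checkout", {}))  # True
# A stricter matcher would reject (or at least warn about) unknown keys.
```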
Exception status not recorded for StreamedSpan when GraphQL operation fails: sentry_sdk/integrations/graphene.py#L168
When an exception occurs during a GraphQL operation, the `finally` block calls `_graphql_span.end()` which internally invokes `__exit__(None, None, None)`. The `StreamedSpan.__exit__` method checks `if value is not None and should_be_treated_as_error(ty, value)` to set the span status to ERROR, but since `end()` always passes `None` for the exception info, the span status will remain OK even when the operation fails with an exception. The legacy code path (`finish()`) has the same limitation, but this is a missed opportunity to fix it for the new API.
[NE3-W6T] Exception status not recorded for StreamedSpan when GraphQL operation fails (additional location): sentry_sdk/integrations/celery/__init__.py#L104
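A simplified span showing why `end()` cannot record the active exception:
```python
def should_be_treated_as_error(ty, value):
    return value is not None  # simplified predicate

class StreamedSpan:
    status = "ok"
    def __exit__(self, ty, value, tb):
        if value is not None and should_be_treated_as_error(ty, value):
            self.status = "error"
    def end(self):
        self.__exit__(None, None, None)  # exception info is always discarded

span = StreamedSpan()
try:
    try:
        raise RuntimeError("resolver failed")
    finally:
        span.end()  # called while an exception is active, but passes None
except RuntimeError:
    pass
print(span.status)  # "ok" -- the failure was never recorded

# Possible fix: have end() forward sys.exc_info() to __exit__ when an
# exception is currently being handled.
```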
Missing test coverage for span streaming mode in httpx integration: sentry_sdk/integrations/httpx.py#L64
The httpx integration has been updated to support the new span streaming mode (`trace_lifecycle: 'stream'`), but the test file `tests/integrations/httpx/test_httpx.py` only tests the legacy mode using `start_transaction()`. There are no tests verifying that the httpx integration works correctly with `StreamedSpan` and the new streaming API.
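A sketch of the kind of test the annotation asks for. The `sentry_init` and `capture_envelopes` fixture names follow the existing test suite; the exact location of the `trace_lifecycle` option is an assumption based on the annotation text:
```python
import httpx
import sentry_sdk

def test_crumb_capture_with_span_streaming(sentry_init, capture_envelopes):
    sentry_init(
        traces_sample_rate=1.0,
        _experiments={"trace_lifecycle": "stream"},  # assumed option path
    )
    envelopes = capture_envelopes()

    # Serve the request in-process so the test needs no network.
    transport = httpx.MockTransport(lambda request: httpx.Response(200))
    client = httpx.Client(transport=transport)

    with sentry_sdk.traces.start_span("httpx-streaming-test"):
        client.get("https://example.com/")

    # The outgoing request should have produced streamed span envelopes.
    assert envelopes, "expected the httpx request to stream at least one span"
```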

Artifacts

Produced during runtime
codecov-coverage-results-feat-span-first-test-flags (109 KB) sha256:2cb8b7e9f4e50662983d0d2e237efd02474eda1e68f5d67d008241689f02ff2c
codecov-test-results-feat-span-first-test-flags (229 Bytes) sha256:6563c01f66cb0dfe50ab1c33e9db5f874ce3d18e1a7ac4087c0ddff83cf2d0b6