[do not merge] feat: Span streaming & new span API #5317 (Closed)

Commit: filter out celery heartbeat (5f33d30)

GitHub Actions / warden: find-bugs completed Feb 26, 2026 in 39m 13s

find-bugs: Found 6 issues (1 high, 2 medium, 3 low)

High

StreamedSpan never started/activated in on_operation, on_validate, and on_parse methods - `sentry_sdk/integrations/strawberry.py:191-208`

When span streaming is enabled, `sentry_sdk.traces.start_span()` creates a `StreamedSpan` but does not automatically activate it: the span must be explicitly started via `.start()` or used as a context manager (`with span:`). In `on_operation()`, `on_validate()`, and `on_parse()`, the created `StreamedSpan` objects are never started or entered, so (1) they are never set as the current scope's span, (2) sampling decisions are never made for segment spans, and (3) child spans do not properly inherit from their parents. Compare with `graphene.py` line 158, which correctly calls `_graphql_span.start()` after creating the span.
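
As a rough illustration, this is the lifecycle the streaming API appears to require, based on the description above; the op/name values and surrounding control flow are simplified stand-ins, not the actual strawberry extension code:

```python
import sentry_sdk.traces

# Assumed lifecycle of the new streaming API, per the issue text above;
# the op/name values are illustrative.
span = sentry_sdk.traces.start_span(op="graphql.operation", name="query")

# Creating the StreamedSpan is not enough -- it must be activated so it
# becomes the scope's current span and a sampling decision is made:
span.start()

# ... the GraphQL operation runs; child spans now nest correctly ...

span.end()  # paired with start() in the hook's "end" phase

# Where control flow allows it, the context-manager form does both:
with sentry_sdk.traces.start_span(op="graphql.validate") as validate_span:
    ...  # __enter__/__exit__ handle start/end automatically
```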

Medium

StreamedSpan instances with INTERNAL_ERROR status are never cleaned up - `sentry_sdk/integrations/anthropic.py:572-574`

The code checks `isinstance(span, Span)` before calling `span.__exit__()` for error cleanup. However, when streaming mode is enabled (`_experiments={"trace_lifecycle": "stream"}`), `get_start_span_function()` returns a function that creates `StreamedSpan` instances instead of `Span`. Since `StreamedSpan` is not a subclass of `Span`, the check is always `False` for streamed spans, and the error cleanup is silently skipped. As a result, spans with `INTERNAL_ERROR` status may never be properly closed in streaming mode (see the sketch after the locations below).

Also found at:

  • sentry_sdk/integrations/anthropic.py:610-612
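
A minimal sketch of the kind of guard change this implies; the helper name is hypothetical, and `Span`/`StreamedSpan` stand in for the real sentry_sdk classes:

```python
from typing import Any

def _close_span_on_error(span: Any) -> None:
    # Hypothetical helper: duck-type on the context-manager protocol
    # instead of `isinstance(span, Span)`, which is False for
    # StreamedSpan and silently skips the cleanup described above.
    if hasattr(span, "__exit__"):
        span.__exit__(None, None, None)

# An isinstance check would also work if both types are named explicitly:
#     if isinstance(span, (Span, StreamedSpan)): ...
```
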
Spans not closed on exception in async Redis execute_command - `sentry_sdk/integrations/redis/_async_common.py:135-137`

The `_sentry_execute_command` function in the async Redis client uses manual `__enter__()` and `__exit__()` calls for spans but, unlike the sync version in `_sync_common.py`, lacks a `try/finally` block around `await old_execute_command()`. If the Redis command raises an exception, `db_span.__exit__()` and `cache_span.__exit__()` are never called, leaving the spans open; this leaks spans and produces incorrect tracing data. The sync version correctly uses `try/finally` to ensure the spans are always closed. A sketch of the fix follows the locations below.

Also found at:

  • sentry_sdk/integrations/redis/_sync_common.py:143-150
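
A sketch of the fix, mirroring the sync client's structure as described above; `old_execute_command` and the span setup are simplified stand-ins for the real integration code:

```python
import sentry_sdk

async def _sentry_execute_command(self, name, *args, **kwargs):
    # Simplified stand-in for the integration's span setup.
    db_span = sentry_sdk.start_span(op="db.redis", name=name)
    db_span.__enter__()
    cache_span = None  # only created for cache-like commands in the real code
    try:
        return await old_execute_command(self, name, *args, **kwargs)
    finally:
        # Runs on success and on exception alike, so neither span leaks.
        if cache_span is not None:
            cache_span.__exit__(None, None, None)
        db_span.__exit__(None, None, None)
```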

Low

Unused json import left in file - `sentry_sdk/_span_batcher.py:1`

The `json` module is imported on line 1 but never used in the file: the `_estimate_size` method uses `str(span_dict)` instead of `json.dumps()` to approximate the serialized size. This is likely leftover from development and won't cause runtime issues, but it points to an incomplete implementation or dead code (see the sketch after the locations below).

Also found at:

  • sentry_sdk/tracing_utils.py:1058
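
The mismatch is mostly cosmetic, but `str()` produces Python repr rather than JSON, so the estimate can drift from the real payload size (JSON escapes non-ASCII by default, for example). A quick illustration with a hypothetical span dict:

```python
import json

span_dict = {"description": "café", "sampled": True, "parent_span_id": None}

print(str(span_dict))         # {'description': 'café', 'sampled': True, ...}
print(json.dumps(span_dict))  # {"description": "caf\u00e9", "sampled": true, ...}

# The two byte counts diverge, so str()-based estimates are approximate.
print(len(str(span_dict)), len(json.dumps(span_dict)))
```
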
Inconsistent NoOpStreamedSpan creation missing scope parameter - `sentry_sdk/scope.py:1273`

At line 1273, `NoOpStreamedSpan()` is returned without the `scope=self` parameter, unlike the instances created at lines 1237 and 1255, which both pass `scope=self`. The resulting `NoOpStreamedSpan` does not participate in scope management (setting and restoring the active span on context entry/exit), so this code path behaves inconsistently with the other two.
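
A hypothetical reconstruction of why the parameter matters, assuming the scope-management behavior the issue describes:

```python
class NoOpStreamedSpan:
    # Hypothetical reconstruction based on the issue text, not the
    # actual sentry_sdk class: the scope reference is what lets the
    # no-op span set and restore the active span on entry/exit.
    def __init__(self, scope=None):
        self._scope = scope
        self._old_span = None

    def __enter__(self):
        if self._scope is not None:
            self._old_span = self._scope.span
            self._scope.span = self      # participate in scope management
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self._scope is not None:
            self._scope.span = self._old_span  # restore the previous span

# The bare NoOpStreamedSpan() at line 1273 skips all of the above.
```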

Race condition in _end() allows duplicate span capture - `sentry_sdk/traces.py:428-444`

The `_end()` method checks whether `self._finished` is `True` at the top but only sets `self._finished = True` at the end. This check-then-act pattern is not atomic: if two threads call `end()` concurrently on the same span, both can pass the `_finished` check before either sets it, and the span is captured twice via `scope._capture_span(self)`. This is a TOCTOU (time-of-check to time-of-use) race condition.
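
One conventional fix is to make the check-then-set atomic with a lock; a minimal sketch, assuming the `_finished` and `_capture_span` names from the issue:

```python
import threading

class StreamedSpan:
    # Sketch only: a lock makes the _finished check-then-set atomic,
    # so concurrent end() calls capture the span exactly once.
    def __init__(self):
        self._finished = False
        self._finish_lock = threading.Lock()

    def _end(self, scope):
        with self._finish_lock:
            if self._finished:
                return           # another thread already ended this span
            self._finished = True
        scope._capture_span(self)  # reached by exactly one caller
```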


Duration: 39m 8s · Tokens: 21.7M in / 182.5k out · Cost: $30.94 (+extraction: $0.02, +merge: $0.00)
