
Commit 11ad337

Authored by adinauer, claude, and getsentry-bot
feat(core): Queue Instrumentation for Kafka (#5249)
* collection: Queue Instrumentation

* feat(core): Add enableQueueTracing option and messaging span data conventions

  Add enableQueueTracing boolean to SentryOptions (default false) and ExternalOptions (nullable Boolean) with merge support. Add messaging.* keys to SpanDataConvention for queue instrumentation span data.

  Co-Authored-By: Claude <noreply@anthropic.com>

* changelog

* feat(samples): Add Kafka producer and consumer to Spring Boot 3 sample app

  Add spring-kafka dependency and a simple Kafka producer/consumer setup behind a 'kafka' Spring profile. Includes a REST endpoint to produce messages and a KafkaListener that consumes them. Kafka auto-configuration is excluded by default and only activated when the 'kafka' profile is enabled.

* feat(spring-jakarta): Add Kafka producer instrumentation

  Add SentryKafkaProducerWrapper that overrides doSend to create queue.publish spans for all KafkaTemplate send operations. Injects sentry-trace, baggage, and sentry-task-enqueued-time headers for distributed tracing and receive latency calculation. Add SentryKafkaProducerBeanPostProcessor to automatically wrap KafkaTemplate beans.

* changelog

* feat(spring-jakarta): Add Kafka consumer instrumentation

  Add SentryKafkaRecordInterceptor that creates queue.process transactions for incoming Kafka records. Forks scopes per record, extracts sentry-trace and baggage headers for distributed tracing via continueTrace, and calculates messaging.message.receive.latency from the enqueued-time header. Composes with existing RecordInterceptor via delegation. Span lifecycle is managed through success/failure callbacks.
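For orientation, the opt-in described above can be sketched as a properties fragment. The `sentry.enable-queue-tracing` flag, the `kafka` profile, and the `localhost:9092` broker come from these commits; the exact file layout of the sample apps is an assumption:

```properties
# application-kafka.properties (activated with --spring.profiles.active=kafka)

# Opt-in flag gating Sentry's Kafka queue instrumentation (default: false)
sentry.enable-queue-tracing=true

# Broker used by the samples and system tests
spring.kafka.bootstrap-servers=localhost:9092
```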
* changelog

* feat(spring-boot-jakarta): Add Kafka queue auto-configuration

  Register SentryKafkaProducerBeanPostProcessor and SentryKafkaConsumerBeanPostProcessor when spring-kafka is on the classpath and sentry.enable-queue-tracing=true. Follows the same pattern as SentryCacheConfiguration.

* changelog

* test(samples): Add Kafka queue system tests for Spring Boot 3

  Add KafkaQueueSystemTest with e2e tests for:
  - Producer endpoint creates a queue.publish span
  - Consumer creates a queue.process transaction
  - Distributed tracing (producer and consumer share the same trace)
  - Messaging attributes on the publish span and process transaction

  Also add produceKafkaMessage to RestTestClient and enable sentry.enable-queue-tracing in the kafka profile properties. Requires a running Kafka broker at localhost:9092 and the sample app started with --spring.profiles.active=kafka.

* docs: Add rule against force-pushing stack branches

  Force-pushing a stack branch can cause GitHub to auto-merge or auto-close other PRs in the stack. Add explicit guidance to never use --force, --force-with-lease, or amend+push on stack branches.

* docs: Also prohibit --amend on stack branches

* feat(samples): Add Kafka producer and consumer to Spring Boot 3 OTel sample apps

  Add Kafka queue tracing support to both the OTel agent and agentless Spring Boot 3 sample applications. Each sample gets a KafkaController for producing messages and a KafkaConsumer listener, activated via the 'kafka' Spring profile. Kafka auto-configuration is excluded by default and only enabled when the kafka profile is active.

* fix(spring-boot-jakarta): Disable Sentry Kafka instrumentation when OTel is active

  Skip registration of SentryKafkaProducerBeanPostProcessor and SentryKafkaConsumerBeanPostProcessor when a Sentry OpenTelemetry integration (agent or agentless) is on the classpath.
  OpenTelemetry provides its own Kafka instrumentation, so Sentry's would create duplicate spans.

* fix(core): Add Kafka span origins to ignored list for OpenTelemetry

  Add auto.queue.spring_jakarta.kafka.producer and auto.queue.spring_jakarta.kafka.consumer to the ignored span origins when running with the OTel agent or agentless-spring. Prevents duplicate spans when both Sentry and OTel Kafka instrumentation are active.

* ref(spring-jakarta): Replace SentryKafkaProducerWrapper with SentryProducerInterceptor

  Replace the KafkaTemplate subclass approach with a Kafka-native ProducerInterceptor. The BeanPostProcessor now sets the interceptor on the existing KafkaTemplate instead of replacing the bean, which preserves any custom configuration on the template. Existing customer interceptors are composed using Spring's CompositeProducerInterceptor. If reflection fails to read the existing interceptor, a warning is logged.

* fix(spring-jakarta): Update consumer references and add reflection warning log

  Update SentryKafkaRecordInterceptor and its test to reference SentryProducerInterceptor instead of the removed SentryKafkaProducerWrapper. Add a warning log in SentryKafkaConsumerBeanPostProcessor when reflection fails to read the existing RecordInterceptor, so users know their custom interceptor may not be chained.

* fix(spring-jakarta): Initialize Sentry in SentryProducerInterceptorTest

  The TransactionContext constructor requires ScopesAdapter.getOptions() to be non-null for thread checker access. Add initForTest/close to ensure Sentry is properly initialized during tests.

* fix(spring-jakarta): Initialize Sentry in consumer test, fix API file ordering

  Add initForTest/close to SentryKafkaRecordInterceptorTest to fix an NPE from the TransactionContext constructor requiring initialized Sentry.
  Regenerate the API file to fix the alphabetical ordering of the SentryProducerInterceptor entry.

* fix(spring-jakarta): Clean up stale ThreadLocal context in Kafka consumer interceptor

  Implement clearThreadState() and defensive cleanup in intercept() to prevent ThreadLocal leaks of SentryRecordContext. Spring Kafka calls clearThreadState() in the poll loop's finally block, making it the most reliable cleanup hook for edge cases where success()/failure() callbacks are skipped (e.g. an Error thrown by the listener). Also add defensive cleanup at the start of intercept() to handle any stale context from a previous record that was not properly cleaned up.

* fix(spring-jakarta): Fork root scopes and skip when OTel is active in Kafka consumer interceptor

  Use Sentry.forkedRootScopes() instead of scopes.forkedScopes() so each Kafka message starts with a clean scope from root, matching the pattern used by SentryWebFilter for reactive request boundaries. Add an isIgnored() check using SpanUtils.isIgnored() on the trace origin so the interceptor no-ops when OTel is active and the origin is in the ignored span origins list, consistent with SentryTracingFilter.

* fix(spring-jakarta): Guard entire span lifecycle in Kafka producer interceptor

  Wrap all span operations (startChild, setData, injectHeaders, finish) in a single try-catch so instrumentation can never break the customer's Kafka send. The record is always returned regardless of any exception in Sentry code.

* fix(spring-jakarta): [Queue Instrumentation 12] Add Kafka retry count attribute

  Set messaging.message.retry.count on queue.process transactions when the Spring Kafka delivery attempt header is present. This keeps retry context on consumer traces without changing transaction lifecycle behavior.
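The ThreadLocal lifecycle described in the cleanup commit above can be illustrated with a small self-contained sketch. The method names mirror Spring Kafka's interceptor hooks, but the class and its internals are hypothetical, not the SDK's code:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the per-record ThreadLocal lifecycle; illustrative only.
class LifecycleInterceptor {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();
    static final List<String> cleanedUp = new ArrayList<>();

    String intercept(String record) {
        // Defensive cleanup: a previous record's context may still be present
        // if success()/failure() were skipped (e.g. an Error from the listener).
        cleanup();
        CONTEXT.set("ctx:" + record);
        return record;
    }

    void success(String record) { cleanup(); }

    void failure(String record) { cleanup(); }

    // Called by Spring Kafka in the poll loop's finally block, which makes it
    // the most reliable hook for edge cases.
    void clearThreadState() { cleanup(); }

    private void cleanup() {
        String ctx = CONTEXT.get();
        if (ctx != null) {
            cleanedUp.add(ctx); // finish the transaction / close scopes here
            CONTEXT.remove();
        }
    }
}
```

The point of the double safety net: even if a record's context is never completed through the normal callbacks, either the next `intercept()` or the poll loop's `clearThreadState()` releases it.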
* fix(spring-jakarta): [Queue Instrumentation 13] Align enqueue time with Python

  Store sentry-task-enqueued-time as epoch seconds and compute receive latency from seconds on the consumer side. This aligns Java Kafka queue instrumentation with sentry-python Celery behavior for cross-SDK interoperability.

* ref(kafka): Extract sentry-kafka module from spring-jakarta

  Move the Kafka producer interceptor to a new sentry-kafka module and rename it to SentryKafkaProducerInterceptor. Add SentryKafkaConsumerInterceptor for vanilla kafka-clients users. The Spring integration now depends on sentry-kafka and passes a Spring-specific trace origin. This allows non-Spring applications to use Kafka queue instrumentation directly via the kafka-clients interceptor config.

* changelog

* feat(kafka): Add no-arg producer interceptor for Kafka config

  Allow kafka-clients to instantiate SentryKafkaProducerInterceptor via interceptor.classes by adding a no-arg constructor that uses ScopesAdapter. This makes native Kafka interceptor wiring work out of the box in applications and samples.

  Also add a Kafka tracing example to the console sample with a transaction-scoped producer send, and cover no-arg constructor behavior in sentry-kafka tests.

* feat(kafka): Add consumer demo to console sample

  Show end-to-end Kafka queue tracing in the console sample by running a background consumer thread, producing a message, and waiting for the consume before exit.

  Add a no-arg constructor to SentryKafkaConsumerInterceptor so kafka-clients can instantiate it from interceptor.classes, and add test coverage for that constructor.
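As a rough illustration of the enqueue-time convention adopted above (epoch seconds in the header, latency computed on the consumer side), here is a hypothetical helper. The name, the `Math.round` choice, and the signature are assumptions for illustration, not the SDK's implementation:

```java
// Illustrative consumer-side latency calculation: the producer writes the
// enqueue time as epoch *seconds* (the cross-SDK convention), and the
// consumer converts back to milliseconds to compute receive latency.
final class ReceiveLatency {
    private ReceiveLatency() {}

    /** Latency in ms between the enqueued-time header value and {@code nowMillis}. */
    static long fromHeader(String enqueuedEpochSeconds, long nowMillis) {
        double seconds = Double.parseDouble(enqueuedEpochSeconds);
        return nowMillis - Math.round(seconds * 1000d);
    }
}
```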
* ref(samples): Extract Kafka console showcase into dedicated class

  Move the Kafka producer/consumer showcase logic out of Main into KafkaShowcase to make the sample easier to read and follow. Keep runtime behavior unchanged by preserving the same demo entry point and flow.

* feat(samples): Add opt-in Kafka console e2e coverage

  Gate the console Kafka showcase behind SENTRY_SAMPLE_KAFKA_BOOTSTRAP_SERVERS so Kafka behavior is enabled only when configured. Keep the showcase isolated in KafkaShowcase and use fail-fast Kafka client timeouts for local runs.

  Extend the console system tests to assert producer and consumer queue tracing when Kafka is enabled. Update system-test-runner to provision or reuse a local Kafka broker for the console module and clean up runner-managed resources.

* ref(samples): Move KafkaShowcase to kafka subpackage

  Move KafkaShowcase under io.sentry.samples.console.kafka and update Main to import the relocated class. This keeps Kafka-specific sample code grouped in a dedicated package without changing runtime behavior.

* Update KafkaShowcase.java: extract constant

* Update KafkaShowcase.java: extract methods

* Update KafkaShowcase.java: refactor

* Format code

* fix

* ref(samples): Clarify Kafka setup in console showcase

  Restructure KafkaShowcase to highlight the required Sentry interceptor configuration for producer and consumer setups. Split property construction into explicit helper methods and rename the entrypoint to make customer integration requirements easier to follow without changing behavior.

* fix(test): Enable Kafka profile for Spring Kafka system tests

  Make the system test runner configure Kafka requirements by module.
  Start Kafka and set SPRING_PROFILES_ACTIVE=kafka for modules that need Kafka-backed Spring endpoints so queue system tests run with the expected routing and broker configuration.

* fix(spring): Guard Kafka auto-config on sentry-kafka

  Require the sentry-kafka producer interceptor class before activating the Spring Boot Jakarta queue auto-configuration. This keeps sentry-kafka optional for customers who use the starter without putting Kafka queue tracing support on the classpath. Add a regression test that hides sentry-kafka from the classloader and verifies the Kafka bean post-processors are skipped instead of being registered.

* feat(kafka): [Queue Instrumentation 17] Add manual consumer tracing helper

  Add an experimental helper for wrapping raw Kafka consumer record processing in queue.process transactions. This exposes Kafka consumer tracing outside interceptor-based integrations. Capture messaging metadata and distributed tracing context in the helper so future queue instrumentation can reuse the same behavior.

* ref(kafka): Remove raw consumer interceptor

  Remove the raw Kafka consumer interceptor from sentry-kafka and update the console sample to use the manual consumer tracing helper instead. Keep producer tracing on the interceptor path and move consumer tracing to explicit record processing.

* ref(samples): Clarify Kafka consumer tracing sample

  Print the consumed Kafka record inside the manual consumer tracing callback so the sample shows where application processing happens. Update the console system test to assert the manual queue.process transaction and its manual consumer origin.
* fix(kafka): Honor ignored producer span origins

  Short-circuit the raw Kafka producer interceptor when its trace origin is configured in ignoredSpanOrigins. This lets customers disable the integration quickly without relying on the later no-op span path, and keeps the interceptor from injecting tracing headers when the origin is ignored.

* ref(spring): Use injected scopes in Kafka interceptor

  Stop the Spring Kafka record interceptor from reaching through the static Sentry API when forking root scopes. This keeps the raw Kafka and Spring Kafka paths aligned and makes the interceptor easier to test.

* ref(samples): [Queue Instrumentation 18] Move Kafka sources into queues.kafka package

  Move KafkaConsumer and KafkaController in the three Spring Boot Jakarta samples (jakarta, jakarta-opentelemetry, jakarta-opentelemetry-noagent) into a queues.kafka sub-package. No behavior change. This groups the Kafka-specific sample sources so future queue integrations can sit next to them under queues.

* ref(samples): [Queue Instrumentation 19] Drop Kafka auto-config exclude from Spring Boot samples

  Remove `spring.autoconfigure.exclude=KafkaAutoConfiguration` from the default `application.properties` and the matching empty override from `application-kafka.properties` in the three Spring Boot Jakarta samples. `spring.autoconfigure.exclude` is a single list property, so overriding it in a profile replaces the whole list rather than merging. Adding a sibling `rabbitmq` profile with the same pattern would not compose: activating one profile would unsilence the other's auto-config. The `@Profile("kafka")` annotations already on `KafkaConsumer` and `KafkaController` gate the actual listener container and endpoint, so no broker connection is attempted when the profile is inactive.
  `KafkaAutoConfiguration` still runs and creates an unused `KafkaTemplate` bean in that case, which is harmless. Sentry's own Kafka auto-config remains gated on `sentry.enable-queue-tracing=true`, which is only set in `application-kafka.properties`, so Sentry instrumentation behavior is unchanged.

* ref(kafka): [Queue Instrumentation 20] Log Kafka instrumentation failures

  Previously `SentryKafkaProducerInterceptor.onSend(...)` and `SentryKafkaConsumerTracing` silently swallowed any `Throwable` thrown while instrumenting a Kafka record. That protects customer Kafka I/O from breakage, but makes instrumentation bugs invisible. Log each caught `Throwable` to the SDK logger at `SentryLevel.ERROR` (matching the existing pattern in `RequestPayloadExtractor`) before continuing the fail-open path:
  - `SentryKafkaProducerInterceptor`: producer span creation / header injection
  - `SentryKafkaConsumerTracing`: scope fork + `makeCurrent`, transaction start, transaction finish

  No behavior change for customer callbacks or Kafka send/receive: the catches still swallow the throwable, they now just surface it via the SDK's own logger. `SentryKafkaRecordInterceptor` (Spring) was reviewed and intentionally left as-is: it does not wrap its instrumentation in `catch (Throwable)` blocks, so there is nothing silent to log. The `NumberFormatException` branches on malformed `sentry-task-enqueued-time` headers are expected input, not instrumentation faults, and remain silent.

* fix(kafka): [Queue Instrumentation 21] Preserve third-party baggage on Kafka producer records

  `SentryKafkaProducerInterceptor.injectHeaders(...)` previously removed and overwrote the outgoing `baggage` header on every record, discarding any third-party baggage entries already present (e.g. set by another vendor's instrumentation or the application itself). Read the existing `baggage` header values off the `ProducerRecord` and pass them to `TracingUtils.trace(...)`.
  The downstream `BaggageHeader.fromBaggageAndOutgoingHeader` preserves non-`sentry-*` entries in the outgoing header while Sentry continues to own its own keys.

* test(spring-boot-jakarta): [Queue Instrumentation 22] Cover spring-kafka class-absence gate

  `SentryKafkaQueueConfiguration` in `SentryAutoConfiguration` gates the Kafka BPPs on both `org.springframework.kafka.core.KafkaTemplate` and `io.sentry.kafka.SentryKafkaProducerInterceptor` being present on the classpath. Only the latter was covered by a test. Add a `FilteredClassLoader(KafkaTemplate::class.java)` test that asserts neither `SentryKafkaProducerBeanPostProcessor` nor `SentryKafkaConsumerBeanPostProcessor` is registered when spring-kafka is missing, even with `sentry.enable-queue-tracing=true`.

* fix(spring-jakarta): [Queue Instrumentation 23] Install Kafka context before trace setup

  Store the lifecycle token in the thread-local context immediately after makeCurrent() so Spring's failure and clearThreadState callbacks can always clean it up. Previously, exceptions from trace continuation or transaction setup could happen before the context was published, leaving cleanup dependent on later stale-context handling instead of the normal interceptor callback path.

* fix(kafka): [Queue Instrumentation 24] Read all baggage headers on consumers

  Pass every Kafka baggage header through trace continuation in both the raw Kafka helper and the Spring Kafka record interceptor. Previously both consumer paths used lastHeader("baggage"), which dropped all earlier baggage values and could break interop with upstream OTel or other W3C baggage producers. Reading the full header list preserves the existing baggage context during queue trace continuation.
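A toy model of the two baggage rules above (read every `baggage` header instead of only the last one, and overwrite only `sentry-*` keys while third-party entries pass through) might look like the following. The parsing is deliberately simplified and none of these names are the SDK's:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch, not the SDK's implementation: merge all incoming
// baggage headers, then let Sentry replace only its own sentry-* entries.
final class BaggageMerge {
    private BaggageMerge() {}

    static String merge(List<String> incomingHeaders, Map<String, String> sentryEntries) {
        Map<String, String> entries = new LinkedHashMap<>();
        for (String header : incomingHeaders) { // W3C allows multiple baggage headers
            for (String member : header.split(",")) {
                String[] kv = member.trim().split("=", 2);
                if (kv.length == 2) entries.put(kv[0], kv[1]);
            }
        }
        entries.keySet().removeIf(k -> k.startsWith("sentry-")); // Sentry owns its keys
        entries.putAll(sentryEntries);
        List<String> members = new ArrayList<>();
        entries.forEach((k, v) -> members.add(k + "=" + v));
        return String.join(",", members);
    }
}
```

Third-party entries survive untouched; stale `sentry-*` values from an upstream hop are replaced rather than duplicated.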
* fix(kafka): [Queue Instrumentation 25] Finish producer spans on failures

  Keep a local producer child span reference and always finish it when instrumentation fails after span creation. This preserves fail-open send behavior without leaking unfinished queue.publish spans. Add a regression test covering header injection failures.

* fix(kafka): [Queue Instrumentation 26] Mark producer interceptor experimental

  The raw Kafka producer path requires customers to reference SentryKafkaProducerInterceptor directly by class name, so it should not be marked internal. Align it with the customer-facing queue tracing surface by marking it experimental instead. Audit the remaining Kafka classes still marked internal and keep them as-is: the Spring bean post processors and the Spring record interceptor remain framework wiring internals rather than direct customer entry points.

* fix(spring-jakarta): [Queue Instrumentation 27] Delegate Kafka record thread-state hooks

  SentryKafkaRecordInterceptor wraps an existing customer RecordInterceptor when one is present on the listener container factory, but it previously only delegated intercept, success, failure, and afterRecord. setupThreadState was not overridden, so the default no-op from ThreadStateProcessor shadowed any delegate implementation. clearThreadState performed Sentry cleanup but never forwarded to the delegate either. Customers relying on these hooks for MDC, security context, or other thread-local state on Kafka listener threads would silently lose that behavior once Sentry auto-wrapped their interceptor. Delegate setupThreadState to the wrapped interceptor, and in clearThreadState run Sentry cleanup inside try and delegate to the wrapped interceptor in finally so delegate cleanup still executes if Sentry cleanup throws.
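The try/finally delegation contract described in the thread-state commit above boils down to this pattern (toy names, not the SDK's classes):

```java
// Illustrative only: Sentry cleanup runs in try, the wrapped interceptor's
// cleanup in finally, so delegate cleanup executes even if Sentry's throws.
class WrappingInterceptor {
    private final Runnable sentryCleanup;
    private final Runnable delegateCleanup;

    WrappingInterceptor(Runnable sentryCleanup, Runnable delegateCleanup) {
        this.sentryCleanup = sentryCleanup;
        this.delegateCleanup = delegateCleanup;
    }

    void clearThreadState() {
        try {
            sentryCleanup.run();
        } finally {
            delegateCleanup.run(); // MDC / security-context cleanup survives
        }
    }
}
```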
* test(samples): Cover OTel Jakarta Kafka coexistence end-to-end

  Enable the Kafka Spring profile (and Kafka broker) for the two OTel Spring Boot 3 Jakarta sample modules in the system-test runner, and add a Kafka system test in each that produces a message and asserts no Sentry-style `queue.publish` / `queue.process` span/transaction is emitted. SentryKafkaQueueConfiguration is guarded by @ConditionalOnMissingClass("io.sentry.opentelemetry.SentryAutoConfigurationCustomizerProvider"), so the Sentry Kafka bean post-processors must not be wired when the Sentry OTel integration is present. The new assertions lock that suppression into CI for both the agent and noagent OTel Jakarta samples. Addresses review finding F-011.

* fix(spring-jakarta): [Queue Instrumentation 29] Set body_size on Spring Kafka consumer transaction

  The Spring Kafka consumer path (`SentryKafkaRecordInterceptor`) never set `messaging.message.body_size`, while the raw Kafka consumer helper (`SentryKafkaConsumerTracing`) already sets it from `ConsumerRecord#serializedValueSize()`. Both are first-party Kafka consumer integrations shipped in the same stack and should emit the same messaging schema so dashboards and queries remain consistent across Spring vs. raw Kafka setups. Mirror the raw helper: set `SpanDataConvention.MESSAGING_MESSAGE_BODY_SIZE` on the `queue.process` transaction when `serializedValueSize() >= 0`. Add regression tests for both the positive and the -1 (unknown) cases.
  #skip-changelog

* test(spring-jakarta): [Queue Instrumentation 30] Cover Kafka record interceptor lifecycle edge cases

  Add three regression tests for SentryKafkaRecordInterceptor that pin down the lifecycle contract around clearThreadState cleanup:
  - the full lifecycle intercept -> success -> clearThreadState closes the lifecycle token exactly once and does not double-finish the transaction
  - when a delegating interceptor returns null from intercept (filtering the record), the safety net in clearThreadState still finishes the transaction and closes the token
  - when a delegating interceptor throws from intercept, clearThreadState still finishes the transaction and closes the token after the exception has propagated

  Addresses review finding R6-F001.

* fix(kafka): [Queue Instrumentation 31] Write enqueued-time header as plain decimal

  The sentry-task-enqueued-time Kafka header was serialized via String.valueOf(double), which emits scientific notation (e.g. 1.776933649613E9) for epoch-seconds values. Cross-SDK consumers (sentry-python, -ruby, -php, -dotnet) expect a plain decimal like 1776938295.692000 and could not parse the Java output, defeating the cross-SDK alignment goal of #5283. Route the value through DateUtils.doubleToBigDecimal(...).toString(), the same helper already used to serialize epoch-seconds timestamps in SentryTransaction, SentrySpan, SentryLogEvent, etc. At the pinned scale of 6, BigDecimal.toString() produces plain decimal form for all realistic epoch-seconds magnitudes. Add regression assertions that reject scientific notation and pin the plain-decimal format in SentryKafkaProducerInterceptorTest.
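The serialization difference is easy to demonstrate with plain `java.math.BigDecimal`; `doubleToEpochSecondsString` below is a hypothetical stand-in for the `DateUtils` helper mentioned above:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// String.valueOf(double) emits scientific notation for epoch-seconds
// magnitudes; a scale-6 BigDecimal prints the plain decimal form that the
// other Sentry SDKs expect. Names here are illustrative.
final class EnqueuedTime {
    private EnqueuedTime() {}

    static String doubleToEpochSecondsString(double epochSeconds) {
        return BigDecimal.valueOf(epochSeconds)
            .setScale(6, RoundingMode.HALF_UP)
            .toString();
    }
}
```

For example, `String.valueOf(1.776933649613E9)` yields `"1.776933649613E9"`, while the BigDecimal route yields `"1776933649.613000"`.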
* changelog

* test(spring-boot-jakarta): [Queue Instrumentation 32] Filter OTel in Kafka auto-config negative tests

  The regression tests "does not register Kafka BPPs when sentry-kafka is not present" and "...when spring-kafka is not present" previously passed for the wrong reason: OTel's SentryAutoConfigurationCustomizerProvider is on the test classpath as a testImplementation dependency, so the @ConditionalOnMissingClass(OTel) gate on SentryKafkaQueueConfiguration was already blocking the beans independent of the @ConditionalOnClass check the tests were meant to validate. Make noSentryKafkaClassLoader and noSpringKafkaClassLoader additionally filter SentryAutoConfigurationCustomizerProvider so only the gate under test can be the blocker. Verified by temporarily removing SentryKafkaProducerInterceptor from the @ConditionalOnClass list: the test now correctly fails, proving it actually guards against the regression it is named for.

* feat(opentelemetry): [Queue Instrumentation 33] Map OTel messaging spans to Sentry queue ops

  Wire OTel messaging spans into the Sentry Queues product when `sentry.enable-queue-tracing=true` so OTel-only setups (e.g. the agentless Spring Boot Jakarta sample) populate queue dashboards without needing the Sentry-native Kafka interceptors. `SpanDescriptionExtractor` now recognizes spans carrying `messaging.system` and maps them to `queue.publish` / `queue.process` / `queue.receive` ops, using the destination name as the description and `TransactionNameSource.TASK`. Op selection prefers `messaging.operation.type` (current OTel semconv), falls back to the deprecated `messaging.operation`, and only as a last resort consults `SpanKind`: `SpanKind.CONSUMER` is overloaded for both `receive` and `process`, so attribute-driven mapping is required to disambiguate.
  The extractor takes `SentryOptions` so the mapping stays gated; when the flag is off, behavior is unchanged. `SentrySpanExporter` additionally transfers the messaging attributes (`system`, `destination.name`, `operation.type`, `message.id`, `message.body.size`, `message.envelope.size`) onto root transactions. Root transactions don't bulk-copy OTel attributes the way child spans do, but the Queues product reads `trace.data.messaging.*`, so consumer root transactions need them propagated explicitly. These are operational metadata only (no payload contents), so the transfer is unconditional. Add `MESSAGING_OPERATION_TYPE` and `MESSAGING_MESSAGE_ENVELOPE_SIZE` to `SpanDataConvention` for use by the exporter and downstream integrations. Document the OTel-mode behavior in the two Jakarta OTel sample `application-kafka.properties` so users know the flag activates the OTel remapping path here, not the Sentry-native Kafka auto-config (which stays suppressed by its `@ConditionalOnMissingClass` OTel guard).

* fix(otel): Prefer messaging over http mapping when queue tracing enabled

  Some OTel instrumentations (notably aws-sdk-2.2 SQS) attach both `http.request.method` and `messaging.system` to the same span. With the previous gate order, those spans resolved to http.client and the Sentry Queues product never lit up for one of the most common OTel-coexistence targets. When `enableQueueTracing` is true and `messaging.system` is present, map to a queue.* op before the http and db checks. When the flag is off, the existing http-first ordering is preserved.

* fix(otel): Map messaging "create" to queue.create instead of queue.publish

  The OTel messaging semconv defines "create" and "publish" as distinct operations: "create" represents message construction, "publish" the network send. Folding both into queue.publish risks double-counting producer transactions on instrumentations that emit a separate create span (per OTel semconv guidance).
  Per the Sentry Queues telemetry spec (https://develop.sentry.dev/sdk/telemetry/traces/modules/queues/), queue.create is a canonical op distinct from queue.publish, so map "create" to its spec-correct destination rather than dropping it. Empirically, current Kafka OTel instrumentation does not emit a separate create span, so this is a no-op for Kafka users today; the change future-proofs other systems and any future Kafka OTel version.

* docs(options): Clarify enableQueueTracing covers native + OTel paths

  The setEnableQueueTracing Javadoc said only "Whether queue operations (publish, process) should be traced." and was silent on the fact that the flag also drives OTel messaging-span transformation when sentry-opentelemetry is on the classpath. Reword both the getter and setter to make explicit that the flag both emits Sentry-native queue spans and transforms OTel messaging spans to match Sentry's queue conventions, so customers grepping their IDE see what the flag does in either integration mode.

* fix(otel): Map messaging "settle" to queue.settle

  The OTel messaging semconv defines messaging.operation.type=settle for consumer ack/nack/reject spans (JMS, RabbitMQ, Pulsar acknowledge). The switch had no case for "settle", so settle spans on SpanKind.CONSUMER were falling through to the SpanKind fallback and becoming queue.process, duplicating the real process span, while on SpanKind.CLIENT they became the generic "queue" default. queue.settle is one of the canonical Queues telemetry ops per https://develop.sentry.dev/sdk/telemetry/traces/modules/queues/, so add the explicit mapping.

* chore(samples): Drop verbose comment above sentry.enable-queue-tracing

  The OTel Kafka sample properties carried a 10-line comment explaining the OTel->Sentry remapping mechanism and SentryKafkaQueueConfiguration suppression behavior.
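Taken together, the op-mapping commits above imply a selection order roughly like the following sketch. The class and method names are illustrative, and the real extractor also consults the deprecated `messaging.operation` attribute and the gating flag:

```java
// Illustrative mapping from OTel messaging semconv to Sentry queue ops.
final class MessagingOpMapper {
    private MessagingOpMapper() {}

    /**
     * @param operationType the messaging.operation.type attribute, may be null
     * @param spanKind      "CONSUMER", "PRODUCER", "CLIENT", ...; last resort only
     */
    static String toQueueOp(String operationType, String spanKind) {
        if (operationType != null) {
            switch (operationType) {
                case "create":  return "queue.create";  // message construction
                case "publish": return "queue.publish"; // the network send
                case "receive": return "queue.receive";
                case "process": return "queue.process";
                case "settle":  return "queue.settle";  // ack / nack / reject
                default: break;
            }
        }
        // SpanKind.CONSUMER is overloaded for receive *and* process, so the
        // kind-based fallback is only a guess when attributes are missing.
        if ("CONSUMER".equals(spanKind)) return "queue.process";
        if ("PRODUCER".equals(spanKind)) return "queue.publish";
        return "queue";
    }
}
```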
  That belongs in the SDK docs, not in a sample config; drop it so the property line speaks for itself.

* feat(kafka): [Queue Instrumentation 34] Wrap Producer for send spans

  Replace SentryKafkaProducerInterceptor with SentryKafkaProducer, a Producer<K,V> wrapper that records a queue.publish span around each send and finishes it when the broker ack callback fires. The span now reflects the full async send lifecycle, not just the synchronous onSend window.

  For Spring Boot, the SentryKafkaProducerBeanPostProcessor switches from patching KafkaTemplate.setProducerInterceptor(...) to installing a ProducerPostProcessor on every ProducerFactory bean via ProducerFactory.addPostProcessor(...). KafkaTemplate beans are no longer touched, so all customer-configured listeners, interceptors, and observation settings are preserved. The console sample now wraps the raw KafkaProducer instead of setting INTERCEPTOR_CLASSES_CONFIG. Spring Boot samples need no change; the auto-configured ProducerPostProcessor is transparent.

* fix(kafka): Inject trace headers even without active span

  Decouple header injection from span creation in SentryKafkaProducer so that distributed tracing works for background workers, @Scheduled jobs, and startup publishers that have no active span.
  Restructure send() to match the SentryFeignClient/OkHttp pattern:
  - isIgnored: pure delegate, no headers, no span
  - no active span: inject headers from PropagationContext, no span
  - active span: start child span, inject headers, wrap callback

  Also simplify the implementation:
  - rename injectHeaders to maybeInjectHeaders with an encapsulated try/catch (matches Feign's maybeAddTracingHeaders pattern)
  - remove the outer try/catch around span setup
  - remove the redundant span.isNoOp() early-return branch
  - remove the redundant isFinished() guards before finish() calls

* changelog

* ref(kafka): Reimplement SentryKafkaProducer as a dynamic Proxy

  Replace the concrete `implements Producer<K,V>` class with a `Proxy.newProxyInstance`-based wrapper that intercepts only the two `send()` overloads and forwards every other method reflectively to the delegate. The concrete class required explicitly delegating every method on the `Producer` interface, coupling the wrapper to a specific Kafka version: `clientInstanceId(Duration)` was added in Kafka 3.7, and the deprecated `sendOffsetsToTransaction(Map, String)` was removed in Kafka 4.0. The dynamic proxy has no such coupling: new or removed interface methods are handled automatically, giving full compatibility across all Kafka client versions.

  Public API change: `SentryKafkaProducer` is now a utility class with static `wrap()` overloads instead of constructors. Callers wrap a producer with `SentryKafkaProducer.wrap(producer)`. The Spring BPP and console sample are updated accordingly.

* fix(spring-jakarta): Warn when Kafka producer tracing silently fails

  When ProducerFactory.addPostProcessor() is a no-op (the interface default), the Sentry post-processor is silently dropped and the customer gets zero producer tracing with no signal.
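The dynamic-proxy idea above can be shown with a toy interface standing in for `Producer<K,V>`; only the interesting method is intercepted, and everything else is forwarded reflectively, so the wrapper never breaks when the interface gains or loses methods:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the Proxy.newProxyInstance approach; Sender and
// TracingProxy are toy names, not the SDK's classes.
final class TracingProxy {
    static final List<String> intercepted = new ArrayList<>();

    interface Sender { // stand-in for Producer<K, V>
        String send(String msg);
        void flush();
    }

    static Sender wrap(Sender delegate) {
        InvocationHandler handler = (proxy, method, args) -> {
            if ("send".equals(method.getName())) {
                intercepted.add((String) args[0]); // a span would start here
            }
            return method.invoke(delegate, args);  // forward everything
        };
        return (Sender) Proxy.newProxyInstance(
            Sender.class.getClassLoader(), new Class<?>[] {Sender.class}, handler);
    }
}
```

Adding or removing a method on `Sender` requires no change to the wrapper, which is the version-compatibility property the refactor is after.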
Verify registration succeeded via getPostProcessors() after each addPostProcessor() call, and log a WARNING naming the factory bean and pointing toward SentryKafkaProducer.wrap() as the manual fallback. Co-Authored-By: Claude <noreply@anthropic.com> * fix(kafka): Preserve existing consumer interceptor on reflection failure If reading recordInterceptor via reflection fails, leave the container\nfactory untouched instead of installing Sentry's interceptor with a\nnull delegate. This avoids silently dropping customer-configured\ninterceptors for DLQ routing, auditing, or other message handling\nconcerns.\n\nAdd tests that preserve customer interceptors both when chaining\nsucceeds and when reflection cannot safely determine the existing\ninterceptor.\n\nCo-Authored-By: Claude <noreply@anthropic.com> * fix(spring-boot-jakarta): Skip Kafka autoconfig for OTel agent * fix(spring-jakarta): Close leaked Kafka interceptor scope Store the lifecycle token in the thread-local before trace continuation or transaction startup can throw. This keeps the cleanup path reachable and closes the forked scopes even when interceptor preparation fails. Also log the preparation failure instead of letting the interceptor break customer processing. * fix(test): Remove stale Kafka container before startup Always remove the named Kafka system-test container before starting a new broker. This avoids docker name conflicts after crashed or interrupted runs while still keeping stop_kafka_broker ownership-aware for reused brokers. Co-Authored-By: Claude <noreply@anthropic.com> * test(otel): Add send and deliver mapping coverage * test(kafka): Add no-op producer span coverage * fix(kafka): Pass consumer interceptor log throwable correctly * test(kafka): Exercise consumer interceptor reflection failure Force the reflection-failure path in the consumer bean post processor test so it proves customer interceptors remain untouched when Sentry skips installation. 
Co-Authored-By: Claude <noreply@anthropic.com> * fix(test): Set SENTRY_ENABLE_QUEUE_TRACING for Kafka system tests When SENTRY_AUTO_INIT=true with the OTel agent, Sentry is initialized early by SentryAutoConfigurationCustomizerProvider before Spring Boot loads application-kafka.properties. Without the env var, queue tracing stays disabled and OTel messaging spans are not mapped to queue.publish/queue.process ops, causing KafkaOtelCoexistenceSystemTest to fail. Co-Authored-By: Claude <noreply@anthropic.com> * feat(spring): Add Kafka queue tracing for Spring Boot 4 Port the Spring Boot 3 Kafka queue tracing support to the Spring 7 and Spring Boot 4 modules. Add Spring Kafka bean post-processors, Boot 4 auto-configuration, and matching sample system-test coverage. Co-Authored-By: Claude <noreply@anthropic.com> * changelog * feat(spring): Add Kafka queue tracing for Spring Boot 2 Port Kafka queue tracing to the Spring and Spring Boot 2 modules. Add Spring Kafka bean post-processors, Boot 2 auto-configuration, and matching sample system-test coverage. Co-Authored-By: Claude <noreply@anthropic.com> * docs(rules): Add queue tracing cursor rules Document when to load queue-specific Cursor rules and summarize how Sentry Queues data is produced by the Java SDK Kafka instrumentation. Co-Authored-By: Claude <noreply@anthropic.com> * changelog * build(samples): Use Spring Boot Kafka starter in Boot 4 samples * fix(queue): Apply queue instrumentation review changes * test(spring): Address Kafka tracing review comments Simplify Kafka interceptor test delegates and rely on Kotlin type inference in Spring Kafka tests. Co-Authored-By: Claude <noreply@anthropic.com> * test(spring): Initialize Sentry in Kafka BPP tests Initialize Sentry before each Kafka bean post-processor test and close it afterwards so logging paths do not depend on test execution order. This prevents failures when earlier tests close the SDK before these tests run. 
Co-Authored-By: Claude <noreply@anthropic.com> * test(spring): Address Kafka review comments Simplify Spring Kafka test interceptors and cover intercepting records without a consumer. Co-Authored-By: Claude <noreply@anthropic.com> * test(spring): Isolate capture exception advice scopes Initialize Sentry before installing the mocked scopes used by the capture exception parameter advice test. Close Sentry after the test so the mocked scopes do not leak into later tests. Co-Authored-By: Claude <noreply@anthropic.com> * changelog entry * fix README changes * test(otel): Relax Kafka coexistence span assertion Avoid requiring the async Kafka producer span to be embedded in the HTTP transaction. OTel can finish and export the producer span after the request transaction, so this assertion flakes while the test still verifies OTel instrumentation suppresses Spring Kafka integration. Refs #5373 Co-Authored-By: Claude <noreply@anthropic.com> * fix(kafka): Make producer proxy equality reflexive Return true when the Kafka producer proxy is compared with itself. This preserves existing delegate equality behavior for other comparisons while satisfying the equals contract. Co-Authored-By: Claude <noreply@anthropic.com> --------- Co-authored-by: Claude <noreply@anthropic.com> Co-authored-by: Sentry Github Bot <bot+github-bot@sentry.io>
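The dynamic-proxy rewrite described above can be sketched with JDK-only types. This is a simplified stand-in, not the SDK's code: `Producer`, `wrap`, and the `spans` list here are hypothetical, and the real wrapper intercepts Kafka's two `send()` overloads and starts/finishes actual spans. It also shows the reflexive-equality fix from the last commit.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class ProxyWrapSketch {
  // Hypothetical stand-in for Kafka's Producer interface.
  interface Producer {
    String send(String record);
    void flush();
  }

  static final List<String> spans = new ArrayList<>();

  // Wrap a producer so only send() is intercepted; every other method is
  // forwarded reflectively, so the wrapper never has to name (or even know
  // about) the rest of the interface — the version-compatibility argument
  // from the commit message.
  static Producer wrap(Producer delegate) {
    InvocationHandler handler = (proxy, method, args) -> {
      if ("equals".equals(method.getName()) && args != null && args.length == 1) {
        // Reflexive equality: the proxy equals itself; otherwise defer to the delegate.
        return proxy == args[0] || delegate.equals(args[0]);
      }
      if ("send".equals(method.getName())) {
        spans.add("queue.publish:" + args[0]); // stand-in for starting a span
      }
      return method.invoke(delegate, args); // forward to the real producer
    };
    return (Producer)
        Proxy.newProxyInstance(
            Producer.class.getClassLoader(), new Class<?>[] {Producer.class}, handler);
  }

  public static void main(String[] args) {
    Producer real =
        new Producer() {
          public String send(String record) { return "ack:" + record; }
          public void flush() {}
        };
    Producer wrapped = wrap(real);
    System.out.println(wrapped.send("hello")); // forwarded to the delegate
    System.out.println(spans.get(0));          // span recorded by the handler
    System.out.println(wrapped.equals(wrapped)); // reflexive
  }
}
```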
1 parent 0188f48 commit 11ad337

113 files changed

Lines changed: 7335 additions & 21 deletions


.cursor/rules/overview_dev.mdc

Lines changed: 10 additions & 0 deletions
@@ -66,6 +66,15 @@ Use the `fetch_rules` tool to include these rules when working on specific areas
   - `SentryMetricsEvent`, `SentryMetricsEvents`
   - `SentryOptions.getMetrics()`, `beforeSend` callback
 
+- **`queues`**: Use when working with:
+  - Sentry Queues product data or messaging span conventions
+  - Queue tracing spans/transactions (`queue.publish`, `queue.process`)
+  - `enableQueueTracing` option and `sentry.enable-queue-tracing`
+  - Kafka instrumentation (`sentry-kafka`, `SentryKafkaProducer`, `SentryKafkaConsumerTracing`)
+  - Spring Kafka queue auto-instrumentation and `SentryKafkaRecordInterceptor`
+  - Messaging span data (`messaging.system`, `messaging.destination.name`, receive latency, retry count)
+  - `sentry-task-enqueued-time` header and distributed trace propagation through queues
+
 - **`continuous_profiling_jvm`**: Use when working with:
   - JVM continuous profiling (`sentry-async-profiler` module)
   - `IContinuousProfiler`, `JavaContinuousProfiler`
@@ -118,6 +127,7 @@ Use the `fetch_rules` tool to include these rules when working on specific areas
 - System test/e2e/sample → `e2e_tests`
 - Feature flag/addFeatureFlag/flag evaluation → `feature_flags`
 - Metrics/count/distribution/gauge → `metrics`
+- Queues/queue tracing/Kafka/Spring Kafka/queue.publish/queue.process/enableQueueTracing/messaging spans → `queues`
 - PR/pull request/stacked PR/stack → `pr`
 - JVM continuous profiling/async-profiler/JFR/ProfileChunk → `continuous_profiling_jvm`
 - Android continuous profiling/AndroidProfiler/frame metrics/method tracing → no dedicated rule yet; inspect the code directly

.cursor/rules/pr.mdc

Lines changed: 2 additions & 0 deletions
@@ -258,3 +258,5 @@ git push
 **Never merge into the collection branch.** Syncing only happens between stack PR branches. The collection branch is untouched until the user merges PRs through GitHub.
 
 Prefer merge over rebase — it preserves commit history, doesn't invalidate existing review comments, and avoids the need for force-pushing. Only rebase if explicitly requested.
+
+**Never amend or force-push stack branches.** Do not use `git commit --amend`, `--force`, or `--force-with-lease` on branches that are part of a stack. Amending a pushed commit requires a force-push, which can cause GitHub to auto-merge or auto-close other PRs in the stack. If a commit needs fixing, add a new fixup commit instead.
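The fixup-instead-of-amend workflow from the rule above can be sketched as a short shell session (assuming `git` is installed; the temp repo, file names, and identity are throwaway placeholders):

```shell
# Instead of `git commit --amend` (which would require a force-push),
# record the correction as a fixup commit targeting the bad commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"  # placeholder identity
git config user.name "dev"

echo "v1" > config.txt
git add config.txt
git commit -q -m "feat: add config"

echo "v2" > config.txt                   # the fix we would have amended in
git add config.txt
git commit -q --fixup HEAD               # creates "fixup! feat: add config"

git log --oneline -2                     # fixup commit sits on top; no force-push needed
```

When the stack is eventually squashed or rebased by the person merging, `git rebase --autosquash` folds the fixup into its target.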

.cursor/rules/queues.mdc

Lines changed: 82 additions & 0 deletions
---
alwaysApply: false
description: Sentry Queues module and Java SDK queue tracing
---
# Sentry Queues and Java SDK Queue Tracing

## Product model

Sentry Queues is built from tracing data. SDKs mark queue work with queue-specific span operations and messaging span data so Sentry can identify producers, consumers, destinations, latency, and failures.

The important concepts are:
- `queue.publish`: a span for enqueueing/publishing a message to a queue or topic.
- `queue.process`: a transaction for processing a dequeued message.
- Messaging span data, especially:
  - `messaging.system` (for example `kafka`)
  - `messaging.destination.name` (queue/topic name)
  - `messaging.message.id`
  - `messaging.message.retry.count`
  - `messaging.message.body.size`
  - `messaging.message.envelope.size`
  - `messaging.message.receive.latency`
- Distributed tracing headers (`sentry-trace` and `baggage`) link producer-side work to consumer-side processing.
- Queue receive latency is the time a message spent waiting between publish/enqueue and processing. For Java Kafka, this comes from the `sentry-task-enqueued-time` header that the producer writes and the consumer reads.

The Queues UI is not backed by a separate Java event type. The Java SDK contributes data through spans/transactions with the expected operations, trace context, statuses, and messaging attributes.

## Java SDK implementation

Queue tracing is opt-in. `SentryOptions.isEnableQueueTracing()` defaults to `false` and can be enabled with `setEnableQueueTracing(true)` or external config key `enable-queue-tracing` (`sentry.enable-queue-tracing` in Spring Boot). Captured queue spans/transactions still depend on tracing being enabled and sampled.

Kafka support lives in `sentry-kafka`:
- `SentryKafkaProducer.wrap(Producer)` wraps Kafka `Producer.send(...)` calls.
  - Creates a `queue.publish` child span when there is an active span.
  - Sets `messaging.system=kafka` and `messaging.destination.name=<topic>`.
  - Injects `sentry-trace`, `baggage`, and `sentry-task-enqueued-time` headers.
  - Still injects tracing/enqueued-time headers when queue tracing is enabled but there is no active span, so background producers can link to consumers.
  - Finishes the span from the Kafka callback with `OK` or `INTERNAL_ERROR`.
- `SentryKafkaConsumerTracing.withTracing(record, callback)` is the manual raw-Kafka consumer helper.
  - Forks root scopes for the processing lifecycle and makes them current.
  - Continues the trace from Kafka headers.
  - Starts a `queue.process` transaction bound to scope when tracing is enabled.
  - Sets Kafka messaging data, body size, retry count, and receive latency when available.
  - Finishes with `OK` or `INTERNAL_ERROR` and never lets instrumentation failures break customer processing.

Spring Kafka support lives in `sentry-spring`, `sentry-spring-jakarta`, and `sentry-spring-7`:
- `SentryKafkaProducerBeanPostProcessor` installs a producer post-processor on `DefaultKafkaProducerFactory` and wraps created producers with `SentryKafkaProducer.wrap(...)`.
- `SentryKafkaConsumerBeanPostProcessor` installs `SentryKafkaRecordInterceptor` on listener container factories.
- `SentryKafkaRecordInterceptor` starts/finishes `queue.process` transactions around listener processing, continues traces from headers, forks scopes for the record lifecycle, and preserves any existing delegate interceptor.
- Spring Boot auto-configuration registers both post-processors only when Spring Kafka and `sentry-kafka` are present and `sentry.enable-queue-tracing=true`.
- Spring Boot queue auto-configuration is disabled when Sentry OpenTelemetry integration classes are present to avoid duplicate Kafka instrumentation.

## Trace origins and suppression

Queue instrumentation sets span origins so it can be identified and suppressed with `ignoredSpanOrigins`:
- Raw Kafka producer: `auto.queue.kafka.producer`
- Raw Kafka consumer helper: `manual.queue.kafka.consumer`
- Spring Kafka producer: `auto.queue.spring.kafka.producer`, `auto.queue.spring_jakarta.kafka.producer`, `auto.queue.spring7.kafka.producer`
- Spring Kafka consumer: `auto.queue.spring.kafka.consumer`, `auto.queue.spring_jakarta.kafka.consumer`, `auto.queue.spring7.kafka.consumer`

## Files to inspect when changing queue tracing

- Core option and conventions:
  - `sentry/src/main/java/io/sentry/SentryOptions.java`
  - `sentry/src/main/java/io/sentry/ExternalOptions.java`
  - `sentry/src/main/java/io/sentry/SpanDataConvention.java`
- Raw Kafka:
  - `sentry-kafka/src/main/java/io/sentry/kafka/SentryKafkaProducer.java`
  - `sentry-kafka/src/main/java/io/sentry/kafka/SentryKafkaConsumerTracing.java`
  - `sentry-kafka/src/test/kotlin/io/sentry/kafka/*Test.kt`
- Spring Kafka:
  - `sentry-spring*/src/main/java/io/sentry/**/kafka/*`
  - `sentry-spring*/src/test/kotlin/io/sentry/**/kafka/*Test.kt`
  - `sentry-spring-boot*/src/main/java/io/sentry/**/SentryAutoConfiguration.java`
  - `sentry-spring-boot*/src/test/kotlin/io/sentry/**/SentryKafkaAutoConfigurationTest.kt`

## Related rules

Also fetch:
- `options` when changing `enableQueueTracing` or configuration surfaces.
- `scopes` when changing consumer scope forking/lifecycle.
- `opentelemetry` when changing coexistence with OTel auto-instrumentation.
- `api` when changing public Kafka APIs or option methods.
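The receive-latency calculation described in the rule above can be sketched as follows. This is a simplified stand-in, assuming the `sentry-task-enqueued-time` header carries epoch milliseconds as a UTF-8 string; the map-based headers and method names are hypothetical, not the SDK's API.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class ReceiveLatencySketch {
  // Header name taken from the rule text; the value format is an assumption.
  static final String ENQUEUED_TIME_HEADER = "sentry-task-enqueued-time";

  // Producer side: stamp the enqueue time into the outgoing headers.
  static void stampEnqueuedTime(Map<String, byte[]> headers, long nowMillis) {
    headers.put(ENQUEUED_TIME_HEADER, Long.toString(nowMillis).getBytes(StandardCharsets.UTF_8));
  }

  // Consumer side: messaging.message.receive.latency = processing time - enqueue time.
  static Long receiveLatencyMillis(Map<String, byte[]> headers, long nowMillis) {
    byte[] raw = headers.get(ENQUEUED_TIME_HEADER);
    if (raw == null) {
      return null; // header absent: latency simply is not reported
    }
    try {
      return nowMillis - Long.parseLong(new String(raw, StandardCharsets.UTF_8));
    } catch (NumberFormatException e) {
      return null; // a malformed header must not break message processing
    }
  }

  public static void main(String[] args) {
    Map<String, byte[]> headers = new HashMap<>();
    stampEnqueuedTime(headers, 1_000L);
    System.out.println(receiveLatencyMillis(headers, 1_250L)); // → 250
  }
}
```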

CHANGELOG.md

Lines changed: 10 additions & 0 deletions
@@ -19,6 +19,16 @@
     .configurator { it.isUseShakeGesture = true }
     .create()
   ```
+- Add support for Kafka ([#5249](https://github.com/getsentry/sentry-java/pull/5249))
+  - You will need to add the `sentry-kafka` dependency and opt in via the new option.
+    - Set `options.setEnableQueueTracing(true)` in `Sentry.init`
+    - Or set `sentry.enable-queue-tracing=true` in `application.properties`
+  - For Spring Boot, Kafka is auto-instrumented and no further configuration is needed.
+    - Also see https://docs.sentry.io/platforms/java/guides/spring-boot/integrations/kafka/
+  - When using `kafka-clients` directly:
+    - Wrap your `KafkaProducer` via `SentryKafkaProducer.wrap(kafkaProducer)` to get `queue.publish` spans
+    - And you may use our `SentryKafkaConsumerTracing.withTracing` helper to instrument the consumer side manually.
+    - Also see https://docs.sentry.io/platforms/java/integrations/kafka/
 
 ### Fixes
 
README.md

Lines changed: 1 addition & 0 deletions
@@ -35,6 +35,7 @@ Sentry SDK for Java and Android
 | sentry | [![Maven Central Version](https://img.shields.io/maven-central/v/io.sentry/sentry?style=for-the-badge&logo=sentry&color=green)](https://central.sonatype.com/artifact/io.sentry/sentry) | 21 |
 | sentry-jul | [![Maven Central Version](https://img.shields.io/maven-central/v/io.sentry/sentry-jul?style=for-the-badge&logo=sentry&color=green)](https://central.sonatype.com/artifact/io.sentry/sentry-jul) |
 | sentry-jdbc | [![Maven Central Version](https://img.shields.io/maven-central/v/io.sentry/sentry-jdbc?style=for-the-badge&logo=sentry&color=green)](https://central.sonatype.com/artifact/io.sentry/sentry-jdbc) |
+| sentry-kafka | [![Maven Central Version](https://img.shields.io/maven-central/v/io.sentry/sentry-kafka?style=for-the-badge&logo=sentry&color=green)](https://central.sonatype.com/artifact/io.sentry/sentry-kafka) |
 | sentry-apollo | [![Maven Central Version](https://img.shields.io/maven-central/v/io.sentry/sentry-apollo?style=for-the-badge&logo=sentry&color=green)](https://central.sonatype.com/artifact/io.sentry/sentry-apollo) | 21 |
 | sentry-apollo-3 | [![Maven Central Version](https://img.shields.io/maven-central/v/io.sentry/sentry-apollo-3?style=for-the-badge&logo=sentry&color=green)](https://central.sonatype.com/artifact/io.sentry/sentry-apollo-3) | 21 |
 | sentry-apollo-4 | [![Maven Central Version](https://img.shields.io/maven-central/v/io.sentry/sentry-apollo-4?style=for-the-badge&logo=sentry&color=green)](https://central.sonatype.com/artifact/io.sentry/sentry-apollo-4) | 21 |

buildSrc/src/main/java/Config.kt

Lines changed: 1 addition & 0 deletions
@@ -80,6 +80,7 @@ object Config {
   val SENTRY_JCACHE_SDK_NAME = "$SENTRY_JAVA_SDK_NAME.jcache"
   val SENTRY_QUARTZ_SDK_NAME = "$SENTRY_JAVA_SDK_NAME.quartz"
   val SENTRY_JDBC_SDK_NAME = "$SENTRY_JAVA_SDK_NAME.jdbc"
+  val SENTRY_KAFKA_SDK_NAME = "$SENTRY_JAVA_SDK_NAME.kafka"
   val SENTRY_OPENFEATURE_SDK_NAME = "$SENTRY_JAVA_SDK_NAME.openfeature"
   val SENTRY_LAUNCHDARKLY_SERVER_SDK_NAME = "$SENTRY_JAVA_SDK_NAME.launchdarkly-server"
   val SENTRY_LAUNCHDARKLY_ANDROID_SDK_NAME = "$SENTRY_ANDROID_SDK_NAME.launchdarkly"

gradle/libs.versions.toml

Lines changed: 5 additions & 0 deletions
@@ -184,6 +184,10 @@ springboot3-starter-security = { module = "org.springframework.boot:spring-boot-starter-security", version.ref = "springboot3" }
 springboot3-starter-jdbc = { module = "org.springframework.boot:spring-boot-starter-jdbc", version.ref = "springboot3" }
 springboot3-starter-actuator = { module = "org.springframework.boot:spring-boot-starter-actuator", version.ref = "springboot3" }
 springboot3-starter-cache = { module = "org.springframework.boot:spring-boot-starter-cache", version.ref = "springboot3" }
+spring-kafka2 = { module = "org.springframework.kafka:spring-kafka", version = "2.8.11" }
+spring-kafka3 = { module = "org.springframework.kafka:spring-kafka", version = "3.3.5" }
+spring-kafka4 = { module = "org.springframework.kafka:spring-kafka" }
+kafka-clients = { module = "org.apache.kafka:kafka-clients", version = "3.8.1" }
 springboot4-otel = { module = "io.opentelemetry.instrumentation:opentelemetry-spring-boot-starter", version.ref = "otelInstrumentation" }
 springboot4-resttestclient = { module = "org.springframework.boot:spring-boot-resttestclient", version.ref = "springboot4" }
 springboot4-starter = { module = "org.springframework.boot:spring-boot-starter", version.ref = "springboot4" }
@@ -200,6 +204,7 @@ springboot4-starter-webclient = { module = "org.springframework.boot:spring-boot-starter-webclient", version.ref = "springboot4" }
 springboot4-starter-jdbc = { module = "org.springframework.boot:spring-boot-starter-jdbc", version.ref = "springboot4" }
 springboot4-starter-actuator = { module = "org.springframework.boot:spring-boot-starter-actuator", version.ref = "springboot4" }
 springboot4-starter-cache = { module = "org.springframework.boot:spring-boot-starter-cache", version.ref = "springboot4" }
+springboot4-starter-kafka = { module = "org.springframework.boot:spring-boot-starter-kafka", version.ref = "springboot4" }
 timber = { module = "com.jakewharton.timber:timber", version = "4.7.1" }
 
 # Animalsniffer signature

sentry-kafka/README.md

Lines changed: 5 additions & 0 deletions
# sentry-kafka

This module provides Kafka-native queue instrumentation for applications using `kafka-clients` directly.

Spring users should use the Sentry Spring (Boot) SDKs, which provide higher-fidelity consumer instrumentation via Spring Kafka hooks.

sentry-kafka/api/sentry-kafka.api

Lines changed: 19 additions & 0 deletions
public final class io/sentry/kafka/BuildConfig {
	public static final field SENTRY_KAFKA_SDK_NAME Ljava/lang/String;
	public static final field VERSION_NAME Ljava/lang/String;
}

public final class io/sentry/kafka/SentryKafkaConsumerTracing {
	public static final field TRACE_ORIGIN Ljava/lang/String;
	public static fun withTracing (Lorg/apache/kafka/clients/consumer/ConsumerRecord;Ljava/lang/Runnable;)V
	public static fun withTracing (Lorg/apache/kafka/clients/consumer/ConsumerRecord;Ljava/util/concurrent/Callable;)Ljava/lang/Object;
}

public final class io/sentry/kafka/SentryKafkaProducer {
	public static final field SENTRY_ENQUEUED_TIME_HEADER Ljava/lang/String;
	public static final field TRACE_ORIGIN Ljava/lang/String;
	public static fun wrap (Lorg/apache/kafka/clients/producer/Producer;)Lorg/apache/kafka/clients/producer/Producer;
	public static fun wrap (Lorg/apache/kafka/clients/producer/Producer;Lio/sentry/IScopes;)Lorg/apache/kafka/clients/producer/Producer;
	public static fun wrap (Lorg/apache/kafka/clients/producer/Producer;Lio/sentry/IScopes;Ljava/lang/String;)Lorg/apache/kafka/clients/producer/Producer;
}
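The `withTracing` overloads in the API listing above run a processing callback and finish the transaction according to the outcome. A JDK-only analog of that success/failure pattern is sketched below; the `SpanStatus` enum and `lastStatus` field are hypothetical stand-ins for the real transaction, and nothing here is Sentry's actual API.

```java
import java.util.concurrent.Callable;

public class WithTracingSketch {
  enum SpanStatus { OK, INTERNAL_ERROR }

  // Stand-in for the queue.process transaction the real helper finishes.
  static SpanStatus lastStatus;

  // Analog of withTracing(record, callable): run the processing callback,
  // finish OK on success, INTERNAL_ERROR on failure, and rethrow so the
  // caller still sees the error.
  static <T> T withTracing(Callable<T> callback) throws Exception {
    try {
      T result = callback.call();
      lastStatus = SpanStatus.OK;
      return result;
    } catch (Exception e) {
      lastStatus = SpanStatus.INTERNAL_ERROR;
      throw e;
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(withTracing(() -> "processed")); // → processed
    System.out.println(lastStatus);                     // → OK
    try {
      withTracing(() -> { throw new IllegalStateException("boom"); });
    } catch (IllegalStateException expected) {
      System.out.println(lastStatus);                   // → INTERNAL_ERROR
    }
  }
}
```

Note the rethrow: instrumentation records the failure but never swallows it, matching the "never lets instrumentation failures break customer processing" guarantee stated in the queues rule.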

sentry-kafka/build.gradle.kts

Lines changed: 83 additions & 0 deletions
import net.ltgt.gradle.errorprone.errorprone
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

plugins {
  `java-library`
  id("io.sentry.javadoc")
  alias(libs.plugins.kotlin.jvm)
  jacoco
  alias(libs.plugins.errorprone)
  alias(libs.plugins.gradle.versions)
  alias(libs.plugins.buildconfig)
}

tasks.withType<KotlinCompile>().configureEach {
  compilerOptions.jvmTarget = org.jetbrains.kotlin.gradle.dsl.JvmTarget.JVM_1_8
}

dependencies {
  api(projects.sentry)
  compileOnly(libs.kafka.clients)
  compileOnly(libs.jetbrains.annotations)
  compileOnly(libs.nopen.annotations)

  errorprone(libs.errorprone.core)
  errorprone(libs.nopen.checker)
  errorprone(libs.nullaway)

  // tests
  testImplementation(projects.sentryTestSupport)
  testImplementation(kotlin(Config.kotlinStdLib))
  testImplementation(libs.kotlin.test.junit)
  testImplementation(libs.mockito.kotlin)
  testImplementation(libs.mockito.inline)
  testImplementation(libs.kafka.clients)
}

configure<SourceSetContainer> { test { java.srcDir("src/test/java") } }

jacoco { toolVersion = libs.versions.jacoco.get() }

tasks.jacocoTestReport {
  reports {
    xml.required.set(true)
    html.required.set(false)
  }
}

tasks {
  jacocoTestCoverageVerification {
    violationRules { rule { limit { minimum = Config.QualityPlugins.Jacoco.minimumCoverage } } }
  }
  check {
    dependsOn(jacocoTestCoverageVerification)
    dependsOn(jacocoTestReport)
  }
}

tasks.withType<JavaCompile>().configureEach {
  options.errorprone {
    check("NullAway", net.ltgt.gradle.errorprone.CheckSeverity.ERROR)
    option("NullAway:AnnotatedPackages", "io.sentry")
  }
}

buildConfig {
  useJavaOutput()
  packageName("io.sentry.kafka")
  buildConfigField("String", "SENTRY_KAFKA_SDK_NAME", "\"${Config.Sentry.SENTRY_KAFKA_SDK_NAME}\"")
  buildConfigField("String", "VERSION_NAME", "\"${project.version}\"")
}

tasks.jar {
  manifest {
    attributes(
      "Sentry-Version-Name" to project.version,
      "Sentry-SDK-Name" to Config.Sentry.SENTRY_KAFKA_SDK_NAME,
      "Sentry-SDK-Package-Name" to "maven:io.sentry:sentry-kafka",
      "Implementation-Vendor" to "Sentry",
      "Implementation-Title" to project.name,
      "Implementation-Version" to project.version,
    )
  }
}
