feat: process context publication#1585

Draft
yannham wants to merge 11 commits into main from yannham/process-context-sharing

Conversation

@yannham commented Feb 17, 2026

What does this PR do?

This PR implements the publication protocol of the process context sharing proposal.

This is intended as a minimal viable starting point. Next steps are left for follow-up PRs, which could include for example:

  • exposing the additional functions in the FFI
  • adding the missing update protocol
  • adding a function that takes a structured object and encodes it, instead of assuming a raw, already-encoded payload

Motivation

This feature allows a process to expose data to an external process, typically an eBPF profiler. Please refer to the OTEP linked above for a detailed motivation.

Additional Notes

Some notes on dependencies:

  • This PR needs a handful of Linux syscalls. I used rustix for them, since it's already pulled in as a transitive dependency (within the same major version) and is nicely higher-level than libc.
  • I see that there's already a small MemFd wrapper crate in use (e.g. around pub enum AnonymousFileHandle). Unfortunately, it doesn't handle some options like NOEXEC, and doesn't really bring much over the basic rustix wrappers, so I didn't use it (question: should we take the occasion to kill a dependency and use rustix everywhere?)

There are a number of design choices or assumptions that might be interesting to discuss further:

  • On paper, a reader might be concurrently reading what the process is currently publishing, which could lead to race conditions (all the more since the reader is another, uncontrolled process). However, the reader doesn't access the memory directly; it goes through /proc/<pid>/maps and syscalls, so the concurrency model is a bit unclear. We settled on the mental model of using atomics as if the reader were another thread of the same program. This sounds like the best we can do, and should at least prevent reordering on the writer side (OS-level synchronization is another solution, but was deemed too costly for the upcoming thread-level context).
  • We must manage the following resources: the mmapped region and the payload, without leaking them or releasing them too early (which would nullify the reader's ability to read, or worse, make it read garbage). We settled with @ivoanjo on hiding this from the API user by using a static. We use a mutex there instead of a smarter lock-free scheme because it's mostly there to please the Rust type system; we don't expect publications to happen often, and they will most likely come from a single thread. The memory is guaranteed to be preserved, but the user can still free it explicitly if needed.
  • Regarding the payload, there might be room to make the internal interface safer (e.g. using Pin<Box<[u8]>>?), and maybe to offer the option, or to do it automatically depending on the size, of moving the payload directly after the header, as allowed by the spec. This is left for future work.

How to test the change?

TODO

@github-actions
github-actions bot commented Feb 17, 2026

📚 Documentation Check Results

⚠️ 37 documentation warning(s) found

📦 libdd-library-config - 37 warning(s)


Updated: 2026-02-23 15:35:40 UTC | Commit: 8f61cf1 | missing-docs job results

@github-actions
github-actions bot commented Feb 17, 2026

Clippy Allow Annotation Report

Comparing clippy allow annotations between branches:

  • Base Branch: origin/main
  • PR Branch: origin/yannham/process-context-sharing

Summary by Rule

Rule Base Branch PR Branch Change

Annotation Counts by File

File Base Branch PR Branch Change

Annotation Stats by Crate

| Crate | Base Branch | PR Branch | Change |
| --- | --- | --- | --- |
| clippy-annotation-reporter | 5 | 5 | No change (0%) |
| datadog-ffe-ffi | 1 | 1 | No change (0%) |
| datadog-ipc | 27 | 27 | No change (0%) |
| datadog-live-debugger | 6 | 6 | No change (0%) |
| datadog-live-debugger-ffi | 10 | 10 | No change (0%) |
| datadog-profiling-replayer | 4 | 4 | No change (0%) |
| datadog-remote-config | 3 | 3 | No change (0%) |
| datadog-sidecar | 59 | 59 | No change (0%) |
| libdd-common | 10 | 10 | No change (0%) |
| libdd-common-ffi | 12 | 12 | No change (0%) |
| libdd-crashtracker | 12 | 12 | No change (0%) |
| libdd-data-pipeline | 5 | 5 | No change (0%) |
| libdd-ddsketch | 2 | 2 | No change (0%) |
| libdd-dogstatsd-client | 1 | 1 | No change (0%) |
| libdd-profiling | 13 | 13 | No change (0%) |
| libdd-telemetry | 19 | 19 | No change (0%) |
| libdd-tinybytes | 4 | 4 | No change (0%) |
| libdd-trace-normalization | 2 | 2 | No change (0%) |
| libdd-trace-obfuscation | 9 | 9 | No change (0%) |
| libdd-trace-utils | 15 | 15 | No change (0%) |
| Total | 219 | 219 | No change (0%) |

About This Report

This report tracks Clippy allow annotations for specific rules, showing how they've changed in this PR. Decreasing the number of these annotations generally improves code quality.

@github-actions
github-actions bot commented Feb 17, 2026

🔒 Cargo Deny Results

No issues found!

📦 libdd-library-config - ✅ No issues


Updated: 2026-02-23 15:39:06 UTC | Commit: 8f61cf1 | dependency-check job results

@pr-commenter
pr-commenter bot commented Feb 17, 2026

Benchmarks

Comparison

Benchmark execution time: 2026-02-23 15:42:27

Comparing candidate commit 18eef42 in PR branch yannham/process-context-sharing with baseline commit c8121f4 in branch main.

Found 5 performance improvements and 8 performance regressions! Performance is the same for 44 metrics; 2 metrics are unstable.

scenario:credit_card/is_card_number/37828224631000521389798

  • 🟥 execution_time [+6.312µs; +6.343µs] or [+13.804%; +13.871%]
  • 🟥 throughput [-2665515.119op/s; -2651644.215op/s] or [-12.188%; -12.124%]

scenario:credit_card/is_card_number/x371413321323331

  • 🟩 execution_time [-589.031ns; -582.877ns] or [-8.894%; -8.801%]
  • 🟩 throughput [+14580148.692op/s; +14731827.034op/s] or [+9.656%; +9.756%]

scenario:credit_card/is_card_number_no_luhn/ 378282246310005

  • 🟥 execution_time [+4.789µs; +4.840µs] or [+8.962%; +9.056%]
  • 🟥 throughput [-1553754.124op/s; -1538736.303op/s] or [-8.304%; -8.224%]

scenario:credit_card/is_card_number_no_luhn/378282246310005

  • 🟥 execution_time [+4.370µs; +4.419µs] or [+8.704%; +8.800%]
  • 🟥 throughput [-1610669.332op/s; -1594442.574op/s] or [-8.088%; -8.006%]

scenario:credit_card/is_card_number_no_luhn/37828224631000521389798

  • 🟥 execution_time [+6.292µs; +6.321µs] or [+13.763%; +13.827%]
  • 🟥 throughput [-2658383.654op/s; -2645155.726op/s] or [-12.153%; -12.093%]

scenario:credit_card/is_card_number_no_luhn/x371413321323331

  • 🟩 execution_time [-589.940ns; -584.590ns] or [-8.905%; -8.824%]
  • 🟩 throughput [+14617359.093op/s; +14748362.295op/s] or [+9.683%; +9.770%]

scenario:tags/replace_trace_tags

  • 🟩 execution_time [-113.403ns; -106.028ns] or [-4.550%; -4.255%]

Candidate

Candidate benchmark details

Group 1

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
ip_address/quantize_peer_ip_address_benchmark execution_time 5.037µs 5.117µs ± 0.034µs 5.122µs ± 0.026µs 5.144µs 5.170µs 5.172µs 5.178µs 1.09% -0.266 -0.758 0.66% 0.002µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
ip_address/quantize_peer_ip_address_benchmark execution_time [5.113µs; 5.122µs] or [-0.092%; +0.092%] None None None

Group 2

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
profile_add_sample2_frames_x1000 execution_time 725.462µs 729.234µs ± 1.371µs 729.176µs ± 0.973µs 730.244µs 731.415µs 732.382µs 733.173µs 0.55% 0.055 -0.065 0.19% 0.097µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
profile_add_sample2_frames_x1000 execution_time [729.044µs; 729.424µs] or [-0.026%; +0.026%] None None None

Group 3

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_trace/test_trace execution_time 245.365ns 255.966ns ± 12.417ns 250.502ns ± 3.163ns 258.244ns 279.245ns 301.415ns 302.712ns 20.84% 2.008 3.731 4.84% 0.878ns 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
normalization/normalize_trace/test_trace execution_time [254.246ns; 257.687ns] or [-0.672%; +0.672%] None None None

Group 4

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
benching deserializing traces from msgpack to their internal representation execution_time 48.217ms 48.680ms ± 1.443ms 48.443ms ± 0.063ms 48.502ms 48.688ms 56.668ms 60.971ms 25.86% 6.617 44.870 2.96% 0.102ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
benching deserializing traces from msgpack to their internal representation execution_time [48.480ms; 48.880ms] or [-0.411%; +0.411%] None None None

Group 5

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... execution_time 494.152µs 494.908µs ± 0.397µs 494.864µs ± 0.215µs 495.109µs 495.490µs 496.121µs 497.611µs 0.56% 1.990 10.132 0.08% 0.028µs 1 200
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... throughput 2009600.980op/s 2020577.086op/s ± 1619.067op/s 2020756.764op/s ± 876.849op/s 2021609.603op/s 2022759.312op/s 2023182.466op/s 2023670.651op/s 0.14% -1.971 9.975 0.08% 114.485op/s 1 200
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて execution_time 370.848µs 371.647µs ± 0.358µs 371.628µs ± 0.207µs 371.849µs 372.146µs 372.357µs 374.393µs 0.74% 2.228 15.847 0.10% 0.025µs 1 200
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて throughput 2670992.591op/s 2690728.635op/s ± 2588.241op/s 2690862.592op/s ± 1499.001op/s 2692289.853op/s 2694357.414op/s 2695851.834op/s 2696522.881op/s 0.21% -2.191 15.507 0.10% 183.016op/s 1 200
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters execution_time 167.627µs 168.037µs ± 0.491µs 167.981µs ± 0.099µs 168.097µs 168.230µs 168.542µs 172.956µs 2.96% 8.570 79.253 0.29% 0.035µs 1 200
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters throughput 5781811.121op/s 5951113.895op/s ± 16964.156op/s 5953045.561op/s ± 3512.537op/s 5956175.500op/s 5961094.391op/s 5963158.437op/s 5965614.476op/s 0.21% -8.502 78.361 0.28% 1199.547op/s 1 200
normalization/normalize_service/normalize_service/[empty string] execution_time 36.619µs 36.751µs ± 0.056µs 36.742µs ± 0.033µs 36.782µs 36.853µs 36.915µs 36.926µs 0.50% 0.632 0.296 0.15% 0.004µs 1 200
normalization/normalize_service/normalize_service/[empty string] throughput 27081539.050op/s 27210361.797op/s ± 41649.129op/s 27216714.354op/s ± 24454.852op/s 27238813.731op/s 27269612.167op/s 27284057.236op/s 27308332.026op/s 0.34% -0.623 0.282 0.15% 2945.038op/s 1 200
normalization/normalize_service/normalize_service/test_ASCII execution_time 45.415µs 45.499µs ± 0.103µs 45.483µs ± 0.027µs 45.520µs 45.578µs 45.613µs 46.809µs 2.91% 10.373 128.902 0.23% 0.007µs 1 200
normalization/normalize_service/normalize_service/test_ASCII throughput 21363516.076op/s 21978779.437op/s ± 48660.448op/s 21986143.061op/s ± 13121.733op/s 21998174.625op/s 22008629.916op/s 22015227.428op/s 22019365.203op/s 0.15% -10.215 126.189 0.22% 3440.813op/s 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... execution_time [494.853µs; 494.963µs] or [-0.011%; +0.011%] None None None
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... throughput [2020352.699op/s; 2020801.473op/s] or [-0.011%; +0.011%] None None None
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて execution_time [371.597µs; 371.697µs] or [-0.013%; +0.013%] None None None
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて throughput [2690369.930op/s; 2691087.340op/s] or [-0.013%; +0.013%] None None None
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters execution_time [167.969µs; 168.105µs] or [-0.040%; +0.040%] None None None
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters throughput [5948762.827op/s; 5953464.964op/s] or [-0.040%; +0.040%] None None None
normalization/normalize_service/normalize_service/[empty string] execution_time [36.743µs; 36.759µs] or [-0.021%; +0.021%] None None None
normalization/normalize_service/normalize_service/[empty string] throughput [27204589.628op/s; 27216133.966op/s] or [-0.021%; +0.021%] None None None
normalization/normalize_service/normalize_service/test_ASCII execution_time [45.484µs; 45.513µs] or [-0.031%; +0.031%] None None None
normalization/normalize_service/normalize_service/test_ASCII throughput [21972035.567op/s; 21985523.307op/s] or [-0.031%; +0.031%] None None None

Group 6

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
redis/obfuscate_redis_string execution_time 33.289µs 34.191µs ± 1.197µs 33.466µs ± 0.113µs 35.649µs 36.331µs 36.571µs 36.779µs 9.90% 0.949 -0.980 3.49% 0.085µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
redis/obfuscate_redis_string execution_time [34.026µs; 34.357µs] or [-0.485%; +0.485%] None None None

Group 7

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
benching string interning on wordpress profile execution_time 160.781µs 161.452µs ± 0.395µs 161.408µs ± 0.113µs 161.527µs 161.839µs 162.612µs 165.482µs 2.52% 6.059 55.178 0.24% 0.028µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
benching string interning on wordpress profile execution_time [161.397µs; 161.507µs] or [-0.034%; +0.034%] None None None

Group 8

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
concentrator/add_spans_to_concentrator execution_time 10.629ms 10.660ms ± 0.016ms 10.658ms ± 0.010ms 10.670ms 10.689ms 10.701ms 10.729ms 0.67% 0.810 1.357 0.15% 0.001ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
concentrator/add_spans_to_concentrator execution_time [10.658ms; 10.662ms] or [-0.020%; +0.020%] None None None

Group 9

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
sql/obfuscate_sql_string execution_time 85.835µs 86.236µs ± 0.271µs 86.221µs ± 0.110µs 86.331µs 86.469µs 86.532µs 89.387µs 3.67% 7.922 90.223 0.31% 0.019µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
sql/obfuscate_sql_string execution_time [86.198µs; 86.273µs] or [-0.043%; +0.043%] None None None

Group 10

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
credit_card/is_card_number/ execution_time 3.896µs 3.913µs ± 0.003µs 3.912µs ± 0.002µs 3.914µs 3.918µs 3.920µs 3.921µs 0.23% -0.191 4.079 0.07% 0.000µs 1 200
credit_card/is_card_number/ throughput 255011713.478op/s 255579540.813op/s ± 192142.350op/s 255600958.511op/s ± 110359.585op/s 255699089.793op/s 255826239.940op/s 255891064.605op/s 256645496.760op/s 0.41% 0.205 4.141 0.07% 13586.516op/s 1 200
credit_card/is_card_number/ 3782-8224-6310-005 execution_time 76.524µs 77.785µs ± 0.598µs 77.753µs ± 0.386µs 78.141µs 78.871µs 79.277µs 79.641µs 2.43% 0.504 0.142 0.77% 0.042µs 1 200
credit_card/is_card_number/ 3782-8224-6310-005 throughput 12556323.384op/s 12856732.872op/s ± 98553.794op/s 12861165.073op/s ± 63897.948op/s 12924741.857op/s 12993255.720op/s 13057268.481op/s 13067725.564op/s 1.61% -0.460 0.090 0.76% 6968.806op/s 1 200
credit_card/is_card_number/ 378282246310005 execution_time 69.946µs 70.974µs ± 0.599µs 70.870µs ± 0.410µs 71.401µs 71.976µs 72.354µs 72.718µs 2.61% 0.475 -0.484 0.84% 0.042µs 1 200
credit_card/is_card_number/ 378282246310005 throughput 13751760.007op/s 14090588.125op/s ± 118437.000op/s 14110433.320op/s ± 82037.594op/s 14184797.638op/s 14260311.462op/s 14287601.139op/s 14296710.095op/s 1.32% -0.443 -0.528 0.84% 8374.761op/s 1 200
credit_card/is_card_number/37828224631 execution_time 3.896µs 3.912µs ± 0.003µs 3.912µs ± 0.001µs 3.914µs 3.917µs 3.919µs 3.919µs 0.18% -0.697 6.648 0.06% 0.000µs 1 200
credit_card/is_card_number/37828224631 throughput 255175439.104op/s 255606961.935op/s ± 166577.444op/s 255629921.485op/s ± 89104.738op/s 255701916.400op/s 255800175.173op/s 255927356.755op/s 256644878.293op/s 0.40% 0.713 6.744 0.07% 11778.804op/s 1 200
credit_card/is_card_number/378282246310005 execution_time 66.922µs 68.439µs ± 0.677µs 68.385µs ± 0.495µs 68.898µs 69.535µs 70.129µs 70.455µs 3.03% 0.334 -0.221 0.99% 0.048µs 1 200
credit_card/is_card_number/378282246310005 throughput 14193398.286op/s 14612983.475op/s ± 144171.021op/s 14623149.135op/s ± 105015.783op/s 14722685.293op/s 14831158.711op/s 14889211.172op/s 14942859.716op/s 2.19% -0.285 -0.276 0.98% 10194.431op/s 1 200
credit_card/is_card_number/37828224631000521389798 execution_time 51.910µs 52.051µs ± 0.060µs 52.048µs ± 0.042µs 52.091µs 52.151µs 52.185µs 52.224µs 0.34% 0.135 -0.357 0.11% 0.004µs 1 200
credit_card/is_card_number/37828224631000521389798 throughput 19148383.103op/s 19212093.951op/s ± 22114.233op/s 19213047.511op/s ± 15485.700op/s 19227331.742op/s 19246856.940op/s 19255089.935op/s 19264026.357op/s 0.27% -0.129 -0.361 0.11% 1563.712op/s 1 200
credit_card/is_card_number/x371413321323331 execution_time 6.028µs 6.037µs ± 0.014µs 6.034µs ± 0.002µs 6.037µs 6.046µs 6.077µs 6.163µs 2.14% 6.611 50.560 0.23% 0.001µs 1 200
credit_card/is_card_number/x371413321323331 throughput 162253794.310op/s 165651592.031op/s ± 383525.607op/s 165730428.111op/s ± 62092.406op/s 165775183.191op/s 165844151.737op/s 165868352.295op/s 165904095.218op/s 0.10% -6.549 49.727 0.23% 27119.356op/s 1 200
credit_card/is_card_number_no_luhn/ execution_time 3.893µs 3.913µs ± 0.003µs 3.912µs ± 0.002µs 3.914µs 3.917µs 3.919µs 3.919µs 0.18% -1.239 10.009 0.07% 0.000µs 1 200
credit_card/is_card_number_no_luhn/ throughput 255139626.059op/s 255589648.961op/s ± 186767.611op/s 255610697.317op/s ± 105502.514op/s 255699339.193op/s 255827993.909op/s 255875229.014op/s 256879723.406op/s 0.50% 1.262 10.182 0.07% 13206.464op/s 1 200
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 execution_time 63.834µs 64.262µs ± 0.196µs 64.246µs ± 0.124µs 64.359µs 64.609µs 64.820µs 64.913µs 1.04% 0.601 0.648 0.30% 0.014µs 1 200
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 throughput 15405178.411op/s 15561399.187op/s ± 47257.490op/s 15565217.975op/s ± 30066.521op/s 15596008.205op/s 15633426.533op/s 15655516.041op/s 15665609.138op/s 0.64% -0.580 0.610 0.30% 3341.609op/s 1 200
credit_card/is_card_number_no_luhn/ 378282246310005 execution_time 57.792µs 58.259µs ± 0.179µs 58.241µs ± 0.101µs 58.341µs 58.601µs 58.744µs 59.029µs 1.35% 0.837 1.918 0.31% 0.013µs 1 200
credit_card/is_card_number_no_luhn/ 378282246310005 throughput 16940742.268op/s 17165017.421op/s ± 52496.576op/s 17169927.730op/s ± 29840.933op/s 17199881.321op/s 17234349.811op/s 17280244.015op/s 17303326.816op/s 0.78% -0.808 1.847 0.31% 3712.068op/s 1 200
credit_card/is_card_number_no_luhn/37828224631 execution_time 3.894µs 3.913µs ± 0.003µs 3.913µs ± 0.002µs 3.915µs 3.917µs 3.919µs 3.920µs 0.17% -1.591 10.458 0.07% 0.000µs 1 200
credit_card/is_card_number_no_luhn/37828224631 throughput 255087789.306op/s 255535066.270op/s ± 184937.693op/s 255529472.407op/s ± 127845.851op/s 255652755.371op/s 255784914.197op/s 255849843.869op/s 256833199.443op/s 0.51% 1.613 10.635 0.07% 13077.070op/s 1 200
credit_card/is_card_number_no_luhn/378282246310005 execution_time 54.279µs 54.607µs ± 0.172µs 54.578µs ± 0.097µs 54.690µs 54.927µs 55.191µs 55.382µs 1.47% 1.265 2.689 0.31% 0.012µs 1 200
credit_card/is_card_number_no_luhn/378282246310005 throughput 18056433.026op/s 18312863.354op/s ± 57541.816op/s 18322365.391op/s ± 32414.340op/s 18350527.981op/s 18386991.480op/s 18408028.486op/s 18423434.609op/s 0.55% -1.236 2.574 0.31% 4068.821op/s 1 200
credit_card/is_card_number_no_luhn/37828224631000521389798 execution_time 51.879µs 52.023µs ± 0.058µs 52.022µs ± 0.037µs 52.063µs 52.112µs 52.165µs 52.224µs 0.39% 0.185 0.303 0.11% 0.004µs 1 200
credit_card/is_card_number_no_luhn/37828224631000521389798 throughput 19148366.004op/s 19222316.540op/s ± 21250.882op/s 19222576.268op/s ± 13781.205op/s 19235219.152op/s 19254313.753op/s 19270353.911op/s 19275572.068op/s 0.28% -0.178 0.295 0.11% 1502.664op/s 1 200
credit_card/is_card_number_no_luhn/x371413321323331 execution_time 6.028µs 6.037µs ± 0.011µs 6.035µs ± 0.002µs 6.037µs 6.070µs 6.077µs 6.119µs 1.40% 4.301 21.534 0.19% 0.001µs 1 200
credit_card/is_card_number_no_luhn/x371413321323331 throughput 163420448.168op/s 165634258.041op/s ± 312940.060op/s 165710083.459op/s ± 59261.510op/s 165761418.750op/s 165831223.906op/s 165870675.621op/s 165892850.439op/s 0.11% -4.273 21.225 0.19% 22128.204op/s 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
credit_card/is_card_number/ execution_time [3.912µs; 3.913µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number/ throughput [255552911.731op/s; 255606169.895op/s] or [-0.010%; +0.010%] None None None
credit_card/is_card_number/ 3782-8224-6310-005 execution_time [77.702µs; 77.868µs] or [-0.107%; +0.107%] None None None
credit_card/is_card_number/ 3782-8224-6310-005 throughput [12843074.264op/s; 12870391.480op/s] or [-0.106%; +0.106%] None None None
credit_card/is_card_number/ 378282246310005 execution_time [70.891µs; 71.057µs] or [-0.117%; +0.117%] None None None
credit_card/is_card_number/ 378282246310005 throughput [14074173.896op/s; 14107002.354op/s] or [-0.116%; +0.116%] None None None
credit_card/is_card_number/37828224631 execution_time [3.912µs; 3.913µs] or [-0.009%; +0.009%] None None None
credit_card/is_card_number/37828224631 throughput [255583875.903op/s; 255630047.967op/s] or [-0.009%; +0.009%] None None None
credit_card/is_card_number/378282246310005 execution_time [68.345µs; 68.533µs] or [-0.137%; +0.137%] None None None
credit_card/is_card_number/378282246310005 throughput [14593002.758op/s; 14632964.192op/s] or [-0.137%; +0.137%] None None None
credit_card/is_card_number/37828224631000521389798 execution_time [52.042µs; 52.059µs] or [-0.016%; +0.016%] None None None
credit_card/is_card_number/37828224631000521389798 throughput [19209029.131op/s; 19215158.771op/s] or [-0.016%; +0.016%] None None None
credit_card/is_card_number/x371413321323331 execution_time [6.035µs; 6.039µs] or [-0.033%; +0.033%] None None None
credit_card/is_card_number/x371413321323331 throughput [165598439.070op/s; 165704744.991op/s] or [-0.032%; +0.032%] None None None
credit_card/is_card_number_no_luhn/ execution_time [3.912µs; 3.913µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/ throughput [255563764.766op/s; 255615533.155op/s] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 execution_time [64.235µs; 64.289µs] or [-0.042%; +0.042%] None None None
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 throughput [15554849.754op/s; 15567948.621op/s] or [-0.042%; +0.042%] None None None
credit_card/is_card_number_no_luhn/ 378282246310005 execution_time [58.234µs; 58.283µs] or [-0.042%; +0.042%] None None None
credit_card/is_card_number_no_luhn/ 378282246310005 throughput [17157741.901op/s; 17172292.942op/s] or [-0.042%; +0.042%] None None None
credit_card/is_card_number_no_luhn/37828224631 execution_time [3.913µs; 3.914µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/37828224631 throughput [255509435.685op/s; 255560696.856op/s] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/378282246310005 execution_time [54.583µs; 54.631µs] or [-0.044%; +0.044%] None None None
credit_card/is_card_number_no_luhn/378282246310005 throughput [18304888.612op/s; 18320838.096op/s] or [-0.044%; +0.044%] None None None
credit_card/is_card_number_no_luhn/37828224631000521389798 execution_time [52.015µs; 52.031µs] or [-0.015%; +0.015%] None None None
credit_card/is_card_number_no_luhn/37828224631000521389798 throughput [19219371.372op/s; 19225261.708op/s] or [-0.015%; +0.015%] None None None
credit_card/is_card_number_no_luhn/x371413321323331 execution_time [6.036µs; 6.039µs] or [-0.026%; +0.026%] None None None
credit_card/is_card_number_no_luhn/x371413321323331 throughput [165590887.559op/s; 165677628.524op/s] or [-0.026%; +0.026%] None None None

Group 11

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
profile_add_sample_frames_x1000 execution_time 4.191ms 4.198ms ± 0.008ms 4.197ms ± 0.001ms 4.199ms 4.201ms 4.203ms 4.301ms 2.49% 12.158 160.207 0.18% 0.001ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
profile_add_sample_frames_x1000 execution_time [4.197ms; 4.199ms] or [-0.026%; +0.026%] None None None

Group 12

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... execution_time 185.643µs 186.175µs ± 0.353µs 186.100µs ± 0.131µs 186.258µs 186.674µs 187.565µs 188.196µs 1.13% 2.787 9.858 0.19% 0.025µs 1 200
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... throughput 5313622.627op/s 5371317.309op/s ± 10118.000op/s 5373459.463op/s ± 3795.649op/s 5376932.007op/s 5381260.670op/s 5383095.202op/s 5386696.825op/s 0.25% -2.764 9.712 0.19% 715.451op/s 1 200
normalization/normalize_name/normalize_name/bad-name execution_time 17.788µs 17.887µs ± 0.041µs 17.882µs ± 0.023µs 17.909µs 17.956µs 17.999µs 18.075µs 1.08% 0.723 1.913 0.23% 0.003µs 1 200
normalization/normalize_name/normalize_name/bad-name throughput 55325111.743op/s 55907810.687op/s ± 129099.920op/s 55921170.897op/s ± 73410.855op/s 55991219.264op/s 56082134.701op/s 56175741.344op/s 56217196.793op/s 0.53% -0.700 1.842 0.23% 9128.743op/s 1 200
normalization/normalize_name/normalize_name/good execution_time 9.804µs 9.853µs ± 0.026µs 9.845µs ± 0.013µs 9.864µs 9.893µs 9.929µs 10.036µs 1.93% 2.424 12.277 0.26% 0.002µs 1 200
normalization/normalize_name/normalize_name/good throughput 99645277.908op/s 101495923.436op/s ± 263260.800op/s 101571081.503op/s ± 130638.942op/s 101669654.497op/s 101770400.480op/s 101842419.664op/s 102000366.274op/s 0.42% -2.360 11.692 0.26% 18615.350op/s 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... execution_time [186.126µs; 186.224µs] or [-0.026%; +0.026%] None None None
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... throughput [5369915.052op/s; 5372719.567op/s] or [-0.026%; +0.026%] None None None
normalization/normalize_name/normalize_name/bad-name execution_time [17.881µs; 17.892µs] or [-0.032%; +0.032%] None None None
normalization/normalize_name/normalize_name/bad-name throughput [55889918.680op/s; 55925702.695op/s] or [-0.032%; +0.032%] None None None
normalization/normalize_name/normalize_name/good execution_time [9.849µs; 9.856µs] or [-0.036%; +0.036%] None None None
normalization/normalize_name/normalize_name/good throughput [101459438.021op/s; 101532408.850op/s] or [-0.036%; +0.036%] None None None

Group 13

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
sdk_test_data/rules-based execution_time 144.428µs 146.333µs ± 1.646µs 146.091µs ± 0.578µs 146.714µs 147.955µs 153.436µs 161.894µs 10.82% 5.387 42.698 1.12% 0.116µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
sdk_test_data/rules-based execution_time [146.105µs; 146.562µs] or [-0.156%; +0.156%] None None None

Group 14

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
two way interface execution_time 18.026µs 25.158µs ± 9.058µs 18.324µs ± 0.185µs 32.282µs 40.460µs 48.274µs 72.656µs 296.50% 1.408 3.249 35.91% 0.640µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
two way interface execution_time [23.903µs; 26.414µs] or [-4.990%; +4.990%] None None None

Group 15

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 18eef42 1771860331 yannham/process-context-sharing
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
single_flag_killswitch/rules-based execution_time 187.334ns 190.139ns ± 2.377ns 189.435ns ± 1.450ns 191.387ns 194.760ns 196.957ns 200.724ns 5.96% 1.252 1.594 1.25% 0.168ns 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
single_flag_killswitch/rules-based execution_time [189.810ns; 190.469ns] or [-0.173%; +0.173%] None None None

Group 16

| cpu_model | git_commit_sha | git_commit_date | git_branch |
| --- | --- | --- | --- |
| Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz | 18eef42 | 1771860331 | yannham/process-context-sharing |

| scenario | metric | min | mean ± sd | median ± mad | p75 | p95 | p99 | max | peak_to_median_ratio | skewness | kurtosis | cv | sem | runs | sample_size |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tags/replace_trace_tags | execution_time | 2.297µs | 2.382µs ± 0.023µs | 2.387µs ± 0.007µs | 2.392µs | 2.406µs | 2.420µs | 2.422µs | 1.49% | -2.185 | 5.130 | 0.98% | 0.002µs | 1 | 200 |

| scenario | metric | 95% CI mean | Shapiro-Wilk pvalue | Ljung-Box pvalue (lag=1) | Dip test pvalue |
| --- | --- | --- | --- | --- | --- |
| tags/replace_trace_tags | execution_time | [2.379µs; 2.386µs] or [-0.136%; +0.136%] | None | None | None |

Group 17

| cpu_model | git_commit_sha | git_commit_date | git_branch |
| --- | --- | --- | --- |
| Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz | 18eef42 | 1771860331 | yannham/process-context-sharing |

| scenario | metric | min | mean ± sd | median ± mad | p75 | p95 | p99 | max | peak_to_median_ratio | skewness | kurtosis | cv | sem | runs | sample_size |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| receiver_entry_point/report/2597 | execution_time | 3.417ms | 3.442ms ± 0.012ms | 3.442ms ± 0.008ms | 3.449ms | 3.463ms | 3.478ms | 3.502ms | 1.75% | 0.867 | 2.272 | 0.36% | 0.001ms | 1 | 200 |

| scenario | metric | 95% CI mean | Shapiro-Wilk pvalue | Ljung-Box pvalue (lag=1) | Dip test pvalue |
| --- | --- | --- | --- | --- | --- |
| receiver_entry_point/report/2597 | execution_time | [3.441ms; 3.444ms] or [-0.050%; +0.050%] | None | None | None |

Group 18

| cpu_model | git_commit_sha | git_commit_date | git_branch |
| --- | --- | --- | --- |
| Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz | 18eef42 | 1771860331 | yannham/process-context-sharing |

| scenario | metric | min | mean ± sd | median ± mad | p75 | p95 | p99 | max | peak_to_median_ratio | skewness | kurtosis | cv | sem | runs | sample_size |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| write only interface | execution_time | 1.246µs | 3.313µs ± 1.442µs | 3.094µs ± 0.026µs | 3.126µs | 3.768µs | 14.010µs | 15.171µs | 390.32% | 7.297 | 54.636 | 43.41% | 0.102µs | 1 | 200 |

| scenario | metric | 95% CI mean | Shapiro-Wilk pvalue | Ljung-Box pvalue (lag=1) | Dip test pvalue |
| --- | --- | --- | --- | --- | --- |
| write only interface | execution_time | [3.113µs; 3.513µs] or [-6.031%; +6.031%] | None | None | None |

Group 19

| cpu_model | git_commit_sha | git_commit_date | git_branch |
| --- | --- | --- | --- |
| Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz | 18eef42 | 1771860331 | yannham/process-context-sharing |

| scenario | metric | min | mean ± sd | median ± mad | p75 | p95 | p99 | max | peak_to_median_ratio | skewness | kurtosis | cv | sem | runs | sample_size |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| benching serializing traces from their internal representation to msgpack | execution_time | 13.783ms | 13.819ms ± 0.029ms | 13.816ms ± 0.012ms | 13.825ms | 13.857ms | 13.944ms | 13.999ms | 1.33% | 3.158 | 13.265 | 0.21% | 0.002ms | 1 | 200 |

| scenario | metric | 95% CI mean | Shapiro-Wilk pvalue | Ljung-Box pvalue (lag=1) | Dip test pvalue |
| --- | --- | --- | --- | --- | --- |
| benching serializing traces from their internal representation to msgpack | execution_time | [13.815ms; 13.823ms] or [-0.029%; +0.029%] | None | None | None |

Baseline

Omitted due to size.

@codecov-commenter

codecov-commenter commented Feb 19, 2026

Codecov Report

❌ Patch coverage is 80.60606% with 32 lines in your changes missing coverage. Please review.
✅ Project coverage is 71.30%. Comparing base (bf953c0) to head (18eef42).
⚠️ Report is 7 commits behind head on main.

Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #1585      +/-   ##
==========================================
+ Coverage   71.22%   71.30%   +0.08%
==========================================
  Files         423      424       +1
  Lines       62130    62635     +505
==========================================
+ Hits        44253    44665     +412
- Misses      17877    17970      +93
```
| Component | Coverage | Δ |
| --- | --- | --- |
| libdd-crashtracker | 63.13% <ø> | (+0.44%) ⬆️ |
| libdd-crashtracker-ffi | 16.56% <ø> | (-0.81%) ⬇️ |
| libdd-alloc | 98.77% <ø> | (ø) |
| libdd-data-pipeline | 87.80% <ø> | (+0.88%) ⬆️ |
| libdd-data-pipeline-ffi | 75.24% <ø> | (+1.68%) ⬆️ |
| libdd-common | 79.73% <ø> | (-0.44%) ⬇️ |
| libdd-common-ffi | 73.40% <ø> | (ø) |
| libdd-telemetry | 62.48% <ø> | (-0.04%) ⬇️ |
| libdd-telemetry-ffi | 16.75% <ø> | (ø) |
| libdd-dogstatsd-client | 82.64% <ø> | (ø) |
| datadog-ipc | 80.74% <ø> | (+0.02%) ⬆️ |
| libdd-profiling | 81.56% <ø> | (+0.01%) ⬆️ |
| libdd-profiling-ffi | 63.65% <ø> | (ø) |
| datadog-sidecar | 34.17% <ø> | (+0.52%) ⬆️ |
| datdog-sidecar-ffi | 15.55% <ø> | (+2.29%) ⬆️ |
| spawn-worker | 54.69% <ø> | (ø) |
| libdd-tinybytes | 93.16% <ø> | (ø) |
| libdd-trace-normalization | 81.71% <ø> | (ø) |
| libdd-trace-obfuscation | 94.21% <ø> | (ø) |
| libdd-trace-protobuf | 68.00% <ø> | (ø) |
| libdd-trace-utils | 89.10% <ø> | (+0.01%) ⬆️ |
| datadog-tracer-flare | 88.95% <ø> | (-1.50%) ⬇️ |
| libdd-log | 74.69% <ø> | (ø) |

@yannham yannham force-pushed the yannham/process-context-sharing branch from 6598cc3 to 43208fe on February 19, 2026 17:29
```toml
rand = "0.8.3"
rmp = "0.8.14"
rmp-serde = "1.3.0"
rustix = { version = "1.1.3", features = ["param", "mm", "process"] }
```
Contributor

why rustix rather than the libc crate?

Author

IMHO it's higher-level and nicer to use than libc (for example, memfd_create returns a Result and an RAII file descriptor that is automatically closed on drop, while the libc version returns a raw c_int, etc.). I saw a bunch of occurrences in Cargo.lock already and assumed it was being pulled in anyway. But on closer inspection, the 1.1.3 major version bucket seems to be mostly used by tempfile, which is a dev dependency most of the time, so this may actually be pulling in some additional code.

There are a bunch of other dependencies that use the 0.38 version of rustix, so I could also downgrade to that one. But it's just a slight QoL improvement; happy to switch to libc if you think that's better for whatever reason.

Member

Might be worth a quick check on the size of the resulting builds (I thought we had a GitHub action that posted that, but I'm not seeing it?).

In particular, we don't have a specific target size that we need to stay under, but for various reasons we often have to ship variants, so 1 MiB extra does add up if we need to ship e.g. N architectures and M versions.


Having said that, I feel like I'd seen rustix before but hadn't paid a lot of attention to it.

Looking at https://crates.io/crates/rustix, it says it can work even without libc, and that's amazing! If we could drop libc as a dependency from libdatadog, that would be a super-unlock, since one situation where we end up needing to repeat builds is having to build for both glibc AND musl.

Contributor

If we could drop libc as a dependency from libdatadog, that would be a super-unlock, since one situation where we end up needing to repeat builds is having to build for both glibc AND musl.

That's not really possible, because std uses libc

Member

That's not really possible, because std uses libc

Yeah, I know. Rust is very disappointing in this :P

Add an explicit rustix dependency (which was already pulled as a transitive dependency). Prep work for process context sharing.
@yannham yannham force-pushed the yannham/process-context-sharing branch from ed9af88 to fedbeb2 on February 20, 2026 15:59
@yannham yannham force-pushed the yannham/process-context-sharing branch from fedbeb2 to 74d2641 on February 20, 2026 16:38
Member

@ivoanjo ivoanjo left a comment


Did a pass on it! Sorry on my part if there is a bit of confusion with older versions of the spec being implemented; I'll make sure to keep a close eye on the libdatadog impl so it doesn't fall behind while things are still sometimes moving in the spec.

Comment on lines +61 to +62
/// We use `signature` as a release notification for publication, and `published_at_ns` for
/// updates. Ideally, those should be two `AtomicU64`, but this isn't compatible with
Member

We discussed this in yesterday's OTel Profiling SIG meeting and I'll go ahead and simplify this soon so that published_at_ns is the notification for both creation and updates. (I'll drop a note on the channel when I do so, so we can update all the impls as needed)
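To make the simplified scheme concrete, here is a minimal writer-side sketch of what "published_at_ns as the single notification" could look like. This is a hypothetical illustration, not the PR's code: the `ContextHeader` layout and `publish` helper are invented for the example, and only the ordering matters (payload first, timestamp last, with SeqCst so the stores cannot be reordered from the reader's perspective).

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical layout for illustration: the real header also carries the
// signature, version, payload length, etc. (see the spec).
struct ContextHeader {
    published_at_ns: AtomicU64,
}

/// Writer-side publication: fill the payload first, then store the timestamp
/// with SeqCst as the single notification for both creation and updates.
/// The SeqCst store keeps the payload writes from being reordered past it,
/// even though the reader is a foreign process walking /proc/<pid>/maps.
fn publish(header: &ContextHeader, payload: &mut [u8], data: &[u8], now_ns: u64) {
    payload[..data.len()].copy_from_slice(data);
    header.published_at_ns.store(now_ns, Ordering::SeqCst);
}
```

A reader observing a non-zero `published_at_ns` can then assume the payload stores preceding it are visible, under the "reader as another thread" mental model discussed in the PR description.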

Comment on lines 410 to 418
/// Checks if a mapping line refers to the OTEL_CTX mapping by name
///
/// Handles both anonymous naming (`[anon:OTEL_CTX]`) and memfd naming
/// (`/memfd:OTEL_CTX` which may have ` (deleted)` suffix).
fn is_named_otel_mapping(line: &str) -> bool {
let trimmed = line.trim_end();
trimmed.ends_with("[anon:OTEL_CTX]")
|| trimmed.contains("/memfd:OTEL_CTX")
|| trimmed.contains("memfd:OTEL_CTX")
Member

A few notes:

  1. Wait, did you see any variant with memfd that did not start with /memfd?
  2. Rather than mixing ends_with and contains, I suggest always using starts_with
  3. There was a slight oversight on my part and I forgot to list a third variant here -- I added it to the spec recently; you can see [anon_shmem:OTEL_CTX] as well (the spec explains when that can happen)

Author

To be honest, I took this directly from Scott's prototype. I'll clean it up a bit following your remarks (we're looking at whole lines of /proc/self/maps, which is why this doesn't use starts_with: the name is the 6th column. But even for a test, I agree it should match the spec better and be a bit more robust).
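A column-aware version of the matcher could look like the sketch below. This is a hypothetical helper, not the PR's code: it exploits the fact that the pathname is the 6th whitespace-separated column of a `/proc/<pid>/maps` line, and matches the three variants discussed in this thread (`[anon:OTEL_CTX]`, `[anon_shmem:OTEL_CTX]`, and `/memfd:OTEL_CTX`, optionally with a ` (deleted)` suffix).

```rust
/// Sketch of a column-aware matcher for /proc/<pid>/maps lines (hypothetical
/// helper for illustration).
fn is_otel_ctx_mapping(line: &str) -> bool {
    // splitn(6, ..) splits off the first five fields (address, perms, offset,
    // dev, inode) and leaves the pathname intact even when it contains a
    // space, as in "/memfd:OTEL_CTX (deleted)".
    match line.splitn(6, char::is_whitespace).nth(5) {
        Some(pathname) => {
            let pathname = pathname.trim();
            pathname == "[anon:OTEL_CTX]"
                || pathname == "[anon_shmem:OTEL_CTX]"
                || pathname == "/memfd:OTEL_CTX"
                || pathname == "/memfd:OTEL_CTX (deleted)"
        }
        None => false,
    }
}
```

Exact comparison on the extracted column avoids false positives from `contains` (e.g. a file path that merely embeds `memfd:OTEL_CTX`), while still handling the padding spaces that precede the pathname column.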

Comment on lines 435 to 437
// The atomic alignment constraints are checked during publication.
let signature = unsafe { AtomicU64::from_ptr(ptr).load(Ordering::Acquire) };
&signature.to_ne_bytes() == super::SIGNATURE
Member

Should probably use SeqCst as well (to match my suggestion above)
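The reader-side check with that suggestion applied could look like the sketch below. The `SIGNATURE` constant and `has_signature` helper are hypothetical stand-ins (the real constant lives in the parent module of the snippet under review); the point is the `SeqCst` load pairing with the writer's `SeqCst` publication store.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical 8-byte signature for illustration; the real constant is
// defined elsewhere in the crate.
const SIGNATURE: &[u8; 8] = b"OTEL_CTX";

/// Reader-side signature check with SeqCst ordering, so it pairs with the
/// writer's SeqCst publication store.
///
/// # Safety
/// `ptr` must point to at least 8 readable bytes and be 8-byte aligned.
unsafe fn has_signature(ptr: *mut u64) -> bool {
    // `AtomicU64::from_ptr` (stable since Rust 1.75) reinterprets the mapped
    // word as an atomic without copying it.
    let signature = unsafe { AtomicU64::from_ptr(ptr) }.load(Ordering::SeqCst);
    &signature.to_ne_bytes() == SIGNATURE
}
```

Note the alignment requirement: as the snippet's comment says, it has to be validated at publication time, since `AtomicU64::from_ptr` is UB on a misaligned pointer.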

@dd-octo-sts

dd-octo-sts bot commented Feb 23, 2026

Artifact Size Benchmark Report

aarch64-alpine-linux-musl

| Artifact | Baseline | Commit | Change |
| --- | --- | --- | --- |
| /aarch64-alpine-linux-musl/lib/libdatadog_profiling.a | 97.94 MB | 97.95 MB | +0% (+1.44 KB) 👌 |
| /aarch64-alpine-linux-musl/lib/libdatadog_profiling.so | 9.01 MB | 9.01 MB | 0% (0 B) 👌 |

aarch64-unknown-linux-gnu

| Artifact | Baseline | Commit | Change |
| --- | --- | --- | --- |
| /aarch64-unknown-linux-gnu/lib/libdatadog_profiling.a | 113.60 MB | 113.61 MB | +0% (+4.05 KB) 👌 |
| /aarch64-unknown-linux-gnu/lib/libdatadog_profiling.so | 11.58 MB | 11.58 MB | +0% (+304 B) 👌 |

libdatadog-x64-windows

| Artifact | Baseline | Commit | Change |
| --- | --- | --- | --- |
| /libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.dll | 27.69 MB | 27.69 MB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.lib | 76.26 KB | 76.26 KB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.pdb | 186.14 MB | 186.15 MB | +0% (+8.00 KB) 👌 |
| /libdatadog-x64-windows/debug/static/datadog_profiling_ffi.lib | 918.80 MB | 918.80 MB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.dll | 10.31 MB | 10.31 MB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.lib | 76.26 KB | 76.26 KB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.pdb | 24.96 MB | 24.96 MB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/release/static/datadog_profiling_ffi.lib | 52.22 MB | 52.22 MB | 0% (0 B) 👌 |

libdatadog-x86-windows

| Artifact | Baseline | Commit | Change |
| --- | --- | --- | --- |
| /libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.dll | 23.45 MB | 23.45 MB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.lib | 77.44 KB | 77.44 KB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.pdb | 190.58 MB | 190.57 MB | -0% (-8.00 KB) 👌 |
| /libdatadog-x86-windows/debug/static/datadog_profiling_ffi.lib | 904.30 MB | 904.30 MB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.dll | 7.82 MB | 7.82 MB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.lib | 77.44 KB | 77.44 KB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.pdb | 26.73 MB | 26.73 MB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/release/static/datadog_profiling_ffi.lib | 47.76 MB | 47.76 MB | 0% (0 B) 👌 |

x86_64-alpine-linux-musl

| Artifact | Baseline | Commit | Change |
| --- | --- | --- | --- |
| /x86_64-alpine-linux-musl/lib/libdatadog_profiling.a | 86.07 MB | 86.07 MB | -0% (-680 B) 👌 |
| /x86_64-alpine-linux-musl/lib/libdatadog_profiling.so | 10.52 MB | 10.52 MB | 0% (0 B) 👌 |

x86_64-unknown-linux-gnu

| Artifact | Baseline | Commit | Change |
| --- | --- | --- | --- |
| /x86_64-unknown-linux-gnu/lib/libdatadog_profiling.a | 106.70 MB | 106.70 MB | +0% (+1.78 KB) 👌 |
| /x86_64-unknown-linux-gnu/lib/libdatadog_profiling.so | 12.27 MB | 12.27 MB | +0% (+8 B) 👌 |

