
[DeepSeek-V4] Implement Compressed Attention Layers#3866

Open
parambole wants to merge 1 commit into dsv4-moe-routing-primitives from deepseek_v4_compressed_attention

Conversation


@parambole parambole commented May 11, 2026

Description

Implement compressed attention mechanisms and indexer modules required for DeepSeek-V4 integration into MaxText:

  • CSACompressor & HCACompressor: Long-range attention compressors supporting causal block bias and YaRN frequency scaling decoupling (a rough sketch of the block-compression idea appears after this list).
  • LightningIndexer: Memory-efficient indexer module implementing sentinel masking and dynamic RoPE scaling.
  • Configuration: Register attention compression hyperparameters (compress_ratios, index_head_dim, sliding_window) in types.py and base.yml.
  • Unit test suite (tests/unit/deepseek_v4_vs_reference_test.py) validating attention compression parity against PyTorch reference implementations at atol=1e-5, rtol=1e-5.
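
The compressor implementation itself lives in src/maxtext/layers/attention_compressed.py and is not reproduced in this description. Purely as an illustration of the block-compression idea, here is a minimal JAX sketch of mean-pooled KV compression combined with a causal block bias. Every function name, the mean-pooling choice, and the shapes are assumptions made for this sketch, not the PR's code.

```python
# Hypothetical sketch only -- not the PR's CSACompressor/HCACompressor.
import jax
import jax.numpy as jnp


def compress_kv(x, compress_ratio):
  """Mean-pool keys or values over non-overlapping blocks of compress_ratio tokens."""
  batch, seq, heads, dim = x.shape
  num_blocks = seq // compress_ratio  # trailing remainder tokens are dropped in this sketch
  blocks = x[:, : num_blocks * compress_ratio].reshape(
      batch, num_blocks, compress_ratio, heads, dim)
  return blocks.mean(axis=2)  # (batch, num_blocks, heads, dim)


def causal_block_bias(q_len, num_blocks, compress_ratio):
  """Mask out compressed blocks that end strictly after the query position."""
  q_pos = jnp.arange(q_len)[:, None]                             # (q_len, 1)
  block_end = (jnp.arange(num_blocks) + 1) * compress_ratio - 1  # (num_blocks,)
  return jnp.where(block_end[None, :] <= q_pos, 0.0, -1e9)


def compressed_attention(q, k, v, compress_ratio=4):
  """Queries attend over block-compressed keys/values under a causal block bias."""
  k_c = compress_kv(k, compress_ratio)
  v_c = compress_kv(v, compress_ratio)
  logits = jnp.einsum("bqhd,bkhd->bhqk", q, k_c) / jnp.sqrt(q.shape[-1])
  logits = logits + causal_block_bias(q.shape[1], k_c.shape[1], compress_ratio)
  weights = jax.nn.softmax(logits, axis=-1)
  return jnp.einsum("bhqk,bkhd->bqhd", weights, v_c)
```

Here mean pooling stands in for whatever learned compression the actual compressors apply; the point is only how a block-level causal bias keeps each query from attending to compressed blocks that end in its future, shrinking the attended sequence by roughly the compression ratio.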

Tests

Tested on CPU

pytest tests/unit/deepseek_v4_vs_reference_test.py

tests/unit/deepseek_v4_vs_reference_test.py ..........                   [100%]
======================= 10 passed, 10 warnings in 20.42s =======================
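
For context on the parity-testing pattern referenced above, the following is a minimal sketch of a JAX-vs-PyTorch comparison at the same atol=1e-5 / rtol=1e-5 tolerances. The two attention functions are deliberately trivial stand-ins, not the compressed-attention modules the real suite exercises.

```python
# Hypothetical sketch only -- not tests/unit/deepseek_v4_vs_reference_test.py.
import numpy as np
import jax
import jax.numpy as jnp
import torch


def jax_attention(q, k, v):
  logits = jnp.einsum("bqd,bkd->bqk", q, k) / jnp.sqrt(q.shape[-1])
  return jnp.einsum("bqk,bkd->bqd", jax.nn.softmax(logits, axis=-1), v)


def torch_reference_attention(q, k, v):
  logits = torch.einsum("bqd,bkd->bqk", q, k) / (q.shape[-1] ** 0.5)
  return torch.einsum("bqk,bkd->bqd", torch.softmax(logits, dim=-1), v)


def test_attention_parity():
  rng = np.random.default_rng(0)
  q, k, v = (rng.standard_normal((2, 8, 16)).astype(np.float32) for _ in range(3))
  out_jax = np.asarray(jax_attention(jnp.asarray(q), jnp.asarray(k), jnp.asarray(v)))
  out_ref = torch_reference_attention(*(torch.from_numpy(x) for x in (q, k, v))).numpy()
  # Same tolerances as quoted in the PR description.
  np.testing.assert_allclose(out_jax, out_ref, atol=1e-5, rtol=1e-5)
```

The real test presumably swaps in the JAX compressed-attention modules and their PyTorch reference counterparts, but the compare-through-NumPy pattern and tolerances are the same.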

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.


codecov Bot commented May 11, 2026

Codecov Report

❌ Patch coverage is 7.63052% with 230 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/maxtext/layers/attention_compressed.py | 7.63% | 230 Missing ⚠️ |


@parambole parambole force-pushed the deepseek_v4_compressed_attention branch from 5f54827 to 07eb3e2 on May 11, 2026 19:39
@parambole parambole changed the base branch from deepseek_v4_core_primitives to dsv4-moe-routing-primitives on May 11, 2026 20:29
@parambole parambole force-pushed the dsv4-moe-routing-primitives branch from 37ee811 to 31329c5 on May 11, 2026 20:38
@parambole parambole force-pushed the deepseek_v4_compressed_attention branch from 07eb3e2 to 4520166 on May 11, 2026 20:43
@parambole parambole force-pushed the dsv4-moe-routing-primitives branch from 31329c5 to 22a57ff on May 12, 2026 17:23
@parambole parambole force-pushed the deepseek_v4_compressed_attention branch from 4520166 to 10ca4f6 on May 12, 2026 17:23
@parambole parambole force-pushed the dsv4-moe-routing-primitives branch from 22a57ff to 32869e5 on May 12, 2026 21:12
@parambole parambole force-pushed the deepseek_v4_compressed_attention branch from 10ca4f6 to 31a5932 on May 12, 2026 21:13
@parambole parambole force-pushed the dsv4-moe-routing-primitives branch from 32869e5 to c92f2e0 on May 14, 2026 17:51
…ghtningIndexer)

Implement compressed attention mechanisms and indexer modules for DeepSeek-V4 integration into MaxText:

- CSACompressor & HCACompressor: Long-range attention compressors supporting causal block bias and YaRN frequency scaling decoupling.
- LightningIndexer: Memory-efficient indexer module implementing sentinel masking and dynamic RoPE scaling.
- Configuration: Register attention compression hyperparameters (compress_ratios, index_head_dim, sliding_window) in types.py and base.yml.
- Parity verification: Extended unit test suite (deepseek_v4_vs_reference_test.py) validating attention compression parity against PyTorch reference implementations at atol=1e-5, rtol=1e-5.
@parambole parambole force-pushed the deepseek_v4_compressed_attention branch from 31a5932 to c98a34e on May 14, 2026 17:53
@parambole parambole changed the title from "Implement DeepSeek-V4 Compressed Attention Layers" to "[DeepSeek-V4] Implement Compressed Attention Layers" on May 14, 2026
@github-actions

🤖 Hi @parambole, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.

