
[DeepSeek-V4] Implement model integration, decoders, and configuration stack#3867

Open
parambole wants to merge 1 commit into deepseek_v4_compressed_attention from dsv4_integration

Conversation

@parambole
Collaborator

@parambole parambole commented May 11, 2026

Description

Implement full model architecture, decoder integration layers, and execution configurations required for DeepSeek-V4 integration into MaxText:

  • deepseek_v4.py: Model architecture definition supporting cyclical layer stacking, multi-head hyper-connections (mHC), and engram memory management.
  • decoders.py & nnx_decoders.py: Integration of DeepSeekV4DecoderLayer, supporting get_attention_type routing and scanned vs unrolled compilation parity.
  • mhc.py & engram.py: Execution layers for additive hyper-connections and engram state projection.
  • Configuration: Register model configs (deepseek_v4-flash.yml, deepseek_v4-tiny.yml) and hyperparameter definitions in base.yml and types.py.
  • Unit test suite (tests/unit/deepseek_v4_vs_reference_test.py) validating end-to-end decoder block parity against PyTorch reference implementations at atol=1e-5, rtol=1e-5.
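The additive hyper-connection pattern referenced above can be sketched roughly as follows. This is an illustration only: the actual layer in mhc.py, its names, and its shapes are assumptions, and NumPy stands in for the real framework.

```python
import numpy as np

def additive_hyper_connections(streams, sublayer, read_weights):
    """Sketch of one additive hyper-connection update (names assumed).

    streams:      (n_streams, seq, d_model) parallel residual streams
    sublayer:     the wrapped decoder sublayer, applied once per update
    read_weights: (n_streams,) mixing weights for the sublayer input
    """
    # Read: collapse the parallel streams into a single layer input.
    mixed = np.einsum("s...d,s->...d", streams, read_weights)
    # Apply the sublayer once to the mixed input.
    out = sublayer(mixed)
    # Write: add the output back onto every stream (the "additive" part).
    return streams + out[None, ...]

# Toy usage with a scaling sublayer standing in for attention/MLP.
rng = np.random.default_rng(0)
streams = rng.normal(size=(4, 8, 16))   # 4 streams, seq 8, d_model 16
weights = np.full(4, 0.25)              # uniform read weights
updated = additive_hyper_connections(streams, lambda x: 0.1 * x, weights)
print(updated.shape)  # (4, 8, 16)
```

The key property of this family of connections is that the sublayer runs once on a mixed view of the streams while the update is broadcast back to all of them.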

Tests

Tested on CPU

pytest tests/unit/deepseek_v4_vs_reference_test.py

tests/unit/deepseek_v4_vs_reference_test.py ............                 [100%]
================= 12 passed, 318 warnings in 63.46s (0:01:03) ==================
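The parity checks in the suite reduce to an elementwise tolerance comparison against a reference output. A minimal sketch of that pattern, with a hypothetical helper name and stand-in arrays in place of real decoder outputs:

```python
import numpy as np

def assert_decoder_parity(out, ref, atol=1e-5, rtol=1e-5):
    """Compare a decoder block output against a reference
    implementation's output at the tolerances used by this PR."""
    np.testing.assert_allclose(np.asarray(out), np.asarray(ref),
                               atol=atol, rtol=rtol)

# Toy usage: a stand-in reference and an output within tolerance.
ref = np.ones((2, 4, 8), dtype=np.float32)
out = ref + np.float32(1e-6)  # deviation well inside atol=1e-5
assert_decoder_parity(out, ref)
print("parity check passed")
```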

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

@codecov

codecov Bot commented May 11, 2026

@parambole parambole force-pushed the deepseek_v4_compressed_attention branch from 5f54827 to 07eb3e2 Compare May 11, 2026 19:39
@parambole parambole force-pushed the deepseek_v4_compressed_attention branch from 07eb3e2 to 4520166 Compare May 11, 2026 20:43
@parambole parambole force-pushed the deepseek_v4_compressed_attention branch from 4520166 to 10ca4f6 Compare May 12, 2026 17:23
@parambole parambole force-pushed the dsv4_integration branch 3 times, most recently from a1e3133 to efc2768 Compare May 12, 2026 21:02
@parambole parambole force-pushed the deepseek_v4_compressed_attention branch from 10ca4f6 to 31a5932 Compare May 12, 2026 21:13
@parambole parambole force-pushed the deepseek_v4_compressed_attention branch from 31a5932 to c98a34e Compare May 14, 2026 17:53
…ation stack

Implement full model architecture, decoder integration layers, and execution configurations for DeepSeek-V4 integration into MaxText:

- deepseek_v4.py: Model architecture definition supporting cyclical layer stacking and hyper-connections.
- decoders.py & nnx_decoders.py: Integration of DeepSeekV4DecoderLayer, supporting get_attention_type routing and scanned vs unrolled compilation parity.
- mhc.py & engram.py: Integration of multi-head hyper-connections (mHC) and engram memory management.
- Configuration: Register model configs (deepseek_v4-flash.yml, deepseek_v4-tiny.yml) and hyperparameter definitions in base.yml and types.py.
- Parity verification: Comprehensive unit test suite (deepseek_v4_vs_reference_test.py) validating end-to-end decoder block parity against PyTorch reference implementations at atol=1e-5, rtol=1e-5.
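The get_attention_type routing mentioned above pairs naturally with cyclical layer stacking: the attention variant a decoder layer uses is a function of its index modulo a fixed period. The sketch below illustrates that idea only; the enum names, the cycle length, and its ordering are all assumptions, not the actual decoders.py API.

```python
from enum import Enum, auto

class AttentionType(Enum):
    """Hypothetical attention variants; real names in decoders.py may differ."""
    FULL = auto()
    COMPRESSED = auto()

def get_attention_type(layer_idx, cycle=(AttentionType.FULL,
                                         AttentionType.COMPRESSED,
                                         AttentionType.COMPRESSED)):
    # Cyclical layer stacking: the attention variant repeats with a
    # fixed period over network depth (period and order assumed here).
    return cycle[layer_idx % len(cycle)]

print([get_attention_type(i).name for i in range(6)])
# ['FULL', 'COMPRESSED', 'COMPRESSED', 'FULL', 'COMPRESSED', 'COMPRESSED']
```

Keeping the routing a pure function of the layer index is what lets scanned and unrolled compilation paths agree on which variant each layer gets.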
@parambole parambole changed the title DeepSeek V4 Integration [DeepSeek-V4] Implement model integration, decoders, and configuration stack May 14, 2026
@github-actions

🤖 Hi @parambole, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.

@github-actions

🤖 I'm sorry @parambole, but I was unable to process your request. Please see the logs for more details.



8 participants