@Rohan-Bierneni (Collaborator) commented Jan 22, 2026

Description

PR Title: Enable TFLOPs Calculation for Qwen3-Next

This PR implements the TFLOPs calculation logic for the Qwen3-Next architecture. Qwen3-Next utilizes a hybrid design containing both standard full attention layers and linear attention (Gated Delta Net) layers, alongside a Mixture-of-Experts (MoE) structure. This update ensures training TFLOPs are accurately reported for this model family.
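To make the hybrid split concrete, the layer accounting can be sketched as below. The helper name and the one-full-attention-layer-per-cycle assumption are illustrative only; in MaxText the actual pattern is governed by the `inhomogeneous_layer_cycle_interval` config value.

```python
def count_hybrid_layers(num_layers: int, cycle_interval: int) -> tuple[int, int]:
    """Split a hybrid decoder stack into full-attention and linear-attention layers.

    Assumes one full-attention layer per cycle of `cycle_interval` layers, with
    the remaining layers using linear attention (Gated Delta Net). This is an
    illustrative sketch, not the MaxText implementation.
    """
    num_full_attention = num_layers // cycle_interval
    num_linear_attention = num_layers - num_full_attention
    return num_full_attention, num_linear_attention


# e.g. a 48-layer stack with a cycle interval of 4 splits into
# 12 full-attention layers and 36 linear-attention layers.
print(count_hybrid_layers(48, 4))  # -> (12, 36)
```

Per-layer FLOPs for each of the two attention types are then weighted by these counts when summing total training FLOPs.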

Key Changes

1. TFLOPs Calculation Logic (src/MaxText/maxtext_utils.py)

  • Added calculate_gated_delta_net_flops_per_device: Implemented logic to calculate FLOPs for the Linear Attention layers, breaking down operations into:
    • Projections: QKVZ, BA, and Output projections.
    • Convolution: Depthwise convolutions on the key/value/gate states.
    • Core Attention: Intra-chunk and inter-chunk recurrent state operations.
  • Updated calculate_tflops_training_per_device:
    • Added a specific branch for DecoderBlockType.QWEN3_NEXT.
    • Logic now calculates the number of "full attention" layers vs. "linear attention" layers based on the inhomogeneous_layer_cycle_interval.
    • Combines FLOPs from embeddings, MoE FFNs (routed + shared), full causal attention, and linear attention.
  • FFN Helpers: Verified that calculate_routed_and_shared_ffn_tflops_per_device and get_dense_moe_layers explicitly support QWEN3_NEXT.
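The projection portion of such a breakdown can be sketched as follows. All function and parameter names, the head-dimension layout, and the 2-FLOPs-per-multiply-accumulate convention are illustrative assumptions, not the MaxText code.

```python
def gated_delta_net_projection_flops(
    seq_len: int,
    hidden: int,
    num_heads: int,
    head_k_dim: int,
    head_v_dim: int,
) -> int:
    """Illustrative FLOPs for the projections of one Gated Delta Net layer.

    Counts 2 FLOPs per multiply-accumulate; omits the depthwise convolution
    and the intra/inter-chunk recurrent core, which are counted separately.
    """
    # QKVZ projection: per head, q and k have head_k_dim features while
    # v and z (the gate) have head_v_dim features.
    qkvz = 2 * seq_len * hidden * num_heads * 2 * (head_k_dim + head_v_dim)
    # BA projection: two small per-head scalars (beta and the decay a) per token.
    ba = 2 * seq_len * hidden * 2 * num_heads
    # Output projection: concatenated value heads mapped back to the hidden dim.
    out = 2 * seq_len * (num_heads * head_v_dim) * hidden
    return qkvz + ba + out
```

The convolution and core-attention terms would be added on top of this, following the same 2-FLOPs-per-MAC convention.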

2. Unit Tests (tests/unit/flop_calculation_test.py)

  • Added test_qwen3_next_flops: A new unit test that verifies the implementation against a "golden" manual calculation for the 80B model configuration.
  • Added compute_qwen3_next_attention_flops_per_device: A helper function to compute the expected attention-specific FLOPs for the test assertion.
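The golden-value pattern such a test relies on can be sketched like this; the helper name and tolerance are illustrative, not the actual test code.

```python
import math


def assert_close_to_golden(
    computed_tflops: float, golden_tflops: float, rel_tol: float = 1e-3
) -> None:
    """Compare a computed per-device TFLOPs value against a manually derived
    golden value, failing with a readable message on mismatch."""
    assert math.isclose(computed_tflops, golden_tflops, rel_tol=rel_tol), (
        f"TFLOPs mismatch: computed {computed_tflops:.4f} "
        f"vs golden {golden_tflops:.4f}"
    )


# Usage: compare the library's result against a hand calculation.
assert_close_to_golden(123.456, 123.444)  # within 0.1% -> passes silently
```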

3. Config Updates (src/MaxText/configs/models/qwen3-next-80b-a3b.yml)

  • Added shared_experts: 1 to the model configuration to ensure correct parameter counting in the FFN FLOPs calculation.
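For illustration, routed-plus-shared FFN FLOPs per token can be counted as below, assuming a gated FFN (gate, up, and down projections) and 2 FLOPs per multiply-accumulate; all names here are assumptions rather than the MaxText helper's signature.

```python
def moe_ffn_flops_per_token(
    hidden: int,
    moe_intermediate: int,
    experts_per_token: int,
    shared_experts: int,
) -> int:
    """Illustrative per-token FFN FLOPs for an MoE block with shared experts.

    Each active expert runs a gated FFN, i.e. 3 matmuls of size
    hidden x moe_intermediate, at 2 FLOPs per multiply-accumulate.
    """
    per_expert = 3 * 2 * hidden * moe_intermediate
    return (experts_per_token + shared_experts) * per_expert


# With shared_experts=1, one shared expert is always active in addition
# to the routed experts selected per token.
print(moe_ffn_flops_per_token(hidden=8, moe_intermediate=16,
                              experts_per_token=2, shared_experts=1))  # -> 2304
```

Omitting `shared_experts` from the config would undercount every MoE layer by one expert's worth of FLOPs, which is why the config update matters.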

FIXES: b/477291633

Tests

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

codecov bot commented Jan 22, 2026

Codecov Report

❌ Patch coverage is 94.59459% with 2 lines in your changes missing coverage. Please review.

Files with missing lines           | Patch % | Lines
src/maxtext/utils/maxtext_utils.py | 94.59%  | 1 Missing and 1 partial ⚠️


@Rohan-Bierneni Rohan-Bierneni self-assigned this Jan 22, 2026
@RissyRan (Collaborator) left a comment


Thanks for the update!

We have some unit tests for those tflops, if you could come up with a test, that would be great, but not mandatory for this PR. Also, could you also have some sanity check with other MoE model (no change impact)? Similar to #2979

@Rohan-Bierneni (Collaborator, Author) replied:

Thanks for the update!

We have some unit tests for those tflops, if you could come up with a test, that would be great, but not mandatory for this PR. Also, could you also have some sanity check with other MoE model (no change impact)? Similar to #2979

I have added a unit test in flop_calculation_test.py. I will also run the sanity checks and update the PR description with the results.


@parambole (Collaborator) left a comment


Left a comment regarding the attention calculation. PTAL.


@parambole (Collaborator) left a comment


LGTM. Thank you!


@RissyRan (Collaborator) left a comment


Thanks for the change! I sent a quick sync-up, hopefully this could iterate faster.

@github-actions bot commented:

🤖 Hi @Rohan-Bierneni, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.

@github-actions bot left a comment


📋 Review Summary

This Pull Request introduces TFLOPs calculation for the Qwen3-Next model architecture, which includes Gated Delta Net (Linear Attention) layers and a Mixture-of-Experts (MoE) structure. The implementation correctly integrates the new calculation logic and includes a comprehensive unit test to verify its accuracy.

🔍 General Feedback

  • The TFLOPs calculation logic for Gated Delta Net appears well-detailed and systematically broken down.
  • The unit test test_qwen3_next_flops provides good coverage by manually calculating expected FLOPs and comparing them against the implemented function.

@Rohan-Bierneni force-pushed the rbierneni-qwen3-next-tflops branch 5 times, most recently from 6192638 to 8d977db on January 29, 2026 17:22
Add flop test for qwen3 next and resolve pr comments

Typo in model config fixed

Update flops_intra to account for inner_attn loop

Update testcase with new flops_intra calculation

Change if statement to less ambiguous

Add comments on QKVZ tensor shapes

Typo in comment

fix linter issues

files formatted

@Rohan-Bierneni force-pushed the rbierneni-qwen3-next-tflops branch from 8d977db to 3fcbcc7 on January 30, 2026 18:45
@Rohan-Bierneni (Collaborator, Author) commented:

Manually adding the pull_ready tag since all tests passed and 2 approvals were received.

copybara-service bot pushed a commit that referenced this pull request Jan 30, 2026
FUTURE_COPYBARA_INTEGRATE_REVIEW=#2999 from AI-Hypercomputer:rbierneni-qwen3-next-tflops 3fcbcc7
PiperOrigin-RevId: 863413100
copybara-service bot pushed a commit that referenced this pull request Jan 31, 2026
FUTURE_COPYBARA_INTEGRATE_REVIEW=#2999 from AI-Hypercomputer:rbierneni-qwen3-next-tflops 3fcbcc7
PiperOrigin-RevId: 863423004
