
feat: Support aux loss normalization in RL SFT #2194

Draft
pthombre wants to merge 1 commit into main from pranav/moe_loss_normalization

Conversation


@pthombre pthombre commented Apr 2, 2026

What does this PR do?

Remove the MoE aux loss assertion that blocked aux_loss usage with calculate_per_token_loss=True. Add moe_grad_scale_func to properly normalize MoE auxiliary-loss gradients: it sets the scale to 1/global_valid_toks before the forward-backward pass and clears it afterward, so that after the DDP SUM the aux-loss gradient is correctly averaged.
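As a rough illustration of the normalization pattern described above, the sketch below assumes the scaling is applied through Megatron-core's MoEAuxLossAutoScaler; the actual moe_grad_scale_func added in this PR may be wired differently, and the function and variable names here (run_step_with_aux_loss_normalization, forward_backward_func, global_valid_toks) are illustrative only.

```python
# Illustrative sketch only -- not the actual implementation in this PR.
# MoEAuxLossAutoScaler is Megatron-core's hook for scaling the gradient that
# flows from the MoE auxiliary loss; the import path and usage here are
# assumptions, and global_valid_toks is the global count of valid tokens.
import torch
from megatron.core.transformer.moe.moe_utils import MoEAuxLossAutoScaler


def run_step_with_aux_loss_normalization(forward_backward_func, global_valid_toks: torch.Tensor):
    # Scale the aux-loss gradient by 1 / global_valid_toks so that, after the
    # DDP SUM across ranks, its contribution is a per-token average.
    MoEAuxLossAutoScaler.set_loss_scale(1.0 / global_valid_toks)
    try:
        losses = forward_backward_func()
    finally:
        # Clear the scale afterwards so subsequent steps start from a neutral scale.
        MoEAuxLossAutoScaler.set_loss_scale(torch.ones_like(global_valid_toks))
    return losses
```

The try/finally mirrors the "set before forward-backward, clear after" behavior described above.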

Also adds an sft_nanov3.yaml config for nano-v3 SFT training with MoE seq_aux_loss enabled.

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below; a hedged sketch is provided after this list.
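A hypothetical usage sketch follows. The config path, the policy.megatron_cfg nesting, and the individual keys (calculate_per_token_loss, moe_router_load_balancing_type, moe_aux_loss_coeff) are assumptions based on common Megatron/NeMo-RL conventions, not taken from this PR; adjust them to match the actual sft_nanov3.yaml.

```python
# Hypothetical usage sketch -- file path and key names are assumptions,
# not taken from this PR.
from omegaconf import OmegaConf

# Load the new config added by this PR (assumed location).
cfg = OmegaConf.load("examples/configs/sft_nanov3.yaml")

# With the assertion removed, the MoE aux loss and per-token loss can be
# enabled together; the aux-loss gradient is then normalized by the global
# number of valid tokens as described above.
overrides = OmegaConf.create({
    "policy": {
        "megatron_cfg": {
            "calculate_per_token_loss": True,                  # assumed key
            "moe_router_load_balancing_type": "seq_aux_loss",  # assumed key
            "moe_aux_loss_coeff": 1e-2,                        # assumed key
        }
    }
})
cfg = OmegaConf.merge(cfg, overrides)
print(OmegaConf.to_yaml(cfg.policy.megatron_cfg))
```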

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...


copy-pr-bot bot commented Apr 2, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Remove the MoE aux loss assertion that blocked aux_loss usage with
calculate_per_token_loss=True. Add moe_grad_scale_func to properly
normalize MoE auxiliary loss gradients: sets scale to 1/global_valid_toks
before forward-backward and clears it after, so that after DDP SUM the
aux loss gradient is correctly averaged.

Also adds sft_nanov3.yaml config for nano-v3 SFT training with MoE
seq_aux_loss enabled.

Signed-off-by: Pranav Prashant Thombre <pthombre@nvidia.com>
pthombre force-pushed the pranav/moe_loss_normalization branch from e1dcc07 to 136fa39 on April 2, 2026 at 20:47
