[Feature] Offload optimizer states to CPU to reduce NPU memory with minimal performance impact #1524
Open
tina-wen wants to merge 1 commit into InternLM:main
Conversation
Force-pushed from 0242b59 to b8035c2
HAOCHENYE reviewed Mar 4, 2026
    self.optimizer = optimizer
    self.swap_optimizer_times = swap_optimizer_times
    if SwapOptimizerOperate.swap_to_device_stream is None:
        SwapOptimizerOperate.swap_to_device_stream = torch.npu.Stream()
Collaborator

Please use get_torch_device_module() to obtain the device module instead of hard-coding torch.npu.
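A minimal sketch of the requested change, reusing SwapOptimizerOperate from the hunk above. The import path for get_torch_device_module is an assumption, not taken from the diff:

```python
# Sketch only: the import path below is assumed, not confirmed by the PR.
from xtuner.utils import get_torch_device_module  # assumed location

# Resolves to torch.npu on Ascend and torch.cuda on NVIDIA, so the same
# code runs on both backends instead of hard-coding torch.npu.
DEVICE_MODULE = get_torch_device_module()

if SwapOptimizerOperate.swap_to_device_stream is None:
    SwapOptimizerOperate.swap_to_device_stream = DEVICE_MODULE.Stream()
```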
    self.optimizer.swap_numel = swap_numel

    swap_memory = swap_num * 8 / 1024 / 1024
    print('[Rank {}] swap optimizer param num: {}, param size: {}MB\n'.format(torch.npu.current_device(), swap_num, swap_memory), end='')
Collaborator

Please use the logger defined in xtuner instead of print.
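For illustration, the print could be routed through xtuner's logger as sketched below, with swap_num taken from the hunk above. The accessor path (xtuner._lite.get_logger) is an assumption about the codebase; substitute the project's actual helper:

```python
# Sketch only: the logger accessor path is assumed, not from the PR.
import torch
from xtuner._lite import get_logger  # assumed import path

logger = get_logger()

# Two fp32 states per parameter (exp_avg, exp_avg_sq) -> 8 bytes/param.
swap_memory = swap_num * 8 / 1024 / 1024
logger.info(f'[Rank {torch.npu.current_device()}] '
            f'swap optimizer param num: {swap_num}, param size: {swap_memory:.1f}MB')
```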
    cls.swap_to_host_events_map[param] = None

    @classmethod
    def swap_all_to_device(cls):
Collaborator

Should swap_to_device_stream wait for the main CUDA stream, to avoid the memory peak caused by backward computation overlapping with swap_all_to_device?
Collaborator

It looks like swap_to_device_stream is never used here?
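A sketch of the synchronization both comments point at, with illustrative names (none of this is taken verbatim from the PR): the swap-in stream first waits on the main compute stream and then issues the copies, so prefetch cannot stack on top of backward's peak allocations.

```python
import torch

DEVICE_MODULE = torch.cuda  # would be torch.npu on Ascend

swap_to_device_stream = DEVICE_MODULE.Stream()

def swap_all_to_device(host_states, device):
    # Block the side stream until work already queued on the main stream
    # (e.g. backward) has finished, then launch H2D copies on it.
    swap_to_device_stream.wait_stream(DEVICE_MODULE.current_stream())
    with DEVICE_MODULE.stream(swap_to_device_stream):
        for key, tensor in host_states.items():
            # non_blocking=True only overlaps if the host tensor is pinned.
            host_states[key] = tensor.to(device, non_blocking=True)
```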
    cls.swap_to_device_events_map[param] = torch.npu.current_stream().record_event()

    @classmethod
    def wait_swap_to_device_event(cls, param):
    [group['step']], amsgrad=amsgrad, lr=group['lr'], beta1=beta1, beta2=beta2, weight_decay=group['weight_decay'],
    eps=group['eps'], maximize=group['maximize'])

    # it maybe removed
Collaborator

Why is swap_all_to_host not called here?
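For reference, a swap-back right after the update might look like the sketch below (illustrative names, not the PR's implementation): updated states are copied into pre-allocated pinned host buffers on a dedicated D2H stream.

```python
import torch

def swap_all_to_host(device_states, pinned_host_states, d2h_stream):
    # Let the D2H stream wait for the optimizer update queued on the main
    # stream, then copy each state tensor back to its pinned host buffer.
    d2h_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(d2h_stream):
        for key, dev_tensor in device_states.items():
            pinned_host_states[key].copy_(dev_tensor, non_blocking=True)
```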
Description
This PR adds CPU offloading for optimizer states to reduce NPU memory usage. Optimizer states reside in host memory and are transferred to the device only during optimizer.step(), via H2D/D2H copies.
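As a rough mental model of the mechanism described above (a sketch under assumed names, not the PR's code), each step brings the states onto the device, applies the Adam-style update, and ships the results back to host memory:

```python
import torch

def offloaded_adam_step(param, host_exp_avg, host_exp_avg_sq,
                        lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
    """Illustrative only: Adam-like update with CPU-resident states."""
    device = param.device
    # H2D: states live on the host and visit the device only for the update.
    exp_avg = host_exp_avg.to(device, non_blocking=True)
    exp_avg_sq = host_exp_avg_sq.to(device, non_blocking=True)

    grad = param.grad
    exp_avg.mul_(betas[0]).add_(grad, alpha=1 - betas[0])
    exp_avg_sq.mul_(betas[1]).addcmul_(grad, grad, value=1 - betas[1])
    param.data.addcdiv_(exp_avg, exp_avg_sq.sqrt().add_(eps), value=-lr)

    # D2H: write the updated states back to (ideally pinned) host buffers.
    host_exp_avg.copy_(exp_avg, non_blocking=True)
    host_exp_avg_sq.copy_(exp_avg_sq, non_blocking=True)
```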
Changes
Testing
Verified with: