Optimizer State and Master Weight Offloading #2811
Review thread on the diff at @@ -1274,13 +1217,18 @@ def validate_args(args, defaults={}):

        "must be used in conjunction with `--fp8-recipe delayed`."
    )

    if args.offload_optimizer_states:
Reviewer: Does this work with args.optimizer_cpu_offload? Can we or should we ever offload both?

Author: They cannot work together. optimizer_cpu_offload offloads more things, including states and grads, and runs the optimizer computation on the CPU, so there is no need to use them together. But CPU computation is slow; this feature only copies states to and from the GPU and keeps the computation on the GPU, which is much faster on GB200-like systems.

Reviewer: Do we need to add an assert for this then?
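A minimal sketch of the mutual-exclusion check being discussed, assuming the attribute names visible in the thread (offload_optimizer_states, optimizer_cpu_offload); the exact message and placement inside validate_args in the PR may differ.

```python
def validate_offload_flags(args):
    """Reject combining GPU-side state offloading with full CPU optimizer offloading."""
    if args.offload_optimizer_states:
        # offload_optimizer_states only copies states to/from CPU and keeps the
        # optimizer step on the GPU; optimizer_cpu_offload runs the step on the CPU,
        # so enabling both makes no sense.
        assert not args.optimizer_cpu_offload, (
            "offload_optimizer_states cannot be combined with optimizer_cpu_offload: "
            "the former keeps the optimizer step on the GPU and only offloads states, "
            "while the latter performs the optimizer computation on the CPU."
        )
```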
Phlip79 left a comment:

Can you please update the documentation at megatron/core/optimizer/cpu_offloading/README.md?
Review thread on the new offloader module (@@ -0,0 +1,315 @@):

    # Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.

    class OptimizerStateOffloader:
        """
        Manages offloading of optimizer states and master weights to CPU.

Reviewer: Can we not use the term "master weights" throughout? "Primary weights" or "fp32 weights" instead?

Author: Other places, including the TE flag and attributes, use "master weights". It would be confusing unless we rename all of those use cases from TE through MCore.
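To make the mechanism concrete, here is an illustrative sketch of what such an offloader can look like. Only the class name comes from the diff; the constructor, method names (offload, reload, synchronize), pinned buffers, and stream handling below are assumptions for illustration, not the PR's actual implementation.

```python
import torch

class OptimizerStateOffloader:
    """Copies optimizer state tensors to pinned CPU buffers after step()
    and restores them before the next step; the optimizer math stays on the GPU."""

    def __init__(self, tensors):
        self._gpu_tensors = tensors
        # Pinned host buffers allow asynchronous (non_blocking) H2D/D2H copies.
        self._cpu_buffers = [
            torch.empty_like(t, device="cpu", pin_memory=True) for t in tensors
        ]
        self._stream = torch.cuda.Stream()

    def offload(self):
        # D2H copies on a side stream; the GPU storage can then be reclaimed
        # for the forward/backward passes.
        with torch.cuda.stream(self._stream):
            for gpu_t, cpu_t in zip(self._gpu_tensors, self._cpu_buffers):
                cpu_t.copy_(gpu_t, non_blocking=True)

    def reload(self):
        # H2D copies bringing the states back before the next optimizer step.
        with torch.cuda.stream(self._stream):
            for gpu_t, cpu_t in zip(self._gpu_tensors, self._cpu_buffers):
                gpu_t.copy_(cpu_t, non_blocking=True)

    def synchronize(self):
        # Make the default stream wait for the copy stream before using the states.
        torch.cuda.current_stream().wait_stream(self._stream)
```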
What does this PR do?
dev branch: #2760
PR Description: Optimizer State Offloading for DistributedOptimizer
Summary
This PR introduces optimizer state offloading to CPU for the DistributedOptimizer, enabling significant GPU memory savings during training by temporarily moving optimizer states (exp_avg, exp_avg_sq) and master weights to CPU memory when not in use.

Motivation
During the forward and backward passes, optimizer states occupy GPU memory but are not actively used. For large models, these states can consume a substantial portion of GPU memory. By offloading optimizer states to CPU after optimizer.step() and reloading them before the next step, we can reclaim this GPU memory for other operations like activation checkpointing or larger batch sizes.
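As a rough illustration of that cycle (not the PR's API), the sketch below moves a standard torch.optim.Adam state dict between CPU and GPU around each step; the actual feature wires this into the DistributedOptimizer with pinned buffers and asynchronous copies.

```python
import torch

def move_optimizer_states(optimizer, device):
    # Move every tensor held in the optimizer's state (exp_avg, exp_avg_sq, ...)
    # to the requested device. Illustrative helper, not part of the PR.
    for state in optimizer.state.values():
        for key, value in state.items():
            if torch.is_tensor(value):
                state[key] = value.to(device, non_blocking=True)

model = torch.nn.Linear(1024, 1024, device="cuda")
optimizer = torch.optim.Adam(model.parameters())

for _ in range(3):
    loss = model(torch.randn(8, 1024, device="cuda")).sum()  # forward: states not needed on GPU
    loss.backward()                                           # backward: states can stay on CPU

    move_optimizer_states(optimizer, "cuda")  # reload states before the step
    optimizer.step()                          # optimizer math still runs on the GPU
    optimizer.zero_grad()
    move_optimizer_states(optimizer, "cpu")   # offload states to reclaim GPU memory
```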
Performance & Memory Savings

Memory Savings:
Comparison with Optimizer CPU Offloading:
More details: