
Optimizer State and Master Weight Offloading #2811

Open

hxbai wants to merge 7 commits into NVIDIA:main from hxbai:opt_state_offload_main

Conversation

@hxbai
Contributor

@hxbai hxbai commented Jan 5, 2026

What does this PR do ?

dev branch: #2760

PR Description: Optimizer State Offloading for DistributedOptimizer

Summary

This PR introduces optimizer state offloading to CPU for the DistributedOptimizer, enabling significant GPU memory savings during training by temporarily moving optimizer states (exp_avg, exp_avg_sq) and master weights to CPU memory when not in use.

Motivation

During the forward and backward passes, optimizer states occupy GPU memory but are not actively used. For large models, these states can consume a substantial portion of GPU memory. By offloading optimizer states to CPU after optimizer.step() and reloading them before the next step, we can reclaim this GPU memory for other operations like activation checkpointing or larger batch sizes.
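
To make this lifecycle concrete, here is a minimal, hypothetical sketch of the cycle described above (illustrative only; the `offloader` object and its method names are placeholders, not the PR's actual API):

```python
import torch

def training_step(model, optimizer, offloader, batch):
    # Optimizer states were offloaded to CPU after the previous step, so the
    # forward/backward below runs with that GPU memory available.
    loss = model(batch).sum()   # placeholder loss computation
    loss.backward()

    # Bring exp_avg / exp_avg_sq / master weights back to the GPU, then run
    # the Adam step on the GPU as usual.
    offloader.reload_to_gpu()
    optimizer.step()
    optimizer.zero_grad()

    # Copy the states back to pinned CPU buffers and free the GPU copies,
    # reclaiming memory for the next forward/backward.
    offloader.offload_to_cpu()
```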

Performance & Memory Savings

Memory Savings:

  • On DeepSeek-V3, this feature saves 15-20GB of GPU memory (with 0.1-0.2s/iter overhead on GB200)
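
For rough intuition (a back-of-envelope estimate, not taken from the PR): with an Adam-style optimizer, each parameter owned by a rank's optimizer shard typically carries two fp32 state tensors plus an fp32 master weight, i.e. roughly 12 bytes that can sit on the CPU between steps.

```python
# Back-of-envelope estimate, assuming fp32 exp_avg + exp_avg_sq + master
# weight, i.e. ~12 bytes per parameter owned by this rank's optimizer shard.
def offloadable_gib(params_owned_by_rank: int, bytes_per_param: int = 12) -> float:
    return params_owned_by_rank * bytes_per_param / 2**30

# e.g. a rank owning ~1.5B sharded parameters could free roughly:
print(f"{offloadable_gib(1_500_000_000):.1f} GiB")  # ~16.8 GiB
```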

Comparison with Optimizer CPU Offloading:

| Aspect | CPU Offloading (ZeRO-Offload style) | State Offloading (This PR) |
|---|---|---|
| Where optimizer runs | CPU | GPU |
| D2H/H2D frequency | Every step (gradients + params) | Every step (states only) |
| Compute location | Adam step on CPU | Adam step on GPU |
| Best for | Memory-constrained, bandwidth-limited | High-bandwidth interconnects (NVLink, GB200) |

More details:

  • With higher H2D/D2H bandwidth (e.g., on GB200), state offloading has significantly less overhead
  • Transfers are asynchronous and can overlap with compute (see the sketch below)
  • The optimizer step still runs on the GPU, avoiding a CPU compute bottleneck
  • Pinned memory enables maximum PCIe/NVLink bandwidth utilization
  • Currently requires the TE FusedAdam optimizer
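
To illustrate the pinned-memory and async-overlap points above, here is a minimal sketch of the kind of copy pattern involved (my illustration under those assumptions; the PR's OptimizerStateOffloader may be structured differently):

```python
import torch

copy_stream = torch.cuda.Stream()  # dedicated stream for D2H/H2D copies

def offload_tensor(gpu_tensor: torch.Tensor) -> torch.Tensor:
    """Copy a GPU tensor into a pinned CPU buffer on the side stream."""
    cpu_buf = torch.empty(
        gpu_tensor.shape, dtype=gpu_tensor.dtype, device="cpu", pin_memory=True
    )
    copy_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(copy_stream):
        cpu_buf.copy_(gpu_tensor, non_blocking=True)
        # Tell the caching allocator the GPU tensor is still in use by the
        # copy stream, so its memory is not reused before the copy finishes.
        gpu_tensor.record_stream(copy_stream)
    return cpu_buf

def reload_tensor(cpu_buf: torch.Tensor, gpu_out: torch.Tensor) -> None:
    """Prefetch a pinned CPU buffer back into an existing GPU tensor."""
    with torch.cuda.stream(copy_stream):
        gpu_out.copy_(cpu_buf, non_blocking=True)
    # Work on the default stream must wait for the prefetch to complete.
    torch.cuda.current_stream().wait_stream(copy_stream)
```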

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @megatron-oncall.

Contribution process

```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message or tag @megatron-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@hxbai hxbai requested review from a team as code owners January 5, 2026 14:05
@copy-pr-bot

copy-pr-bot bot commented Jan 5, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@Phlip79
Member

Phlip79 commented Mar 4, 2026

We are changing our review process and marking all open, unlabeled PRs as draft. This change will go into effect once #3659 is merged.

Moving forward, all PRs will be required to start as draft PRs. If you wish to get your PR merged, mark your PR as “Ready for review”. Read more about the new process at submit.md.

@Phlip79 Phlip79 marked this pull request as draft March 4, 2026 22:25
@hxbai hxbai marked this pull request as ready for review March 17, 2026 00:06
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team March 17, 2026 00:06
@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review (PR is in the "final review" stage) and complexity: medium labels Mar 17, 2026
@@ -1274,13 +1217,18 @@ def validate_args(args, defaults={}):
"must be used in conjunction with `--fp8-recipe delayed`."
)

if args.offload_optimizer_states:
Contributor

Does this work with args.optimizer_cpu_offload? Can we or should we ever offload both?

Contributor Author

They cannot work together. optimizer_cpu_offload offloads more things, including states, grads, etc., and does the optimizer computation on the CPU, so there is no need to use them together. But CPU computation is slow; this feature only copies states to/from the GPU and still runs the computation on the GPU, which is much faster on GB200-like systems.

Contributor

Do we need to add an assert for this then?

Contributor Author

Added.
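
For reference, the kind of guard discussed in this thread could look roughly like the following inside validate_args (illustrative sketch; the flag spellings and the exact wording of the assert added in the PR may differ):

```python
# Hypothetical sketch of the mutual-exclusion check discussed above.
if args.offload_optimizer_states:
    assert not args.optimizer_cpu_offload, (
        "--offload-optimizer-states cannot be combined with "
        "--optimizer-cpu-offload, which already keeps optimizer states "
        "(and the Adam step itself) on the CPU."
    )
```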

Member

@Phlip79 Phlip79 left a comment

Can you please update the documentation at megatron/core/optimizer/cpu_offloading/README.md?

@@ -0,0 +1,315 @@
# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
Contributor

Wrong year.

Contributor Author

fixed


class OptimizerStateOffloader:
"""
Manages offloading of optimizer states and master weights to CPU.
Contributor

Can we not use the term "master weights" throughout? "Primary weights" or "fp32 weights" instead?

Contributor Author

Other places, including the TE flag and attributes, use "master weights", so switching terms here would be confusing unless we rename all the usages from TE through MCore.

@@ -0,0 +1,337 @@
# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
Contributor

Wrong year.

Contributor Author

fixed

@hxbai hxbai force-pushed the opt_state_offload_main branch from d126ff6 to 8533ec8 Compare April 14, 2026 02:14
@hxbai hxbai requested review from a team as code owners April 14, 2026 02:14
@svcnvidia-nemo-ci svcnvidia-nemo-ci removed the Final Review (PR is in the "final review" stage) label Apr 14, 2026