
feat(optimizer): add FlashAdamW optimizer integration #4229

Open
meinie0826 wants to merge 1 commit into NVIDIA:main from meinie0826:feature/flashadamw-integration

Conversation

@meinie0826

What does this PR do?

Integrates FlashAdamW from the flashoptim library (>= 0.1.3) into Megatron-Core's optimizer infrastructure.

FlashAdamW reduces optimizer memory from 16 bytes/param (BF16 model + fp32 master weight + fp32 exp_avg + fp32 exp_avg_sq) to ~7 bytes/param via:

  • Master weight splitting: BF16 param + INT8/INT16 ECC correction term (24-bit or 32-bit effective precision)
  • Companded state quantization: softsign-transformed INT8 exp_avg, sqrt-transformed INT8 exp_avg_sq
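
As a rough illustration of the two techniques above, here is a NumPy sketch (illustrative only, not flashoptim's actual kernels; the library's transforms, scales, and bit layouts may differ): splitting an fp32 master weight into a BF16 base plus an INT8 residual correction, and companding optimizer state into INT8 via softsign and sqrt transforms.

```python
import numpy as np

def simulate_bf16(x):
    """Truncate float32 to bfloat16 precision (keep the top 16 bits)."""
    u = x.view(np.uint32) & np.uint32(0xFFFF0000)
    return u.view(np.float32)

def split_master_weight(w):
    """Split an fp32 master weight into a BF16 base plus an INT8
    correction of the truncation residual (per-tensor scale)."""
    base = simulate_bf16(w)
    resid = w - base
    scale = np.max(np.abs(resid)) / 127.0
    if scale == 0.0:
        scale = 1.0
    q = np.round(resid / scale).astype(np.int8)
    return base, q, scale

def merge_master_weight(base, q, scale):
    """Reconstruct an effective-precision master weight for the update."""
    return base + q.astype(np.float32) * scale

def quantize_softsign(x, scale=1.0):
    """Compand exp_avg with softsign (maps R to (-1, 1)), then INT8."""
    y = x / (scale + np.abs(x))
    return np.round(y * 127.0).astype(np.int8)

def dequantize_softsign(q, scale=1.0):
    y = q.astype(np.float32) / 127.0
    return scale * y / (1.0 - np.abs(y))  # inverse softsign

def quantize_sqrt(v):
    """Compand nonnegative exp_avg_sq with sqrt, per-tensor scale, INT8."""
    scale = np.sqrt(np.max(v)) / 127.0 if np.max(v) > 0 else 1.0
    return np.round(np.sqrt(v) / scale).astype(np.int8), scale

def dequantize_sqrt(q, scale):
    return (q.astype(np.float32) * scale) ** 2

w = np.array([0.1234567, 1.7654321, -3.1415927], dtype=np.float32)
base, q, s = split_master_weight(w)
w_hat = merge_master_weight(base, q, s)
# w_hat recovers w far more accurately than the BF16 base alone.
```

The residual's per-tensor INT8 scale adds roughly 8 bits of mantissa on top of BF16's 8, which is consistent with the "24-bit effective precision" figure quoted for the INT8 correction term.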

Closes #4171

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@copy-pr-bot

copy-pr-bot bot commented Apr 9, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Copilot AI review requested due to automatic review settings April 9, 2026 10:28
@meinie0826 meinie0826 requested review from a team as code owners April 9, 2026 10:28
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft April 9, 2026 10:29
@github-actions
Contributor

github-actions bot commented Apr 9, 2026

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@meinie0826 meinie0826 marked this pull request as ready for review April 9, 2026 10:29
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team April 9, 2026 10:29
@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label Apr 9, 2026

Copilot AI left a comment


Pull request overview

Integrates the flashoptim library’s FlashAdamW optimizer into Megatron-Core by adding configuration/CLI support, wiring optimizer creation into the optimizer factory, and introducing unit tests for FlashAdamW behavior.

Changes:

  • Added FlashAdamW argument parsing + validation and included flashadamw in the --optimizer choices.
  • Extended OptimizerConfig with FlashAdamW-specific knobs (master weight bits, quantization, checkpoint compression).
  • Implemented FlashAdamW instantiation in the optimizer factory and added a dedicated unit test module.
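
Given these changes, enabling the optimizer from the training command line would presumably look like the following. Only the `--optimizer flashadamw` choice is confirmed by the diff; the `--flashadamw-*` flag spellings and the example value are guesses derived from the `OptimizerConfig` field names (`flashadamw_master_weight_bits`, `flashadamw_quantize`, `flashadamw_compress_state_dict`) and may differ from the actual arguments.py definitions.

```shell
# Hypothetical invocation; flag spellings inferred from the config fields.
torchrun --nproc_per_node 8 pretrain_gpt.py \
    ... \
    --bf16 \
    --optimizer flashadamw \
    --flashadamw-master-weight-bits 24 \
    --flashadamw-quantize \
    --flashadamw-compress-state-dict
```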

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.

Files reviewed:

  • tests/unit_tests/optimizer/test_flashadamw.py: Adds FlashAdamW-focused unit tests (config defaults, basic CUDA runs, checkpoint roundtrips, and memory/state size checks).
  • megatron/training/arguments.py: Adds FlashAdamW CLI flags and validation; registers flashadamw as a supported optimizer choice.
  • megatron/core/optimizer/optimizer_config.py: Introduces FlashAdamW fields in OptimizerConfig.
  • megatron/core/optimizer/__init__.py: Wires FlashAdamW into optimizer creation and routes it through the "standard optimizer" path.


Comment on lines +598 to +615
elif config.optimizer == 'flashadamw':
    try:
        from flashoptim import FlashAdamW
    except ImportError:
        raise ImportError(
            "FlashAdamW optimizer requires flashoptim >= 0.1.3. "
            "Install it with: pip install 'flashoptim>=0.1.3'"
        )
    optimizer = FlashAdamW(
        param_groups,
        lr=config.lr,
        betas=(config.adam_beta1, config.adam_beta2),
        eps=config.adam_eps,
        weight_decay=config.weight_decay,
        master_weight_bits=config.flashadamw_master_weight_bits,
        quantize=config.flashadamw_quantize,
        compress_state_dict=config.flashadamw_compress_state_dict,
    )

Copilot AI Apr 9, 2026


FlashAdamW is created with BF16 model parameters here, but the standard Megatron wrapping that happens later for bf16/fp16 will wrap this optimizer with Float16OptimizerWithFloat16Params, which replaces param_group['params'] with FP32 clones (creating external master weights). That defeats FlashAdamW’s internal master-weight splitting, likely breaks FlashAdamW dtype assumptions, and contradicts the validation message that FlashAdamW manages its own master weights. Consider special-casing flashadamw so it is wrapped with a MegatronOptimizer that does not create FP32 parameter copies (e.g., FP32Optimizer or a dedicated wrapper), and ensure incompatible flags like use_distributed_optimizer/fp16 scaling are handled explicitly in-core (not only via arguments.py).
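
The reviewer's suggested special case could be sketched roughly as follows. This is pseudocode against hypothetical Megatron internals; the class names come from the review comment, but the actual constructor signatures in megatron/core/optimizer may differ, so this is not a tested patch.

```python
# Pseudocode sketch of the suggested routing (names from the review comment;
# actual wrapper signatures in megatron/core/optimizer may differ).
if config.optimizer == 'flashadamw':
    # FlashAdamW keeps its own BF16 + INT8/INT16 master weights, so skip the
    # Float16 wrapper that would clone params to FP32 and take that over.
    return FP32Optimizer(optimizer, config, init_state_fn)
elif config.fp16 or config.bf16:
    return Float16OptimizerWithFloat16Params(optimizer, config, grad_scaler, init_state_fn)
else:
    return FP32Optimizer(optimizer, config, init_state_fn)
```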

import copy
import tempfile
from pathlib import Path
from unittest.mock import MagicMock, patch

Copilot AI Apr 9, 2026


Unused imports: MagicMock and patch are imported but never referenced in this test module. Please remove them to keep the test file clean and avoid lint failures in environments that enforce unused-import checks.

@chtruong814 chtruong814 added the needs-follow-up Issue needs follow-up label Apr 11, 2026

Labels

  • community-request
  • Final Review: PR is in the "final review" stage
  • needs-follow-up: Issue needs follow-up

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Feature Request: FlashOptim - optimizer memory reduction

5 participants