[main] feat(moe): Support gated delta net for Qwen3-Next (1/4) #1989
Phlip79 merged 4 commits into NVIDIA:main from
Conversation
What tests have been done to validate that the implementation of the gated delta net is correct?
Hi @jaredcasper, I have rebased onto main and made the following changes to this MR.
Thanks. cc @yanring
Hi @jaredcasper, we have done several things to validate the implementation.
In the next few weeks, we will further add a functional test to guarantee the long-term correctness of Qwen3-Next.
In the meantime, do you think it makes sense to split this up and get the other features in?
Do you mean covering multiple features in a single functional test? That would be excellent, as long as we do not put too many features in one test case, which would make it hard to maintain.
Hi @jaredcasper, I remember you mentioned that you need a thorough review of the
No, I mean there are 7 changes listed in the PR; do they all need to be in one PR? I'm saying, while some of the features are getting tests added for them, does it make sense to open PRs for some of the others that have already been tested?
I thought you were going to work on it a bit after our conversation, but since you are done refining, I've asked some people from my team to do a thorough review.
yanring
left a comment
LGTM, thanks for the refinement!
megatron/training/training.py
Outdated
```python
key_projection_size = args.kv_channels * args.num_query_groups
value_projection_size = args.kv_channels * args.num_query_groups
standard_self_attn_term = (
    3
```
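For context, a FLOPs term of this shape could be sketched as below. This is a hypothetical illustration only: the argument names follow the snippet above, but the function, its body, and the example sizes are assumptions, not the PR's actual formula.

```python
from dataclasses import dataclass

@dataclass
class Args:
    hidden_size: int
    kv_channels: int          # per-head dimension
    num_attention_heads: int
    num_query_groups: int     # GQA: K/V projections are shared per group
    seq_length: int

def self_attn_gemm_flops(args: Args) -> int:
    """Forward-pass GEMM FLOPs for one GQA self-attention layer.

    Hypothetical sketch: each multiply-add is counted as 2 FLOPs,
    written out per GEMM instead of via a packed expansion_factor.
    """
    query_projection_size = args.kv_channels * args.num_attention_heads
    key_projection_size = args.kv_channels * args.num_query_groups    # as in the snippet
    value_projection_size = args.kv_channels * args.num_query_groups  # as in the snippet
    s, h = args.seq_length, args.hidden_size
    flops = 2 * s * h * query_projection_size                           # Q projection
    flops += 2 * s * h * (key_projection_size + value_projection_size)  # K, V projections
    flops += 2 * s * s * query_projection_size                          # QK^T logits
    flops += 2 * s * s * query_projection_size                          # softmax(QK^T) @ V
    flops += 2 * s * query_projection_size * h                          # output projection
    return flops
```

Spelling each GEMM out this way is the style the PR author argues for below; a training-FLOPs count would add roughly a further factor on top for the backward pass.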
Can you add explanations here, since you aren't using expansion_factor? Or even better, use expansion_factor to make this more readable. The transformation of this equation you've done here isn't immediately obvious; can you make it closer to the previous formulation?
I'm sure it is correct because:
- I have checked carefully and made sure it is equivalent to the previous code.
- I have checked it in an E2E test and found that it outputs the same result as before.
I have also added some comments to make it easy to understand.
I do not agree with the point "use expansion_factor to make this more readable". I know most people are confused by it, because there is a 2x in expansion_factor:
# - 2x: GEMMs of a particular size are stacked twice in the standard Transformer model
# architectures implemented in this codebase (e.g., h->ffn_h GEMM and ffn_h->h GEMM
# in MLP layer).
which is only immediately obvious for the MLP. Could you please tell me what this 2x factor means in each term? To tell you the truth, many colleagues and I only understood it after a careful derivation, which reflects how unreadable it previously was. In contrast, the calculation for MLA doesn't use expansion_factor, and as you can see, it is much cleaner than the ones for MHA and GQA. Furthermore, the previous version had a chaotic calculation sequence and lacked comments, rendering it completely unreadable. That's why I argue that we need to rewrite it.
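To make the contested 2x concrete for the one case where it is uncontroversial, here is a small illustrative sketch (hypothetical names, not the repository's code) of how the two stacked MLP GEMMs produce that factor:

```python
def mlp_gemm_flops(seq_len: int, h: int, ffn_h: int) -> int:
    """Forward-pass FLOPs of the two stacked MLP GEMMs (illustrative).

    The leading 2 in each term is the multiply-add (FMA) factor; the sum
    over the two GEMMs is the 2x stacking factor discussed above.
    """
    up = 2 * seq_len * h * ffn_h    # h -> ffn_h GEMM
    down = 2 * seq_len * ffn_h * h  # ffn_h -> h GEMM, same FLOP count
    return up + down                # == 2 (FMA) * 2 (stacked) * seq_len * h * ffn_h
```

For the attention terms, no such clean "two stacked GEMMs" reading exists, which is the commenter's objection to reusing a single packed multiplier everywhere.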
Different approaches; I thought the previous one was much more readable. :)
In any case, there is a large comment explaining expansion_factor, and the creation of the variable itself, but expansion_factor is not used any more (unless I'm missing its use?). Please clean this up if you are going to remove expansion_factor. The comments explaining expansion_factor can be moved down here.
Thanks for your suggestion and the discussion. I have refined it again: I broke expansion_factor into forward_backward_expansion_factor, fma_expansion_factor, and ffn_expansion_factor, and refined their comments.
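A minimal sketch of what such a decomposition could look like. The factor names follow the comment above; the values, the function, and the example term are assumptions, not the merged code.

```python
# Hypothetical decomposition of the old packed expansion_factor.
FORWARD_BACKWARD_EXPANSION_FACTOR = 3  # forward pass + ~2x for the backward pass
FMA_EXPANSION_FACTOR = 2               # one fused multiply-add = 2 FLOPs
FFN_EXPANSION_FACTOR = 2               # two stacked GEMMs in the MLP block

def mlp_training_flops(num_tokens: int, hidden_size: int, ffn_hidden_size: int) -> int:
    """Training FLOPs of the MLP term under the factors above (illustrative)."""
    return (FORWARD_BACKWARD_EXPANSION_FACTOR
            * FMA_EXPANSION_FACTOR
            * FFN_EXPANSION_FACTOR
            * num_tokens * hidden_size * ffn_hidden_size)
```

Naming each multiplier separately lets terms that lack one of the factors (e.g. attention, which has no FFN stacking) simply omit it instead of dividing a packed constant back out.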
Minor refines
Co-authored-by: Li Tao <lit@nvidia.com>
fix CI
get_transformer_block_with_experimental_attention_variant_spec
Update test_mamba_moe_model.py
Reopen qwen3next functional test in lightweight mode (NVIDIA#2493)
Signed-off-by: oliver könig <okoenig@nvidia.com>
Co-authored-by: oliver könig <okoenig@nvidia.com>
What does this PR do?
MR to dev.
Design doc
Qwen3-Next functionality PRs.
Changes in this PR:
New supported arguments for Qwen3-Next
New for Qwen3-Next, but already supported in MCore
The LM loss curves with the training dataset of Qwen3 are shown below (GBS=256, seq_len=4096; TP1 in green, TP2 in blue).
wandb url
Contribution process
```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks
Core 0.8)

Code review

The following process is enforced via the CODEOWNERS file for changes into `megatron/core`. For changes outside of `megatron/core`, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label
- Add the `Expert Review` label when your PR is ready for review.

(Step 2): Collect the expert reviewers' reviews
- Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review
- Add the `Final Review` label.

(Optional Step 4): Cherry-pick into release branch
- If this PR also needs to be merged into `core_r*` release branches, after this PR has been merged, select `Cherry-pick` to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.