
cascade: remove dead weight init code#13026

Merged
comfyanonymous merged 1 commit into Comfy-Org:master from rattus128:prs/dynamic-vram-fixes/cascade
Mar 18, 2026

Conversation

@rattus128
Contributor

This weight init process is fully shadowed by the weight load and doesn't work in dynamic_vram, where the weight allocation is deferred.
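A standalone sketch (not ComfyUI code) of why the init pass is dead code: whatever the xavier init writes to the weight is immediately overwritten when the checkpoint is loaded via `load_state_dict`.

```python
import torch

# Sketch: the "init then load" sequence. The init result never survives,
# because load_state_dict() copies the checkpoint tensors over it.
lin = torch.nn.Linear(4, 4)
torch.nn.init.xavier_uniform_(lin.weight)            # the shadowed init step
checkpoint = {"weight": torch.ones(4, 4), "bias": torch.zeros(4)}
lin.load_state_dict(checkpoint)                      # the load step shadows it
assert torch.equal(lin.weight, torch.ones(4, 4))     # xavier values are gone
```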

Example test conditions:
Linux, 5090
Stable cascade -> Flux 2


This change emulates the Windows behavior:

--- a/comfy/ops.py
+++ b/comfy/ops.py
@@ -337,8 +337,7 @@ class disable_weight_init:
 
         def __init__(self, in_features, out_features, bias=True, device=None, dtype=None):
             # don't trust subclasses that BYO state dict loader to call us.
-            if (not comfy.model_management.WINDOWS
-                or not comfy.memory_management.aimdo_enabled
+            if (not comfy.memory_management.aimdo_enabled
                 or type(self)._load_from_state_dict is not disable_weight_init.Linear._load_from_state_dict):
                 super().__init__(in_features, out_features, bias, device, dtype)
                 return
@@ -360,8 +359,7 @@ class disable_weight_init:
         def _load_from_state_dict(self, state_dict, prefix, local_metadata,
                                 strict, missing_keys, unexpected_keys, error_msgs):
 
-            if (not comfy.model_management.WINDOWS
-                or not comfy.memory_management.aimdo_enabled
+            if (not comfy.memory_management.aimdo_enabled
                 or type(self)._load_from_state_dict is not disable_weight_init.Linear._load_from_state_dict):
                 return super()._load_from_state_dict(state_dict, prefix, local_metadata, strict,
                                                      missing_keys, unexpected_keys, error_msgs)
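The guard retained by the diff is an identity check on the unbound method: it detects whether a subclass brought its own state-dict loader, in which case the deferred-allocation fast path must be skipped. A minimal standalone sketch of that pattern (hypothetical class names, not the ComfyUI classes):

```python
# Identity comparison on the class attribute tells inherited-default apart
# from subclass-overridden, without calling anything.
class Base:
    def _load_from_state_dict(self):
        pass

class UsesDefault(Base):
    pass  # inherits Base's loader unchanged

class BringsOwnLoader(Base):
    def _load_from_state_dict(self):  # BYO state dict loader
        pass

assert type(UsesDefault())._load_from_state_dict is Base._load_from_state_dict
assert type(BringsOwnLoader())._load_from_state_dict is not Base._load_from_state_dict
```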

Before:

  File "/home/rattus/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1063, in apply
    module.apply(fn)
  File "/home/rattus/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1064, in apply
    fn(self)
  File "/home/rattus/ComfyUI/comfy/ldm/cascade/stage_a.py", line 144, in _basic_init
    torch.nn.init.xavier_uniform_(module.weight)
  File "/home/rattus/venv/lib/python3.12/site-packages/torch/nn/init.py", line 463, in xavier_uniform_
    fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rattus/venv/lib/python3.12/site-packages/torch/nn/init.py", line 417, in _calculate_fan_in_and_fan_out
    dimensions = tensor.dim()
                 ^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'dim'
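The failure above can be reproduced in isolation (assumption: with dynamic VRAM the deferred `module.weight` behaves like `None` at init time): xavier init immediately asks the tensor for its dimensions and dereferences nothing.

```python
import torch

# Minimal repro: xavier_uniform_ computes fan-in/fan-out via tensor.dim(),
# which fails when the weight has not been allocated yet.
weight = None  # stands in for a deferred, not-yet-allocated weight
try:
    torch.nn.init.xavier_uniform_(weight)
except AttributeError as e:
    print(e)
```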

Prompt executed in 0.69 seconds

After:

Model AutoencoderKL prepared for dynamic VRAM loading. 160MB Staged. 0 patches attached.
Prompt executed in 64.76 seconds

Same seed outputs checked for consistency ✅

This weight init process is fully shadowed by the weight load and
doesn't work in dynamic_vram where the weight allocation is deferred.
@coderabbitai

coderabbitai bot commented Mar 17, 2026

No actionable comments were generated in the recent review. 🎉


📥 Commits

Reviewing files that changed from the base of the PR and between 8b9d039 and b68a96a.

📒 Files selected for processing (1)
  • comfy/ldm/cascade/stage_a.py

📝 Walkthrough

The ResBlock class in comfy/ldm/cascade/stage_a.py was modified to change the gammas parameter initialization from trainable (requires_grad=True) to non-trainable (requires_grad=False). Additionally, the automatic weight initialization routine (_basic_init function) and its application via self.apply() were removed. The overall control flow of the class remains unchanged.
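A simplified sketch of the resulting class shape (hypothetical, heavily reduced from the actual `ResBlock` in `stage_a.py`): the `gammas` parameter is created non-trainable and there is no longer a `self.apply(self._basic_init)` pass, since loaded checkpoint weights shadowed it anyway.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.depthwise = nn.Conv2d(c, c, kernel_size=3, padding=1, groups=c)
        # was: requires_grad=True, followed by self.apply(self._basic_init),
        # which xavier-initialized Linear/Conv weights before the load
        self.gammas = nn.Parameter(torch.zeros(6), requires_grad=False)

block = ResBlock(4)
assert block.gammas.requires_grad is False
```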

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Title check ✅ Passed — The title accurately describes the main change: removing weight initialization code from the ResBlock class that was not functioning properly.
  • Description check ✅ Passed — The description is directly related to the changeset, explaining why the weight init code was problematic and providing concrete evidence with error traces and successful test results.


@comfyanonymous comfyanonymous merged commit cad24ce into Comfy-Org:master Mar 18, 2026
14 checks passed