
Add a supports_fp64 function.#13368

Merged
comfyanonymous merged 1 commit into master from temp_pr on Apr 12, 2026

Conversation

@comfyanonymous
Member

No description provided.

@coderabbitai

coderabbitai bot commented Apr 12, 2026

📝 Walkthrough

A new supports_fp64() helper function was added to comfy/model_management.py to determine floating-point precision support across different device backends. The rope() function in comfy/ldm/flux/math.py was refactored to use this new helper instead of directly checking for specific device types (MPS, Intel XPU, DirectML). The helper returns False for unsupported backends and True otherwise, replacing the prior hardcoded device checks. The underlying RoPE computation logic and outputs remain unchanged.
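As a rough illustration of the refactor, here is a minimal pure-Python sketch. The helper names mirror the PR, but their bodies and the rope_working_dtype caller are simplified assumptions for illustration, not the actual ComfyUI code (the real checks in comfy/model_management.py consult torch backends):

```python
# Hedged sketch: pure-Python stand-ins for the ComfyUI device checks.
def is_device_mps(device):
    return device == "mps"

def is_intel_xpu():
    return False  # assumption: no Intel XPU in this sketch

def is_directml_enabled():
    return False  # assumption: DirectML not active

def is_ixuca():
    return False  # assumption: ixuca backend not active

def supports_fp64(device=None):
    # Mirrors the PR's helper: backends known to lack float64 report False.
    if is_device_mps(device):
        return False
    if is_intel_xpu():
        return False
    if is_directml_enabled():
        return False
    if is_ixuca():
        return False
    return True

# A caller such as rope() can then pick its working precision in one place
# instead of repeating per-backend checks (rope_working_dtype is hypothetical):
def rope_working_dtype(device):
    return "float64" if supports_fp64(device) else "float32"

print(rope_working_dtype("cuda"))  # float64
print(rope_working_dtype("mps"))   # float32
```

Centralizing the check this way means a newly unsupported backend only needs one change in the helper rather than edits at every call site.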

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Description check (❓ Inconclusive): no description was provided by the author, making it impossible to assess relevance to the changeset. Resolution: add a pull request description explaining the purpose of the supports_fp64 function and why it's needed for the RoPE computation changes.

✅ Passed checks (1 passed)

  • Title check (✅ Passed): the title accurately describes the main change: adding a new supports_fp64 function to the codebase.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy/model_management.py`:
- Around line 1735-1748: supports_fp64 currently accepts device=None but never
resolves None to the active device, causing MPS to be misdetected; update
supports_fp64 to resolve a None device to the current/active device before
calling is_device_mps (e.g., obtain torch.cuda.current_device()/torch.device or
use the existing project helper that returns the active device), then run the
existing checks (is_device_mps, is_intel_xpu, is_directml_enabled, is_ixuca)
against that resolved device so MPS is detected correctly and FP64 support is
reported accurately.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 104a9aad-a218-4140-b887-3d8e550656aa

📥 Commits

Reviewing files that changed from the base of the PR and between a2840e7 and 4748d5d.

📒 Files selected for processing (2)
  • comfy/ldm/flux/math.py
  • comfy/model_management.py

Comment on lines +1735 to +1748
def supports_fp64(device=None):
    if is_device_mps(device):
        return False

    if is_intel_xpu():
        return False

    if is_directml_enabled():
        return False

    if is_ixuca():
        return False

    return True

⚠️ Potential issue | 🟠 Major

device=None currently reports FP64 support incorrectly on MPS.

Line 1735 introduces device=None, but the function never resolves None to the active device. In MPS mode, supports_fp64() returns True, which can route FP64 ops to an unsupported backend.

Proposed fix
 def supports_fp64(device=None):
+    if device is None:
+        device = get_torch_device()
+
     if is_device_mps(device):
         return False
 
     if is_intel_xpu():
         return False
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

    def supports_fp64(device=None):
        if is_device_mps(device):
            return False

        if is_intel_xpu():
            return False

        if is_directml_enabled():
            return False

        if is_ixuca():
            return False

        return True

After:

    def supports_fp64(device=None):
        if device is None:
            device = get_torch_device()

        if is_device_mps(device):
            return False

        if is_intel_xpu():
            return False

        if is_directml_enabled():
            return False

        if is_ixuca():
            return False

        return True
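The pitfall the reviewer describes can be made concrete with a small pure-Python sketch. The helper bodies and the get_torch_device stand-in are simplified assumptions, not the project's real implementations:

```python
# Simplified stand-ins: identify the device by a string tag and pretend the
# machine's active device is MPS.
def is_device_mps(device):
    return device == "mps"

def get_torch_device():
    # Stand-in for the project's active-device helper.
    return "mps"

def supports_fp64_buggy(device=None):
    # device=None never matches "mps", so MPS is misreported as FP64-capable.
    return not is_device_mps(device)

def supports_fp64_fixed(device=None):
    if device is None:
        device = get_torch_device()  # resolve None to the active device first
    return not is_device_mps(device)

print(supports_fp64_buggy())  # True  (wrong on an MPS machine)
print(supports_fp64_fixed())  # False (correct)
```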

@comfyanonymous comfyanonymous merged commit 55ebd28 into master Apr 12, 2026
16 checks passed
@comfyanonymous comfyanonymous deleted the temp_pr branch April 12, 2026 01:06
