Merged
comfy/ldm/flux/math.py — 2 changes: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ def attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor, mask=None, transforme

 def rope(pos: Tensor, dim: int, theta: int) -> Tensor:
     assert dim % 2 == 0
-    if comfy.model_management.is_device_mps(pos.device) or comfy.model_management.is_intel_xpu() or comfy.model_management.is_directml_enabled():
+    if not comfy.model_management.supports_fp64(pos.device):
         device = torch.device("cpu")
     else:
         device = pos.device
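The math.py change above collapses three per-backend checks into one capability query: rope() keeps the tensor's own device only when that backend can do float64 math, otherwise it falls back to CPU. A minimal self-contained sketch of the pattern, with pure-Python stand-ins for the comfy.model_management helpers (the backend set and both function names are illustrative, not the real API):

```python
# Capability-gated device fallback, sketched without torch.
# supports_fp64 here is a hypothetical stand-in for
# comfy.model_management.supports_fp64.

def supports_fp64(device: str) -> bool:
    # Backends known to lack float64 kernels report False.
    return device not in {"mps", "xpu", "directml", "ixuca"}

def pick_compute_device(requested: str) -> str:
    # Same shape as the patched rope(): keep the requested device when it
    # can do fp64, otherwise run the precise math on CPU.
    return requested if supports_fp64(requested) else "cpu"
```

The payoff of the refactor is visible here: adding a new fp64-less backend means editing one predicate instead of every call site.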
comfy/model_management.py — 15 changes: 15 additions & 0 deletions

@@ -1732,6 +1732,21 @@ def supports_mxfp8_compute(device=None):

     return True

+def supports_fp64(device=None):
+    if is_device_mps(device):
+        return False
+
+    if is_intel_xpu():
+        return False
+
+    if is_directml_enabled():
+        return False
+
+    if is_ixuca():
+        return False
+
+    return True
Comment on lines +1735 to +1748
⚠️ Potential issue | 🟠 Major

device=None currently reports FP64 support incorrectly on MPS.

Line 1735 introduces device=None, but the function never resolves None to the active device. In MPS mode, supports_fp64() returns True, which can route FP64 ops to an unsupported backend.

Proposed fix
 def supports_fp64(device=None):
+    if device is None:
+        device = get_torch_device()
+
     if is_device_mps(device):
         return False
 
     if is_intel_xpu():
         return False
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-def supports_fp64(device=None):
-    if is_device_mps(device):
-        return False
-    if is_intel_xpu():
-        return False
-    if is_directml_enabled():
-        return False
-    if is_ixuca():
-        return False
-    return True
+def supports_fp64(device=None):
+    if device is None:
+        device = get_torch_device()
+    if is_device_mps(device):
+        return False
+    if is_intel_xpu():
+        return False
+    if is_directml_enabled():
+        return False
+    if is_ixuca():
+        return False
+    return True
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy/model_management.py` around lines 1735 - 1748, supports_fp64 currently
accepts device=None but never resolves None to the active device, causing MPS to
be misdetected; update supports_fp64 to resolve a None device to the
current/active device before calling is_device_mps (e.g., obtain
torch.cuda.current_device()/torch.device or use the existing project helper that
returns the active device), then run the existing checks (is_device_mps,
is_intel_xpu, is_directml_enabled, is_ixuca) against that resolved device so MPS
is detected correctly and FP64 support is reported accurately.
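The fix the review proposes can be sketched in isolation: resolve device=None to the active device before running any backend check. In this sketch all names are pure-Python stand-ins for the real comfy.model_management helpers, and the hard-coded active device simulates running on Apple silicon:

```python
# Sketch of the reviewer's fix (illustrative names, not the comfy API):
# a None device must be resolved to the active device *before* the
# backend checks run, or MPS is silently misdetected.

_ACTIVE_DEVICE = "mps"  # pretend this process runs on Apple silicon

def get_torch_device() -> str:
    # Stand-in for the project helper that returns the active device.
    return _ACTIVE_DEVICE

def supports_fp64(device=None) -> bool:
    if device is None:
        device = get_torch_device()  # the step the original code omitted
    # Backends without float64 kernels.
    return device not in {"mps", "xpu", "directml", "ixuca"}
```

With the resolution step in place, a bare supports_fp64() call on an MPS host reports False instead of the incorrect True the review flags.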


def extended_fp16_support():
    # TODO: check why some models work with fp16 on newer torch versions but not on older
    if torch_version_numeric < (2, 7):