
Fix blur and sharpen nodes not working with fp16 intermediates.#13181

Merged
comfyanonymous merged 1 commit into master from temp_pr on Mar 27, 2026

Conversation

@comfyanonymous
Member

No description provided.

@coderabbitai

coderabbitai bot commented Mar 27, 2026

📝 Walkthrough

This pull request updates the Gaussian kernel generation function to accept and respect a dtype parameter, ensuring blur and sharpen post-processing operations use kernels matching the input image's data type. The gaussian_kernel function signature is expanded with an optional dtype parameter defaulting to torch.float32, and the function returns the normalized kernel cast to that dtype. The Blur and Sharpen execute methods are updated to explicitly pass the image's dtype when creating kernels, aligning kernel data types with input images.
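The change described above can be sketched as follows. This is a hypothetical reconstruction based only on the walkthrough, not the actual code in comfy_extras/nodes_post_processing.py; the real kernel construction may differ, but it illustrates the key point: the kernel is built in float32 and cast to the caller-supplied dtype at the end, so it matches fp16 intermediate images.

```python
import torch

def gaussian_kernel(kernel_size: int, sigma: float, device=None, dtype=torch.float32):
    # Sketch of the updated helper: build a 2D Gaussian over a normalized grid.
    x, y = torch.meshgrid(
        torch.linspace(-1.0, 1.0, kernel_size, device=device),
        torch.linspace(-1.0, 1.0, kernel_size, device=device),
        indexing="ij",
    )
    d = torch.sqrt(x * x + y * y)
    g = torch.exp(-(d * d) / (2.0 * sigma * sigma))
    # Normalize in float32 for precision, then cast to the requested dtype
    # (e.g. torch.float16) so the kernel matches the input image's dtype.
    return (g / g.sum()).to(dtype)

# Callers would pass the image's dtype, as the PR does in Blur/Sharpen:
# kernel = gaussian_kernel(kernel_size, sigma, device=image.device, dtype=image.dtype)
```

With a float32 kernel and a float16 image, the subsequent convolution would hit a dtype mismatch; passing `dtype=image.dtype` keeps the two aligned.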

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
Description check | ❓ Inconclusive | No description was provided, making it impossible to evaluate whether it relates to the changeset. | Add a pull request description explaining the fp16 dtype handling changes and their impact on blur/sharpen operations.

✅ Passed checks (1 passed)

Check name | Status | Explanation
Title check | ✅ Passed | The title clearly and specifically describes the main change: fixing blur and sharpen nodes to work with fp16 intermediates.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
comfy_extras/nodes_post_processing.py (1)

203-204: Remove the now-redundant kernel cast in Sharpen.execute.

Since Line 203 already requests dtype=image.dtype from gaussian_kernel, Line 204 can be dropped for clarity.

♻️ Suggested cleanup
         kernel = gaussian_kernel(kernel_size, sigma, device=image.device, dtype=image.dtype) * -(alpha*10)
-        kernel = kernel.to(dtype=image.dtype)
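The cast the nitpick asks to drop is a pure no-op once the kernel is created with the right dtype: `torch.Tensor.to` returns the tensor itself when the requested dtype (and device) already match, so removing the line cannot change behavior. A quick standalone check (not the PR's code):

```python
import torch

# .to(dtype=...) returns the same tensor object when no conversion is
# needed, so re-casting a kernel already built with the image's dtype
# does nothing.
kernel = torch.ones(3, 3, dtype=torch.float16)
recast = kernel.to(dtype=torch.float16)
print(recast is kernel)  # no copy, no conversion
```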
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy_extras/nodes_post_processing.py` around lines 203-204, in
Sharpen.execute, remove the redundant kernel cast: since gaussian_kernel is
already called with dtype=image.dtype (kernel = gaussian_kernel(kernel_size,
sigma, device=image.device, dtype=image.dtype) * -(alpha*10)), delete the
subsequent kernel = kernel.to(dtype=image.dtype) statement so the kernel is
not unnecessarily re-cast.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 7a6f9b4d-5565-4948-aad2-6cecda1a407f

📥 Commits

Reviewing files that changed from the base of the PR and between 1dc64f3 and f5f39a0.

📒 Files selected for processing (1)
  • comfy_extras/nodes_post_processing.py

@comfyanonymous comfyanonymous merged commit b1fdbeb into master Mar 27, 2026
16 checks passed
@comfyanonymous comfyanonymous deleted the temp_pr branch March 27, 2026 02:18