
CORE-15 feat: Support Qwen3.5 text generation models #12771

Merged — comfyanonymous merged 11 commits into Comfy-Org:master from kijai:qwen35 on Mar 26, 2026

Conversation

@kijai (Contributor) commented Mar 4, 2026

Adds support for Qwen 3.5 text generation models.

Model for testing (only uploaded 4b in case changes to the layer names are requested):

https://huggingface.co/Comfy-Org/Qwen3.5

Tested with 2b, 4b, and 9b.

@coderabbitai bot commented Mar 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds full QWEN35 support (new comfy/text_encoders/qwen35.py and tokenizer config) and five TEModel.QWEN35_* enum variants. Extends TE detection and loading to recognize and route QWEN35 weights, updates single- and multi-SD TE routing to instantiate qwen35 TEs and tokenizers, and normalizes state_dict prefixes. Propagates a new presence_penalty parameter through CLIP/SD clip generate methods, Llama2 generate/sample_token paths, and node-level text-generation inputs. Introduces Llama2 helpers (get_past_len, compute_freqs_cis, init_kv_cache). Updates llama_detect to check multiple weight keys.
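The new presence_penalty mentioned in the walkthrough is, conceptually, a flat logit penalty applied to every token that has already appeared in the generated sequence. A minimal stdlib sketch of that idea (an illustration only, not the PR's actual tensor implementation; the function name and list-based logits are assumptions):

```python
def apply_presence_penalty(logits, generated_ids, presence_penalty):
    """Subtract a flat penalty from the logit of every token id
    already present in the generated sequence, discouraging reuse."""
    if presence_penalty == 0.0 or not generated_ids:
        return list(logits)
    seen = set(generated_ids)
    return [l - presence_penalty if i in seen else l
            for i, l in enumerate(logits)]

# Token ids 1 and 3 were already generated, so their logits drop:
penalized = apply_presence_penalty([0.0] * 5, [1, 3, 3], 0.5)
print(penalized)  # [0.0, -0.5, 0.0, -0.5, 0.0]
```

Unlike repetition_penalty (which scales logits multiplicatively per occurrence), a presence penalty is typically a one-time flat subtraction regardless of how often the token appeared.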

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 2.90%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2)
  • Title check ✅ — The title clearly and specifically describes the main feature addition: support for Qwen3.5 text generation models, the primary objective of this PR.
  • Description check ✅ — The description is directly related to the changeset, explaining that support for Qwen 3.5 text generation models was added, with testing details and a reference to test models.


@coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy_extras/nodes_textgen.py`:
- Line 37: the node schema exposes an input named "thinking", but TextGenerateLTX2Prompt.execute neither accepts nor forwards it. Update the execute method signature to accept a thinking parameter (with the same optional/default semantics) and forward it alongside the other generate parameters, so the node interface stays consistent and the option takes effect.
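The fix described above amounts to accepting the schema's input in execute and forwarding it with the other parameters. A hedged sketch with stand-in classes (FakeClip and the parameter list are illustrative, not the real node API):

```python
class TextGenerateLTX2Prompt:
    # Hypothetical sketch: accept the schema's "thinking" input with a
    # safe default and forward it with the other generate parameters.
    def execute(self, clip, tokens, max_length=256, seed=0, thinking=False):
        return clip.generate(tokens, max_length=max_length,
                             seed=seed, thinking=thinking)

class FakeClip:
    def generate(self, tokens, **kwargs):
        return kwargs  # echo forwarded kwargs for demonstration

node = TextGenerateLTX2Prompt()
result = node.execute(FakeClip(), [1, 2, 3], thinking=True)
print(result["thinking"])  # True — the option now reaches generate
```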

In `@comfy/sd.py`:
- Around lines 429-435: the generate wrapper changed the parameter order, placing presence_penalty before seed, which breaks callers that pass seed positionally. Either move seed back to its original position (before presence_penalty and any other new params) or restore the original ordering, and forward seed explicitly (seed=seed) in the internal call to self.cond_stage_model.generate. Adjust any callers accordingly to preserve backward compatibility.
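One way to satisfy this without breaking positional callers is to make the new parameter keyword-only after seed. A sketch under that assumption (parameter names follow the review comment, not necessarily the merged code):

```python
def generate(tokens, do_sample=True, max_length=256, temperature=1.0,
             top_k=50, top_p=1.0, min_p=0.0, repetition_penalty=1.0,
             seed=0, *, presence_penalty=0.0):
    # seed keeps its original positional slot; presence_penalty is
    # keyword-only (after the bare *), so old positional calls that end
    # with seed still bind correctly.
    return {"seed": seed, "presence_penalty": presence_penalty}

# An old-style caller passing seed positionally still works:
out = generate([1, 2], True, 256, 1.0, 50, 1.0, 0.0, 1.0, 42)
print(out)  # {'seed': 42, 'presence_penalty': 0.0}
```

New callers opt into the feature with an explicit keyword: generate(tokens, presence_penalty=1.5).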

In `@comfy/sd1_clip.py`:
- Around lines 743-744: the generate method added presence_penalty after top_p but before seed, breaking callers that pass seed positionally. Restore backward compatibility by keeping seed in its original position (placing any newly added optional params after it), or by forwarding *args/**kwargs to getattr(self, self.clip).generate, so existing positional calls still bind correctly.

In `@comfy/text_encoders/qwen35.py`:
- Line 47: the stop_tokens default only contains 248044 and omits the tokenizer's configured EOS token 248046 (`<|im_end|>`), so generation may ignore the normal EOS. Update the default_factory for stop_tokens (currently set to [248044]) to include 248046 as well, and verify that tests and callers relying on stop_tokens still behave correctly after adding the EOS id.
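A minimal sketch of the suggested default_factory change, assuming a dataclass-style config (the class and field layout here are illustrative, not the actual qwen35.py code):

```python
from dataclasses import dataclass, field

@dataclass
class GenerationConfig:
    # Include both the custom stop id 248044 and the tokenizer's
    # configured EOS id 248046 (<|im_end|>) so generation halts on either.
    # default_factory avoids sharing one mutable list across instances.
    stop_tokens: list = field(default_factory=lambda: [248044, 248046])

cfg = GenerationConfig()
print(248046 in cfg.stop_tokens)  # True — EOS now stops generation
```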

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 782807b1-5316-416a-9541-24840891ed18

📥 Commits

Reviewing files that changed from the base of the PR and between ac4a943 and 0b43910.

📒 Files selected for processing (8)
  • comfy/sd.py
  • comfy/sd1_clip.py
  • comfy/text_encoders/llama.py
  • comfy/text_encoders/qwen35.py
  • comfy/text_encoders/qwen35_tokenizer/merges.txt
  • comfy/text_encoders/qwen35_tokenizer/tokenizer_config.json
  • comfy/text_encoders/qwen35_tokenizer/vocab.json
  • comfy_extras/nodes_textgen.py

@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy/sd.py`:
- Around lines 1253-1263: after checking "model.language_model.layers.0.linear_attn.A_log", the detection code assumes "model.language_model.layers.0.input_layernorm.weight" exists, which can raise KeyError for variant checkpoints. Verify that key is present (e.g., with sd.get(...) or an explicit key-in-sd check) before reading weight.shape, and otherwise fall back to a safe default such as TEModel.QWEN35_2B. Keep the same keys and TEModel constants (TEModel.QWEN35_08B, TEModel.QWEN35_4B, TEModel.QWEN35_9B, TEModel.QWEN35_27B, TEModel.QWEN35_2B) so detection remains backward-compatible.
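The guarded lookup the comment asks for can be sketched as follows. This is an assumption-laden illustration: the hidden-size-to-variant mapping and string return values are placeholders, not the actual detection table in comfy/sd.py.

```python
def detect_qwen35_variant(sd):
    # Guarded lookup: only read weight.shape when the key exists,
    # otherwise fall back to a safe default instead of raising KeyError.
    key = "model.language_model.layers.0.input_layernorm.weight"
    weight = sd.get(key)
    if weight is None:
        return "QWEN35_2B"  # safe default for variant checkpoints
    hidden = weight.shape[0]
    # Hidden-size -> variant mapping (sizes are illustrative assumptions):
    sizes = {1024: "QWEN35_08B", 2048: "QWEN35_2B", 2560: "QWEN35_4B",
             4096: "QWEN35_9B", 5120: "QWEN35_27B"}
    return sizes.get(hidden, "QWEN35_2B")

print(detect_qwen35_variant({}))  # QWEN35_2B — missing key no longer raises
```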

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9fc56ddd-cbf0-4236-8116-9acabb07e511

📥 Commits

Reviewing files that changed from the base of the PR and between 0b43910 and c2aa8b0.

📒 Files selected for processing (4)
  • comfy/sd.py
  • comfy/sd1_clip.py
  • comfy/text_encoders/qwen35.py
  • comfy_extras/nodes_textgen.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • comfy/text_encoders/qwen35.py

@jovan2009

This comment is a feature request loosely related to this PR; please disregard it if it's out of place.

I had been meaning to make a feature request for a node similar to TextGenerateLTX2Prompt but for Wan 2.2, using the information from this file for the system prompt: https://github.com/Wan-Video/Wan2.2/blob/main/wan/utils/system_prompt.py

Or, even better, a more generally applicable prompt enhancer node with a system prompt that can be edited by the user.

@Amazon90 commented Mar 6, 2026

@kijai


ComfyUI Error Report

Error Details

  • Node ID: 14
  • Node Type: CLIPLoaderGGUF
  • Exception Type: ValueError
  • Exception Message: Unexpected text model architecture type in GGUF file: 'qwen35'

Stack Trace

  File "D:\ComfyUI\execution.py", line 524, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\execution.py", line 333, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\execution.py", line 307, in _async_map_node_over_list
    await process_inputs(input_dict, i)

  File "D:\ComfyUI\execution.py", line 295, in process_inputs
    result = f(**inputs)

  File "D:\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 251, in load_clip
    return (self.load_patcher([clip_path], clip_type, self.load_data([clip_path])),)
                                                      ~~~~~~~~~~~~~~^^^^^^^^^^^^^

  File "D:\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 227, in load_data
    sd = gguf_clip_loader(p)

  File "D:\ComfyUI\custom_nodes\ComfyUI-GGUF\loader.py", line 471, in gguf_clip_loader
    sd, extra = gguf_sd_loader(path, is_text_model=True)
                ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\custom_nodes\ComfyUI-GGUF\loader.py", line 108, in gguf_sd_loader
    raise ValueError(f"Unexpected text model architecture type in GGUF file: {arch_str!r}")

System Information

  • ComfyUI Version: 0.16.3
  • Arguments: D:\ComfyUI\main.py --listen --auto-launch --preview-method auto --use-sage-attention --disable-cuda-malloc
  • OS: win32
  • Python Version: 3.13.12 (tags/v3.13.12:1cbe481, Feb 3 2026, 18:22:25) [MSC v.1944 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.10.0+cu130

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 12878086144
    • VRAM Free: 11527913472
    • Torch VRAM Total: 33554432
    • Torch VRAM Free: 25034752

@kijai (Contributor, Author) commented Mar 6, 2026

> @kijai
>
> ComfyUI Error Report — Node Type: CLIPLoaderGGUF; Exception: ValueError: Unexpected text model architecture type in GGUF file: 'qwen35' (full report quoted above)
This would be an issue for https://github.com/city96/ComfyUI-GGUF — I don't think it supports Qwen3.5.

@Amazon90 commented Mar 6, 2026

@kijai Even with the same settings and model as you, I still get errors. What should I do?


ComfyUI Error Report

Error Details

  • Node ID: 7
  • Node Type: TextGenerate
  • Exception Type: AttributeError
  • Exception Message: 'CLIPTextModel' object has no attribute 'generate'

Stack Trace

  File "D:\ComfyUI\execution.py", line 524, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\execution.py", line 333, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\execution.py", line 307, in _async_map_node_over_list
    await process_inputs(input_dict, i)

  File "D:\ComfyUI\execution.py", line 295, in process_inputs
    result = f(**inputs)

  File "D:\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
    return method(locked_class, **inputs)

  File "D:\ComfyUI\comfy_api\latest\_io.py", line 1764, in EXECUTE_NORMALIZED
    to_return = cls.execute(*args, **kwargs)

  File "D:\ComfyUI\comfy_extras\nodes_textgen.py", line 56, in execute
    generated_ids = clip.generate(
        tokens,
    ...<7 lines>...
        seed=seed
    )

  File "D:\ComfyUI\comfy\sd.py", line 434, in generate
    return self.cond_stage_model.generate(tokens, do_sample=do_sample, max_length=max_length, temperature=temperature, top_k=top_k, top_p=top_p, min_p=min_p, repetition_penalty=repetition_penalty, seed=seed)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\comfy\sd1_clip.py", line 744, in generate
    return getattr(self, self.clip).generate(tokens, do_sample=do_sample, max_length=max_length, temperature=temperature, top_k=top_k, top_p=top_p, min_p=min_p, repetition_penalty=repetition_penalty, seed=seed)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\comfy\sd1_clip.py", line 318, in generate
    return self.transformer.generate(embeds, do_sample, max_length, temperature, top_k, top_p, min_p, repetition_penalty, seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1965, in __getattr__
    raise AttributeError(
        f"'{type(self).__name__}' object has no attribute '{name}'"
    )

System Information

  • ComfyUI Version: 0.16.3
  • Arguments: D:\ComfyUI\main.py --listen --auto-launch --preview-method auto --use-sage-attention --disable-cuda-malloc
  • OS: win32
  • Python Version: 3.13.12 (tags/v3.13.12:1cbe481, Feb 3 2026, 18:22:25) [MSC v.1944 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.10.0+cu130

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 12878086144
    • VRAM Free: 11527913472
    • Torch VRAM Total: 33554432
    • Torch VRAM Free: 25034752

@kijai (Contributor, Author) commented Mar 6, 2026

> @kijai Even with the same settings and model as you, I still get errors. What should I do?
>
> ComfyUI Error Report — Node Type: TextGenerate; Exception: AttributeError: 'CLIPTextModel' object has no attribute 'generate' (full report quoted above)

This PR isn't merged yet, based on the error you haven't merged it manually either so it won't recognize the model. Either wait until it's merged or apply the PR yourself for testing.

@Amazon90 commented Mar 6, 2026

> This PR isn't merged yet, based on the error you haven't merged it manually either so it won't recognize the model. Either wait until it's merged or apply the PR yourself for testing.

Thank you!

@Myoko commented Mar 6, 2026

Waiting to be merged into the main branch

@kaptainkory

This actually working?

[screenshot: 2026-03-07_22-23]

@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy_extras/nodes_textgen.py`:
- Around lines 27-29: restore the original category string for the TextGenerate node to avoid unintentionally moving UI entries. Change the category value in the Schema returned by the node (node_id "TextGenerate") back to the previous "textgen/" form instead of "textgen", so the inheriting TextGenerateLTX2Prompt node keeps the same menu placement; leave other nodes relying on that category unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 4d6e7be3-7c8c-410c-a689-8c0ce71af0b8

📥 Commits

Reviewing files that changed from the base of the PR and between 56708ac and 2edbfbf.

📒 Files selected for processing (1)
  • comfy_extras/nodes_textgen.py

@fidel1234xdd

Any info on when this will be merged? Checked locally on my Comfy and it's working great.

@Kosinkadink Kosinkadink added the Core Core team dependency label Mar 13, 2026
@Dev0Lab1

In my experience so far, ComfyUI's TextGenerate node takes far too long to complete compared to connecting to a locally running server with any LLM model (big or small). Over a connection I can get 60-150k/s with some models, while no matter what model I choose for TextGenerate it takes more than 100 s for 250 tokens!

@sonnybox

> In my experience so far, the TextGenerate node of ComfyUI takes extremely too much time to complete compared to connecting to a server running locally any LLM model [...]

@Dev0Lab1 It should not be that slow. On my system I get 90 t/s using llama.cpp (f16 gguf) and 40 t/s (bf16) on ComfyUI.

@comfyanonymous comfyanonymous merged commit 404d7b9 into Comfy-Org:master Mar 26, 2026
14 checks passed
@alexsarmiento

Could some generous soul share where to find the 9B version of Qwen3.5? Or at least share the script to convert it for ComfyUI?

@kijai (Contributor, Author) commented Mar 26, 2026

> Some generous soul who can share where to find the 9B version of Qwen3.5? Or at least share the script to convert it for ComfyUI?

I have it ready; I just haven't uploaded it yet.

@alexisrolland alexisrolland changed the title feat: Support Qwen3.5 text generation models CORE-15 feat: Support Qwen3.5 text generation models Mar 28, 2026
@workflowsguy

> This actually working?
> [screenshot: 2026-03-07_22-23]

I get the same error message running this in ComfyUI 0.8.28 (26040356464hyh4).

Labels: Core (Core team dependency)