
fix: regression in text generate with LTXAV model#13170

Merged
comfyanonymous merged 13 commits into Comfy-Org:master from kijai:qwen35 on Mar 26, 2026

Conversation

@kijai
Contributor

@kijai kijai commented Mar 26, 2026

Somehow missed the presence_penalty in the LTXAV text model specifically.

@coderabbitai

coderabbitai bot commented Mar 26, 2026

📝 Walkthrough


This change adds a presence_penalty parameter to the text generation pipeline in the text encoder module. The parameter is introduced to two method signatures: Gemma3_12BModel.generate() and LTXAVTEModel.generate(). The parameter is then threaded through the call chain, propagating from LTXAVTEModel through Gemma3_12BModel to the underlying transformer's generation method. This enables presence penalty control to be passed down to the transformer during text generation operations.
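The call chain described above can be sketched in miniature. This is a hypothetical illustration of the threading pattern, not the actual ComfyUI code: the class and method names follow the walkthrough, but the bodies are stand-ins.

```python
# Hypothetical sketch of how presence_penalty is threaded through the
# generate() call chain: LTXAVTEModel -> Gemma3_12BModel -> transformer.
# Bodies are stand-ins; only the parameter propagation mirrors the PR.

class _Transformer:
    # Stand-in for the underlying transformer; echoes the penalty it receives.
    def generate(self, tokens, presence_penalty=0.0, **kwargs):
        return {"tokens": tokens, "presence_penalty": presence_penalty}

class Gemma3_12BModel:
    def __init__(self):
        self.transformer = _Transformer()

    # Middle of the chain: accepts presence_penalty and forwards it.
    def generate(self, tokens, presence_penalty=0.0, **kwargs):
        return self.transformer.generate(tokens, presence_penalty=presence_penalty, **kwargs)

class LTXAVTEModel:
    def __init__(self):
        self.gemma3_12b = Gemma3_12BModel()

    # Top of the chain: the regression was that this signature (in the LTXAV
    # text model specifically) dropped presence_penalty, so it never reached
    # the transformer.
    def generate(self, tokens, presence_penalty=0.0, **kwargs):
        return self.gemma3_12b.generate(tokens, presence_penalty=presence_penalty, **kwargs)

out = LTXAVTEModel().generate([1, 2, 3], presence_penalty=0.5)
print(out["presence_penalty"])  # 0.5
```

The sketch shows why a single dropped parameter breaks the feature: each layer must explicitly forward it, so omitting it at any level silently discards the caller's value.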

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title clearly and specifically describes the main change: fixing a regression in the LTXAV text model by restoring the presence_penalty parameter. |
| Description check | ✅ Passed | The description is directly related to the changeset, explaining that the presence_penalty parameter was omitted from the LTXAV text model and is now being restored. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
Inline comments:
In `comfy/text_encoders/lt.py`:
- Around lines 192-193: the `generate` method should provide a default for
presence_penalty to preserve backward compatibility; update the `generate`
signature to set presence_penalty=0.0 and keep the internal call to
self.gemma3_12b.generate(tokens["gemma3_12b"], do_sample, max_length,
temperature, top_k, top_p, min_p, repetition_penalty, seed, presence_penalty)
unchanged, so callers that omit presence_penalty continue to work.
- Around lines 94-98: the `generate` method signature currently adds
presence_penalty without a default, which breaks existing callers; add a
default (presence_penalty=0.0) so it matches the sibling implementations
(e.g., llama.py and sd.py) and maintains backward compatibility, leaving the
internal use of presence_penalty in the call to self.transformer.generate
unchanged.
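The backward-compatibility concern can be demonstrated with a minimal sketch. The parameter list below mirrors the one quoted in the review comment, but the defaults other than presence_penalty=0.0 are illustrative assumptions, and the body is a stand-in for the real implementation:

```python
# Sketch of the fix CodeRabbit suggests: give presence_penalty a default of
# 0.0 so callers written before this PR, which never pass it, keep working.
# Without the default, such calls would raise TypeError: missing argument.

class Example:
    # Parameter names follow the quoted signature; default values other than
    # presence_penalty=0.0 are illustrative assumptions.
    def generate(self, tokens, do_sample=True, max_length=512, temperature=1.0,
                 top_k=50, top_p=1.0, min_p=0.0, repetition_penalty=1.0,
                 seed=0, presence_penalty=0.0):
        return presence_penalty

m = Example()
# An older caller that omits presence_penalty still works:
print(m.generate([1, 2, 3]))                         # 0.0
# A new caller can opt in explicitly:
print(m.generate([1, 2, 3], presence_penalty=0.5))   # 0.5
```

A default of 0.0 is the natural neutral value here, since a presence penalty of zero leaves token logits unchanged.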

ℹ️ Review info
⚙️ Run configuration

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 248df038-1ee1-4bb7-abb9-fb69362fd3aa

📥 Commits

Reviewing files that changed from the base of the PR and between 2a1f402 and 6ef043e.

📒 Files selected for processing (1)
  • comfy/text_encoders/lt.py

@comfyanonymous comfyanonymous merged commit b0fd65e into Comfy-Org:master Mar 26, 2026
14 checks passed
