zhewenl commented on Feb 11, 2026
The suggested change replaces the plain preview-generation call with a Qwen3-Omni-aware one.

Before:

```python
result = full_model.generate(**calib_batch, max_new_tokens=100)
```

After:

```python
# For Qwen3-Omni Thinking models, the thinker's token limit is controlled by
# a separate `thinker_max_new_tokens` param (default 1024), not `max_new_tokens`.
# Cap it to avoid unbounded chain-of-thought generation during calibration.
if "qwen3omni" in model.__class__.__name__.lower():
    print("[DEBUG] pre_quantize: starting qwen3omni preview generation (max_new_tokens=100)...", flush=True)
    result = full_model.generate(
        **calib_batch, max_new_tokens=100, thinker_max_new_tokens=100
    )
else:
    result = full_model.generate(**calib_batch, max_new_tokens=100)
```
Qwen3-Omni Thinking models have a separate `thinker_max_new_tokens` parameter (default 1024) that is independent of `max_new_tokens`. During calibration, setting `max_new_tokens=1` only limits the talker; the thinker still generates up to 1024 tokens per sample, roughly a 500x increase in decode work per sample that makes calibration extremely slow.
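For reference, a minimal sketch of the same fix factored into a helper, assuming `model` follows the Hugging Face Qwen3-Omni interface where `generate()` accepts a `thinker_max_new_tokens` keyword, and that `calib_batch` is a dict of tokenized calibration inputs; the helper name `calib_generate` is hypothetical, not from the PR:

```python
def calib_generate(model, calib_batch, max_new_tokens=100):
    """Run a short preview generation, capping the thinker for Qwen3-Omni.

    Sketch under the assumptions above; adjust to the actual model API.
    """
    kwargs = {"max_new_tokens": max_new_tokens}
    if "qwen3omni" in model.__class__.__name__.lower():
        # thinker_max_new_tokens defaults to 1024 and is independent of
        # max_new_tokens, so it must be capped explicitly during calibration.
        kwargs["thinker_max_new_tokens"] = max_new_tokens
    return model.generate(**calib_batch, **kwargs)
```

Folding the cap into a single kwargs dict keeps one `generate` call site, so the Qwen3-Omni special case stays in one place instead of duplicating the call in both branches.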