
Conversation

JJJYmmm
Contributor

@JJJYmmm JJJYmmm commented Sep 22, 2025

Support MRoPE + YaRN for Qwen3-VL, tested with internal ckpt. 🙏

Enable YaRN with a modified config:

```json
...
    "max_position_embeddings": 1000000,
    "rope_scaling": {
      "rope_type": "yarn",
      "mrope_section": [24, 20, 20],
      "mrope_interleaved": true,
      "factor": 3.0,
      "original_max_position_embeddings": 256000
    },
...
```

Note: MRoPE generally uses a smaller scaling `factor` than vanilla RoPE for the same context extension.
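For context, `mrope_section` partitions the rotary frequency pairs across the temporal, height, and width position axes. Below is a minimal illustrative sketch of that partitioning, assuming a head dim of 128 (64 cos/sin pairs); the names and helper here are illustrative, not vLLM's actual implementation:

```python
# Illustrative sketch: an mrope_section of [24, 20, 20] splits the
# rotary frequency pairs across the (temporal, height, width) axes.
# Assumes head_dim = 128, i.e. 64 frequency pairs. NOT the real vLLM code.

mrope_section = [24, 20, 20]
head_dim = 128
num_pairs = head_dim // 2
assert sum(mrope_section) == num_pairs  # sections must cover all pairs


def merge_by_section(per_axis_freqs, section):
    """Take each axis's contiguous slice of frequency pairs and concatenate.

    per_axis_freqs: three sequences (t, h, w), each of length sum(section).
    """
    merged, start = [], 0
    for axis_freqs, width in zip(per_axis_freqs, section):
        merged.extend(axis_freqs[start:start + width])
        start += width
    return merged


# toy check: tag each pair with its source axis
t, h, w = ["t"] * num_pairs, ["h"] * num_pairs, ["w"] * num_pairs
merged = merge_by_section([t, h, w], mrope_section)
assert merged == ["t"] * 24 + ["h"] * 20 + ["w"] * 20
```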

Signed-off-by: liuye.hj <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly.

You can ask your reviewers to trigger additional CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for MRoPE with YaRN scaling. The changes in vllm/model_executor/layers/rotary_embedding/__init__.py correctly dispatch to MRotaryEmbedding when YaRN scaling is used with mrope_section. The changes in vllm/model_executor/layers/rotary_embedding/mrope.py add YaRN parameters and reuse logic from YaRNScalingRotaryEmbedding. However, I've found a critical issue in how MRotaryEmbedding interacts with the YaRN scaling logic, which will lead to incorrect rotary embeddings. Please see my detailed comment.
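To illustrate the dispatch behavior the review describes, here is a hedged sketch; the function name and return strings are assumptions for illustration only, and the real logic lives in `vllm/model_executor/layers/rotary_embedding/__init__.py`:

```python
def select_rope_class(rope_scaling: dict) -> str:
    """Illustrative sketch, NOT the actual vLLM code: a config that
    specifies an mrope_section routes to the multimodal rotary
    embedding even when rope_type is "yarn"."""
    if rope_scaling.get("mrope_section"):
        # MRoPE path; YaRN parameters (factor, etc.) are consumed here too
        return "MRotaryEmbedding"
    if rope_scaling.get("rope_type") == "yarn":
        return "YaRNScalingRotaryEmbedding"
    return "RotaryEmbedding"


assert select_rope_class({"rope_type": "yarn",
                          "mrope_section": [24, 20, 20]}) == "MRotaryEmbedding"
assert select_rope_class({"rope_type": "yarn"}) == "YaRNScalingRotaryEmbedding"
```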

liuye.hj and others added 2 commits September 22, 2025 21:25
Member

@ywang96 ywang96 left a comment


Thanks for the contribution! This looks reasonable to me!

@ywang96 ywang96 added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 22, 2025
@ywang96 ywang96 enabled auto-merge (squash) September 22, 2025 19:30
@ywang96 ywang96 merged commit fc97733 into vllm-project:main Sep 23, 2025
46 of 52 checks passed