Conversation

@simondanielsson (Contributor) commented Sep 17, 2025

Purpose

Closes #25071.

Test Plan

  1. When serving Whisper:
vllm serve openai/whisper-large-v3

logs should no longer mention "Chunked prefill is enabled with ...":

(APIServer pid=3140911) INFO 09-17 12:37:08 [scheduler.py:222] Chunked prefill is enabled with max_num_batched_tokens=8192.
(APIServer pid=3140911) INFO 09-17 12:37:10 [__init__.py:2790] Encoder-decoder models do not support chunked prefill nor prefix caching; disabling both.

Expecting simply:

(APIServer pid=3140911) INFO 09-17 12:37:10 [__init__.py:2790] Encoder-decoder models do not support chunked prefill nor prefix caching; disabling both.
  2. Should result in no changes to the SchedulerConfig or the VllmConfig. Verify with the new tests (sketched below).
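
A sketch of what such a test might look like (hypothetical test code, not the PR's actual tests; the fields enable_chunked_prefill and enable_prefix_caching exist on vLLM's scheduler and cache configs, but the real assertions may differ):

    # Hypothetical sketch; the PR's actual tests may differ.
    from vllm.config import ModelConfig, VllmConfig

    def test_encoder_decoder_disables_chunked_prefill_and_prefix_caching():
        config = VllmConfig(
            model_config=ModelConfig(model="openai/whisper-large-v3"))
        # Encoder-decoder models should come out of construction with both
        # features disabled, with no transient "enabled" state in between.
        assert not config.scheduler_config.enable_chunked_prefill
        assert not config.cache_config.enable_prefix_caching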

Test Result

  1. Command:
  • Tested on GPU: L4.
  • Output from "test" command:
(vllm) danielssonsimon@XXXXXX:~/code/vllm$ vllm serve openai/whisper-large-v3
INFO 09-17 18:43:30 [__init__.py:216] Automatically detected platform cuda.
(APIServer pid=49917) INFO 09-17 18:43:33 [api_server.py:1813] vLLM API server version 0.10.2rc3.dev169+ge3db5ebb6.d20250917
(APIServer pid=49917) INFO 09-17 18:43:33 [utils.py:328] non-default args: {'model_tag': 'openai/whisper-large-v3', 'model': 'openai/whisper-large-v3'}
(APIServer pid=49917) INFO 09-17 18:43:42 [__init__.py:707] Resolved architecture: WhisperForConditionalGeneration
(APIServer pid=49917) `torch_dtype` is deprecated! Use `dtype` instead!
(APIServer pid=49917) INFO 09-17 18:43:42 [__init__.py:1762] Using max model len 448
(APIServer pid=49917) INFO 09-17 18:43:43 [scheduler.py:197] Encoder-decoder models do not support chunked prefill nor prefix caching; disabling both.
Fetching 1 files: 100%|█████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 11915.64it/s]
  2. New tests pass locally.


@simondanielsson changed the title from "[Bug]: Clean up chunked prefill logging when using whisper" to "[Bugfix]: Clean up chunked prefill logging when using whisper" on Sep 17, 2025

mergify bot commented Sep 17, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @simondanielsson.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

Comment on lines 96 to 102

    is_encoder_decoder: bool = False
    """True if the model is an encoder-decoder model."""

Member:

If this already exists in ModelConfig, why duplicate it here?

Contributor Author:

True, we likely don't want to store it here as well.

Would an InitVar be sufficient here?

@hmellor (Member) commented Sep 18, 2025:

The InitVar solution works.

However, in other cases like this (where two sibling configs interact) I've tended to perform those interactions in the parent's __post_init__ (VllmConfig in this case). Would that work here?
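
For illustration, the parent-__post_init__ pattern described here, reduced to a minimal generic sketch (made-up class names, not vLLM's actual configs):

    from dataclasses import dataclass, field

    @dataclass
    class ModelCfg:
        is_encoder_decoder: bool = False

    @dataclass
    class SchedulerCfg:
        enable_chunked_prefill: bool = True

    @dataclass
    class ParentCfg:
        model: ModelCfg = field(default_factory=ModelCfg)
        scheduler: SchedulerCfg = field(default_factory=SchedulerCfg)

        def __post_init__(self):
            # Sibling configs interact only here, once both exist.
            if self.model.is_encoder_decoder:
                self.scheduler.enable_chunked_prefill = False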

Member:

That's where I had it before this change, but we ended up with a confusing log message about features being enabled coming from the SchedulerConfig's __post_init__, before VllmConfig's __post_init__ fixed it and disabled them.

Contributor Author:

Another option would be to emit the "Chunked prefill is enabled ..." log from the VllmConfig, but I'm not sure it makes sense to put it there.

Member:

Ah I see, thank you for explaining. Let's stick with the InitVar.
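
For context, a minimal sketch of the InitVar approach the thread settles on (illustrative field names, not the PR's exact code):

    from dataclasses import InitVar, dataclass

    @dataclass
    class SchedulerConfig:
        enable_chunked_prefill: bool = True
        # Consumed by __post_init__ but never stored as a field, so the
        # flag is not duplicated from ModelConfig.
        is_encoder_decoder: InitVar[bool] = False

        def __post_init__(self, is_encoder_decoder: bool) -> None:
            if is_encoder_decoder:
                # Disable up front, so no misleading "Chunked prefill is
                # enabled ..." message is ever logged.
                self.enable_chunked_prefill = False

This way the SchedulerConfig makes the decision during its own initialization, rather than a parent config correcting it (and its log output) after the fact.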

@simondanielsson force-pushed the feature/clean-up-prefill-logging branch from fefc7ab to 4a48dc5 on September 18, 2025 13:03
@russellb (Member) left a comment:

lgtm, as long as @hmellor is ok with your change to use an InitVar. Thanks!


mergify bot commented Sep 19, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @simondanielsson.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify bot added the needs-rebase label Sep 19, 2025
@simondanielsson force-pushed the feature/clean-up-prefill-logging branch from 4a48dc5 to 59b2a17 on September 20, 2025 09:14
@mergify bot removed the needs-rebase label Sep 20, 2025

mergify bot commented Sep 21, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @simondanielsson.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify bot added the needs-rebase label Sep 21, 2025
@hmellor (Member) commented Sep 26, 2025:

Sorry, we should have merged this already. @simondanielsson, can you please update the branch and we'll enable auto-merge?

@simondanielsson force-pushed the feature/clean-up-prefill-logging branch from 59b2a17 to 5e0a186 on September 26, 2025 20:40
@mergify bot removed the needs-rebase label Sep 26, 2025
@russellb enabled auto-merge (squash) September 26, 2025 20:42
@github-actions bot added the ready label Sep 26, 2025
@russellb disabled auto-merge September 26, 2025 20:43
@russellb enabled auto-merge (squash) September 26, 2025 20:57
auto-merge was automatically disabled September 27, 2025 07:31 (head branch was pushed to by a user without write access)

@simondanielsson force-pushed the feature/clean-up-prefill-logging branch from d769e3c to fb763f6 on September 27, 2025 07:31
@simondanielsson force-pushed the feature/clean-up-prefill-logging branch from fb763f6 to 2abc703 on September 27, 2025 08:56
@simondanielsson (Contributor Author) commented:
@russellb CI is now failing in a place that seems (to me) unrelated. Are you more familiar with the kv connectors, and can you see directly what is causing this error? Otherwise I'll debug it after the weekend :) Thanks!

[2025-09-27T09:55:31Z] FAILED v1/kv_connector/unit/test_offloading_connector.py::test_offloading_connector - AssertionError: Expected 'lookup' to not have been called. Called 1 times.
[2025-09-27T09:55:31Z] Calls: [call(<itertools.islice object at 0x7f08ddc4fbf0>)].
[2025-09-27T09:55:31Z]
[2025-09-27T09:55:31Z] pytest introspection follows:
[2025-09-27T09:55:31Z]
[2025-09-27T09:55:31Z] Args:
[2025-09-27T09:55:31Z] assert (<itertools.i...f08ddc4fbf0>,) == ()
[2025-09-27T09:55:31Z]
[2025-09-27T09:55:31Z]   Left contains one more item: <itertools.islice object at 0x7f08ddc4fbf0>
[2025-09-27T09:55:31Z]
[2025-09-27T09:55:31Z]   Full diff:
[2025-09-27T09:55:31Z]   - ()
[2025-09-27T09:55:31Z]   + (
[2025-09-27T09:55:31Z]   +     <itertools.islice object at 0x7f08ddc4fbf0>,
[2025-09-27T09:55:31Z]   + )
[2025-09-27T09:55:31Z] !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
[2025-09-27T09:55:31Z] ============= 1 failed, 20 passed, 2 warnings in 86.19s (0:01:26) ==============
[2025-09-27T09:55:37Z] 🚨 Error: The command exited with status 1
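
For context on the failure mode: assert_not_called is a standard unittest.mock assertion, and the message above is exactly what it raises when the mocked method was in fact invoked once. A minimal reproduction, unrelated to the connector code itself:

    from itertools import islice
    from unittest.mock import MagicMock

    worker = MagicMock()
    worker.lookup(islice([1, 2, 3], 2))  # one recorded call with an islice argument

    # Raises: AssertionError: Expected 'lookup' to not have been called. Called 1 times.
    worker.lookup.assert_not_called()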

Labels: ready, v1