
Commit 91c35d7

[Bugfix] Fix mc2 operator error in aclgraph + ep<16 scenario (#2609)
### What this PR does / why we need it?
1. Quick-fix the mc2 operator error in the aclgraph + ep<16 scenario to recover CI; this logic will be refactored in the future.
2. Disable aclgraph when testing w8a8.

### How was this patch tested?
CI passed with the existing tests.

- vLLM version: v0.10.1.1
- vLLM main: vllm-project/vllm@9508960

Signed-off-by: MengqingCao <[email protected]>
1 parent: ee6d141 · commit: 91c35d7

2 files changed: +4 -2

tests/e2e/multicard/test_qwen3_moe.py (2 additions, 1 deletion)

@@ -55,6 +55,7 @@ def test_models_distributed_Qwen3_MOE_TP2_WITH_EP():
         tensor_parallel_size=2,
         enable_expert_parallel=True,
         distributed_executor_backend="mp",
+        enforce_eager=False,
     ) as vllm_model:
         vllm_model.generate_greedy(example_prompts, max_tokens)

@@ -71,7 +72,7 @@ def test_models_distributed_Qwen3_MOE_W8A8():
         dtype=dtype,
         tensor_parallel_size=2,
         quantization="ascend",
-        enforce_eager=False,
+        enforce_eager=True,
     ) as vllm_model:
         vllm_model.generate_greedy(example_prompts, max_tokens)
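The two `enforce_eager` flips above move graph capture between tests: the TP2+EP test now runs with aclgraph enabled (`enforce_eager=False`), while the W8A8 test sidesteps it (`enforce_eager=True`). Below is a minimal sketch of the same switch through vLLM's public `LLM` API; the model name is illustrative and not taken from this commit:

```python
from vllm import LLM, SamplingParams

# enforce_eager=True skips graph capture (aclgraph on Ascend backends),
# so the quantized W8A8 path never reaches the graph-mode code that this
# commit works around.
llm = LLM(
    model="Qwen/Qwen3-30B-A3B",  # illustrative MoE checkpoint
    tensor_parallel_size=2,
    quantization="ascend",       # Ascend W8A8 quantization backend
    enforce_eager=True,          # run eagerly; graph mode disabled
)

outputs = llm.generate(["Hello, world"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```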

vllm_ascend/ops/common_fused_moe.py (2 additions, 1 deletion)

@@ -242,7 +242,8 @@ def forward_impl(self, hidden_states: torch.Tensor,
         moe_comm_method_name = forward_context.moe_comm_method_name

         # TODO: Can we refactor this logic to model_runner?
-        if not self.moe_config.use_ep:
+        # TODO: Adjust this logic to differentiate between A2 and A3; we check ep_size here since mc2 only supports ep_size >= 16 on A3 for now
+        if self.moe_config.ep_size < 16:
             moe_comm_method_name = "allgathercommimpl"

         forward_context.moe_comm_method = getattr(self, moe_comm_method_name)
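For readers skimming the hunk, here is a standalone sketch of the fallback rule it introduces; the function name and constant are illustrative, not the repo's actual identifiers. mc2 currently requires an expert-parallel group of at least 16 ranks on A3, so any smaller group is routed to the all-gather implementation regardless of the method the forward context requested:

```python
# Illustrative sketch of the ep_size guard; not vllm_ascend's actual code.
MC2_MIN_EP_SIZE = 16  # mc2 only supports ep_size >= 16 on A3 for now


def select_moe_comm_method(requested: str, ep_size: int) -> str:
    """Fall back to all-gather when the EP group is too small for mc2."""
    if ep_size < MC2_MIN_EP_SIZE:
        return "allgathercommimpl"
    return requested


assert select_moe_comm_method("mc2commimpl", ep_size=8) == "allgathercommimpl"
assert select_moe_comm_method("mc2commimpl", ep_size=16) == "mc2commimpl"
```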
