After running:
python tools/llm/run_vlm.py --model Qwen/Qwen2.5-VL-3B-Instruct --precision FP16 --num_tokens 128 --cache static_v1 --enable_pytorch_run --benchmark
I got the following error:
--- Registering SDPA lowering pass locally for LM compilation ---
Trying to export the model using torch.export.export()..
Trying torch.export._trace._export to trace the graph since torch.export.export() failed
Traceback (most recent call last):
File "/home/xxx/TensorRT/tools/llm/run_vlm.py", line 606, in <module>
trt_lm = compile_lm_torchtrt(model, args, device)
File "/home/xxx/TensorRT/tools/llm/run_vlm.py", line 322, in compile_lm_torchtrt
return _compile_lm(lm_model, example_embeds, args, device)
File "/home/xxx/TensorRT/tools/llm/run_vlm.py", line 275, in _compile_lm
trt_mod = torch_tensorrt.dynamo.compile(
File "/home/xxx/anaconda3/envs/torch_trt/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py", line 696, in compile
deallocate_module(exported_program.module(), delete_module=False)
File "/home/xxx/anaconda3/envs/torch_trt/lib/python3.10/site-packages/torch/export/exported_program.py", line 1399, in module
module = _unlift_exported_program_lifted_states(self, check_guards=check_guards)
File "/home/xxx/anaconda3/envs/torch_trt/lib/python3.10/site-packages/torch/export/_unlift.py", line 732, in _unlift_exported_program_lifted_states
input_paths = _get_input_paths(
File "/home/xxx/anaconda3/envs/torch_trt/lib/python3.10/site-packages/torch/export/_unlift.py", line 527, in _get_input_paths
ctx = signature.bind(*args, **kwargs).arguments
File "/home/xxx/anaconda3/envs/torch_trt/lib/python3.10/inspect.py", line 3186, in bind
return self._bind(args, kwargs)
File "/home/xxx/anaconda3/envs/torch_trt/lib/python3.10/inspect.py", line 3156, in _bind
raise TypeError('missing a required argument: {arg!r}'. \
TypeError: missing a required argument: 'arg__reshape_copy_13_k_input'
My environment (pip list):
Package                  Version
------------------------ ---------------
accelerate 1.12.0
av 16.0.1
certifi 2025.11.12
charset-normalizer 3.4.4
cuda-bindings 13.0.3
cuda-pathfinder 1.2.2
cuda-toolkit 13.0.1
dllist 2.0.0
filelock 3.20.0
fsspec 2025.12.0
hf-xet 1.2.0
huggingface-hub 0.36.0
idna 3.11
Jinja2 3.1.6
MarkupSafe 3.0.3
modelscope 1.32.0
mpmath 1.3.0
networkx 3.4.2
numpy 2.2.6
nvidia-cublas 13.1.0.3
nvidia-cublas-cu12 12.8.4.1
nvidia-cuda-cupti 13.0.85
nvidia-cuda-cupti-cu12 12.8.90
nvidia-cuda-nvrtc 13.0.88
nvidia-cuda-nvrtc-cu12 12.8.93
nvidia-cuda-runtime 13.0.88
nvidia-cuda-runtime-cu12 12.8.90
nvidia-cudnn-cu12 9.10.2.21
nvidia-cudnn-cu13 9.13.0.50
nvidia-cufft 12.0.0.61
nvidia-cufft-cu12 11.3.3.83
nvidia-cufile 1.15.1.6
nvidia-cufile-cu12 1.13.1.3
nvidia-curand 10.4.0.35
nvidia-curand-cu12 10.3.9.90
nvidia-cusolver 12.0.4.66
nvidia-cusolver-cu12 11.7.3.90
nvidia-cusparse 12.6.3.3
nvidia-cusparse-cu12 12.5.8.93
nvidia-cusparselt-cu12 0.7.1
nvidia-cusparselt-cu13 0.8.0
nvidia-nccl-cu12 2.27.5
nvidia-nccl-cu13 2.28.9
nvidia-nvjitlink 13.0.88
nvidia-nvjitlink-cu12 12.8.93
nvidia-nvshmem-cu12 3.3.20
nvidia-nvshmem-cu13 3.4.5
nvidia-nvtx 13.0.85
nvidia-nvtx-cu12 12.8.90
packaging 25.0
pillow 12.0.0
pip 25.3
psutil 7.1.3
PyYAML 6.0.3
qwen-vl-utils 0.0.14
regex 2025.11.3
requests 2.32.5
safetensors 0.7.0
setuptools 80.9.0
sympy 1.14.0
tensorrt 10.13.3.9.post1
tensorrt_cu12 10.12.0.36
tensorrt_cu12_bindings 10.12.0.36
tensorrt_cu12_libs 10.12.0.36
tensorrt_cu13 10.13.3.9.post1
tensorrt_cu13_bindings 10.13.3.9.post1
tensorrt_cu13_libs 10.13.3.9.post1
tokenizers 0.21.4
torch 2.9.1
torch_tensorrt 2.9.0
torchvision 0.24.1
tqdm 4.67.1
transformers 4.52.3
triton 3.5.1
typing_extensions 4.15.0
urllib3 2.6.2
wheel 0.45.1
The same error occurs when I use the official container (nvcr.io/nvidia/pytorch:25.11-py3) with the following three additional packages:
pip install accelerate transformers==4.52.3 qwen-vl-utils
-----------------------------------------------------------------------------------------------------
--- Registering SDPA lowering pass locally for LM compilation ---
Trying to export the model using torch.export.export()..
Trying torch.export._trace._export to trace the graph since torch.export.export() failed
Traceback (most recent call last):
File "/workspace/TensorRT/tools/llm/run_vlm.py", line 606, in <module>
trt_lm = compile_lm_torchtrt(model, args, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/TensorRT/tools/llm/run_vlm.py", line 322, in compile_lm_torchtrt
return _compile_lm(lm_model, example_embeds, args, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/TensorRT/tools/llm/run_vlm.py", line 275, in _compile_lm
trt_mod = torch_tensorrt.dynamo.compile(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/_compiler.py", line 696, in compile
deallocate_module(exported_program.module(), delete_module=False)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/export/exported_program.py", line 1399, in module
module = _unlift_exported_program_lifted_states(self, check_guards=check_guards)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/export/_unlift.py", line 826, in _unlift_exported_program_lifted_states
input_paths = _get_input_paths(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/export/_unlift.py", line 592, in _get_input_paths
binded = signature.bind(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/inspect.py", line 3242, in bind
return self._bind(args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/inspect.py", line 3212, in _bind
raise TypeError('missing a required argument: {arg!r}'. \
TypeError: missing a required argument: 'arg__reshape_copy_13_k_input'