Errors

#2
by CihanDogan - opened

Hi, with the latest version of vLLM (the one you specified), CUDA 12.8, and 2× H100s I get the following error:

```
WorkerProc failed to start.
Traceback (most recent call last):
  File "/root/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 722, in worker_main
    worker = WorkerProc(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 562, in __init__
    self.worker.load_model()
  File "/root/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 289, in load_model
    self.model_runner.load_model(eep_scale_up=eep_scale_up)
  File "/root/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 3581, in load_model
    self.model = model_loader.load_model(
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.venv/lib/python3.12/site-packages/vllm/model_executor/model_loader/base_loader.py", line 56, in load_model
    process_weights_after_loading(model, model_config, target_device)
  File "/root/.venv/lib/python3.12/site-packages/vllm/model_executor/model_loader/utils.py", line 108, in process_weights_after_loading
    quant_method.process_weights_after_loading(module)
  File "/root/.venv/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/awq_marlin.py", line 585, in process_weights_after_loading
    marlin_w13_qweight = ops.awq_marlin_moe_repack(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.venv/lib/python3.12/site-packages/vllm/_custom_ops.py", line 1327, in awq_marlin_moe_repack
    output[e] = torch.ops._C.awq_marlin_repack(
                ~~~~~~^^^

torch.AcceleratorError: CUDA error: the provided PTX was compiled with an unsupported toolchain.
Search for `cudaErrorUnsupportedPtxVersion' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```

From nvidia-smi:
Driver: 570.195.03 ✅
CUDA: 12.8 ✅
GPUs: 2× H100 PCIe (sm_90a) ✅
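For context, `cudaErrorUnsupportedPtxVersion` means a kernel's embedded PTX was produced by a CUDA toolkit newer than what the installed driver can JIT-compile. Here is a minimal sketch of that check; the helper name `ptx_supported` is hypothetical, and the versions are taken from the `nvidia-smi` output above:

```python
def ptx_supported(toolkit_version: str, driver_max_cuda: str) -> bool:
    """Return True if PTX produced by CUDA `toolkit_version` can be
    JIT-compiled by a driver whose maximum supported CUDA version is
    `driver_max_cuda` (compared as major.minor tuples)."""
    def parse(version: str) -> tuple[int, int]:
        major, minor = version.split(".")[:2]
        return (int(major), int(minor))
    return parse(toolkit_version) <= parse(driver_max_cuda)

# Driver 570.x reports CUDA 12.8, so kernels built with CUDA <= 12.8 load fine:
print(ptx_supported("12.8", "12.8"))  # True
# ...but kernels built with a newer toolkit trigger exactly this error:
print(ptx_supported("13.0", "12.8"))  # False
```

In practice this means checking that the CUDA version your vLLM/PyTorch wheels were built against does not exceed the version reported by `nvidia-smi`.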

Hi, the error was on my end; this fixed it: https://github.com/vllm-project/vllm/issues/31027

I was about to say. It works perfectly on my side. Better than cyankiwi quants!

CihanDogan changed discussion status to closed
