vLLM error: Extra inputs are not permitted
#1 opened by traphix

Hardware: 2 × L40S
vllm == 0.13.0rc2.dev6+g434ac76a7
Launch params:
python3 -m vllm.entrypoints.openai.api_server \
--served-model-name qwen3-next-80b-a3b-instruct \
--model /data/model-cache/Qwen3-Next-80B-A3B-Instruct-quantized.w4a16 \
--tensor-parallel-size 2 \
--enable-expert-parallel \
--enable-auto-tool-choice \
--tool-call-parser hermes
Error:
(APIServer pid=322) INFO 12-13 15:34:01 [api_server.py:1351] vLLM API server version 0.13.0rc2.dev6+g434ac76a7
(APIServer pid=322) INFO 12-13 15:34:01 [utils.py:253] non-default args: {'host': '0.0.0.0', 'port': 30522, 'enable_auto_tool_choice': True, 'tool_call_parser': 'hermes', 'model': '/data/model-cache/Qwen3-Next-80B-A3B-Instruct-quantized.w4a16', 'served_model_name': ['qwen3-next-80b-a3b-instruct'], 'tensor_parallel_size': 2, 'enable_expert_parallel': True}
(APIServer pid=322) INFO 12-13 15:34:01 [model.py:629] Resolved architecture: Qwen3NextForCausalLM
(APIServer pid=322) INFO 12-13 15:34:01 [model.py:1755] Using max model len 262144
(APIServer pid=322) INFO 12-13 15:34:01 [scheduler.py:228] Chunked prefill is enabled with max_num_batched_tokens=2048.
(APIServer pid=322) INFO 12-13 15:34:01 [config.py:310] Disabling cascade attention since it is not supported for hybrid models.
(APIServer pid=322) INFO 12-13 15:34:02 [config.py:437] Setting attention block size to 544 tokens to ensure that attention page size is >= mamba page size.
(APIServer pid=322) INFO 12-13 15:34:02 [config.py:461] Padding mamba page size by 1.49% to ensure that mamba page size and attention page size are exactly equal.
(APIServer pid=322) Traceback (most recent call last):
(APIServer pid=322) File "<frozen runpy>", line 198, in _run_module_as_main
(APIServer pid=322) File "<frozen runpy>", line 88, in _run_code
(APIServer pid=322) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1469, in <module>
(APIServer pid=322) uvloop.run(run_server(args))
(APIServer pid=322) File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 96, in run
(APIServer pid=322) return __asyncio.run(
(APIServer pid=322) ^^^^^^^^^^^^^^
(APIServer pid=322) File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=322) return runner.run(main)
(APIServer pid=322) ^^^^^^^^^^^^^^^^
(APIServer pid=322) File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=322) return self._loop.run_until_complete(task)
(APIServer pid=322) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=322) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=322) File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=322) return await main
(APIServer pid=322) ^^^^^^^^^^
(APIServer pid=322) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1398, in run_server
(APIServer pid=322) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=322) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1417, in run_server_worker
(APIServer pid=322) async with build_async_engine_client(
(APIServer pid=322) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=322) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=322) return await anext(self.gen)
(APIServer pid=322) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=322) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 172, in build_async_engine_client
(APIServer pid=322) async with build_async_engine_client_from_engine_args(
(APIServer pid=322) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=322) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=322) return await anext(self.gen)
(APIServer pid=322) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=322) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 198, in build_async_engine_client_from_engine_args
(APIServer pid=322) vllm_config = engine_args.create_engine_config(usage_context=usage_context)
(APIServer pid=322) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=322) File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1772, in create_engine_config
(APIServer pid=322) config = VllmConfig(
(APIServer pid=322) ^^^^^^^^^^^
(APIServer pid=322) File "/usr/local/lib/python3.12/dist-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
(APIServer pid=322) s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
(APIServer pid=322) pydantic_core._pydantic_core.ValidationError: 2 validation errors for VllmConfig
(APIServer pid=322) scale_dtype
(APIServer pid=322) Extra inputs are not permitted [type=extra_forbidden, input_value=None, input_type=NoneType]
(APIServer pid=322) For further information visit https://errors.pydantic.dev/2.12/v/extra_forbidden
(APIServer pid=322) zp_dtype
(APIServer pid=322) Extra inputs are not permitted [type=extra_forbidden, input_value=None, input_type=NoneType]
(APIServer pid=322) For further information visit https://errors.pydantic.dev/2.12/v/extra_forbidden
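The traceback shows that `create_engine_config` passed two keyword arguments, `scale_dtype` and `zp_dtype`, that this build's `VllmConfig` pydantic model does not declare, and the model rejects undeclared fields. These keys most likely come from the quantization config of the w4a16 checkpoint and suggest a version mismatch between the installed vLLM (and its compressed-tensors dependency) and the tool that produced the quantized model. A minimal pydantic sketch of the failure mode (assuming pydantic v2; `StrictConfig` is a hypothetical stand-in, not vLLM's actual class):

```python
from pydantic import BaseModel, ConfigDict, ValidationError

# Hypothetical stand-in for a config class that, like VllmConfig,
# forbids any field it does not declare.
class StrictConfig(BaseModel):
    model_config = ConfigDict(extra="forbid")
    model_path: str

try:
    # Passing an undeclared key reproduces the same error class seen
    # in the log: "Extra inputs are not permitted [type=extra_forbidden, ...]"
    StrictConfig(model_path="/data/model", scale_dtype=None)
except ValidationError as e:
    print(e.errors()[0]["type"])  # extra_forbidden
```

In other words, the server is not failing inside the model itself; config validation rejects the checkpoint's extra quantization keys before the engine starts, so aligning the vLLM/compressed-tensors version with the one used to quantize the model (or using a build whose config accepts these fields) is the direction to investigate.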