Problem when loading with latest vllm
#12
by flefevre - opened
Hello
When trying to set up a vLLM 0.7.3 server with
command: --host 0.0.0.0 --port ${VLLM_PORT} --model mistralai/Mistral-Small-3.1-24B-Instruct-2503 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --limit_mm_per_prompt 'image=10' --tensor-parallel-size ${TENSOR_PARALLEL_SIZE} --gpu-memory-utilization ${GPU_MEMORY_UTILIZATION} --trust-remote-code --disable-log-requests
I got the following error:
TypeError: MultimodalConfig.__init__() got an unexpected keyword argument 'spatial_merge_size'
I would appreciate your help.
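For reference, the server runs as a docker compose service (hence the vllm-mistralsmall prefix in the logs below). A rough docker-run equivalent of the setup, for anyone wanting to reproduce it; this is a sketch: the vllm/vllm-openai:v0.7.3 image tag and the --gpus flag are assumptions, while the port, tensor-parallel size, and GPU utilization values are taken from the logs.

    # Sketch of the setup as a plain docker run (image tag assumed;
    # env values match the Namespace(...) line in the logs below)
    export VLLM_PORT=5011 TENSOR_PARALLEL_SIZE=1 GPU_MEMORY_UTILIZATION=0.6
    docker run --gpus all -p ${VLLM_PORT}:${VLLM_PORT} \
        vllm/vllm-openai:v0.7.3 \
        --host 0.0.0.0 --port ${VLLM_PORT} \
        --model mistralai/Mistral-Small-3.1-24B-Instruct-2503 \
        --tokenizer_mode mistral --config_format mistral --load_format mistral \
        --tool-call-parser mistral --enable-auto-tool-choice \
        --limit_mm_per_prompt 'image=10' \
        --tensor-parallel-size ${TENSOR_PARALLEL_SIZE} \
        --gpu-memory-utilization ${GPU_MEMORY_UTILIZATION} \
        --trust-remote-code --disable-log-requests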
Detailed logs:
vllm-mistralsmall | INFO 03-18 14:45:09 __init__.py:207] Automatically detected platform cuda.
vllm-mistralsmall | INFO 03-18 14:45:09 api_server.py:912] vLLM API server version 0.7.3
vllm-mistralsmall | INFO 03-18 14:45:09 api_server.py:913] args: Namespace(host='0.0.0.0', port=5011, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=True, enable_reasoning=False, reasoning_parser=None, tool_call_parser='mistral', tool_parser_plugin='', model='mistralai/Mistral-Small-3.1-24B-Instruct-2503', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='mistral', trust_remote_code=True, allowed_local_media_path=None, download_dir=None, load_format='mistral', config_format='mistral', dtype='auto', kv_cache_dtype='auto', max_model_len=None, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.6, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt={'image': 10}, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, disable_log_requests=True, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False)
vllm-mistralsmall | INFO 03-18 14:45:09 api_server.py:209] Started engine process with PID 43
vllm-mistralsmall | INFO 03-18 14:45:09 config.py:2444] Downcasting torch.float32 to torch.float16.
vllm-mistralsmall | INFO 03-18 14:45:12 __init__.py:207] Automatically detected platform cuda.
vllm-mistralsmall | INFO 03-18 14:45:12 config.py:2444] Downcasting torch.float32 to torch.float16.
vllm-mistralsmall | INFO 03-18 14:45:13 config.py:549] This model supports multiple tasks: {'embed', 'reward', 'classify', 'generate', 'score'}. Defaulting to 'generate'.
vllm-mistralsmall | WARNING 03-18 14:45:13 arg_utils.py:1197] The model has a long context length (128000). This may cause OOM errors during the initial memory profiling phase, or result in low performance due to small KV cache space. Consider setting --max-model-len to a smaller value.
vllm-mistralsmall | Traceback (most recent call last):
vllm-mistralsmall | File "<frozen runpy>", line 198, in _run_module_as_main
vllm-mistralsmall | File "<frozen runpy>", line 88, in _run_code
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 991, in <module>
vllm-mistralsmall | uvloop.run(run_server(args))
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
vllm-mistralsmall | return __asyncio.run(
vllm-mistralsmall | ^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
vllm-mistralsmall | return runner.run(main)
vllm-mistralsmall | ^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
vllm-mistralsmall | return self._loop.run_until_complete(task)
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
vllm-mistralsmall | return await main
vllm-mistralsmall | ^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 947, in run_server
vllm-mistralsmall | async with build_async_engine_client(args) as engine_client:
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
vllm-mistralsmall | return await anext(self.gen)
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 139, in build_async_engine_client
vllm-mistralsmall | async with build_async_engine_client_from_engine_args(
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
vllm-mistralsmall | return await anext(self.gen)
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 223, in build_async_engine_client_from_engine_args
vllm-mistralsmall | mq_engine_client = await asyncio.get_running_loop().run_in_executor(
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/lib/python3.12/concurrent/futures/thread.py", line 59, in run
vllm-mistralsmall | result = self.fn(*self.args, **self.kwargs)
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/client.py", line 99, in __init__
vllm-mistralsmall | self.tokenizer = init_tokenizer_from_configs(
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer_group/__init__.py", line 32, in init_tokenizer_from_configs
vllm-mistralsmall | return get_tokenizer_group(parallel_config.tokenizer_pool_config,
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer_group/__init__.py", line 53, in get_tokenizer_group
vllm-mistralsmall | return tokenizer_cls.from_config(tokenizer_pool_config, **init_kwargs)
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 33, in from_config
vllm-mistralsmall | return cls(**init_kwargs)
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 25, in __init__
vllm-mistralsmall | self.tokenizer = get_tokenizer(self.tokenizer_id, **tokenizer_config)
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py", line 191, in get_tokenizer
vllm-mistralsmall | tokenizer = MistralTokenizer.from_pretrained(str(tokenizer_name),
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizers/mistral.py", line 227, in from_pretrained
vllm-mistralsmall | mistral_tokenizer = PublicMistralTokenizer.from_file(tokenizer_file)
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/mistral_common/tokens/tokenizers/mistral.py", line 184, in from_file
vllm-mistralsmall | tokenizer = Tekkenizer.from_file(tokenizer_filename)
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | File "/usr/local/lib/python3.12/dist-packages/mistral_common/tokens/tokenizers/tekken.py", line 146, in from_file
vllm-mistralsmall | untyped["multimodal"] = MultimodalConfig(**mm)
vllm-mistralsmall | ^^^^^^^^^^^^^^^^^^^^^^
vllm-mistralsmall | TypeError: MultimodalConfig.__init__() got an unexpected keyword argument 'spatial_merge_size'
vllm-mistralsmall | INFO 03-18 14:45:17 config.py:549] This model supports multiple tasks: {'embed', 'classify', 'generate', 'score', 'reward'}. Defaulting to 'generate'.
vllm-mistralsmall | WARNING 03-18 14:45:17 arg_utils.py:1197] The model has a long context length (128000). This may cause OOM errors during the initial memory profiling phase, or result in low performance due to small KV cache space. Consider setting --max-model-len to a smaller value.
vllm-mistralsmall | INFO 03-18 14:45:17 llm_engine.py:234] Initializing a V0 LLM engine (v0.7.3) with config: model='mistralai/Mistral-Small-3.1-24B-Instruct-2503', speculative_config=None, tokenizer='mistralai/Mistral-Small-3.1-24B-Instruct-2503', skip_tokenizer_init=False, tokenizer_mode=mistral, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=128000, download_dir=None, load_format=LoadFormat.MISTRAL, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=mistralai/Mistral-Small-3.1-24B-Instruct-2503, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True,
^CGracefully stopping... (press Ctrl+C again to force)
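The traceback bottoms out in mistral_common, not in vLLM itself: the model's tokenizer file carries a spatial_merge_size entry in its multimodal config, and the mistral_common version bundled with vLLM 0.7.3 does not know that field yet. A quick way to check whether the installed mistral_common accepts it (a hedged one-liner; the import path is the one shown in the traceback):

    # Prints True if the installed mistral_common knows the new field;
    # False reproduces the failure mode above.
    python -c "from mistral_common.tokens.tokenizers.tekken import MultimodalConfig; \
    import inspect; print('spatial_merge_size' in inspect.signature(MultimodalConfig).parameters)"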
Solved: we have to use vLLM >= 0.8.0, just published yesterday!
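For a docker setup like the one above, that means bumping the image tag (a sketch; the tag is assumed to follow the usual vX.Y.Z scheme on Docker Hub), and for a pip install, upgrading the package pulls in a mistral_common release that knows spatial_merge_size:

    # Docker: switch to a >= 0.8.0 image
    docker pull vllm/vllm-openai:v0.8.0
    # pip: upgrade vllm together with its pinned mistral_common
    pip install -U "vllm>=0.8.0"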
Thanks for the work done!
flefevre changed discussion status to closed