Elfsong committed on
Commit ec973df · verified · 1 Parent(s): 03048ae

Scheduled Commit

Files changed (5)
  1. vllm_0004000.log +143 -20
  2. vllm_0005000.log +241 -32
  3. vllm_0006000.log +18 -20
  4. vllm_0006500.log +203 -97
  5. vllm_0007000.log +18 -20
vllm_0004000.log CHANGED
@@ -1,20 +1,143 @@
1
- (APIServer pid=3232323) INFO 02-03 01:26:39 [utils.py:325]
2
- (APIServer pid=3232323) INFO 02-03 01:26:39 [utils.py:325] █ █ █▄ ▄█
3
- (APIServer pid=3232323) INFO 02-03 01:26:39 [utils.py:325] ▄▄ ▄█ █ █ █ ▀▄▀ █ version 0.15.0
4
- (APIServer pid=3232323) INFO 02-03 01:26:39 [utils.py:325] █▄█▀ █ █ █ █ model Elfsong/VLM_stage_2_iter_0004000
5
- (APIServer pid=3232323) INFO 02-03 01:26:39 [utils.py:325] ▀▀ ▀▀▀▀▀ ▀▀▀▀▀ ▀ ▀
6
- (APIServer pid=3232323) INFO 02-03 01:26:39 [utils.py:325]
7
- (APIServer pid=3232323) INFO 02-03 01:26:39 [utils.py:261] non-default args: {'port': 9000, 'model': 'Elfsong/VLM_stage_2_iter_0004000', 'trust_remote_code': True, 'quantization': 'bitsandbytes', 'gpu_memory_utilization': 0.3}
8
- (APIServer pid=3232323) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
9
- (APIServer pid=3232323) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
10
- (APIServer pid=3232323) INFO 02-03 01:26:41 [model.py:541] Resolved architecture: Qwen3ForCausalLM
11
- (APIServer pid=3232323) INFO 02-03 01:26:41 [model.py:1561] Using max model len 40960
12
- (APIServer pid=3232323) INFO 02-03 01:26:41 [scheduler.py:226] Chunked prefill is enabled with max_num_batched_tokens=8192.
13
- (APIServer pid=3232323) INFO 02-03 01:26:44 [vllm.py:624] Asynchronous scheduling is enabled.
14
- (EngineCore_DP0 pid=3233898) INFO 02-03 01:26:58 [core.py:96] Initializing a V1 LLM engine (v0.15.0) with config: model='Elfsong/VLM_stage_2_iter_0004000', speculative_config=None, tokenizer='Elfsong/VLM_stage_2_iter_0004000', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=bitsandbytes, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Elfsong/VLM_stage_2_iter_0004000, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer', 'vllm::rocm_aiter_sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': True}, 'local_cache_dir': None}
15
- (EngineCore_DP0 pid=3233898) INFO 02-03 01:26:59 [parallel_state.py:1212] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.21.25.98:52225 backend=nccl
16
- (EngineCore_DP0 pid=3233898) INFO 02-03 01:26:59 [parallel_state.py:1423] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank N/A
17
- (EngineCore_DP0 pid=3233898) INFO 02-03 01:27:00 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0004000...
18
- (EngineCore_DP0 pid=3233898) INFO 02-03 01:27:02 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
19
- (EngineCore_DP0 pid=3233898) INFO 02-03 01:27:03 [bitsandbytes_loader.py:786] Loading weights with BitsAndBytes quantization. May take a while ...
20
- Cancellation requested; stopping current tasks.
1
+ (APIServer pid=3309550) INFO 02-03 01:40:54 [utils.py:325]
2
+ (APIServer pid=3309550) INFO 02-03 01:40:54 [utils.py:325] █ █ █▄ ▄█
3
+ (APIServer pid=3309550) INFO 02-03 01:40:54 [utils.py:325] ▄▄ ▄█ █ █ █ ▀▄▀ █ version 0.15.0
4
+ (APIServer pid=3309550) INFO 02-03 01:40:54 [utils.py:325] █▄█▀ █ █ █ █ model Elfsong/VLM_stage_2_iter_0004000
5
+ (APIServer pid=3309550) INFO 02-03 01:40:54 [utils.py:325] ▀▀ ▀▀▀▀▀ ▀▀▀▀▀ ▀ ▀
6
+ (APIServer pid=3309550) INFO 02-03 01:40:54 [utils.py:325]
7
+ (APIServer pid=3309550) INFO 02-03 01:40:54 [utils.py:261] non-default args: {'port': 9000, 'model': 'Elfsong/VLM_stage_2_iter_0004000', 'trust_remote_code': True, 'gpu_memory_utilization': 0.4}
8
+ (APIServer pid=3309550) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
9
+ (APIServer pid=3309550) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
10
+ (APIServer pid=3309550) INFO 02-03 01:40:56 [model.py:541] Resolved architecture: Qwen3ForCausalLM
11
+ (APIServer pid=3309550) INFO 02-03 01:40:56 [model.py:1561] Using max model len 40960
12
+ (APIServer pid=3309550) INFO 02-03 01:40:56 [scheduler.py:226] Chunked prefill is enabled with max_num_batched_tokens=8192.
13
+ (APIServer pid=3309550) INFO 02-03 01:40:56 [vllm.py:624] Asynchronous scheduling is enabled.
14
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:41:06 [core.py:96] Initializing a V1 LLM engine (v0.15.0) with config: model='Elfsong/VLM_stage_2_iter_0004000', speculative_config=None, tokenizer='Elfsong/VLM_stage_2_iter_0004000', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Elfsong/VLM_stage_2_iter_0004000, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer', 'vllm::rocm_aiter_sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': True}, 'local_cache_dir': None}
15
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:41:11 [parallel_state.py:1212] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.21.25.98:59653 backend=nccl
16
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:41:11 [parallel_state.py:1423] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank N/A
17
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:41:12 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0004000...
18
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:41:13 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
19
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:42:48 [weight_utils.py:527] Time spent downloading weights for Elfsong/VLM_stage_2_iter_0004000: 92.466791 seconds
20
+ (EngineCore_DP0 pid=3310619)
21
+ (EngineCore_DP0 pid=3310619)
22
+ (EngineCore_DP0 pid=3310619)
23
+ (EngineCore_DP0 pid=3310619)
24
+ (EngineCore_DP0 pid=3310619)
25
+ (EngineCore_DP0 pid=3310619)
26
+ (EngineCore_DP0 pid=3310619)
27
+ (EngineCore_DP0 pid=3310619)
28
+ (EngineCore_DP0 pid=3310619)
29
+ (EngineCore_DP0 pid=3310619)
30
+ (EngineCore_DP0 pid=3310619)
31
+ (EngineCore_DP0 pid=3310619)
32
+ (EngineCore_DP0 pid=3310619)
33
+ (EngineCore_DP0 pid=3310619)
34
+ (EngineCore_DP0 pid=3310619)
35
+ (EngineCore_DP0 pid=3310619)
36
+ (EngineCore_DP0 pid=3310619)
37
+ (EngineCore_DP0 pid=3310619)
38
+ (EngineCore_DP0 pid=3310619)
39
+ (EngineCore_DP0 pid=3310619)
40
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:43:08 [default_loader.py:291] Loading weights took 20.23 seconds
41
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:43:09 [gpu_model_runner.py:4118] Model loading took 61.03 GiB memory and 115.770729 seconds
42
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:43:25 [backends.py:805] Using cache directory: /home/mingzhe/.cache/vllm/torch_compile_cache/226ddebc06/rank_0_0/backbone for vLLM's torch.compile
43
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:43:25 [backends.py:865] Dynamo bytecode transform time: 15.42 s
44
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:43:38 [backends.py:302] Cache the graph of compile range (1, 8192) for later use
45
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:44:12 [backends.py:319] Compiling a graph for compile range (1, 8192) takes 34.69 s
46
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:44:12 [monitor.py:34] torch.compile takes 50.11 s in total
47
+ (EngineCore_DP0 pid=3310619) INFO 02-03 01:44:15 [gpu_worker.py:356] Available KV cache memory: -77.05 GiB
48
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] EngineCore failed to start.
49
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] Traceback (most recent call last):
50
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 937, in run_engine_core
51
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] engine_core = EngineCoreProc(*args, engine_index=dp_rank, **kwargs)
52
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
53
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 691, in __init__
54
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] super().__init__(
55
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 112, in __init__
56
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(
57
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
58
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 253, in _initialize_kv_caches
59
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] kv_cache_configs = get_kv_cache_configs(
60
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^
61
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/core/kv_cache_utils.py", line 1516, in get_kv_cache_configs
62
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] _check_enough_kv_cache_memory(
63
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/core/kv_cache_utils.py", line 616, in _check_enough_kv_cache_memory
64
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] raise ValueError(
65
+ (EngineCore_DP0 pid=3310619) ERROR 02-03 01:44:15 [core.py:946] ValueError: No available memory for the cache blocks. Try increasing `gpu_memory_utilization` when initializing the engine. See https://docs.vllm.ai/en/latest/configuration/conserving_memory/ for more details.
66
+ (EngineCore_DP0 pid=3310619) Process EngineCore_DP0:
67
+ (EngineCore_DP0 pid=3310619) Traceback (most recent call last):
68
+ (EngineCore_DP0 pid=3310619) File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
69
+ (EngineCore_DP0 pid=3310619) self.run()
70
+ (EngineCore_DP0 pid=3310619) File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
71
+ (EngineCore_DP0 pid=3310619) self._target(*self._args, **self._kwargs)
72
+ (EngineCore_DP0 pid=3310619) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 950, in run_engine_core
73
+ (EngineCore_DP0 pid=3310619) raise e
74
+ (EngineCore_DP0 pid=3310619) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 937, in run_engine_core
75
+ (EngineCore_DP0 pid=3310619) engine_core = EngineCoreProc(*args, engine_index=dp_rank, **kwargs)
76
+ (EngineCore_DP0 pid=3310619) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
77
+ (EngineCore_DP0 pid=3310619) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 691, in __init__
78
+ (EngineCore_DP0 pid=3310619) super().__init__(
79
+ (EngineCore_DP0 pid=3310619) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 112, in __init__
80
+ (EngineCore_DP0 pid=3310619) num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(
81
+ (EngineCore_DP0 pid=3310619) ^^^^^^^^^^^^^^^^^^^^^^^^^^^
82
+ (EngineCore_DP0 pid=3310619) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 253, in _initialize_kv_caches
83
+ (EngineCore_DP0 pid=3310619) kv_cache_configs = get_kv_cache_configs(
84
+ (EngineCore_DP0 pid=3310619) ^^^^^^^^^^^^^^^^^^^^^
85
+ (EngineCore_DP0 pid=3310619) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/core/kv_cache_utils.py", line 1516, in get_kv_cache_configs
86
+ (EngineCore_DP0 pid=3310619) _check_enough_kv_cache_memory(
87
+ (EngineCore_DP0 pid=3310619) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/core/kv_cache_utils.py", line 616, in _check_enough_kv_cache_memory
88
+ (EngineCore_DP0 pid=3310619) raise ValueError(
89
+ (EngineCore_DP0 pid=3310619) ValueError: No available memory for the cache blocks. Try increasing `gpu_memory_utilization` when initializing the engine. See https://docs.vllm.ai/en/latest/configuration/conserving_memory/ for more details.
90
+ [rank0]:[W203 01:44:17.466159054 ProcessGroupNCCL.cpp:1524] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
91
+ (APIServer pid=3309550) Traceback (most recent call last):
92
+ (APIServer pid=3309550) File "<frozen runpy>", line 198, in _run_module_as_main
93
+ (APIServer pid=3309550) File "<frozen runpy>", line 88, in _run_code
94
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 991, in <module>
95
+ (APIServer pid=3309550) uvloop.run(run_server(args))
96
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/uvloop/__init__.py", line 96, in run
97
+ (APIServer pid=3309550) return __asyncio.run(
98
+ (APIServer pid=3309550) ^^^^^^^^^^^^^^
99
+ (APIServer pid=3309550) File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
100
+ (APIServer pid=3309550) return runner.run(main)
101
+ (APIServer pid=3309550) ^^^^^^^^^^^^^^^^
102
+ (APIServer pid=3309550) File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
103
+ (APIServer pid=3309550) return self._loop.run_until_complete(task)
104
+ (APIServer pid=3309550) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
105
+ (APIServer pid=3309550) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
106
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/uvloop/__init__.py", line 48, in wrapper
107
+ (APIServer pid=3309550) return await main
108
+ (APIServer pid=3309550) ^^^^^^^^^^
109
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 919, in run_server
110
+ (APIServer pid=3309550) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
111
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 938, in run_server_worker
112
+ (APIServer pid=3309550) async with build_async_engine_client(
113
+ (APIServer pid=3309550) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
114
+ (APIServer pid=3309550) return await anext(self.gen)
115
+ (APIServer pid=3309550) ^^^^^^^^^^^^^^^^^^^^^
116
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 147, in build_async_engine_client
117
+ (APIServer pid=3309550) async with build_async_engine_client_from_engine_args(
118
+ (APIServer pid=3309550) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
119
+ (APIServer pid=3309550) return await anext(self.gen)
120
+ (APIServer pid=3309550) ^^^^^^^^^^^^^^^^^^^^^
121
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 188, in build_async_engine_client_from_engine_args
122
+ (APIServer pid=3309550) async_llm = AsyncLLM.from_vllm_config(
123
+ (APIServer pid=3309550) ^^^^^^^^^^^^^^^^^^^^^^^^^^
124
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 228, in from_vllm_config
125
+ (APIServer pid=3309550) return cls(
126
+ (APIServer pid=3309550) ^^^^
127
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 155, in __init__
128
+ (APIServer pid=3309550) self.engine_core = EngineCoreClient.make_async_mp_client(
129
+ (APIServer pid=3309550) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
130
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 122, in make_async_mp_client
131
+ (APIServer pid=3309550) return AsyncMPClient(*client_args)
132
+ (APIServer pid=3309550) ^^^^^^^^^^^^^^^^^^^^^^^^^^^
133
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 819, in __init__
134
+ (APIServer pid=3309550) super().__init__(
135
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 479, in __init__
136
+ (APIServer pid=3309550) with launch_core_engines(vllm_config, executor_class, log_stats) as (
137
+ (APIServer pid=3309550) File "/usr/lib/python3.12/contextlib.py", line 144, in __exit__
138
+ (APIServer pid=3309550) next(self.gen)
139
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 933, in launch_core_engines
140
+ (APIServer pid=3309550) wait_for_engine_startup(
141
+ (APIServer pid=3309550) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 992, in wait_for_engine_startup
142
+ (APIServer pid=3309550) raise RuntimeError(
143
+ (APIServer pid=3309550) RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}
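
Note on the failure above: the left side of this diff ran the checkpoint with quantization='bitsandbytes' and gpu_memory_utilization=0.3, while the rerun loads the full bf16 checkpoint (61.03 GiB per the log) under a 0.4 budget, so the KV cache allowance comes out negative (-77.05 GiB) and the engine aborts with the ValueError shown. Below is a minimal retry sketch using vLLM's offline LLM API; the keyword arguments mirror the CLI flags in this log, but the chosen values are assumptions, not taken from the commit:

    # Hedged sketch, not the author's fix: raise the memory budget so that
    # weights plus KV cache fit, or restore the 4-bit path the old run used.
    from vllm import LLM

    llm = LLM(
        model="Elfsong/VLM_stage_2_iter_0004000",
        trust_remote_code=True,
        gpu_memory_utilization=0.85,    # assumed; 0.4 was below the 61.03 GiB of bf16 weights
        # quantization="bitsandbytes",  # alternative: the old runs' 4-bit loading
    )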
vllm_0005000.log CHANGED
@@ -1,32 +1,241 @@
1
- (APIServer pid=3232791) INFO 02-03 01:26:45 [utils.py:325]
2
- (APIServer pid=3232791) INFO 02-03 01:26:45 [utils.py:325] █ █ █▄ ▄█
3
- (APIServer pid=3232791) INFO 02-03 01:26:45 [utils.py:325] ▄▄ ▄█ █ █ █ ▀▄▀ █ version 0.15.0
4
- (APIServer pid=3232791) INFO 02-03 01:26:45 [utils.py:325] █▄█▀ █ █ █ █ model Elfsong/VLM_stage_2_iter_0005000
5
- (APIServer pid=3232791) INFO 02-03 01:26:45 [utils.py:325] ▀▀ ▀▀▀▀▀ ▀▀▀▀▀ ▀ ▀
6
- (APIServer pid=3232791) INFO 02-03 01:26:45 [utils.py:325]
7
- (APIServer pid=3232791) INFO 02-03 01:26:45 [utils.py:261] non-default args: {'port': 9001, 'model': 'Elfsong/VLM_stage_2_iter_0005000', 'trust_remote_code': True, 'quantization': 'bitsandbytes', 'gpu_memory_utilization': 0.3}
8
- (APIServer pid=3232791) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
9
- (APIServer pid=3232791) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
10
- (APIServer pid=3232791) INFO 02-03 01:26:46 [model.py:541] Resolved architecture: Qwen3ForCausalLM
11
- (APIServer pid=3232791) INFO 02-03 01:26:46 [model.py:1561] Using max model len 40960
12
- (APIServer pid=3232791) INFO 02-03 01:26:46 [scheduler.py:226] Chunked prefill is enabled with max_num_batched_tokens=8192.
13
- (APIServer pid=3232791) INFO 02-03 01:26:50 [vllm.py:624] Asynchronous scheduling is enabled.
14
- (EngineCore_DP0 pid=3234476) INFO 02-03 01:27:03 [core.py:96] Initializing a V1 LLM engine (v0.15.0) with config: model='Elfsong/VLM_stage_2_iter_0005000', speculative_config=None, tokenizer='Elfsong/VLM_stage_2_iter_0005000', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=bitsandbytes, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Elfsong/VLM_stage_2_iter_0005000, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer', 'vllm::rocm_aiter_sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': True}, 'local_cache_dir': None}
15
- (EngineCore_DP0 pid=3234476) INFO 02-03 01:27:04 [parallel_state.py:1212] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.21.25.98:54097 backend=nccl
16
- (EngineCore_DP0 pid=3234476) INFO 02-03 01:27:04 [parallel_state.py:1423] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank N/A
17
- (EngineCore_DP0 pid=3234476) INFO 02-03 01:27:06 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0005000...
18
- (EngineCore_DP0 pid=3234476) INFO 02-03 01:27:08 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
19
- (EngineCore_DP0 pid=3234476) INFO 02-03 01:27:08 [bitsandbytes_loader.py:786] Loading weights with BitsAndBytes quantization. May take a while ...
20
- (EngineCore_DP0 pid=3234476) INFO 02-03 01:39:24 [weight_utils.py:527] Time spent downloading weights for Elfsong/VLM_stage_2_iter_0005000: 734.294342 seconds
21
- (EngineCore_DP0 pid=3234476)
22
- (EngineCore_DP0 pid=3234476)
23
- (EngineCore_DP0 pid=3234476)
24
- (EngineCore_DP0 pid=3234476)
25
- (EngineCore_DP0 pid=3234476)
26
- (EngineCore_DP0 pid=3234476)
27
- (EngineCore_DP0 pid=3234476)
28
- (EngineCore_DP0 pid=3234476)
29
- (EngineCore_DP0 pid=3234476)
30
- (EngineCore_DP0 pid=3234476)
31
- (EngineCore_DP0 pid=3234476)
32
- (EngineCore_DP0 pid=3234476)
1
+ (APIServer pid=3309898) INFO 02-03 01:40:59 [utils.py:325]
2
+ (APIServer pid=3309898) INFO 02-03 01:40:59 [utils.py:325] █ █ █▄ ▄█
3
+ (APIServer pid=3309898) INFO 02-03 01:40:59 [utils.py:325] ▄▄ ▄█ █ █ █ ▀▄▀ █ version 0.15.0
4
+ (APIServer pid=3309898) INFO 02-03 01:40:59 [utils.py:325] █▄█▀ █ █ █ █ model Elfsong/VLM_stage_2_iter_0005000
5
+ (APIServer pid=3309898) INFO 02-03 01:40:59 [utils.py:325] ▀▀ ▀▀▀▀▀ ▀▀▀▀▀ ▀ ▀
6
+ (APIServer pid=3309898) INFO 02-03 01:40:59 [utils.py:325]
7
+ (APIServer pid=3309898) INFO 02-03 01:40:59 [utils.py:261] non-default args: {'port': 9001, 'model': 'Elfsong/VLM_stage_2_iter_0005000', 'trust_remote_code': True, 'gpu_memory_utilization': 0.4}
8
+ (APIServer pid=3309898) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
9
+ (APIServer pid=3309898) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
10
+ (APIServer pid=3309898) INFO 02-03 01:41:01 [model.py:541] Resolved architecture: Qwen3ForCausalLM
11
+ (APIServer pid=3309898) INFO 02-03 01:41:01 [model.py:1561] Using max model len 40960
12
+ (APIServer pid=3309898) INFO 02-03 01:41:01 [scheduler.py:226] Chunked prefill is enabled with max_num_batched_tokens=8192.
13
+ (APIServer pid=3309898) INFO 02-03 01:41:01 [vllm.py:624] Asynchronous scheduling is enabled.
14
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:41:12 [core.py:96] Initializing a V1 LLM engine (v0.15.0) with config: model='Elfsong/VLM_stage_2_iter_0005000', speculative_config=None, tokenizer='Elfsong/VLM_stage_2_iter_0005000', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Elfsong/VLM_stage_2_iter_0005000, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer', 'vllm::rocm_aiter_sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': True}, 'local_cache_dir': None}
15
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:41:16 [parallel_state.py:1212] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.21.25.98:53221 backend=nccl
16
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:41:16 [parallel_state.py:1423] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank N/A
17
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:41:17 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0005000...
18
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:41:18 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
19
+ (EngineCore_DP0 pid=3312712)
20
+ (EngineCore_DP0 pid=3312712)
21
+ (EngineCore_DP0 pid=3312712)
22
+ (EngineCore_DP0 pid=3312712)
23
+ (EngineCore_DP0 pid=3312712)
24
+ (EngineCore_DP0 pid=3312712)
25
+ (EngineCore_DP0 pid=3312712)
26
+ (EngineCore_DP0 pid=3312712)
27
+ (EngineCore_DP0 pid=3312712)
28
+ (EngineCore_DP0 pid=3312712)
29
+ (EngineCore_DP0 pid=3312712)
30
+ (EngineCore_DP0 pid=3312712)
31
+ (EngineCore_DP0 pid=3312712)
32
+ (EngineCore_DP0 pid=3312712)
33
+ (EngineCore_DP0 pid=3312712)
34
+ (EngineCore_DP0 pid=3312712)
35
+ (EngineCore_DP0 pid=3312712)
36
+ (EngineCore_DP0 pid=3312712)
37
+ (EngineCore_DP0 pid=3312712)
38
+ (EngineCore_DP0 pid=3312712)
39
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:41:42 [default_loader.py:291] Loading weights took 21.25 seconds
40
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:41:43 [gpu_model_runner.py:4118] Model loading took 61.03 GiB memory and 24.133907 seconds
41
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:42:00 [backends.py:805] Using cache directory: /home/mingzhe/.cache/vllm/torch_compile_cache/9c9322c6b2/rank_0_0/backbone for vLLM's torch.compile
42
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:42:00 [backends.py:865] Dynamo bytecode transform time: 16.15 s
43
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:42:20 [backends.py:302] Cache the graph of compile range (1, 8192) for later use
44
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:44:12 [backends.py:319] Compiling a graph for compile range (1, 8192) takes 120.43 s
45
+ (EngineCore_DP0 pid=3312712) INFO 02-03 01:44:12 [monitor.py:34] torch.compile takes 136.58 s in total
46
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] EngineCore failed to start.
47
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] Traceback (most recent call last):
48
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4800, in _dummy_sampler_run
49
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] sampler_output = self.sampler(
50
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^
51
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
52
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] return self._call_impl(*args, **kwargs)
53
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
54
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
55
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] return forward_call(*args, **kwargs)
56
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
57
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/sample/sampler.py", line 96, in forward
58
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] sampled, processed_logprobs = self.sample(logits, sampling_metadata)
59
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
60
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/sample/sampler.py", line 187, in sample
61
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] random_sampled, processed_logprobs = self.topk_topp_sampler(
62
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^
63
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
64
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] return self._call_impl(*args, **kwargs)
65
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
66
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
67
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] return forward_call(*args, **kwargs)
68
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
69
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/sample/ops/topk_topp_sampler.py", line 104, in forward_native
70
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] logits = self.apply_top_k_top_p(logits, k, p)
71
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
72
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/sample/ops/topk_topp_sampler.py", line 262, in apply_top_k_top_p
73
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] logits_sort, logits_idx = logits.sort(dim=-1, descending=False)
74
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
75
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.74 GiB. GPU 0 has a total capacity of 139.80 GiB of which 368.44 MiB is free. Process 603285 has 4.61 GiB memory in use. Process 3310619 has 68.65 GiB memory in use. Including non-PyTorch memory, this process has 66.16 GiB memory in use. Of the allocated memory 65.02 GiB is allocated by PyTorch, and 416.55 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
76
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946]
77
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] The above exception was the direct cause of the following exception:
78
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946]
79
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] Traceback (most recent call last):
80
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 937, in run_engine_core
81
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] engine_core = EngineCoreProc(*args, engine_index=dp_rank, **kwargs)
82
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
83
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 691, in __init__
84
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] super().__init__(
85
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 112, in __init__
86
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(
87
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
88
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 242, in _initialize_kv_caches
89
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] available_gpu_memory = self.model_executor.determine_available_memory()
90
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
91
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 126, in determine_available_memory
92
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] return self.collective_rpc("determine_available_memory")
93
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
94
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/executor/uniproc_executor.py", line 75, in collective_rpc
95
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] result = run_method(self.driver_worker, method, args, kwargs)
96
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
97
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/serial_utils.py", line 461, in run_method
98
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] return func(*args, **kwargs)
99
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^
100
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
101
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] return func(*args, **kwargs)
102
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^
103
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 322, in determine_available_memory
104
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] self.model_runner.profile_run()
105
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4981, in profile_run
106
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] output = self._dummy_sampler_run(last_hidden_states)
107
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
108
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
109
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] return func(*args, **kwargs)
110
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^
111
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4805, in _dummy_sampler_run
112
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] raise RuntimeError(
113
+ (EngineCore_DP0 pid=3312712) ERROR 02-03 01:44:12 [core.py:946] RuntimeError: CUDA out of memory occurred when warming up sampler with 1024 dummy requests. Please try lowering `max_num_seqs` or `gpu_memory_utilization` when initializing the engine.
114
+ (EngineCore_DP0 pid=3312712) Process EngineCore_DP0:
115
+ (EngineCore_DP0 pid=3312712) Traceback (most recent call last):
116
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4800, in _dummy_sampler_run
117
+ (EngineCore_DP0 pid=3312712) sampler_output = self.sampler(
118
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^
119
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
120
+ (EngineCore_DP0 pid=3312712) return self._call_impl(*args, **kwargs)
121
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
122
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
123
+ (EngineCore_DP0 pid=3312712) return forward_call(*args, **kwargs)
124
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
125
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/sample/sampler.py", line 96, in forward
126
+ (EngineCore_DP0 pid=3312712) sampled, processed_logprobs = self.sample(logits, sampling_metadata)
127
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
128
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/sample/sampler.py", line 187, in sample
129
+ (EngineCore_DP0 pid=3312712) random_sampled, processed_logprobs = self.topk_topp_sampler(
130
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^
131
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
132
+ (EngineCore_DP0 pid=3312712) return self._call_impl(*args, **kwargs)
133
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
134
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
135
+ (EngineCore_DP0 pid=3312712) return forward_call(*args, **kwargs)
136
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
137
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/sample/ops/topk_topp_sampler.py", line 104, in forward_native
138
+ (EngineCore_DP0 pid=3312712) logits = self.apply_top_k_top_p(logits, k, p)
139
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
140
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/sample/ops/topk_topp_sampler.py", line 262, in apply_top_k_top_p
141
+ (EngineCore_DP0 pid=3312712) logits_sort, logits_idx = logits.sort(dim=-1, descending=False)
142
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
143
+ (EngineCore_DP0 pid=3312712) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.74 GiB. GPU 0 has a total capacity of 139.80 GiB of which 368.44 MiB is free. Process 603285 has 4.61 GiB memory in use. Process 3310619 has 68.65 GiB memory in use. Including non-PyTorch memory, this process has 66.16 GiB memory in use. Of the allocated memory 65.02 GiB is allocated by PyTorch, and 416.55 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
144
+ (EngineCore_DP0 pid=3312712)
145
+ (EngineCore_DP0 pid=3312712) The above exception was the direct cause of the following exception:
146
+ (EngineCore_DP0 pid=3312712)
147
+ (EngineCore_DP0 pid=3312712) Traceback (most recent call last):
148
+ (EngineCore_DP0 pid=3312712) File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
149
+ (EngineCore_DP0 pid=3312712) self.run()
150
+ (EngineCore_DP0 pid=3312712) File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
151
+ (EngineCore_DP0 pid=3312712) self._target(*self._args, **self._kwargs)
152
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 950, in run_engine_core
153
+ (EngineCore_DP0 pid=3312712) raise e
154
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 937, in run_engine_core
155
+ (EngineCore_DP0 pid=3312712) engine_core = EngineCoreProc(*args, engine_index=dp_rank, **kwargs)
156
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
157
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 691, in __init__
158
+ (EngineCore_DP0 pid=3312712) super().__init__(
159
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 112, in __init__
160
+ (EngineCore_DP0 pid=3312712) num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(
161
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^
162
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 242, in _initialize_kv_caches
163
+ (EngineCore_DP0 pid=3312712) available_gpu_memory = self.model_executor.determine_available_memory()
164
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
165
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 126, in determine_available_memory
166
+ (EngineCore_DP0 pid=3312712) return self.collective_rpc("determine_available_memory")
167
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
168
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/executor/uniproc_executor.py", line 75, in collective_rpc
169
+ (EngineCore_DP0 pid=3312712) result = run_method(self.driver_worker, method, args, kwargs)
170
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
171
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/serial_utils.py", line 461, in run_method
172
+ (EngineCore_DP0 pid=3312712) return func(*args, **kwargs)
173
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^
174
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
+ (EngineCore_DP0 pid=3312712) return func(*args, **kwargs)
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 322, in determine_available_memory
+ (EngineCore_DP0 pid=3312712) self.model_runner.profile_run()
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4981, in profile_run
+ (EngineCore_DP0 pid=3312712) output = self._dummy_sampler_run(last_hidden_states)
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
+ (EngineCore_DP0 pid=3312712) return func(*args, **kwargs)
+ (EngineCore_DP0 pid=3312712) ^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3312712) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4805, in _dummy_sampler_run
+ (EngineCore_DP0 pid=3312712) raise RuntimeError(
+ (EngineCore_DP0 pid=3312712) RuntimeError: CUDA out of memory occurred when warming up sampler with 1024 dummy requests. Please try lowering `max_num_seqs` or `gpu_memory_utilization` when initializing the engine.
+ [rank0]:[W203 01:44:15.196915434 ProcessGroupNCCL.cpp:1524] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ (APIServer pid=3309898) Traceback (most recent call last):
+ (APIServer pid=3309898) File "<frozen runpy>", line 198, in _run_module_as_main
+ (APIServer pid=3309898) File "<frozen runpy>", line 88, in _run_code
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 991, in <module>
+ (APIServer pid=3309898) uvloop.run(run_server(args))
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/uvloop/__init__.py", line 96, in run
+ (APIServer pid=3309898) return __asyncio.run(
+ (APIServer pid=3309898) ^^^^^^^^^^^^^^
+ (APIServer pid=3309898) File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
+ (APIServer pid=3309898) return runner.run(main)
+ (APIServer pid=3309898) ^^^^^^^^^^^^^^^^
+ (APIServer pid=3309898) File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
+ (APIServer pid=3309898) return self._loop.run_until_complete(task)
+ (APIServer pid=3309898) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3309898) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/uvloop/__init__.py", line 48, in wrapper
+ (APIServer pid=3309898) return await main
+ (APIServer pid=3309898) ^^^^^^^^^^
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 919, in run_server
+ (APIServer pid=3309898) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 938, in run_server_worker
+ (APIServer pid=3309898) async with build_async_engine_client(
+ (APIServer pid=3309898) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
+ (APIServer pid=3309898) return await anext(self.gen)
+ (APIServer pid=3309898) ^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 147, in build_async_engine_client
+ (APIServer pid=3309898) async with build_async_engine_client_from_engine_args(
+ (APIServer pid=3309898) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
+ (APIServer pid=3309898) return await anext(self.gen)
+ (APIServer pid=3309898) ^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 188, in build_async_engine_client_from_engine_args
+ (APIServer pid=3309898) async_llm = AsyncLLM.from_vllm_config(
+ (APIServer pid=3309898) ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 228, in from_vllm_config
+ (APIServer pid=3309898) return cls(
+ (APIServer pid=3309898) ^^^^
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 155, in __init__
+ (APIServer pid=3309898) self.engine_core = EngineCoreClient.make_async_mp_client(
+ (APIServer pid=3309898) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 122, in make_async_mp_client
+ (APIServer pid=3309898) return AsyncMPClient(*client_args)
+ (APIServer pid=3309898) ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 819, in __init__
+ (APIServer pid=3309898) super().__init__(
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 479, in __init__
+ (APIServer pid=3309898) with launch_core_engines(vllm_config, executor_class, log_stats) as (
+ (APIServer pid=3309898) File "/usr/lib/python3.12/contextlib.py", line 144, in __exit__
+ (APIServer pid=3309898) next(self.gen)
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 933, in launch_core_engines
+ (APIServer pid=3309898) wait_for_engine_startup(
+ (APIServer pid=3309898) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 992, in wait_for_engine_startup
+ (APIServer pid=3309898) raise RuntimeError(
+ (APIServer pid=3309898) RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}
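The root cause above names its own fix: the sampler warm-up issues one dummy request per slot of `max_num_seqs` (1024 in this run), so lowering that cap, or the engine's share of the GPU that several of these servers share, avoids the OOM. A minimal relaunch sketch via vLLM's Python entrypoint; the checkpoint name is inferred from this log's filename, and the values 256 and 0.25 are illustrative assumptions, not tested settings:

    from vllm import LLM

    # Sketch of the two knobs the RuntimeError itself suggests (values assumed).
    llm = LLM(
        model="Elfsong/VLM_stage_2_iter_0005000",  # assumed checkpoint for this log
        max_num_seqs=256,             # warm-up then uses 256 dummy requests, not 1024
        gpu_memory_utilization=0.25,  # smaller slice of the shared GPU
    )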
vllm_0006000.log CHANGED
@@ -1,20 +1,18 @@
- (APIServer pid=3233259) INFO 02-03 01:26:50 [utils.py:325]
- (APIServer pid=3233259) INFO 02-03 01:26:50 [utils.py:325] β–ˆ β–ˆ β–ˆβ–„ β–„β–ˆ
- (APIServer pid=3233259) INFO 02-03 01:26:50 [utils.py:325] β–„β–„ β–„β–ˆ β–ˆ β–ˆ β–ˆ β–€β–„β–€ β–ˆ version 0.15.0
- (APIServer pid=3233259) INFO 02-03 01:26:50 [utils.py:325] β–ˆβ–„β–ˆβ–€ β–ˆ β–ˆ β–ˆ β–ˆ model Elfsong/VLM_stage_2_iter_0006000
- (APIServer pid=3233259) INFO 02-03 01:26:50 [utils.py:325] β–€β–€ β–€β–€β–€β–€β–€ β–€β–€β–€β–€β–€ β–€ β–€
- (APIServer pid=3233259) INFO 02-03 01:26:50 [utils.py:325]
- (APIServer pid=3233259) INFO 02-03 01:26:50 [utils.py:261] non-default args: {'port': 9002, 'model': 'Elfsong/VLM_stage_2_iter_0006000', 'trust_remote_code': True, 'quantization': 'bitsandbytes', 'gpu_memory_utilization': 0.3}
- (APIServer pid=3233259) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
- (APIServer pid=3233259) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
- (APIServer pid=3233259) INFO 02-03 01:26:51 [model.py:541] Resolved architecture: Qwen3ForCausalLM
- (APIServer pid=3233259) INFO 02-03 01:26:51 [model.py:1561] Using max model len 40960
- (APIServer pid=3233259) INFO 02-03 01:26:52 [scheduler.py:226] Chunked prefill is enabled with max_num_batched_tokens=8192.
- (APIServer pid=3233259) INFO 02-03 01:26:55 [vllm.py:624] Asynchronous scheduling is enabled.
- (EngineCore_DP0 pid=3235235) INFO 02-03 01:27:09 [core.py:96] Initializing a V1 LLM engine (v0.15.0) with config: model='Elfsong/VLM_stage_2_iter_0006000', speculative_config=None, tokenizer='Elfsong/VLM_stage_2_iter_0006000', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=bitsandbytes, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Elfsong/VLM_stage_2_iter_0006000, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer', 'vllm::rocm_aiter_sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': True}, 'local_cache_dir': None}
- (EngineCore_DP0 pid=3235235) INFO 02-03 01:27:10 [parallel_state.py:1212] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.21.25.98:60911 backend=nccl
- (EngineCore_DP0 pid=3235235) INFO 02-03 01:27:10 [parallel_state.py:1423] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank N/A
- (EngineCore_DP0 pid=3235235) INFO 02-03 01:27:11 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0006000...
- (EngineCore_DP0 pid=3235235) INFO 02-03 01:27:13 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
- (EngineCore_DP0 pid=3235235) INFO 02-03 01:27:14 [bitsandbytes_loader.py:786] Loading weights with BitsAndBytes quantization. May take a while ...
- Cancellation requested; stopping current tasks.
 
+ (APIServer pid=3310473) INFO 02-03 01:41:05 [utils.py:325]
+ (APIServer pid=3310473) INFO 02-03 01:41:05 [utils.py:325] β–ˆ β–ˆ β–ˆβ–„ β–„β–ˆ
+ (APIServer pid=3310473) INFO 02-03 01:41:05 [utils.py:325] β–„β–„ β–„β–ˆ β–ˆ β–ˆ β–ˆ β–€β–„β–€ β–ˆ version 0.15.0
+ (APIServer pid=3310473) INFO 02-03 01:41:05 [utils.py:325] β–ˆβ–„β–ˆβ–€ β–ˆ β–ˆ β–ˆ β–ˆ model Elfsong/VLM_stage_2_iter_0006000
+ (APIServer pid=3310473) INFO 02-03 01:41:05 [utils.py:325] β–€β–€ β–€β–€β–€β–€β–€ β–€β–€β–€β–€β–€ β–€ β–€
+ (APIServer pid=3310473) INFO 02-03 01:41:05 [utils.py:325]
+ (APIServer pid=3310473) INFO 02-03 01:41:05 [utils.py:261] non-default args: {'port': 9002, 'model': 'Elfsong/VLM_stage_2_iter_0006000', 'trust_remote_code': True, 'gpu_memory_utilization': 0.4}
+ (APIServer pid=3310473) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
+ (APIServer pid=3310473) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
+ (APIServer pid=3310473) INFO 02-03 01:41:06 [model.py:541] Resolved architecture: Qwen3ForCausalLM
+ (APIServer pid=3310473) INFO 02-03 01:41:06 [model.py:1561] Using max model len 40960
+ (APIServer pid=3310473) INFO 02-03 01:41:06 [scheduler.py:226] Chunked prefill is enabled with max_num_batched_tokens=8192.
+ (APIServer pid=3310473) INFO 02-03 01:41:06 [vllm.py:624] Asynchronous scheduling is enabled.
+ (EngineCore_DP0 pid=3313628) INFO 02-03 01:41:18 [core.py:96] Initializing a V1 LLM engine (v0.15.0) with config: model='Elfsong/VLM_stage_2_iter_0006000', speculative_config=None, tokenizer='Elfsong/VLM_stage_2_iter_0006000', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Elfsong/VLM_stage_2_iter_0006000, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer', 'vllm::rocm_aiter_sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': True}, 'local_cache_dir': None}
+ (EngineCore_DP0 pid=3313628) INFO 02-03 01:41:21 [parallel_state.py:1212] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.21.25.98:47213 backend=nccl
+ (EngineCore_DP0 pid=3313628) INFO 02-03 01:41:21 [parallel_state.py:1423] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank N/A
+ (EngineCore_DP0 pid=3313628) INFO 02-03 01:41:23 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0006000...
+ (EngineCore_DP0 pid=3313628) INFO 02-03 01:41:24 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
 
 
vllm_0006500.log CHANGED
@@ -1,97 +1,203 @@
- (APIServer pid=2014082) INFO 02-02 06:58:38 [api_server.py:1351] vLLM API server version 0.13.0
- (APIServer pid=2014082) INFO 02-02 06:58:38 [utils.py:253] non-default args: {'port': 9011, 'model': 'Elfsong/VLM_stage_2_iter_0006500', 'trust_remote_code': True, 'quantization': 'bitsandbytes', 'gpu_memory_utilization': 0.4}
- (APIServer pid=2014082) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
- (APIServer pid=2014082) INFO 02-02 06:58:39 [model.py:514] Resolved architecture: Qwen3ForCausalLM
- (APIServer pid=2014082) INFO 02-02 06:58:39 [model.py:1661] Using max model len 40960
- (APIServer pid=2014082) INFO 02-02 06:58:40 [scheduler.py:230] Chunked prefill is enabled with max_num_batched_tokens=8192.
- (EngineCore_DP0 pid=2015333) INFO 02-02 06:59:00 [core.py:93] Initializing a V1 LLM engine (v0.13.0) with config: model='Elfsong/VLM_stage_2_iter_0006500', speculative_config=None, tokenizer='Elfsong/VLM_stage_2_iter_0006500', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=bitsandbytes, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False), seed=0, served_model_name=Elfsong/VLM_stage_2_iter_0006500, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False}, 'local_cache_dir': None}
- (EngineCore_DP0 pid=2015333) INFO 02-02 06:59:01 [parallel_state.py:1203] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://100.96.20.65:51155 backend=nccl
- (EngineCore_DP0 pid=2015333) INFO 02-02 06:59:01 [parallel_state.py:1411] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank 0
- (EngineCore_DP0 pid=2015333) INFO 02-02 06:59:02 [gpu_model_runner.py:3562] Starting to load model Elfsong/VLM_stage_2_iter_0006500...
- (EngineCore_DP0 pid=2015333) INFO 02-02 06:59:04 [cuda.py:351] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
- (EngineCore_DP0 pid=2015333) INFO 02-02 06:59:04 [bitsandbytes_loader.py:791] Loading weights with BitsAndBytes quantization. May take a while ...
- (EngineCore_DP0 pid=2015333)
- (EngineCore_DP0 pid=2015333) INFO 02-02 06:59:51 [gpu_model_runner.py:3659] Model loading took 19.4031 GiB memory and 48.128302 seconds
- (EngineCore_DP0 pid=2015333) INFO 02-02 07:00:02 [backends.py:643] Using cache directory: /home/mingzhed/.cache/vllm/torch_compile_cache/7553580953/rank_0_0/backbone for vLLM's torch.compile
- (EngineCore_DP0 pid=2015333) INFO 02-02 07:00:02 [backends.py:703] Dynamo bytecode transform time: 10.49 s
- (EngineCore_DP0 pid=2015333) INFO 02-02 07:00:10 [backends.py:261] Cache the graph of compile range (1, 8192) for later use
- (EngineCore_DP0 pid=2015333) INFO 02-02 07:00:13 [backends.py:278] Compiling a graph for compile range (1, 8192) takes 3.70 s
- (EngineCore_DP0 pid=2015333) INFO 02-02 07:00:13 [monitor.py:34] torch.compile takes 14.19 s in total
- (EngineCore_DP0 pid=2015333) INFO 02-02 07:00:14 [gpu_worker.py:375] Available KV cache memory: 49.18 GiB
- (EngineCore_DP0 pid=2015333) INFO 02-02 07:00:15 [kv_cache_utils.py:1291] GPU KV cache size: 201,440 tokens
- (EngineCore_DP0 pid=2015333) INFO 02-02 07:00:15 [kv_cache_utils.py:1296] Maximum concurrency for 40,960 tokens per request: 4.92x
- (EngineCore_DP0 pid=2015333)
- (EngineCore_DP0 pid=2015333) INFO 02-02 07:00:31 [gpu_model_runner.py:4587] Graph capturing finished in 16 secs, took 3.22 GiB
- (EngineCore_DP0 pid=2015333) INFO 02-02 07:00:31 [core.py:259] init engine (profile, create kv cache, warmup model) took 40.24 seconds
- (APIServer pid=2014082) INFO 02-02 07:00:33 [api_server.py:1099] Supported tasks: ['generate']
- (APIServer pid=2014082) WARNING 02-02 07:00:34 [model.py:1487] Default sampling parameters have been overridden by the model's Hugging Face generation config recommended from the model creator. If this is not intended, please relaunch vLLM instance with `--generation-config vllm`.
- (APIServer pid=2014082) INFO 02-02 07:00:34 [serving_responses.py:201] Using default chat sampling params from model: {'temperature': 0.6, 'top_k': 20, 'top_p': 0.95}
- (APIServer pid=2014082) INFO 02-02 07:00:34 [serving_chat.py:137] Using default chat sampling params from model: {'temperature': 0.6, 'top_k': 20, 'top_p': 0.95}
- (APIServer pid=2014082) INFO 02-02 07:00:34 [serving_completion.py:77] Using default completion sampling params from model: {'temperature': 0.6, 'top_k': 20, 'top_p': 0.95}
- (APIServer pid=2014082) INFO 02-02 07:00:34 [serving_chat.py:137] Using default chat sampling params from model: {'temperature': 0.6, 'top_k': 20, 'top_p': 0.95}
- (APIServer pid=2014082) INFO 02-02 07:00:34 [api_server.py:1425] Starting vLLM API server 0 on http://0.0.0.0:9011
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:38] Available routes are:
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /openapi.json, Methods: GET, HEAD
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /docs, Methods: GET, HEAD
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /docs/oauth2-redirect, Methods: GET, HEAD
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /redoc, Methods: GET, HEAD
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /scale_elastic_ep, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /is_scaling_elastic_ep, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /tokenize, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /detokenize, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /inference/v1/generate, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /pause, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /resume, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /is_paused, Methods: GET
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /metrics, Methods: GET
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /health, Methods: GET
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /load, Methods: GET
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/models, Methods: GET
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /version, Methods: GET
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/responses, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/responses/{response_id}, Methods: GET
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/responses/{response_id}/cancel, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/messages, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/chat/completions, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/completions, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/audio/transcriptions, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/audio/translations, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /ping, Methods: GET
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /ping, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /invocations, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /classify, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/embeddings, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /score, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/score, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /rerank, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v1/rerank, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /v2/rerank, Methods: POST
- (APIServer pid=2014082) INFO 02-02 07:00:34 [launcher.py:46] Route: /pooling, Methods: POST
- (APIServer pid=2014082) INFO: Started server process [2014082]
- (APIServer pid=2014082) INFO: Waiting for application startup.
- (APIServer pid=2014082) INFO: Application startup complete.
- (APIServer pid=2014082) INFO 02-02 09:26:22 [launcher.py:110] Shutting down FastAPI HTTP server.
- [rank0]:[W202 09:26:22.195432165 ProcessGroupNCCL.cpp:1524] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
- (APIServer pid=2014082) INFO: Shutting down
- (APIServer pid=2014082) INFO: Waiting for application shutdown.
- (APIServer pid=2014082) INFO: Application shutdown complete.
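The 4.92x figure this run logs is simply KV cache capacity divided by the per-request context length; a quick check with the numbers above (a sketch, not part of the log):

    # 201,440 cached tokens / 40,960 tokens per request = 4.92x concurrency
    kv_cache_tokens = 201_440  # "GPU KV cache size" above
    max_model_len = 40_960     # "Using max model len 40960" above
    print(round(kv_cache_tokens / max_model_len, 2))  # -> 4.92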
+ (APIServer pid=3311584) INFO 02-03 01:41:10 [utils.py:325]
+ (APIServer pid=3311584) INFO 02-03 01:41:10 [utils.py:325] β–ˆ β–ˆ β–ˆβ–„ β–„β–ˆ
+ (APIServer pid=3311584) INFO 02-03 01:41:10 [utils.py:325] β–„β–„ β–„β–ˆ β–ˆ β–ˆ β–ˆ β–€β–„β–€ β–ˆ version 0.15.0
+ (APIServer pid=3311584) INFO 02-03 01:41:10 [utils.py:325] β–ˆβ–„β–ˆβ–€ β–ˆ β–ˆ β–ˆ β–ˆ model Elfsong/VLM_stage_2_iter_0006500
+ (APIServer pid=3311584) INFO 02-03 01:41:10 [utils.py:325] β–€β–€ β–€β–€β–€β–€β–€ β–€β–€β–€β–€β–€ β–€ β–€
+ (APIServer pid=3311584) INFO 02-03 01:41:10 [utils.py:325]
+ (APIServer pid=3311584) INFO 02-03 01:41:10 [utils.py:261] non-default args: {'port': 9003, 'model': 'Elfsong/VLM_stage_2_iter_0006500', 'trust_remote_code': True, 'gpu_memory_utilization': 0.4}
+ (APIServer pid=3311584) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
+ (APIServer pid=3311584) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
+ (APIServer pid=3311584) INFO 02-03 01:41:12 [model.py:541] Resolved architecture: Qwen3ForCausalLM
+ (APIServer pid=3311584) INFO 02-03 01:41:12 [model.py:1561] Using max model len 40960
+ (APIServer pid=3311584) INFO 02-03 01:41:12 [scheduler.py:226] Chunked prefill is enabled with max_num_batched_tokens=8192.
+ (APIServer pid=3311584) INFO 02-03 01:41:12 [vllm.py:624] Asynchronous scheduling is enabled.
+ (EngineCore_DP0 pid=3315747) INFO 02-03 01:41:31 [core.py:96] Initializing a V1 LLM engine (v0.15.0) with config: model='Elfsong/VLM_stage_2_iter_0006500', speculative_config=None, tokenizer='Elfsong/VLM_stage_2_iter_0006500', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Elfsong/VLM_stage_2_iter_0006500, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer', 'vllm::rocm_aiter_sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': True}, 'local_cache_dir': None}
+ (EngineCore_DP0 pid=3315747) INFO 02-03 01:41:35 [parallel_state.py:1212] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.21.25.98:59489 backend=nccl
+ (EngineCore_DP0 pid=3315747) INFO 02-03 01:41:35 [parallel_state.py:1423] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank N/A
+ (EngineCore_DP0 pid=3315747) INFO 02-03 01:41:37 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0006500...
+ (EngineCore_DP0 pid=3315747) INFO 02-03 01:41:38 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [gpu_model_runner.py:4116] Failed to load model - not enough GPU memory. Try lowering --gpu-memory-utilization to free memory for weights, increasing --tensor-parallel-size, or using --quantization. See https://docs.vllm.ai/en/latest/configuration/conserving_memory/ for more tips. (original error: CUDA out of memory. Tried to allocate 500.00 MiB. GPU 0 has a total capacity of 139.80 GiB of which 312.25 MiB is free. Process 3284142 has 5.32 GiB memory in use. Process 3243122 has 2.91 GiB memory in use. Process 3243120 has 2.80 GiB memory in use. Process 3243121 has 3.62 GiB memory in use. Process 3243119 has 3.37 GiB memory in use. Process 3315388 has 61.82 GiB memory in use. Including non-PyTorch memory, this process has 59.60 GiB memory in use. Of the allocated memory 58.94 GiB is allocated by PyTorch, and 278.00 KiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables))
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] EngineCore failed to start.
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] Traceback (most recent call last):
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 937, in run_engine_core
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] engine_core = EngineCoreProc(*args, engine_index=dp_rank, **kwargs)
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 691, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] super().__init__(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 105, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] self.model_executor = executor_class(vllm_config)
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 101, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] self._init_executor()
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/executor/uniproc_executor.py", line 48, in _init_executor
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] self.driver_worker.load_model()
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 275, in load_model
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] self.model_runner.load_model(eep_scale_up=eep_scale_up)
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4117, in load_model
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] raise e
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4040, in load_model
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] self.model = model_loader.load_model(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/model_loader/base_loader.py", line 50, in load_model
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] model = initialize_model(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/model_loader/utils.py", line 48, in initialize_model
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] return model_class(vllm_config=vllm_config, prefix=prefix)
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 274, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] self.model = Qwen3Model(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 306, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] old_init(self, **kwargs)
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 248, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] super().__init__(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 306, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] old_init(self, **kwargs)
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 394, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] self.start_layer, self.end_layer, self.layers = make_layers(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 707, in make_layers
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 396, in <lambda>
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] lambda prefix: decoder_layer_type(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 196, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] self.mlp = Qwen3MLP(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 87, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] self.gate_up_proj = MergedColumnParallelLinear(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 670, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] super().__init__(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 495, in __init__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] self.quant_method.create_weights(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 224, in create_weights
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] data=torch.empty(
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/utils/_device.py", line 103, in __torch_function__
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] return func(*args, **kwargs)
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] ^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) ERROR 02-03 01:41:41 [core.py:946] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 500.00 MiB. GPU 0 has a total capacity of 139.80 GiB of which 312.25 MiB is free. Process 3284142 has 5.32 GiB memory in use. Process 3243122 has 2.91 GiB memory in use. Process 3243120 has 2.80 GiB memory in use. Process 3243121 has 3.62 GiB memory in use. Process 3243119 has 3.37 GiB memory in use. Process 3315388 has 61.82 GiB memory in use. Including non-PyTorch memory, this process has 59.60 GiB memory in use. Of the allocated memory 58.94 GiB is allocated by PyTorch, and 278.00 KiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+ (EngineCore_DP0 pid=3315747) Process EngineCore_DP0:
+ (EngineCore_DP0 pid=3315747) Traceback (most recent call last):
+ (EngineCore_DP0 pid=3315747) File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
+ (EngineCore_DP0 pid=3315747) self.run()
+ (EngineCore_DP0 pid=3315747) File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
+ (EngineCore_DP0 pid=3315747) self._target(*self._args, **self._kwargs)
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 950, in run_engine_core
+ (EngineCore_DP0 pid=3315747) raise e
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 937, in run_engine_core
+ (EngineCore_DP0 pid=3315747) engine_core = EngineCoreProc(*args, engine_index=dp_rank, **kwargs)
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 691, in __init__
+ (EngineCore_DP0 pid=3315747) super().__init__(
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 105, in __init__
+ (EngineCore_DP0 pid=3315747) self.model_executor = executor_class(vllm_config)
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 101, in __init__
+ (EngineCore_DP0 pid=3315747) self._init_executor()
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/executor/uniproc_executor.py", line 48, in _init_executor
+ (EngineCore_DP0 pid=3315747) self.driver_worker.load_model()
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 275, in load_model
+ (EngineCore_DP0 pid=3315747) self.model_runner.load_model(eep_scale_up=eep_scale_up)
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4117, in load_model
+ (EngineCore_DP0 pid=3315747) raise e
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4040, in load_model
+ (EngineCore_DP0 pid=3315747) self.model = model_loader.load_model(
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/model_loader/base_loader.py", line 50, in load_model
+ (EngineCore_DP0 pid=3315747) model = initialize_model(
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/model_loader/utils.py", line 48, in initialize_model
+ (EngineCore_DP0 pid=3315747) return model_class(vllm_config=vllm_config, prefix=prefix)
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 274, in __init__
+ (EngineCore_DP0 pid=3315747) self.model = Qwen3Model(
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 306, in __init__
+ (EngineCore_DP0 pid=3315747) old_init(self, **kwargs)
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 248, in __init__
+ (EngineCore_DP0 pid=3315747) super().__init__(
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 306, in __init__
+ (EngineCore_DP0 pid=3315747) old_init(self, **kwargs)
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 394, in __init__
+ (EngineCore_DP0 pid=3315747) self.start_layer, self.end_layer, self.layers = make_layers(
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 707, in make_layers
+ (EngineCore_DP0 pid=3315747) maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 396, in <lambda>
+ (EngineCore_DP0 pid=3315747) lambda prefix: decoder_layer_type(
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 196, in __init__
+ (EngineCore_DP0 pid=3315747) self.mlp = Qwen3MLP(
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 87, in __init__
+ (EngineCore_DP0 pid=3315747) self.gate_up_proj = MergedColumnParallelLinear(
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 670, in __init__
+ (EngineCore_DP0 pid=3315747) super().__init__(
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 495, in __init__
+ (EngineCore_DP0 pid=3315747) self.quant_method.create_weights(
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 224, in create_weights
+ (EngineCore_DP0 pid=3315747) data=torch.empty(
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/torch/utils/_device.py", line 103, in __torch_function__
+ (EngineCore_DP0 pid=3315747) return func(*args, **kwargs)
+ (EngineCore_DP0 pid=3315747) ^^^^^^^^^^^^^^^^^^^^^
+ (EngineCore_DP0 pid=3315747) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 500.00 MiB. GPU 0 has a total capacity of 139.80 GiB of which 312.25 MiB is free. Process 3284142 has 5.32 GiB memory in use. Process 3243122 has 2.91 GiB memory in use. Process 3243120 has 2.80 GiB memory in use. Process 3243121 has 3.62 GiB memory in use. Process 3243119 has 3.37 GiB memory in use. Process 3315388 has 61.82 GiB memory in use. Including non-PyTorch memory, this process has 59.60 GiB memory in use. Of the allocated memory 58.94 GiB is allocated by PyTorch, and 278.00 KiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
+ [rank0]:[W203 01:41:43.753415460 ProcessGroupNCCL.cpp:1524] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ (APIServer pid=3311584) Traceback (most recent call last):
+ (APIServer pid=3311584) File "<frozen runpy>", line 198, in _run_module_as_main
+ (APIServer pid=3311584) File "<frozen runpy>", line 88, in _run_code
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 991, in <module>
+ (APIServer pid=3311584) uvloop.run(run_server(args))
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/uvloop/__init__.py", line 96, in run
+ (APIServer pid=3311584) return __asyncio.run(
+ (APIServer pid=3311584) ^^^^^^^^^^^^^^
+ (APIServer pid=3311584) File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
+ (APIServer pid=3311584) return runner.run(main)
+ (APIServer pid=3311584) ^^^^^^^^^^^^^^^^
+ (APIServer pid=3311584) File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
+ (APIServer pid=3311584) return self._loop.run_until_complete(task)
+ (APIServer pid=3311584) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3311584) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/uvloop/__init__.py", line 48, in wrapper
+ (APIServer pid=3311584) return await main
+ (APIServer pid=3311584) ^^^^^^^^^^
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 919, in run_server
+ (APIServer pid=3311584) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 938, in run_server_worker
+ (APIServer pid=3311584) async with build_async_engine_client(
+ (APIServer pid=3311584) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
+ (APIServer pid=3311584) return await anext(self.gen)
+ (APIServer pid=3311584) ^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 147, in build_async_engine_client
+ (APIServer pid=3311584) async with build_async_engine_client_from_engine_args(
+ (APIServer pid=3311584) File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
+ (APIServer pid=3311584) return await anext(self.gen)
+ (APIServer pid=3311584) ^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 188, in build_async_engine_client_from_engine_args
+ (APIServer pid=3311584) async_llm = AsyncLLM.from_vllm_config(
+ (APIServer pid=3311584) ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 228, in from_vllm_config
+ (APIServer pid=3311584) return cls(
+ (APIServer pid=3311584) ^^^^
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 155, in __init__
+ (APIServer pid=3311584) self.engine_core = EngineCoreClient.make_async_mp_client(
+ (APIServer pid=3311584) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 122, in make_async_mp_client
+ (APIServer pid=3311584) return AsyncMPClient(*client_args)
+ (APIServer pid=3311584) ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 819, in __init__
+ (APIServer pid=3311584) super().__init__(
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 479, in __init__
+ (APIServer pid=3311584) with launch_core_engines(vllm_config, executor_class, log_stats) as (
+ (APIServer pid=3311584) File "/usr/lib/python3.12/contextlib.py", line 144, in __exit__
+ (APIServer pid=3311584) next(self.gen)
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 933, in launch_core_engines
+ (APIServer pid=3311584) wait_for_engine_startup(
+ (APIServer pid=3311584) File "/home/mingzhe/Projects/Arena/.venv/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 992, in wait_for_engine_startup
+ (APIServer pid=3311584) raise RuntimeError(
+ (APIServer pid=3311584) RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}
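This relaunch dropped the bitsandbytes quantization that let the earlier run above load the same checkpoint in roughly 19.4 GiB, and the unquantized bf16 weights no longer fit beside the neighboring processes on the shared GPU. A sketch combining the mitigations the OOM text itself lists; the allocator setting must be in place before CUDA initializes, and the values are assumptions rather than verified settings:

    import os

    # Set before importing vllm/torch so the CUDA allocator sees it,
    # per the fragmentation hint in the PyTorch error text.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

    from vllm import LLM

    llm = LLM(
        model="Elfsong/VLM_stage_2_iter_0006500",
        quantization="bitsandbytes",  # restores the quantized load the previous run used
        gpu_memory_utilization=0.4,
    )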
vllm_0007000.log CHANGED
@@ -1,20 +1,18 @@
- (APIServer pid=3233856) INFO 02-03 01:26:55 [utils.py:325]
- (APIServer pid=3233856) INFO 02-03 01:26:55 [utils.py:325] β–ˆ β–ˆ β–ˆβ–„ β–„β–ˆ
- (APIServer pid=3233856) INFO 02-03 01:26:55 [utils.py:325] β–„β–„ β–„β–ˆ β–ˆ β–ˆ β–ˆ β–€β–„β–€ β–ˆ version 0.15.0
- (APIServer pid=3233856) INFO 02-03 01:26:55 [utils.py:325] β–ˆβ–„β–ˆβ–€ β–ˆ β–ˆ β–ˆ β–ˆ model Elfsong/VLM_stage_2_iter_0007000
- (APIServer pid=3233856) INFO 02-03 01:26:55 [utils.py:325] β–€β–€ β–€β–€β–€β–€β–€ β–€β–€β–€β–€β–€ β–€ β–€
- (APIServer pid=3233856) INFO 02-03 01:26:55 [utils.py:325]
- (APIServer pid=3233856) INFO 02-03 01:26:55 [utils.py:261] non-default args: {'port': 9003, 'model': 'Elfsong/VLM_stage_2_iter_0007000', 'trust_remote_code': True, 'quantization': 'bitsandbytes', 'gpu_memory_utilization': 0.3}
- (APIServer pid=3233856) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
- (APIServer pid=3233856) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
- (APIServer pid=3233856) INFO 02-03 01:26:57 [model.py:541] Resolved architecture: Qwen3ForCausalLM
- (APIServer pid=3233856) INFO 02-03 01:26:57 [model.py:1561] Using max model len 40960
- (APIServer pid=3233856) INFO 02-03 01:26:57 [scheduler.py:226] Chunked prefill is enabled with max_num_batched_tokens=8192.
- (APIServer pid=3233856) INFO 02-03 01:27:00 [vllm.py:624] Asynchronous scheduling is enabled.
- (EngineCore_DP0 pid=3236204) INFO 02-03 01:27:14 [core.py:96] Initializing a V1 LLM engine (v0.15.0) with config: model='Elfsong/VLM_stage_2_iter_0007000', speculative_config=None, tokenizer='Elfsong/VLM_stage_2_iter_0007000', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=bitsandbytes, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Elfsong/VLM_stage_2_iter_0007000, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer', 'vllm::rocm_aiter_sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': True}, 'local_cache_dir': None}
- (EngineCore_DP0 pid=3236204) INFO 02-03 01:27:15 [parallel_state.py:1212] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.21.25.98:33921 backend=nccl
- (EngineCore_DP0 pid=3236204) INFO 02-03 01:27:15 [parallel_state.py:1423] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank N/A
- (EngineCore_DP0 pid=3236204) INFO 02-03 01:27:16 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0007000...
- (EngineCore_DP0 pid=3236204) INFO 02-03 01:27:18 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
- (EngineCore_DP0 pid=3236204) INFO 02-03 01:27:19 [bitsandbytes_loader.py:786] Loading weights with BitsAndBytes quantization. May take a while ...
- Cancellation requested; stopping current tasks.
 
+ (APIServer pid=3313448) INFO 02-03 01:41:15 [utils.py:325]
+ (APIServer pid=3313448) INFO 02-03 01:41:15 [utils.py:325] β–ˆ β–ˆ β–ˆβ–„ β–„β–ˆ
+ (APIServer pid=3313448) INFO 02-03 01:41:15 [utils.py:325] β–„β–„ β–„β–ˆ β–ˆ β–ˆ β–ˆ β–€β–„β–€ β–ˆ version 0.15.0
+ (APIServer pid=3313448) INFO 02-03 01:41:15 [utils.py:325] β–ˆβ–„β–ˆβ–€ β–ˆ β–ˆ β–ˆ β–ˆ model Elfsong/VLM_stage_2_iter_0007000
+ (APIServer pid=3313448) INFO 02-03 01:41:15 [utils.py:325] β–€β–€ β–€β–€β–€β–€β–€ β–€β–€β–€β–€β–€ β–€ β–€
+ (APIServer pid=3313448) INFO 02-03 01:41:15 [utils.py:325]
+ (APIServer pid=3313448) INFO 02-03 01:41:15 [utils.py:261] non-default args: {'port': 9004, 'model': 'Elfsong/VLM_stage_2_iter_0007000', 'trust_remote_code': True, 'gpu_memory_utilization': 0.4}
+ (APIServer pid=3313448) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
+ (APIServer pid=3313448) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
+ (APIServer pid=3313448) INFO 02-03 01:41:16 [model.py:541] Resolved architecture: Qwen3ForCausalLM
+ (APIServer pid=3313448) INFO 02-03 01:41:16 [model.py:1561] Using max model len 40960
+ (APIServer pid=3313448) INFO 02-03 01:41:16 [scheduler.py:226] Chunked prefill is enabled with max_num_batched_tokens=8192.
+ (APIServer pid=3313448) INFO 02-03 01:41:16 [vllm.py:624] Asynchronous scheduling is enabled.
+ (EngineCore_DP0 pid=3315388) INFO 02-03 01:41:28 [core.py:96] Initializing a V1 LLM engine (v0.15.0) with config: model='Elfsong/VLM_stage_2_iter_0007000', speculative_config=None, tokenizer='Elfsong/VLM_stage_2_iter_0007000', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Elfsong/VLM_stage_2_iter_0007000, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer', 'vllm::rocm_aiter_sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': True}, 'local_cache_dir': None}
+ (EngineCore_DP0 pid=3315388) INFO 02-03 01:41:32 [parallel_state.py:1212] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.21.25.98:53245 backend=nccl
+ (EngineCore_DP0 pid=3315388) INFO 02-03 01:41:32 [parallel_state.py:1423] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank N/A
+ (EngineCore_DP0 pid=3315388) INFO 02-03 01:41:33 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0007000...
+ (EngineCore_DP0 pid=3315388) INFO 02-03 01:41:34 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')