Kurt232 committed on
Commit 9f241d6 · 1 Parent(s): 0dd4473

1. Merge benchmark of Llama and Phi4

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. README.MD +12 -0
  2. merge_bench/logs/llama_darelinear_1.log +96 -0
  3. merge_bench/logs/llama_darelinear_3.log +96 -0
  4. merge_bench/logs/llama_darelinear_5.log +96 -0
  5. merge_bench/logs/llama_darelinear_7.log +96 -0
  6. merge_bench/logs/llama_darelinear_9.log +96 -0
  7. merge_bench/logs/llama_linear_1.log +96 -0
  8. merge_bench/logs/llama_linear_3.log +96 -0
  9. merge_bench/logs/llama_linear_5.log +96 -0
  10. merge_bench/logs/llama_linear_7.log +96 -0
  11. merge_bench/logs/llama_linear_9.log +96 -0
  12. merge_bench/logs/llama_ties_1.log +96 -0
  13. merge_bench/logs/llama_ties_3.log +96 -0
  14. merge_bench/logs/llama_ties_5.log +96 -0
  15. merge_bench/logs/llama_ties_7.log +96 -0
  16. merge_bench/logs/llama_ties_9.log +96 -0
  17. merge_bench/logs/phi_darelinear_1.log +96 -0
  18. merge_bench/logs/phi_darelinear_3.log +96 -0
  19. merge_bench/logs/phi_darelinear_5.log +96 -0
  20. merge_bench/logs/phi_darelinear_7.log +96 -0
  21. merge_bench/logs/phi_darelinear_9.log +96 -0
  22. merge_bench/logs/phi_linear_1.log +100 -0
  23. merge_bench/logs/phi_linear_2.log +96 -0
  24. merge_bench/logs/phi_linear_3.log +96 -0
  25. merge_bench/logs/phi_linear_4.log +96 -0
  26. merge_bench/logs/phi_linear_5.log +96 -0
  27. merge_bench/logs/phi_linear_6.log +96 -0
  28. merge_bench/logs/phi_linear_7.log +96 -0
  29. merge_bench/logs/phi_linear_8.log +96 -0
  30. merge_bench/logs/phi_linear_9.log +96 -0
  31. merge_bench/logs/phi_ties_1.log +96 -0
  32. merge_bench/logs/phi_ties_3.log +96 -0
  33. merge_bench/logs/phi_ties_5.log +96 -0
  34. merge_bench/logs/phi_ties_7.log +96 -0
  35. merge_bench/logs/phi_ties_9.log +96 -0
  36. merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|arc_challenge|0_2025-06-23T01-52-10.258150.parquet +3 -0
  37. merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|arc_easy|0_2025-06-23T01-52-10.258150.parquet +3 -0
  38. merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|commonsenseqa|0_2025-06-23T01-52-10.258150.parquet +3 -0
  39. merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|gsm8k|0_2025-06-23T01-52-10.258150.parquet +3 -0
  40. merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|math_500|0_2025-06-23T01-52-10.258150.parquet +3 -0
  41. merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|truthfulqa|0_2025-06-23T01-52-10.258150.parquet +3 -0
  42. merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|arc_challenge|0_2025-06-23T01-52-10.258150.parquet +3 -0
  43. merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|arc_easy|0_2025-06-23T01-52-10.258150.parquet +3 -0
  44. merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|commonsenseqa|0_2025-06-23T01-52-10.258150.parquet +3 -0
  45. merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|gsm8k|0_2025-06-23T01-52-10.258150.parquet +3 -0
  46. merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|math_500|0_2025-06-23T01-52-10.258150.parquet +3 -0
  47. merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|truthfulqa|0_2025-06-23T01-52-10.258150.parquet +3 -0
  48. merge_bench/outputs/._merged1_llama_darelinear_5/2025-06-23T01-52-10.258150/outputs_mm|arc_challenge|0_2025-06-23T01-52-10.258150.parquet +3 -0
  49. merge_bench/outputs/._merged1_llama_darelinear_5/2025-06-23T01-52-10.258150/outputs_mm|arc_easy|0_2025-06-23T01-52-10.258150.parquet +3 -0
  50. merge_bench/outputs/._merged1_llama_darelinear_5/2025-06-23T01-52-10.258150/outputs_mm|commonsenseqa|0_2025-06-23T01-52-10.258150.parquet +3 -0
README.MD ADDED
@@ -0,0 +1,12 @@
+ # Description
+ `./test/0-1k`, `./merge_bench/`, and `./merge_bench1/` contain the same eval data.
+ The data split consists of math_tasks and mcq_tasks:
+ ```
+ math_tasks = ["mm|aime24|0", "mm|math_500|0", "mm|gsm8k|0"]
+ mcq_tasks = ["mm|mmlu_pro|0", "mm|truthfulqa|0", "mm|commonsenseqa|0", "mm|arc_easy|0", "mm|arc_challenge|0", "mm|gpqa_diamond|0"]
+ ```
+
+ These splits contain only samples whose generation length is under 1k tokens from the respective reasoning model, e.g. DS-R1-Llama3 and Phi4-mini-reasoning. Currently, however, all samples come from Phi4-mini-reasoning.
+
+ The difference between `./merge_bench/` and `./merge_bench1/` is that `./merge_bench1/` merged all layers of Phi4, while `./merge_bench/` missed `lm_head`.
+ Note that the Llama series in `./merge_bench/` is reasonable, since those models were merged with `mergekit`.
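The `lm_head` issue above can be made concrete with a minimal sketch of a linear weight merge. This is a hypothetical illustration, not `mergekit`'s actual implementation: real merges operate on torch tensors, while plain Python lists stand in for weights here to keep the example self-contained, and the `linear_merge` helper and toy state dicts are invented for illustration.

```python
def linear_merge(state_a, state_b, weight=0.5):
    """Element-wise weighted average of two state dicts (lists stand in for tensors)."""
    assert state_a.keys() == state_b.keys(), "merge must cover the same layers"
    return {
        name: [weight * a + (1 - weight) * b
               for a, b in zip(state_a[name], state_b[name])]
        for name in state_a
    }

# Toy state dicts: `lm_head` must be listed like any other layer,
# otherwise the merged model silently keeps one side's output head
# (the issue described above for `./merge_bench/`).
base  = {"model.layers.0.mlp": [1.0, 2.0], "lm_head": [4.0, 8.0]}
tuned = {"model.layers.0.mlp": [3.0, 4.0], "lm_head": [0.0, 0.0]}

merged = linear_merge(base, tuned, weight=0.5)
print(merged["lm_head"])  # averaged head: [2.0, 4.0]
```

With equal weights every entry is the simple average, so a layer missing from the merge map is immediately visible as an un-averaged weight.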
merge_bench/logs/llama_darelinear_1.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 18:47:54 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 18:47:56 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 18:48:03 [config.py:717] This model supports multiple tasks: {'classify', 'score', 'reward', 'embed', 'generate'}. Defaulting to 'generate'.
+ INFO 06-28 18:48:03 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 18:48:03 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 18:48:05 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 18:48:05 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 18:48:05 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_06919893'), local_subscribe_addr='ipc:///tmp/d4f9c938-0474-4c85-8776-76fae2cfb900', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 18:48:05 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1515569ebc70>
+ WARNING 06-28 18:48:05 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x151554dbca90>
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:05 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_607805b6'), local_subscribe_addr='ipc:///tmp/446ff4e9-7682-40ee-a3fc-0784e08ffb01', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 18:48:05 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1515569ebd30>
+ WARNING 06-28 18:48:05 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1515569eb9a0>
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:05 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_0e41e491'), local_subscribe_addr='ipc:///tmp/d8e738b3-c034-45e6-b1c5-2dfc295238ed', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:05 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_f2d47f78'), local_subscribe_addr='ipc:///tmp/fa2c9b8a-3b1c-4803-b18c-24205bbd5985', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:05 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_5a2a3f4f'), local_subscribe_addr='ipc:///tmp/730a59de-f5a9-4c2f-a5ea-44ed30623ac6', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:07 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:07 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:07 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:07 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:07 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:07 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:07 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:07 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3598716) WARNING 06-28 18:48:08 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3598717) WARNING 06-28 18:48:08 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3598715) WARNING 06-28 18:48:08 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3598714) WARNING 06-28 18:48:08 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:08 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_1ad62c91'), local_subscribe_addr='ipc:///tmp/01e2f2dc-b1dd-4a71-b920-27675c6a453e', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:08 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:08 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:08 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:08 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:08 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:08 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:08 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:08 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3598716) WARNING 06-28 18:48:08 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3598717) WARNING 06-28 18:48:08 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3598715) WARNING 06-28 18:48:08 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3598714) WARNING 06-28 18:48:08 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:08 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:08 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:08 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:08 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:13 [loader.py:458] Loading weights took 4.51 seconds
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:13 [loader.py:458] Loading weights took 4.51 seconds
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:13 [loader.py:458] Loading weights took 4.52 seconds
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:13 [loader.py:458] Loading weights took 4.51 seconds
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:13 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 4.901312 seconds
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:13 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 4.908657 seconds
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:13 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 4.896791 seconds
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:13 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 4.908974 seconds
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:20 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:20 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:20 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:20 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:20 [backends.py:430] Dynamo bytecode transform time: 7.13 s
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:20 [backends.py:430] Dynamo bytecode transform time: 7.13 s
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:20 [backends.py:430] Dynamo bytecode transform time: 7.13 s
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:20 [backends.py:430] Dynamo bytecode transform time: 7.13 s
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:25 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.435 s
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:25 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.433 s
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:25 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.445 s
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:26 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.826 s
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:31 [monitor.py:33] torch.compile takes 7.13 s in total
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:31 [monitor.py:33] torch.compile takes 7.13 s in total
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:31 [monitor.py:33] torch.compile takes 7.13 s in total
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:31 [monitor.py:33] torch.compile takes 7.13 s in total
+ INFO 06-28 18:48:33 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 18:48:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 18:48:33 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 18:48:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 18:48:33 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 18:48:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 18:48:33 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 18:48:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=0 pid=3598714) INFO 06-28 18:48:56 [gpu_model_runner.py:1686] Graph capturing finished in 23 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3598717) INFO 06-28 18:48:56 [gpu_model_runner.py:1686] Graph capturing finished in 23 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3598716) INFO 06-28 18:48:56 [gpu_model_runner.py:1686] Graph capturing finished in 23 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3598715) INFO 06-28 18:48:56 [gpu_model_runner.py:1686] Graph capturing finished in 23 secs, took 2.96 GiB
+ INFO 06-28 18:48:56 [core.py:159] init engine (profile, create kv cache, warmup model) took 42.92 seconds
+ INFO 06-28 18:48:56 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 19:01:31 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 19:01:31 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5201|± |0.0281|
+ | | |math_pass@1:1_samples|0.7488|± |0.0440|
+ |mm\|arc_challenge\|0| 0|sem |0.6010|± |0.0251|
+ |mm\|arc_easy\|0 | 0|sem |0.6304|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.4938|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7226|± |0.0212|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.7750|± |0.0669|
+ |mm\|truthfulqa\|0 | 0|sem |0.3554|± |0.0437|
+
merge_bench/logs/llama_darelinear_3.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 19:01:30 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 19:01:32 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 19:01:39 [config.py:717] This model supports multiple tasks: {'score', 'generate', 'reward', 'embed', 'classify'}. Defaulting to 'generate'.
+ INFO 06-28 19:01:39 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 19:01:39 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 19:01:40 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 19:01:40 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 19:01:40 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_64032cd1'), local_subscribe_addr='ipc:///tmp/45893f5d-8e26-4aa9-9824-5b019d5989cf', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:01:40 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14abee988b20>
+ WARNING 06-28 19:01:40 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ac0444bdc0>
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:01:40 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_4ee5119d'), local_subscribe_addr='ipc:///tmp/6e2ed57a-78ff-4635-b216-0cc45fbb3fd6', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:01:40 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_8718327d'), local_subscribe_addr='ipc:///tmp/5be42818-9ba4-4977-b053-4709c5ac33b7', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:01:40 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ac0444bac0>
+ WARNING 06-28 19:01:41 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ac0444bd00>
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:01:41 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_175f72fa'), local_subscribe_addr='ipc:///tmp/987bf39f-efca-4e1f-a76e-9fd79a0830a7', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:01:41 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_37f6e3ef'), local_subscribe_addr='ipc:///tmp/bc5d6131-6a85-46be-9c6b-7a2502f865ec', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:01:52 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:01:52 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:01:52 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:01:52 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:01:52 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:01:52 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:01:52 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:01:52 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3603787) WARNING 06-28 19:01:53 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3603788) WARNING 06-28 19:01:53 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3603786) WARNING 06-28 19:01:53 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3603785) WARNING 06-28 19:01:53 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:01:53 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_01e6e72a'), local_subscribe_addr='ipc:///tmp/867fb721-a28c-49e2-a558-3ce978a6e3f7', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:01:53 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:01:53 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:01:53 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:01:53 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:01:53 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:01:53 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3603787) WARNING 06-28 19:01:53 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3603788) WARNING 06-28 19:01:53 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:01:53 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:01:53 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3603786) WARNING 06-28 19:01:53 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3603785) WARNING 06-28 19:01:53 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:01:53 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:01:53 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:01:53 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:01:53 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:01:54 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:01:54 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:01:54 [loader.py:458] Loading weights took 0.72 seconds
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:01:54 [loader.py:458] Loading weights took 0.77 seconds
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:01:54 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.871916 seconds
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:01:54 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.871097 seconds
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:01:54 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.927502 seconds
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:01:54 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.986968 seconds
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:02:00 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:02:00 [backends.py:430] Dynamo bytecode transform time: 5.67 s
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:02:00 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:02:00 [backends.py:430] Dynamo bytecode transform time: 5.73 s
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:02:00 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:02:00 [backends.py:430] Dynamo bytecode transform time: 5.78 s
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:02:00 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:02:00 [backends.py:430] Dynamo bytecode transform time: 5.83 s
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:02:05 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.400 s
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:02:05 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.352 s
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:02:05 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.368 s
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:02:05 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.440 s
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:02:11 [monitor.py:33] torch.compile takes 5.78 s in total
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:02:11 [monitor.py:33] torch.compile takes 5.67 s in total
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:02:11 [monitor.py:33] torch.compile takes 5.73 s in total
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:02:11 [monitor.py:33] torch.compile takes 5.83 s in total
+ INFO 06-28 19:02:12 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 19:02:12 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 19:02:12 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 19:02:12 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 19:02:12 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 19:02:12 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 19:02:12 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 19:02:12 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=1 pid=3603786) INFO 06-28 19:02:36 [gpu_model_runner.py:1686] Graph capturing finished in 24 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3603785) INFO 06-28 19:02:36 [gpu_model_runner.py:1686] Graph capturing finished in 24 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3603788) INFO 06-28 19:02:36 [gpu_model_runner.py:1686] Graph capturing finished in 24 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3603787) INFO 06-28 19:02:36 [gpu_model_runner.py:1686] Graph capturing finished in 24 secs, took 2.96 GiB
+ INFO 06-28 19:02:36 [core.py:159] init engine (profile, create kv cache, warmup model) took 41.72 seconds
+ INFO 06-28 19:02:37 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 19:15:24 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 19:15:24 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5004|± |0.0274|
+ | | |math_pass@1:1_samples|0.8055|± |0.0369|
+ |mm\|arc_challenge\|0| 0|sem |0.6037|± |0.0251|
+ |mm\|arc_easy\|0 | 0|sem |0.6315|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.4938|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7360|± |0.0209|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8750|± |0.0530|
+ |mm\|truthfulqa\|0 | 0|sem |0.2727|± |0.0407|
+
merge_bench/logs/llama_darelinear_5.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 19:15:23 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 19:15:24 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 19:15:31 [config.py:717] This model supports multiple tasks: {'embed', 'reward', 'classify', 'score', 'generate'}. Defaulting to 'generate'.
+ INFO 06-28 19:15:31 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 19:15:31 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 19:15:33 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 19:15:33 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 19:15:33 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_10577a21'), local_subscribe_addr='ipc:///tmp/004d8b89-cc85-469e-a0a1-eba5bc07a552', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:15:33 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x145d5d377dc0>
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:33 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_558e8955'), local_subscribe_addr='ipc:///tmp/f7c82864-a4a8-4c22-8842-dc5a67f67a87', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:15:33 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x145d4f8dcb20>
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:33 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_27b3cabf'), local_subscribe_addr='ipc:///tmp/b6a9ea73-5188-4e6a-950a-8c81376233d5', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:15:33 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x145d5d377ac0>
+ WARNING 06-28 19:15:33 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x145d5d377d00>
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:33 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_c356ca19'), local_subscribe_addr='ipc:///tmp/639cd23e-c9cf-4e49-b6d2-dac211cfea8e', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:33 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_db80b59c'), local_subscribe_addr='ipc:///tmp/90eb8b9b-bfd8-42a1-ba6c-6cb01a8bd850', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:35 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:35 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:35 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:35 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:35 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:35 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:35 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:35 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3609849) WARNING 06-28 19:15:36 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3609848) WARNING 06-28 19:15:36 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3609846) WARNING 06-28 19:15:36 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3609847) WARNING 06-28 19:15:36 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:36 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_4a6645ac'), local_subscribe_addr='ipc:///tmp/96d20b0c-d027-4c6e-a850-071c00e81e80', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:36 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:36 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:36 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:36 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:36 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:36 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3609849) WARNING 06-28 19:15:36 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3609848) WARNING 06-28 19:15:36 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3609846) WARNING 06-28 19:15:36 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:36 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:36 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:36 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:36 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3609847) WARNING 06-28 19:15:36 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:36 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:36 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:37 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:37 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:37 [loader.py:458] Loading weights took 0.72 seconds
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:37 [loader.py:458] Loading weights took 0.76 seconds
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:37 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.878368 seconds
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:37 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.874506 seconds
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:37 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.975043 seconds
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:37 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.928218 seconds
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:43 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:43 [backends.py:430] Dynamo bytecode transform time: 5.53 s
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:43 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:43 [backends.py:430] Dynamo bytecode transform time: 5.55 s
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:43 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:43 [backends.py:430] Dynamo bytecode transform time: 5.57 s
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:43 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:43 [backends.py:430] Dynamo bytecode transform time: 5.58 s
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:48 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.393 s
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:48 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.397 s
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:48 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.404 s
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:48 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.454 s
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:15:54 [monitor.py:33] torch.compile takes 5.57 s in total
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:15:54 [monitor.py:33] torch.compile takes 5.55 s in total
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:15:54 [monitor.py:33] torch.compile takes 5.53 s in total
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:15:54 [monitor.py:33] torch.compile takes 5.58 s in total
+ INFO 06-28 19:15:55 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 19:15:55 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 19:15:55 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 19:15:55 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 19:15:55 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 19:15:55 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 19:15:55 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 19:15:55 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3609849) INFO 06-28 19:16:19 [gpu_model_runner.py:1686] Graph capturing finished in 23 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3609848) INFO 06-28 19:16:19 [gpu_model_runner.py:1686] Graph capturing finished in 23 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3609847) INFO 06-28 19:16:19 [gpu_model_runner.py:1686] Graph capturing finished in 23 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3609846) INFO 06-28 19:16:19 [gpu_model_runner.py:1686] Graph capturing finished in 23 secs, took 2.96 GiB
+ INFO 06-28 19:16:19 [core.py:159] init engine (profile, create kv cache, warmup model) took 41.19 seconds
+ INFO 06-28 19:16:19 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 19:28:57 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 19:28:57 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5105|± |0.0280|
+ | | |math_pass@1:1_samples|0.7999|± |0.0371|
+ |mm\|arc_challenge\|0| 0|sem |0.5853|± |0.0253|
+ |mm\|arc_easy\|0 | 0|sem |0.6336|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.4844|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7248|± |0.0211|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8750|± |0.0530|
+ |mm\|truthfulqa\|0 | 0|sem |0.3388|± |0.0432|
+
merge_bench/logs/llama_darelinear_7.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 19:28:56 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 19:28:57 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 19:29:04 [config.py:717] This model supports multiple tasks: {'reward', 'score', 'classify', 'generate', 'embed'}. Defaulting to 'generate'.
+ INFO 06-28 19:29:04 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 19:29:04 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 19:29:06 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 19:29:06 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 19:29:06 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_26ef43fb'), local_subscribe_addr='ipc:///tmp/1503da60-c19b-48f3-9809-e34d8853a309', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:29:06 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x151ce2144b50>
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:06 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_11981bd0'), local_subscribe_addr='ipc:///tmp/9b1f1f4f-9671-425c-ae4d-18e28195a4bc', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:29:06 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x151ce3b7fd30>
+ WARNING 06-28 19:29:06 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x151ce3b7fdf0>
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:06 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_2ab12ec2'), local_subscribe_addr='ipc:///tmp/9c3831dc-0da1-47c5-92b2-caa01026898b', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:29:06 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x151ce3b7faf0>
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:06 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_26bbf412'), local_subscribe_addr='ipc:///tmp/a3d59c06-7d41-4866-ad94-b254fd1dee6e', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:06 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_2e0f0ae9'), local_subscribe_addr='ipc:///tmp/1cbf6108-c340-4068-a696-3ce96130e9fb', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:13 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:13 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:13 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:13 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:13 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:13 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:13 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:13 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3613747) WARNING 06-28 19:29:13 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3613746) WARNING 06-28 19:29:13 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3613742) WARNING 06-28 19:29:13 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3613743) WARNING 06-28 19:29:13 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:13 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_1c6a8c3a'), local_subscribe_addr='ipc:///tmp/4e1b6783-859b-428c-a617-d9ff90c87a4f', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:13 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:13 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3613742) WARNING 06-28 19:29:13 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:13 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:13 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:13 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:13 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:13 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3613747) WARNING 06-28 19:29:13 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3613746) WARNING 06-28 19:29:13 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:13 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3613743) WARNING 06-28 19:29:13 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:13 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:13 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:13 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:13 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:14 [loader.py:458] Loading weights took 0.72 seconds
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:14 [loader.py:458] Loading weights took 0.72 seconds
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:14 [loader.py:458] Loading weights took 0.71 seconds
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:14 [loader.py:458] Loading weights took 0.77 seconds
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:14 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.905328 seconds
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:14 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.900747 seconds
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:14 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.926662 seconds
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:15 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 1.000593 seconds
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:20 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:20 [backends.py:430] Dynamo bytecode transform time: 5.62 s
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:20 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:20 [backends.py:430] Dynamo bytecode transform time: 5.68 s
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:21 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:21 [backends.py:430] Dynamo bytecode transform time: 5.85 s
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:21 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:21 [backends.py:430] Dynamo bytecode transform time: 5.92 s
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:25 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.372 s
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:25 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.360 s
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:26 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.434 s
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:26 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.386 s
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:31 [monitor.py:33] torch.compile takes 5.68 s in total
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:31 [monitor.py:33] torch.compile takes 5.92 s in total
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:31 [monitor.py:33] torch.compile takes 5.85 s in total
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:31 [monitor.py:33] torch.compile takes 5.62 s in total
+ INFO 06-28 19:29:33 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 19:29:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 19:29:33 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 19:29:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 19:29:33 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 19:29:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 19:29:33 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 19:29:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3613747) INFO 06-28 19:29:58 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3613743) INFO 06-28 19:29:58 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3613746) INFO 06-28 19:29:58 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3613742) INFO 06-28 19:29:58 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 19:29:58 [core.py:159] init engine (profile, create kv cache, warmup model) took 43.73 seconds
+ INFO 06-28 19:29:59 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 19:42:40 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 19:42:40 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5357|± |0.0281|
+ | | |math_pass@1:1_samples|0.7499|± |0.0440|
+ |mm|arc_challenge|0| 0|sem |0.5984|± |0.0251|
+ |mm|arc_easy|0 | 0|sem |0.6452|± |0.0156|
+ |mm|commonsenseqa|0| 0|sem |0.5437|± |0.0279|
+ |mm|gsm8k|0 | 0|math_pass@1:1_samples|0.7248|± |0.0211|
+ |mm|math_500|0 | 3|math_pass@1:1_samples|0.7750|± |0.0669|
+ |mm|truthfulqa|0 | 0|sem |0.3554|± |0.0437|
+
merge_bench/logs/llama_darelinear_9.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 19:42:39 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 19:42:41 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 19:42:48 [config.py:717] This model supports multiple tasks: {'score', 'embed', 'classify', 'generate', 'reward'}. Defaulting to 'generate'.
+ INFO 06-28 19:42:48 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 19:42:48 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 19:42:50 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 19:42:50 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 19:42:50 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_c90e6d0c'), local_subscribe_addr='ipc:///tmp/966a24d0-22af-4b35-b61c-287d01dabdde', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:42:50 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14946571fd90>
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:42:50 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_6dc61b5f'), local_subscribe_addr='ipc:///tmp/e48c70ef-ba23-4cb5-91df-362ff41efa0d', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:42:50 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14942fd6caf0>
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:42:50 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_1646518f'), local_subscribe_addr='ipc:///tmp/b6ee7bc0-17bb-4f38-b44b-ec7473a9d4bb', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:42:50 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14946571fcd0>
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:42:50 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_5fc7d511'), local_subscribe_addr='ipc:///tmp/dc1587a5-cd34-4d51-9ef8-72e1e473fa0d', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:42:50 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14946571fa90>
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:42:50 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_cceaf416'), local_subscribe_addr='ipc:///tmp/c17485cf-1cc7-433c-9b80-d2e33392d8cd', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:42:56 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:42:56 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:42:56 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:42:56 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:42:56 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:42:56 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:42:56 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:42:56 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3616477) WARNING 06-28 19:42:57 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3616478) WARNING 06-28 19:42:57 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3616476) WARNING 06-28 19:42:57 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3616475) WARNING 06-28 19:42:57 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:42:57 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_7d14ff7a'), local_subscribe_addr='ipc:///tmp/54dc6b12-0377-4d94-b6c9-a54dfc6fe0b4', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:42:57 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:42:57 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:42:57 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:42:57 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:42:57 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3616477) WARNING 06-28 19:42:57 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:42:57 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:42:57 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3616478) WARNING 06-28 19:42:57 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3616476) WARNING 06-28 19:42:57 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:42:57 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3616475) WARNING 06-28 19:42:57 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:42:57 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:42:57 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:42:57 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:42:57 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:42:58 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:42:58 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:42:58 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:42:58 [loader.py:458] Loading weights took 0.73 seconds
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:42:58 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.868548 seconds
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:42:58 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.867938 seconds
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:42:58 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.942615 seconds
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:42:58 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.920874 seconds
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:43:04 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:43:04 [backends.py:430] Dynamo bytecode transform time: 5.57 s
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:43:04 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:43:04 [backends.py:430] Dynamo bytecode transform time: 5.75 s
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:43:04 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:43:04 [backends.py:430] Dynamo bytecode transform time: 5.90 s
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:43:04 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:43:04 [backends.py:430] Dynamo bytecode transform time: 6.00 s
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:43:09 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.353 s
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:43:09 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.393 s
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:43:09 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.390 s
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:43:09 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.436 s
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:43:15 [monitor.py:33] torch.compile takes 5.75 s in total
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:43:15 [monitor.py:33] torch.compile takes 5.90 s in total
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:43:15 [monitor.py:33] torch.compile takes 6.00 s in total
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:43:15 [monitor.py:33] torch.compile takes 5.57 s in total
+ INFO 06-28 19:43:16 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 19:43:16 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 19:43:16 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 19:43:16 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 19:43:16 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 19:43:16 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 19:43:16 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 19:43:16 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3616478) INFO 06-28 19:43:42 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3616477) INFO 06-28 19:43:42 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3616475) INFO 06-28 19:43:42 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3616476) INFO 06-28 19:43:42 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 19:43:42 [core.py:159] init engine (profile, create kv cache, warmup model) took 43.87 seconds
+ INFO 06-28 19:43:43 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 19:56:27 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 19:56:27 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5197|± |0.0280|
+ | | |math_pass@1:1_samples|0.7193|± |0.0465|
+ |mm|arc_challenge|0| 0|sem |0.5906|± |0.0252|
+ |mm|arc_easy|0 | 0|sem |0.6367|± |0.0156|
+ |mm|commonsenseqa|0| 0|sem |0.5125|± |0.0280|
+ |mm|gsm8k|0 | 0|math_pass@1:1_samples|0.7136|± |0.0214|
+ |mm|math_500|0 | 3|math_pass@1:1_samples|0.7250|± |0.0715|
+ |mm|truthfulqa|0 | 0|sem |0.3388|± |0.0432|
+
merge_bench/logs/llama_linear_1.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 19:56:26 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 19:56:27 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 19:56:34 [config.py:717] This model supports multiple tasks: {'score', 'reward', 'classify', 'generate', 'embed'}. Defaulting to 'generate'.
+ INFO 06-28 19:56:34 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 19:56:34 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 19:56:36 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 19:56:36 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 19:56:36 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_e47c5064'), local_subscribe_addr='ipc:///tmp/e6ad432d-f508-4f32-bd1f-0d7c0725974d', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:56:36 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14c86e0c8a90>
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:36 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_e6b0e3d6'), local_subscribe_addr='ipc:///tmp/f98aba7d-bfc4-4b64-b79d-83126ab2f88c', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:56:36 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14c86fa33c70>
+ WARNING 06-28 19:56:36 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14c86fa33d30>
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:36 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_c6a69884'), local_subscribe_addr='ipc:///tmp/307da573-bf26-4a86-b0d1-2e4f53d94f88', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 19:56:36 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14c86fa339a0>
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:36 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_28462e05'), local_subscribe_addr='ipc:///tmp/8a976985-b5bc-4a59-a4a6-9019466bb558', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:36 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_2d722b0e'), local_subscribe_addr='ipc:///tmp/3a2db13c-7d51-4363-8f9e-c42a98ab0208', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:39 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:39 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:39 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:39 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:39 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:39 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:39 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:39 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3618691) WARNING 06-28 19:56:40 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3618692) WARNING 06-28 19:56:40 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3618690) WARNING 06-28 19:56:40 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3618689) WARNING 06-28 19:56:40 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:40 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_6c78750e'), local_subscribe_addr='ipc:///tmp/55c8e627-7a54-4c21-9f63-0cfcbd1725bc', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:40 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3618690) WARNING 06-28 19:56:40 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:40 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:40 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:40 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3618691) WARNING 06-28 19:56:40 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3618692) WARNING 06-28 19:56:40 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3618689) WARNING 06-28 19:56:40 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:40 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:40 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:40 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:40 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:40 [loader.py:458] Loading weights took 0.66 seconds
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:40 [loader.py:458] Loading weights took 0.65 seconds
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:40 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:41 [loader.py:458] Loading weights took 0.75 seconds
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:41 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.840570 seconds
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:41 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.842686 seconds
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:41 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.982921 seconds
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:41 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.901269 seconds
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:47 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:47 [backends.py:430] Dynamo bytecode transform time: 5.76 s
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:47 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:47 [backends.py:430] Dynamo bytecode transform time: 5.86 s
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:47 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:47 [backends.py:430] Dynamo bytecode transform time: 5.94 s
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:47 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:47 [backends.py:430] Dynamo bytecode transform time: 5.96 s
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:52 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.336 s
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:52 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.390 s
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:52 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.406 s
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:52 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.512 s
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:56:58 [monitor.py:33] torch.compile takes 5.76 s in total
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:56:58 [monitor.py:33] torch.compile takes 5.96 s in total
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:56:58 [monitor.py:33] torch.compile takes 5.86 s in total
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:56:58 [monitor.py:33] torch.compile takes 5.94 s in total
+ INFO 06-28 19:56:59 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 19:56:59 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 19:56:59 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 19:56:59 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 19:56:59 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 19:56:59 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 19:56:59 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 19:56:59 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=1 pid=3618690) INFO 06-28 19:57:25 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3618691) INFO 06-28 19:57:25 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3618689) INFO 06-28 19:57:25 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3618692) INFO 06-28 19:57:25 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 19:57:25 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.39 seconds
+ INFO 06-28 19:57:26 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 20:10:01 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 20:10:01 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
87
+ |------------------|------:|---------------------|-----:|---|-----:|
88
+ |all | |sem |0.5198|± |0.0282|
89
+ | | |math_pass@1:1_samples|0.7070|± |0.0467|
90
+ |mm\|arc_challenge\|0| 0|sem |0.6037|± |0.0251|
+ |mm\|arc_easy\|0 | 0|sem |0.6336|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.4781|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.6890|± |0.0219|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.7250|± |0.0715|
+ |mm\|truthfulqa\|0 | 0|sem |0.3636|± |0.0439|
+
merge_bench/logs/llama_linear_3.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 20:10:00 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 20:10:02 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 20:10:09 [config.py:717] This model supports multiple tasks: {'generate', 'reward', 'score', 'embed', 'classify'}. Defaulting to 'generate'.
+ INFO 06-28 20:10:09 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 20:10:09 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 20:10:11 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 20:10:11 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 20:10:11 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_6257e474'), local_subscribe_addr='ipc:///tmp/d597435c-2e4d-456f-89db-fcbedcefafc3', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:10:11 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14db4ad97dc0>
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:11 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_6a207ca3'), local_subscribe_addr='ipc:///tmp/46e20e0c-64bd-473c-b401-bac5e8cadcaa', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:10:11 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14db49360b20>
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:11 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_0799eacd'), local_subscribe_addr='ipc:///tmp/4c8f80b6-f69b-4626-91a1-b0f3e0c81543', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:10:11 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14db4ad97d00>
+ WARNING 06-28 20:10:11 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14db4ad97ac0>
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:11 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_5c611e31'), local_subscribe_addr='ipc:///tmp/0996616a-5d2f-4b7b-aaae-5d95c304730e', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:11 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_3d01097d'), local_subscribe_addr='ipc:///tmp/77a0146d-a0a9-4b05-86ed-60c0860a40fe', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:13 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:13 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:13 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:13 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:13 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:13 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:13 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:13 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3620671) WARNING 06-28 20:10:14 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3620672) WARNING 06-28 20:10:14 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3620670) WARNING 06-28 20:10:14 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3620669) WARNING 06-28 20:10:14 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:14 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_5c739929'), local_subscribe_addr='ipc:///tmp/27185b44-3b72-4a03-bb20-e0fcf7dc7d56', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:14 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:14 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:14 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:14 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:14 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:14 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3620672) WARNING 06-28 20:10:14 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3620671) WARNING 06-28 20:10:14 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:14 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:14 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3620670) WARNING 06-28 20:10:14 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3620669) WARNING 06-28 20:10:14 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:14 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:14 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:14 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:14 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:15 [loader.py:458] Loading weights took 0.66 seconds
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:15 [loader.py:458] Loading weights took 0.70 seconds
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:15 [loader.py:458] Loading weights took 0.71 seconds
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:15 [loader.py:458] Loading weights took 0.74 seconds
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:15 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.846524 seconds
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:15 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.885501 seconds
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:15 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.946978 seconds
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:15 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.930972 seconds
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:21 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:21 [backends.py:430] Dynamo bytecode transform time: 5.60 s
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:21 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:21 [backends.py:430] Dynamo bytecode transform time: 5.71 s
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:21 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:21 [backends.py:430] Dynamo bytecode transform time: 5.73 s
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:21 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:21 [backends.py:430] Dynamo bytecode transform time: 5.75 s
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:26 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.402 s
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:26 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.438 s
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:26 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.448 s
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:26 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.464 s
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:32 [monitor.py:33] torch.compile takes 5.71 s in total
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:32 [monitor.py:33] torch.compile takes 5.73 s in total
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:32 [monitor.py:33] torch.compile takes 5.60 s in total
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:32 [monitor.py:33] torch.compile takes 5.75 s in total
+ INFO 06-28 20:10:33 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 20:10:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 20:10:33 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 20:10:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 20:10:33 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 20:10:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 20:10:33 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 20:10:33 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3620672) INFO 06-28 20:10:56 [gpu_model_runner.py:1686] Graph capturing finished in 24 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3620671) INFO 06-28 20:10:56 [gpu_model_runner.py:1686] Graph capturing finished in 24 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3620669) INFO 06-28 20:10:56 [gpu_model_runner.py:1686] Graph capturing finished in 24 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3620670) INFO 06-28 20:10:56 [gpu_model_runner.py:1686] Graph capturing finished in 24 secs, took 2.96 GiB
+ INFO 06-28 20:10:56 [core.py:159] init engine (profile, create kv cache, warmup model) took 41.41 seconds
+ INFO 06-28 20:10:57 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 20:23:36 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 20:23:36 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.4992|± |0.0276|
+ | | |math_pass@1:1_samples|0.8236|± |0.0343|
+ |mm\|arc_challenge\|0| 0|sem |0.5879|± |0.0252|
+ |mm\|arc_easy\|0 | 0|sem |0.6135|± |0.0158|
+ |mm\|commonsenseqa\|0| 0|sem |0.5062|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7472|± |0.0206|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.9000|± |0.0480|
+ |mm\|truthfulqa\|0 | 0|sem |0.2893|± |0.0414|
+
merge_bench/logs/llama_linear_5.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 20:23:35 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 20:23:36 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 20:23:43 [config.py:717] This model supports multiple tasks: {'score', 'embed', 'reward', 'generate', 'classify'}. Defaulting to 'generate'.
+ INFO 06-28 20:23:43 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 20:23:43 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 20:23:45 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 20:23:45 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 20:23:45 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_b6685efa'), local_subscribe_addr='ipc:///tmp/299a1d34-2ed2-4341-988b-660b2e51724a', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:23:45 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x148dc2563d60>
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:23:45 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_919d78a9'), local_subscribe_addr='ipc:///tmp/e31e068f-fcfb-41a8-94de-c14d5b28536d', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:23:45 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x148dc0c08ac0>
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:23:45 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_6406296c'), local_subscribe_addr='ipc:///tmp/036b4ed3-da30-4873-bf68-9a5a4cdd976b', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:23:45 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x148dc2563ca0>
+ WARNING 06-28 20:23:45 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x148dc25639d0>
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:23:45 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_c676c60a'), local_subscribe_addr='ipc:///tmp/1c4487f2-e80f-4d6a-a055-378524306660', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:23:45 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_6cc8c80e'), local_subscribe_addr='ipc:///tmp/27e1dd2c-d6e2-4155-87cb-32e683bab21b', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:23:47 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:23:47 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:23:47 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:23:47 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:23:47 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:23:47 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:23:47 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:23:47 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3622892) WARNING 06-28 20:23:48 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3622891) WARNING 06-28 20:23:48 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3622889) WARNING 06-28 20:23:48 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3622890) WARNING 06-28 20:23:48 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:23:48 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_e5cd3014'), local_subscribe_addr='ipc:///tmp/28a2752c-3acb-4fc3-9bbf-2d7f0e4908ab', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:23:48 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:23:48 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:23:48 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:23:48 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:23:48 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:23:48 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3622892) WARNING 06-28 20:23:48 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3622891) WARNING 06-28 20:23:48 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:23:48 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:23:48 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3622889) WARNING 06-28 20:23:48 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3622890) WARNING 06-28 20:23:48 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:23:48 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:23:48 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:23:48 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:23:48 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:23:49 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:23:49 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:23:49 [loader.py:458] Loading weights took 0.71 seconds
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:23:49 [loader.py:458] Loading weights took 0.74 seconds
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:23:49 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.870155 seconds
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:23:49 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.867179 seconds
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:23:49 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.924282 seconds
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:23:49 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.974858 seconds
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:23:55 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:23:55 [backends.py:430] Dynamo bytecode transform time: 5.56 s
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:23:55 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:23:55 [backends.py:430] Dynamo bytecode transform time: 5.67 s
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:23:55 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:23:55 [backends.py:430] Dynamo bytecode transform time: 5.76 s
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:23:55 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:23:55 [backends.py:430] Dynamo bytecode transform time: 5.82 s
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:24:00 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.652 s
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:24:00 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.663 s
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:24:00 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.576 s
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:24:00 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.604 s
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:24:06 [monitor.py:33] torch.compile takes 5.82 s in total
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:24:06 [monitor.py:33] torch.compile takes 5.76 s in total
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:24:06 [monitor.py:33] torch.compile takes 5.67 s in total
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:24:06 [monitor.py:33] torch.compile takes 5.56 s in total
+ INFO 06-28 20:24:07 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 20:24:07 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 20:24:07 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 20:24:07 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 20:24:07 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 20:24:07 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 20:24:07 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 20:24:07 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3622892) INFO 06-28 20:24:34 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3622891) INFO 06-28 20:24:34 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3622889) INFO 06-28 20:24:34 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3622890) INFO 06-28 20:24:34 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ INFO 06-28 20:24:34 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.75 seconds
+ INFO 06-28 20:24:34 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 20:37:14 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 20:37:14 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5256|± |0.0280|
+ | | |math_pass@1:1_samples|0.7443|± |0.0441|
+ |mm\|arc_challenge\|0| 0|sem |0.6115|± |0.0250|
+ |mm\|arc_easy\|0 | 0|sem |0.6251|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.5188|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7136|± |0.0214|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.7750|± |0.0669|
+ |mm\|truthfulqa\|0 | 0|sem |0.3471|± |0.0435|
+
merge_bench/logs/llama_linear_7.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 20:37:13 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 20:37:14 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 20:37:21 [config.py:717] This model supports multiple tasks: {'generate', 'reward', 'embed', 'score', 'classify'}. Defaulting to 'generate'.
+ INFO 06-28 20:37:21 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 20:37:21 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 20:37:23 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 20:37:23 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 20:37:23 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_93b18dc7'), local_subscribe_addr='ipc:///tmp/7537e22e-27ae-4eed-8ba9-8f8926cf4814', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:37:23 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x152d55a33d60>
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:37:23 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_d66e4aef'), local_subscribe_addr='ipc:///tmp/5956dee2-1734-4465-acfb-51251cb047b9', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:37:23 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x152d4ffc0ac0>
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:37:23 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_56d69c19'), local_subscribe_addr='ipc:///tmp/eee01736-5f87-422d-81df-6f613a8a4f39', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:37:23 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x152d55a33ca0>
+ WARNING 06-28 20:37:23 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x152d55a339d0>
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:37:23 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_b35e06ee'), local_subscribe_addr='ipc:///tmp/70baefd3-b12c-4835-9947-e5bce282520d', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:37:23 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_9b3261e8'), local_subscribe_addr='ipc:///tmp/5affabae-a840-41f8-95e7-09f8fb3b8037', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:37:50 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:37:50 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:37:50 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:37:50 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:37:50 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:37:50 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:37:50 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:37:50 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3625205) WARNING 06-28 20:37:51 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3625203) WARNING 06-28 20:37:51 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3625204) WARNING 06-28 20:37:51 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3625202) WARNING 06-28 20:37:51 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:37:51 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_304efd39'), local_subscribe_addr='ipc:///tmp/4e473a91-f106-49ce-a859-61afa613a874', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:37:51 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:37:51 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:37:51 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:37:51 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:37:51 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:37:51 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3625205) WARNING 06-28 20:37:51 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:37:51 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:37:51 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3625204) WARNING 06-28 20:37:51 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3625203) WARNING 06-28 20:37:51 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3625202) WARNING 06-28 20:37:51 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:37:51 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:37:51 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:37:51 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:37:51 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:37:52 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:37:52 [loader.py:458] Loading weights took 0.73 seconds
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:37:52 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:37:52 [loader.py:458] Loading weights took 0.74 seconds
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:37:52 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.864541 seconds
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:37:52 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.913058 seconds
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:37:52 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.977701 seconds
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:37:52 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.913645 seconds
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:37:58 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:37:58 [backends.py:430] Dynamo bytecode transform time: 5.66 s
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:37:58 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:37:58 [backends.py:430] Dynamo bytecode transform time: 5.72 s
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:37:58 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:37:58 [backends.py:430] Dynamo bytecode transform time: 5.76 s
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:37:58 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:37:58 [backends.py:430] Dynamo bytecode transform time: 5.78 s
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:38:03 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.423 s
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:38:03 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.432 s
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:38:03 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.545 s
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:38:03 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.650 s
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:38:09 [monitor.py:33] torch.compile takes 5.76 s in total
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:38:09 [monitor.py:33] torch.compile takes 5.66 s in total
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:38:09 [monitor.py:33] torch.compile takes 5.72 s in total
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:38:09 [monitor.py:33] torch.compile takes 5.78 s in total
+ INFO 06-28 20:38:10 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 20:38:10 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 20:38:10 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 20:38:10 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 20:38:10 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 20:38:10 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 20:38:10 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 20:38:10 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=0 pid=3625202) INFO 06-28 20:38:37 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3625205) INFO 06-28 20:38:37 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3625203) INFO 06-28 20:38:37 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3625204) INFO 06-28 20:38:37 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ INFO 06-28 20:38:37 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.85 seconds
+ INFO 06-28 20:38:37 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 20:51:18 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 20:51:18 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5158|± |0.0282|
+ | | |math_pass@1:1_samples|0.6988|± |0.0481|
+ |mm\|arc_challenge\|0| 0|sem |0.5669|± |0.0254|
+ |mm\|arc_easy\|0 | 0|sem |0.6209|± |0.0158|
+ |mm\|commonsenseqa\|0| 0|sem |0.5281|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7226|± |0.0212|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.6750|± |0.0750|
+ |mm\|truthfulqa\|0 | 0|sem |0.3471|± |0.0435|
+
merge_bench/logs/llama_linear_9.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 20:51:17 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 20:51:18 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 20:51:25 [config.py:717] This model supports multiple tasks: {'reward', 'score', 'embed', 'classify', 'generate'}. Defaulting to 'generate'.
+ INFO 06-28 20:51:25 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 20:51:25 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 20:51:27 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 20:51:27 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 20:51:27 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_bb84ef60'), local_subscribe_addr='ipc:///tmp/9f532b5b-5fab-4bd7-a386-12ac4cb074df', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:51:27 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1529f08dbd60>
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:27 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_e583dcba'), local_subscribe_addr='ipc:///tmp/bb8329a0-d171-447f-bbb2-4bfaa38bf85a', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:51:27 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1529e2f2cac0>
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:27 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_2e196d70'), local_subscribe_addr='ipc:///tmp/6c61b285-be86-41cb-9cb1-705e39daaba9', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 20:51:27 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1529f08dbca0>
+ WARNING 06-28 20:51:27 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1529f08db9d0>
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:27 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_4864d8cf'), local_subscribe_addr='ipc:///tmp/79d64628-22b1-4ea9-a2ec-ee94d71df77d', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:27 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_a9de9a12'), local_subscribe_addr='ipc:///tmp/943c88f2-b4dd-40ee-a3e3-13e59c2e57fc', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:30 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:30 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:30 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:30 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:30 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:30 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:30 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:30 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3627171) WARNING 06-28 20:51:30 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3627172) WARNING 06-28 20:51:30 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3627170) WARNING 06-28 20:51:30 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3627169) WARNING 06-28 20:51:30 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:30 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_af3ce749'), local_subscribe_addr='ipc:///tmp/f1cecbba-d83a-49da-abec-058737ee8492', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:30 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:30 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:30 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:30 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:30 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:30 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3627172) WARNING 06-28 20:51:30 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3627171) WARNING 06-28 20:51:30 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:30 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:30 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3627169) WARNING 06-28 20:51:30 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3627170) WARNING 06-28 20:51:30 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:30 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:30 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:30 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:30 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:31 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:31 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:31 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:31 [loader.py:458] Loading weights took 0.72 seconds
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:31 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.869730 seconds
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:31 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.849842 seconds
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:32 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.906712 seconds
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:32 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.951992 seconds
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:37 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:37 [backends.py:430] Dynamo bytecode transform time: 5.56 s
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:37 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:37 [backends.py:430] Dynamo bytecode transform time: 5.70 s
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:37 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:37 [backends.py:430] Dynamo bytecode transform time: 5.72 s
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:38 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:38 [backends.py:430] Dynamo bytecode transform time: 5.98 s
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:42 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.382 s
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:43 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.404 s
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:43 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.443 s
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:43 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.471 s
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:51:49 [monitor.py:33] torch.compile takes 5.56 s in total
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:51:49 [monitor.py:33] torch.compile takes 5.70 s in total
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:51:49 [monitor.py:33] torch.compile takes 5.72 s in total
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:51:49 [monitor.py:33] torch.compile takes 5.98 s in total
+ INFO 06-28 20:51:50 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 20:51:50 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 20:51:50 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 20:51:50 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 20:51:50 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 20:51:50 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 20:51:50 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 20:51:50 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=1 pid=3627170) INFO 06-28 20:52:16 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3627171) INFO 06-28 20:52:16 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3627169) INFO 06-28 20:52:16 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3627172) INFO 06-28 20:52:16 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 20:52:16 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.15 seconds
+ INFO 06-28 20:52:16 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 21:04:54 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 21:04:54 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5069|± |0.0276|
+ | | |math_pass@1:1_samples|0.7874|± |0.0392|
+ |mm\|arc_challenge\|0| 0|sem |0.6142|± |0.0250|
+ |mm\|arc_easy\|0 | 0|sem |0.6283|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.4875|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7248|± |0.0211|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8500|± |0.0572|
+ |mm\|truthfulqa\|0 | 0|sem |0.2975|± |0.0417|
+
merge_bench/logs/llama_ties_1.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 21:04:53 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 21:04:55 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 21:05:02 [config.py:717] This model supports multiple tasks: {'generate', 'score', 'reward', 'classify', 'embed'}. Defaulting to 'generate'.
+ INFO 06-28 21:05:02 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 21:05:02 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 21:05:03 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 21:05:03 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 21:05:03 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_49535664'), local_subscribe_addr='ipc:///tmp/396eb2be-5260-482c-8468-18f9912dab8c', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 21:05:04 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14c8da423c70>
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:04 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_32f083a9'), local_subscribe_addr='ipc:///tmp/4580c9ae-5fa5-4852-a517-8d5b635bf792', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 21:05:04 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14c8d8ab49d0>
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:04 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_df472ef4'), local_subscribe_addr='ipc:///tmp/3bfded55-edfb-4719-9266-d163d3b02918', remote_subscribe_addr=None, remote_addr_ipv6=False)
13
+ WARNING 06-28 21:05:04 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14c8da423bb0>
14
+ WARNING 06-28 21:05:04 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14c8da4238e0>
15
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:04 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_478853c5'), local_subscribe_addr='ipc:///tmp/d8b38252-237a-42ce-937a-196da3b5790e', remote_subscribe_addr=None, remote_addr_ipv6=False)
16
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:04 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_ae4c4856'), local_subscribe_addr='ipc:///tmp/6ba7770b-4581-4bbb-806f-3a89604afdf9', remote_subscribe_addr=None, remote_addr_ipv6=False)
17
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:06 [utils.py:1055] Found nccl from library libnccl.so.2
18
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:06 [utils.py:1055] Found nccl from library libnccl.so.2
19
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:06 [utils.py:1055] Found nccl from library libnccl.so.2
20
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:06 [pynccl.py:69] vLLM is using nccl==2.21.5
21
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:06 [pynccl.py:69] vLLM is using nccl==2.21.5
22
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:06 [pynccl.py:69] vLLM is using nccl==2.21.5
23
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:06 [utils.py:1055] Found nccl from library libnccl.so.2
24
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:06 [pynccl.py:69] vLLM is using nccl==2.21.5
25
+ (VllmWorker rank=3 pid=3629140) WARNING 06-28 21:05:07 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
26
+ (VllmWorker rank=2 pid=3629139) WARNING 06-28 21:05:07 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
27
+ (VllmWorker rank=1 pid=3629138) WARNING 06-28 21:05:07 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
28
+ (VllmWorker rank=0 pid=3629137) WARNING 06-28 21:05:07 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
29
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:07 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_af396e84'), local_subscribe_addr='ipc:///tmp/5bd0bf44-5b41-424c-9852-44131328ab72', remote_subscribe_addr=None, remote_addr_ipv6=False)
30
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:07 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
31
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:07 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
32
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:07 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
33
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:07 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
34
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:07 [cuda.py:221] Using Flash Attention backend on V1 engine.
35
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:07 [cuda.py:221] Using Flash Attention backend on V1 engine.
36
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:07 [cuda.py:221] Using Flash Attention backend on V1 engine.
37
+ (VllmWorker rank=3 pid=3629140) WARNING 06-28 21:05:07 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
38
+ (VllmWorker rank=2 pid=3629139) WARNING 06-28 21:05:07 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
39
+ (VllmWorker rank=1 pid=3629138) WARNING 06-28 21:05:07 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
40
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:07 [cuda.py:221] Using Flash Attention backend on V1 engine.
41
+ (VllmWorker rank=0 pid=3629137) WARNING 06-28 21:05:07 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
42
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:07 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
43
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:07 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
44
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:07 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
45
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:07 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
46
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:07 [loader.py:458] Loading weights took 0.67 seconds
47
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:07 [loader.py:458] Loading weights took 0.68 seconds
48
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:08 [loader.py:458] Loading weights took 0.68 seconds
49
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:08 [loader.py:458] Loading weights took 0.73 seconds
50
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:08 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.863475 seconds
51
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:08 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.861492 seconds
52
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:08 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.951411 seconds
53
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:08 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.920031 seconds
54
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:13 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
55
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:13 [backends.py:430] Dynamo bytecode transform time: 5.54 s
56
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:14 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
57
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:14 [backends.py:430] Dynamo bytecode transform time: 5.68 s
58
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:14 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
59
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:14 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
60
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:14 [backends.py:430] Dynamo bytecode transform time: 5.79 s
61
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:14 [backends.py:430] Dynamo bytecode transform time: 5.79 s
62
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:18 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.368 s
63
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:19 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.363 s
64
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:19 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.341 s
65
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:19 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.401 s
66
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:25 [monitor.py:33] torch.compile takes 5.54 s in total
67
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:25 [monitor.py:33] torch.compile takes 5.68 s in total
68
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:25 [monitor.py:33] torch.compile takes 5.79 s in total
69
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:25 [monitor.py:33] torch.compile takes 5.79 s in total
70
+ INFO 06-28 21:05:26 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
71
+ INFO 06-28 21:05:26 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
72
+ INFO 06-28 21:05:26 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
73
+ INFO 06-28 21:05:26 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
74
+ INFO 06-28 21:05:26 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
75
+ INFO 06-28 21:05:26 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
76
+ INFO 06-28 21:05:26 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
77
+ INFO 06-28 21:05:26 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
78
+ (VllmWorker rank=1 pid=3629138) INFO 06-28 21:05:56 [gpu_model_runner.py:1686] Graph capturing finished in 30 secs, took 2.96 GiB
79
+ (VllmWorker rank=3 pid=3629140) INFO 06-28 21:05:56 [gpu_model_runner.py:1686] Graph capturing finished in 30 secs, took 2.96 GiB
80
+ (VllmWorker rank=0 pid=3629137) INFO 06-28 21:05:56 [gpu_model_runner.py:1686] Graph capturing finished in 30 secs, took 2.96 GiB
81
+ (VllmWorker rank=2 pid=3629139) INFO 06-28 21:05:56 [gpu_model_runner.py:1686] Graph capturing finished in 30 secs, took 2.96 GiB
82
+ INFO 06-28 21:05:56 [core.py:159] init engine (profile, create kv cache, warmup model) took 48.16 seconds
83
+ INFO 06-28 21:05:56 [core_client.py:439] Core engine process 0 ready.
84
+ INFO 06-28 21:18:38 [importing.py:53] Triton module has been replaced with a placeholder.
85
+ INFO 06-28 21:18:38 [__init__.py:239] Automatically detected platform cuda.
86
+ | Task |Version| Metric |Value | |Stderr|
87
+ |------------------|------:|---------------------|-----:|---|-----:|
88
+ |all | |sem |0.5101|± |0.0276|
89
+ | | |math_pass@1:1_samples|0.8100|± |0.0368|
90
+ |mm\|arc_challenge\|0| 0|sem |0.6063|± |0.0251|
91
+ |mm\|arc_easy\|0 | 0|sem |0.6336|± |0.0157|
92
+ |mm\|commonsenseqa\|0| 0|sem |0.5031|± |0.0280|
93
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7450|± |0.0206|
94
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8750|± |0.0530|
95
+ |mm\|truthfulqa\|0 | 0|sem |0.2975|± |0.0417|
96
+
merge_bench/logs/llama_ties_3.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 21:18:37 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 21:18:38 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 21:18:45 [config.py:717] This model supports multiple tasks: {'reward', 'embed', 'generate', 'classify', 'score'}. Defaulting to 'generate'.
+ INFO 06-28 21:18:45 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 21:18:45 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 21:18:47 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 21:18:47 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 21:18:47 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_397c2c8c'), local_subscribe_addr='ipc:///tmp/9b35e708-cffe-4aa9-be48-15fbf9844d84', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 21:18:47 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x154f3f43ca90>
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:18:47 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_3d5102a4'), local_subscribe_addr='ipc:///tmp/3c5e6239-1d7c-4267-9676-9ac237de72d8', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 21:18:47 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x154f44dffc70>
+ WARNING 06-28 21:18:47 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x154f44dffd30>
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:18:47 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_e7b03a50'), local_subscribe_addr='ipc:///tmp/f48b8442-bcbe-4798-8547-c0a4ea2f1765', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 21:18:47 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x154f44dff9a0>
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:18:47 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_29183d5c'), local_subscribe_addr='ipc:///tmp/a52d762d-6263-45f1-9161-419a79d4cf72', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:18:47 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_570fd23d'), local_subscribe_addr='ipc:///tmp/c0c9db08-7e7c-48aa-86fa-0a111f16fdfa', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:18:59 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:18:59 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:18:59 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:18:59 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:18:59 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:18:59 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:18:59 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:18:59 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3631106) WARNING 06-28 21:18:59 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3631105) WARNING 06-28 21:18:59 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3631103) WARNING 06-28 21:18:59 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3631104) WARNING 06-28 21:18:59 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:18:59 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_0515b019'), local_subscribe_addr='ipc:///tmp/b7ff17de-8d57-419c-97f6-014d6d347e56', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:18:59 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:18:59 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:18:59 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:18:59 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:18:59 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3631105) WARNING 06-28 21:18:59 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:18:59 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:18:59 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:18:59 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3631106) WARNING 06-28 21:18:59 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3631103) WARNING 06-28 21:18:59 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3631104) WARNING 06-28 21:18:59 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:18:59 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:18:59 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:18:59 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:18:59 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:19:00 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:19:00 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:19:00 [loader.py:458] Loading weights took 0.71 seconds
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:19:00 [loader.py:458] Loading weights took 0.75 seconds
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:19:00 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.847711 seconds
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:19:00 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.853132 seconds
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:19:00 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.918891 seconds
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:19:01 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.961888 seconds
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:19:06 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:19:06 [backends.py:430] Dynamo bytecode transform time: 5.62 s
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:19:06 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:19:06 [backends.py:430] Dynamo bytecode transform time: 5.66 s
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:19:06 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:19:06 [backends.py:430] Dynamo bytecode transform time: 5.66 s
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:19:06 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:19:06 [backends.py:430] Dynamo bytecode transform time: 5.66 s
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:19:11 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.368 s
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:19:11 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.404 s
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:19:11 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.387 s
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:19:12 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.400 s
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:19:17 [monitor.py:33] torch.compile takes 5.62 s in total
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:19:17 [monitor.py:33] torch.compile takes 5.66 s in total
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:19:17 [monitor.py:33] torch.compile takes 5.66 s in total
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:19:17 [monitor.py:33] torch.compile takes 5.66 s in total
+ INFO 06-28 21:19:18 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 21:19:18 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 21:19:18 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 21:19:18 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 21:19:18 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 21:19:18 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 21:19:18 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 21:19:18 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=2 pid=3631105) INFO 06-28 21:19:45 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3631106) INFO 06-28 21:19:45 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3631103) INFO 06-28 21:19:45 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3631104) INFO 06-28 21:19:45 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 21:19:45 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.15 seconds
+ INFO 06-28 21:19:45 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 21:32:27 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 21:32:27 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.4954|± |0.0275|
+ | | |math_pass@1:1_samples|0.8201|± |0.0365|
+ |mm\|arc_challenge\|0| 0|sem |0.5774|± |0.0253|
+ |mm\|arc_easy\|0 | 0|sem |0.6452|± |0.0156|
+ |mm\|commonsenseqa\|0| 0|sem |0.4781|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7651|± |0.0201|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8750|± |0.0530|
+ |mm\|truthfulqa\|0 | 0|sem |0.2810|± |0.0410|
+
merge_bench/logs/llama_ties_5.log ADDED
@@ -0,0 +1,96 @@
1
+ INFO 06-28 21:32:26 [__init__.py:239] Automatically detected platform cuda.
2
+ INFO 06-28 21:32:28 [config.py:209] Replacing legacy 'type' key with 'rope_type'
3
+ INFO 06-28 21:32:35 [config.py:717] This model supports multiple tasks: {'embed', 'classify', 'reward', 'generate', 'score'}. Defaulting to 'generate'.
4
+ INFO 06-28 21:32:35 [config.py:1770] Defaulting to use mp for distributed inference
5
+ INFO 06-28 21:32:35 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
6
+ INFO 06-28 21:32:37 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
7
+ WARNING 06-28 21:32:37 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
8
+ INFO 06-28 21:32:37 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_f77824a9'), local_subscribe_addr='ipc:///tmp/0c065325-9bd3-44ae-b570-badee1d8a29a', remote_subscribe_addr=None, remote_addr_ipv6=False)
9
+ WARNING 06-28 21:32:37 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14a180e77dc0>
10
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:37 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_e1ffa07c'), local_subscribe_addr='ipc:///tmp/188479cc-b446-481e-a6cf-70a39b355001', remote_subscribe_addr=None, remote_addr_ipv6=False)
11
+ WARNING 06-28 21:32:37 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14a17710cb20>
12
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:37 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_f0f19da6'), local_subscribe_addr='ipc:///tmp/ba9731f7-cc39-4a0e-b49e-d77734e64886', remote_subscribe_addr=None, remote_addr_ipv6=False)
13
+ WARNING 06-28 21:32:37 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14a180e77d00>
14
+ WARNING 06-28 21:32:37 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14a180e77ac0>
15
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:37 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_6669c6e5'), local_subscribe_addr='ipc:///tmp/b31583d8-d023-4014-bf0c-ae58a5bca37a', remote_subscribe_addr=None, remote_addr_ipv6=False)
16
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:37 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_9793bc93'), local_subscribe_addr='ipc:///tmp/11e9b52f-d9ea-4a23-b8e3-05261cad20b0', remote_subscribe_addr=None, remote_addr_ipv6=False)
17
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:40 [utils.py:1055] Found nccl from library libnccl.so.2
18
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:40 [utils.py:1055] Found nccl from library libnccl.so.2
19
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:40 [utils.py:1055] Found nccl from library libnccl.so.2
20
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:40 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:40 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:40 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:40 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:40 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3633070) WARNING 06-28 21:32:40 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3633071) WARNING 06-28 21:32:40 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3633069) WARNING 06-28 21:32:40 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3633072) WARNING 06-28 21:32:40 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:40 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_d275a2db'), local_subscribe_addr='ipc:///tmp/b1872141-efd7-4f10-9c2b-1c8b9c1e6159', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:40 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:40 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:40 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:40 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3633072) WARNING 06-28 21:32:40 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3633071) WARNING 06-28 21:32:40 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:40 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3633070) WARNING 06-28 21:32:40 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3633069) WARNING 06-28 21:32:40 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:40 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:40 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:40 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:40 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:41 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:41 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:41 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:41 [loader.py:458] Loading weights took 0.75 seconds
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:41 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.852514 seconds
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:41 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.853413 seconds
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:41 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.973925 seconds
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:42 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.914584 seconds
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:47 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:47 [backends.py:430] Dynamo bytecode transform time: 5.87 s
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:47 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:47 [backends.py:430] Dynamo bytecode transform time: 5.91 s
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:47 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:47 [backends.py:430] Dynamo bytecode transform time: 5.96 s
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:48 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:48 [backends.py:430] Dynamo bytecode transform time: 5.97 s
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:53 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.429 s
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:53 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.414 s
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:53 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.475 s
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:53 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.497 s
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:32:58 [monitor.py:33] torch.compile takes 5.97 s in total
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:32:58 [monitor.py:33] torch.compile takes 5.91 s in total
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:32:58 [monitor.py:33] torch.compile takes 5.87 s in total
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:32:58 [monitor.py:33] torch.compile takes 5.96 s in total
+ INFO 06-28 21:33:00 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 21:33:00 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 21:33:00 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 21:33:00 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 21:33:00 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 21:33:00 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 21:33:00 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 21:33:00 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=2 pid=3633071) INFO 06-28 21:33:27 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3633072) INFO 06-28 21:33:27 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3633069) INFO 06-28 21:33:27 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3633070) INFO 06-28 21:33:27 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ INFO 06-28 21:33:27 [core.py:159] init engine (profile, create kv cache, warmup model) took 45.17 seconds
+ INFO 06-28 21:33:27 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 21:46:09 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 21:46:09 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5174|± |0.0277|
+ | | |math_pass@1:1_samples|0.7736|± |0.0423|
+ |mm\|arc_challenge\|0| 0|sem |0.6220|± |0.0249|
+ |mm\|arc_easy\|0 | 0|sem |0.6304|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.5031|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7472|± |0.0206|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8000|± |0.0641|
+ |mm\|truthfulqa\|0 | 0|sem |0.3140|± |0.0424|
+
merge_bench/logs/llama_ties_7.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 21:46:08 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 21:46:10 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 21:46:17 [config.py:717] This model supports multiple tasks: {'classify', 'reward', 'generate', 'score', 'embed'}. Defaulting to 'generate'.
+ INFO 06-28 21:46:17 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 21:46:17 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 21:46:18 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 21:46:18 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 21:46:18 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_ba897e64'), local_subscribe_addr='ipc:///tmp/e3fa3541-40e9-45f6-9069-61120d744d93', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 21:46:19 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x146fbc723df0>
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:19 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_e708e433'), local_subscribe_addr='ipc:///tmp/9b2b5f3c-b689-414b-9c6f-aa51bbbdd8b6', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 21:46:19 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x146fa6d2cb50>
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:19 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_a3d2107f'), local_subscribe_addr='ipc:///tmp/49648420-1505-4079-a94a-512c322bc00f', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 21:46:19 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x146fbc723d30>
+ WARNING 06-28 21:46:19 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x146fbc723af0>
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:19 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_028e2b27'), local_subscribe_addr='ipc:///tmp/60eb7f5a-ec4e-4d9f-8aa9-28dff9f36b3c', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:19 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_5d554696'), local_subscribe_addr='ipc:///tmp/e1da4acd-28b5-4113-8560-1bba01f5f16a', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:26 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:26 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:26 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:26 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:26 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:26 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:26 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:26 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3635043) WARNING 06-28 21:46:26 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3635042) WARNING 06-28 21:46:26 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3635040) WARNING 06-28 21:46:26 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3635041) WARNING 06-28 21:46:26 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:26 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_e2916777'), local_subscribe_addr='ipc:///tmp/3e96f702-a53e-4867-97a9-d4ef2f1ac5d1', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:26 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:26 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:26 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:26 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:26 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3635043) WARNING 06-28 21:46:26 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:26 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:26 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:26 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3635042) WARNING 06-28 21:46:26 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3635040) WARNING 06-28 21:46:26 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3635041) WARNING 06-28 21:46:26 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:26 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:26 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:26 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:26 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:27 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:27 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:27 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:27 [loader.py:458] Loading weights took 0.72 seconds
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:27 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.871172 seconds
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:27 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.899707 seconds
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:28 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.912291 seconds
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:28 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.940351 seconds
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:33 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:33 [backends.py:430] Dynamo bytecode transform time: 5.50 s
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:33 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:33 [backends.py:430] Dynamo bytecode transform time: 5.60 s
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:33 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:33 [backends.py:430] Dynamo bytecode transform time: 5.61 s
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:33 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:33 [backends.py:430] Dynamo bytecode transform time: 5.65 s
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:38 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.333 s
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:38 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.358 s
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:38 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.400 s
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:38 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.373 s
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:46:44 [monitor.py:33] torch.compile takes 5.50 s in total
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:46:44 [monitor.py:33] torch.compile takes 5.61 s in total
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:46:44 [monitor.py:33] torch.compile takes 5.65 s in total
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:46:44 [monitor.py:33] torch.compile takes 5.60 s in total
+ INFO 06-28 21:46:45 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 21:46:45 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 21:46:45 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 21:46:45 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 21:46:45 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 21:46:45 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 21:46:45 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 21:46:45 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3635043) INFO 06-28 21:47:10 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3635042) INFO 06-28 21:47:10 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3635040) INFO 06-28 21:47:10 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3635041) INFO 06-28 21:47:10 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ INFO 06-28 21:47:10 [core.py:159] init engine (profile, create kv cache, warmup model) took 42.66 seconds
+ INFO 06-28 21:47:11 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 21:59:57 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 21:59:57 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5003|± |0.0276|
+ | | |math_pass@1:1_samples|0.7906|± |0.0406|
+ |mm\|arc_challenge\|0| 0|sem |0.5774|± |0.0253|
+ |mm\|arc_easy\|0 | 0|sem |0.6283|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.5062|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7562|± |0.0203|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8250|± |0.0608|
+ |mm\|truthfulqa\|0 | 0|sem |0.2893|± |0.0414|
+
merge_bench/logs/llama_ties_9.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 21:59:56 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 21:59:58 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 22:00:05 [config.py:717] This model supports multiple tasks: {'embed', 'score', 'generate', 'reward', 'classify'}. Defaulting to 'generate'.
+ INFO 06-28 22:00:05 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 22:00:05 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 22:00:07 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 22:00:07 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 22:00:07 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_072005e2'), local_subscribe_addr='ipc:///tmp/3b8324da-8e55-4477-8b70-faf81399ad67', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 22:00:07 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1530cb067d30>
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:07 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_333f2675'), local_subscribe_addr='ipc:///tmp/6023b97e-9a60-41ff-8484-ab9fbab5e5b6', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 22:00:07 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1530c9634a90>
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:07 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_133d5830'), local_subscribe_addr='ipc:///tmp/49b7a9c2-4dc6-4fca-ae72-30cc06e6a06a', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 22:00:07 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1530cb067c70>
+ WARNING 06-28 22:00:07 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1530cb0679a0>
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:07 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_84d0fca4'), local_subscribe_addr='ipc:///tmp/9511444a-afdf-4220-aac2-d0231e605465', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:07 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_f4fb0ebf'), local_subscribe_addr='ipc:///tmp/7da8aa6f-3ad8-49da-95f2-d8239fe6d553', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:15 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:15 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:15 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:15 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:15 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:15 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:15 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:15 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3637009) WARNING 06-28 22:00:15 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3637008) WARNING 06-28 22:00:15 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3637006) WARNING 06-28 22:00:15 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3637007) WARNING 06-28 22:00:15 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:15 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_911522ef'), local_subscribe_addr='ipc:///tmp/c75ce496-9112-4e97-84ea-5fb9f862786a', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:15 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:15 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:15 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:15 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:15 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:15 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:15 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3637008) WARNING 06-28 22:00:15 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3637009) WARNING 06-28 22:00:15 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3637006) WARNING 06-28 22:00:15 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:15 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3637007) WARNING 06-28 22:00:15 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:15 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:15 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:15 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:15 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:16 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:16 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:16 [loader.py:458] Loading weights took 0.70 seconds
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:16 [loader.py:458] Loading weights took 0.73 seconds
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:16 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.855716 seconds
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:17 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.858431 seconds
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:17 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.920326 seconds
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:17 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.960038 seconds
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:22 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:22 [backends.py:430] Dynamo bytecode transform time: 5.70 s
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:22 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:22 [backends.py:430] Dynamo bytecode transform time: 5.72 s
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:23 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:23 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:23 [backends.py:430] Dynamo bytecode transform time: 5.77 s
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:23 [backends.py:430] Dynamo bytecode transform time: 5.77 s
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:28 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.422 s
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:28 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.445 s
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:28 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.425 s
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:28 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.461 s
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:00:33 [monitor.py:33] torch.compile takes 5.77 s in total
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:00:33 [monitor.py:33] torch.compile takes 5.77 s in total
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:00:33 [monitor.py:33] torch.compile takes 5.70 s in total
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:00:33 [monitor.py:33] torch.compile takes 5.72 s in total
+ INFO 06-28 22:00:34 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 22:00:34 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 22:00:34 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 22:00:34 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 22:00:34 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 22:00:34 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 22:00:34 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 22:00:34 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=2 pid=3637008) INFO 06-28 22:01:00 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3637009) INFO 06-28 22:01:00 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3637006) INFO 06-28 22:01:00 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3637007) INFO 06-28 22:01:00 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 22:01:00 [core.py:159] init engine (profile, create kv cache, warmup model) took 43.61 seconds
+ INFO 06-28 22:01:01 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 22:13:43 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 22:13:43 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5212|± |0.0279|
+ | | |math_pass@1:1_samples|0.7986|± |0.0389|
+ |mm\|arc_challenge\|0| 0|sem |0.6142|± |0.0250|
+ |mm\|arc_easy\|0 | 0|sem |0.6399|± |0.0156|
+ |mm\|commonsenseqa\|0| 0|sem |0.5000|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7472|± |0.0206|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8500|± |0.0572|
+ |mm\|truthfulqa\|0 | 0|sem |0.3306|± |0.0429|
+
merge_bench/logs/phi_darelinear_1.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 01:21:52 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 01:21:54 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 01:22:01 [config.py:717] This model supports multiple tasks: {'reward', 'generate', 'score', 'classify', 'embed'}. Defaulting to 'generate'.
+ INFO 06-28 01:22:01 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 01:22:01 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 01:22:03 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 01:22:03 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 01:22:03 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_b2217354'), local_subscribe_addr='ipc:///tmp/a3e8bc96-bab3-4345-8a48-730fe105e3e1', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 01:22:03 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1459a4d4fcd0>
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:03 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_3acfa44b'), local_subscribe_addr='ipc:///tmp/4ef91927-c90f-43eb-a030-37c127c3362d', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 01:22:03 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1459a4d4fc10>
+ WARNING 06-28 01:22:03 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1459a4d4f940>
+ WARNING 06-28 01:22:03 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14599f62ca30>
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:03 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_b59b84f7'), local_subscribe_addr='ipc:///tmp/a968292a-9ad5-4bce-89ea-6eaff6531d1c', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:03 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_3d01424e'), local_subscribe_addr='ipc:///tmp/1d75b13e-e9fb-472c-a09d-757ce058c078', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:03 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_1b952b25'), local_subscribe_addr='ipc:///tmp/00d52892-dc3a-4cf8-babd-7ba75c78873b', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:05 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:05 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:05 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:05 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:05 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:05 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:05 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:05 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3509493) WARNING 06-28 01:22:06 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3509490) WARNING 06-28 01:22:06 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3509492) WARNING 06-28 01:22:06 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3509491) WARNING 06-28 01:22:06 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:06 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_55349baa'), local_subscribe_addr='ipc:///tmp/c13ba9bd-36a6-4b0c-a6a9-cccb260e6d14', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:06 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:06 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:06 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:06 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:06 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:06 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3509493) WARNING 06-28 01:22:06 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:06 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3509492) WARNING 06-28 01:22:06 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:06 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3509490) WARNING 06-28 01:22:06 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3509491) WARNING 06-28 01:22:06 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:06 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:06 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:06 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:06 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:07 [loader.py:458] Loading weights took 0.75 seconds
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:07 [loader.py:458] Loading weights took 0.71 seconds
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:07 [loader.py:458] Loading weights took 0.77 seconds
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:07 [loader.py:458] Loading weights took 0.79 seconds
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:07 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.940849 seconds
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:07 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.983826 seconds
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:07 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.939358 seconds
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:07 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 1.013340 seconds
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:13 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:13 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:13 [backends.py:430] Dynamo bytecode transform time: 5.83 s
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:13 [backends.py:430] Dynamo bytecode transform time: 5.83 s
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:13 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:13 [backends.py:430] Dynamo bytecode transform time: 5.83 s
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:13 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:13 [backends.py:430] Dynamo bytecode transform time: 5.87 s
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:18 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.394 s
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:18 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.398 s
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:18 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.453 s
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:18 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.430 s
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:24 [monitor.py:33] torch.compile takes 5.83 s in total
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:24 [monitor.py:33] torch.compile takes 5.83 s in total
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:24 [monitor.py:33] torch.compile takes 5.83 s in total
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:24 [monitor.py:33] torch.compile takes 5.87 s in total
+ INFO 06-28 01:22:25 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 01:22:25 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 01:22:25 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 01:22:25 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 01:22:25 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 01:22:25 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 01:22:25 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 01:22:25 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=0 pid=3509490) INFO 06-28 01:22:51 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3509491) INFO 06-28 01:22:51 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3509493) INFO 06-28 01:22:51 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3509492) INFO 06-28 01:22:51 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 01:22:51 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.42 seconds
+ INFO 06-28 01:22:52 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 01:35:24 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 01:35:24 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5259|± |0.0278|
+ | | |math_pass@1:1_samples|0.7702|± |0.0424|
+ |mm\|arc_challenge\|0| 0|sem |0.6194|± |0.0249|
+ |mm\|arc_easy\|0 | 0|sem |0.6367|± |0.0156|
+ |mm\|commonsenseqa\|0| 0|sem |0.5250|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7405|± |0.0208|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8000|± |0.0641|
+ |mm\|truthfulqa\|0 | 0|sem |0.3223|± |0.0427|
+
merge_bench/logs/phi_darelinear_3.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 01:35:23 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 01:35:25 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 01:35:32 [config.py:717] This model supports multiple tasks: {'score', 'generate', 'classify', 'embed', 'reward'}. Defaulting to 'generate'.
+ INFO 06-28 01:35:32 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 01:35:32 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 01:35:34 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 01:35:34 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 01:35:34 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_f1a88531'), local_subscribe_addr='ipc:///tmp/4b8fdbbe-bfe9-49ea-81e6-583208874c6d', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 01:35:34 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x149cd159bd90>
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:34 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_191ebca1'), local_subscribe_addr='ipc:///tmp/b850358a-2e43-4778-a548-506d0ca4be92', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 01:35:34 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x149c93bd0af0>
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:34 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_dc536c27'), local_subscribe_addr='ipc:///tmp/17308bf4-154b-4d0f-9777-fd0d7e8f6a83', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 01:35:34 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x149cd159bcd0>
+ WARNING 06-28 01:35:34 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x149cd159ba90>
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:34 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_3aa9fd19'), local_subscribe_addr='ipc:///tmp/d6edfdbd-538b-4086-8421-706a2dbd4119', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:34 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_99ba7d8f'), local_subscribe_addr='ipc:///tmp/714e0361-fdc5-4bb8-b328-ecc573c57fd8', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:36 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:36 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:36 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:36 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:36 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:36 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:36 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:36 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3512149) WARNING 06-28 01:35:37 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3512148) WARNING 06-28 01:35:37 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3512147) WARNING 06-28 01:35:37 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3512146) WARNING 06-28 01:35:37 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:37 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_431a8311'), local_subscribe_addr='ipc:///tmp/8b3b34f6-f867-4925-b5a0-7cd3f6afe61c', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:37 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:37 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:37 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:37 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:37 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:37 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3512149) WARNING 06-28 01:35:37 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:37 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3512148) WARNING 06-28 01:35:37 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3512147) WARNING 06-28 01:35:37 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:37 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3512146) WARNING 06-28 01:35:37 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:37 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:37 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:37 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:37 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:38 [loader.py:458] Loading weights took 0.70 seconds
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:38 [loader.py:458] Loading weights took 0.70 seconds
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:38 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:38 [loader.py:458] Loading weights took 0.73 seconds
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:38 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.892795 seconds
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:38 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.888533 seconds
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:38 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.959265 seconds
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:38 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.910843 seconds
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:44 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:44 [backends.py:430] Dynamo bytecode transform time: 5.58 s
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:44 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:44 [backends.py:430] Dynamo bytecode transform time: 5.62 s
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:44 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:44 [backends.py:430] Dynamo bytecode transform time: 5.74 s
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:44 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:44 [backends.py:430] Dynamo bytecode transform time: 5.80 s
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:49 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.361 s
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:49 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.393 s
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:49 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.462 s
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:49 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.444 s
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:35:55 [monitor.py:33] torch.compile takes 5.62 s in total
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:35:55 [monitor.py:33] torch.compile takes 5.74 s in total
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:35:55 [monitor.py:33] torch.compile takes 5.58 s in total
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:35:55 [monitor.py:33] torch.compile takes 5.80 s in total
+ INFO 06-28 01:35:56 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 01:35:56 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 01:35:56 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 01:35:56 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 01:35:56 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 01:35:56 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 01:35:56 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 01:35:56 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=2 pid=3512148) INFO 06-28 01:36:22 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3512149) INFO 06-28 01:36:22 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3512146) INFO 06-28 01:36:22 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3512147) INFO 06-28 01:36:22 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 01:36:22 [core.py:159] init engine (profile, create kv cache, warmup model) took 43.87 seconds
+ INFO 06-28 01:36:22 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 01:48:58 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 01:48:58 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5141|± |0.0280|
+ | | |math_pass@1:1_samples|0.7988|± |0.0371|
+ |mm\|arc_challenge\|0| 0|sem |0.5801|± |0.0253|
+ |mm\|arc_easy\|0 | 0|sem |0.6209|± |0.0158|
+ |mm\|commonsenseqa\|0| 0|sem |0.5250|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7226|± |0.0212|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8750|± |0.0530|
+ |mm\|truthfulqa\|0 | 0|sem |0.3306|± |0.0429|
+
merge_bench/logs/phi_darelinear_5.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 01:48:57 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 01:48:59 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 01:49:06 [config.py:717] This model supports multiple tasks: {'classify', 'generate', 'score', 'reward', 'embed'}. Defaulting to 'generate'.
+ INFO 06-28 01:49:06 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 01:49:06 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 01:49:07 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 01:49:07 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 01:49:07 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_e6ed1dc2'), local_subscribe_addr='ipc:///tmp/7eae7c1e-515b-41c5-b887-41e9ee2cf4ea', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 01:49:07 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14b8ef05bd90>
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:07 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_1080e32a'), local_subscribe_addr='ipc:///tmp/7db8a905-31e6-402f-8555-ad1d4729fe0e', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 01:49:07 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14b8ed628af0>
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:07 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_a26fcf28'), local_subscribe_addr='ipc:///tmp/bd04efc5-4fbe-476e-9a5e-f560a6840f61', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 01:49:07 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14b8ef05ba90>
+ WARNING 06-28 01:49:07 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14b8ef05bcd0>
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:07 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_189894f8'), local_subscribe_addr='ipc:///tmp/d29cdad2-23d9-499b-9441-e80446dc5912', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:07 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_508e7988'), local_subscribe_addr='ipc:///tmp/96aceb6e-f7bf-41d1-a76c-0c8d028246dc', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:10 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:10 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:10 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:10 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:10 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:10 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:10 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:10 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3515448) WARNING 06-28 01:49:10 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3515449) WARNING 06-28 01:49:10 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3515446) WARNING 06-28 01:49:10 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3515447) WARNING 06-28 01:49:10 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:10 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_110906cd'), local_subscribe_addr='ipc:///tmp/bae02c6c-f9ee-49c6-81f8-e4d4afacfdf2', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:11 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:11 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:11 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:11 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:11 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:11 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3515448) WARNING 06-28 01:49:11 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3515449) WARNING 06-28 01:49:11 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3515446) WARNING 06-28 01:49:11 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:11 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:11 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:11 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:11 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3515447) WARNING 06-28 01:49:11 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:11 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:11 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:11 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:11 [loader.py:458] Loading weights took 0.72 seconds
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:11 [loader.py:458] Loading weights took 0.70 seconds
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:11 [loader.py:458] Loading weights took 0.75 seconds
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:12 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.911455 seconds
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:12 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.883583 seconds
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:12 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.973162 seconds
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:12 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.938663 seconds
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:17 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:17 [backends.py:430] Dynamo bytecode transform time: 5.61 s
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:18 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:18 [backends.py:430] Dynamo bytecode transform time: 5.72 s
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:18 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:18 [backends.py:430] Dynamo bytecode transform time: 5.78 s
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:18 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:18 [backends.py:430] Dynamo bytecode transform time: 5.81 s
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:22 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.387 s
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:23 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.424 s
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:23 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.454 s
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:23 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.450 s
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:28 [monitor.py:33] torch.compile takes 5.78 s in total
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:28 [monitor.py:33] torch.compile takes 5.72 s in total
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:28 [monitor.py:33] torch.compile takes 5.81 s in total
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:28 [monitor.py:33] torch.compile takes 5.61 s in total
+ INFO 06-28 01:49:30 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 01:49:30 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 01:49:30 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 01:49:30 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 01:49:30 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 01:49:30 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 01:49:30 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 01:49:30 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=1 pid=3515447) INFO 06-28 01:49:55 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3515449) INFO 06-28 01:49:55 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3515448) INFO 06-28 01:49:55 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3515446) INFO 06-28 01:49:55 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ INFO 06-28 01:49:55 [core.py:159] init engine (profile, create kv cache, warmup model) took 43.09 seconds
+ INFO 06-28 01:49:55 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 02:02:37 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 02:02:37 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5234|± |0.0280|
+ | | |math_pass@1:1_samples|0.7316|± |0.0462|
+ |mm\|arc_challenge\|0| 0|sem |0.6089|± |0.0250|
+ |mm\|arc_easy\|0 | 0|sem |0.6315|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.5062|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7383|± |0.0208|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.7250|± |0.0715|
+ |mm\|truthfulqa\|0 | 0|sem |0.3471|± |0.0435|
+
merge_bench/logs/phi_darelinear_7.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 02:02:36 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 02:02:38 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 02:02:45 [config.py:717] This model supports multiple tasks: {'reward', 'classify', 'score', 'generate', 'embed'}. Defaulting to 'generate'.
+ INFO 06-28 02:02:45 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 02:02:45 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 02:02:46 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 02:02:46 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 02:02:46 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_e30e996e'), local_subscribe_addr='ipc:///tmp/82a2a047-ddbc-4d57-8204-54b364f14611', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:02:47 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1465321a7df0>
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:02:47 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_e04fbba5'), local_subscribe_addr='ipc:///tmp/d2b21b36-91d6-4916-b86d-bd92d2be12f8', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:02:47 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14653077cb50>
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:02:47 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_cf8334ce'), local_subscribe_addr='ipc:///tmp/565158bd-bf30-47f2-b1bf-3d8242588a09', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:02:47 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1465321a7d30>
+ WARNING 06-28 02:02:47 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1465321a7af0>
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:02:47 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_c3766619'), local_subscribe_addr='ipc:///tmp/25817364-7ca4-4212-b549-8c2349e2cdf9', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:02:47 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_c40b7e23'), local_subscribe_addr='ipc:///tmp/d70aee33-5ed7-4dfd-ad56-427d114db39a', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:02:49 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:02:49 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:02:49 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:02:49 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:02:49 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:02:49 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:02:49 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:02:49 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3520096) WARNING 06-28 02:02:49 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3520097) WARNING 06-28 02:02:49 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3520095) WARNING 06-28 02:02:49 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3520094) WARNING 06-28 02:02:49 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:02:49 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_43887bb8'), local_subscribe_addr='ipc:///tmp/9f047318-5779-440e-81ee-e007b90cf083', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:02:49 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:02:49 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:02:49 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:02:49 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:02:49 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:02:49 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3520097) WARNING 06-28 02:02:49 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3520096) WARNING 06-28 02:02:49 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:02:49 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:02:49 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3520095) WARNING 06-28 02:02:49 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3520094) WARNING 06-28 02:02:49 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:02:49 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:02:49 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:02:49 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:02:49 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:02:50 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:02:50 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:02:50 [loader.py:458] Loading weights took 0.71 seconds
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:02:50 [loader.py:458] Loading weights took 0.74 seconds
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:02:51 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.870194 seconds
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:02:51 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.857993 seconds
52
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:02:51 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.926725 seconds
53
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:02:51 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.956879 seconds
54
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:02:56 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
55
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:02:56 [backends.py:430] Dynamo bytecode transform time: 5.65 s
56
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:02:56 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
57
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:02:56 [backends.py:430] Dynamo bytecode transform time: 5.67 s
58
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:02:57 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
59
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:02:57 [backends.py:430] Dynamo bytecode transform time: 5.73 s
60
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:02:57 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
61
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:02:57 [backends.py:430] Dynamo bytecode transform time: 5.77 s
62
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:03:02 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.403 s
63
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:03:02 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.440 s
64
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:03:02 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.459 s
65
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:03:02 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.458 s
66
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:03:07 [monitor.py:33] torch.compile takes 5.77 s in total
67
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:03:07 [monitor.py:33] torch.compile takes 5.73 s in total
68
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:03:07 [monitor.py:33] torch.compile takes 5.67 s in total
69
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:03:07 [monitor.py:33] torch.compile takes 5.65 s in total
70
+ INFO 06-28 02:03:09 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
71
+ INFO 06-28 02:03:09 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
72
+ INFO 06-28 02:03:09 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
73
+ INFO 06-28 02:03:09 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
74
+ INFO 06-28 02:03:09 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
75
+ INFO 06-28 02:03:09 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
76
+ INFO 06-28 02:03:09 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
77
+ INFO 06-28 02:03:09 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
78
+ (VllmWorker rank=2 pid=3520096) INFO 06-28 02:03:34 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
79
+ (VllmWorker rank=3 pid=3520097) INFO 06-28 02:03:34 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
80
+ (VllmWorker rank=1 pid=3520095) INFO 06-28 02:03:34 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
81
+ (VllmWorker rank=0 pid=3520094) INFO 06-28 02:03:34 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
82
+ INFO 06-28 02:03:34 [core.py:159] init engine (profile, create kv cache, warmup model) took 43.12 seconds
83
+ INFO 06-28 02:03:34 [core_client.py:439] Core engine process 0 ready.
84
+ INFO 06-28 02:16:05 [importing.py:53] Triton module has been replaced with a placeholder.
85
+ INFO 06-28 02:16:05 [__init__.py:239] Automatically detected platform cuda.
86
+ | Task |Version| Metric |Value | |Stderr|
87
+ |------------------|------:|---------------------|-----:|---|-----:|
88
+ |all | |sem |0.5132|± |0.0281|
89
+ | | |math_pass@1:1_samples|0.7533|± |0.0439|
90
+ |mm\|arc_challenge\|0| 0|sem |0.5932|± |0.0252|
91
+ |mm\|arc_easy\|0 | 0|sem |0.6220|± |0.0158|
92
+ |mm\|commonsenseqa\|0| 0|sem |0.4906|± |0.0280|
93
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7315|± |0.0210|
94
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.7750|± |0.0669|
95
+ |mm\|truthfulqa\|0 | 0|sem |0.3471|± |0.0435|
96
+
merge_bench/logs/phi_darelinear_9.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 02:16:04 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 02:16:06 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 02:16:13 [config.py:717] This model supports multiple tasks: {'embed', 'generate', 'reward', 'score', 'classify'}. Defaulting to 'generate'.
+ INFO 06-28 02:16:13 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 02:16:13 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 02:16:15 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 02:16:15 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 02:16:15 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_3d0d1676'), local_subscribe_addr='ipc:///tmp/53082177-1424-40f4-ba33-0aea5b7c1554', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:16:15 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x151621b93df0>
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:15 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_a80e1a15'), local_subscribe_addr='ipc:///tmp/99211d1d-a8ab-458b-ad09-678db2b1d0cc', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:16:15 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1516204d8b50>
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:15 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_8a177959'), local_subscribe_addr='ipc:///tmp/8a42daca-2ec9-48f4-9764-9e1d8675c552', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:16:15 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x151621b93d30>
+ WARNING 06-28 02:16:15 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x151621b93af0>
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:15 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_414db17a'), local_subscribe_addr='ipc:///tmp/1608ff44-ab8e-4556-af45-11be3dbe1b61', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:15 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_feaba932'), local_subscribe_addr='ipc:///tmp/c9cc0f59-c525-49dc-8ca9-f743a3e96e03', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:17 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:17 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:17 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:17 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:17 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:17 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:17 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:17 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3523120) WARNING 06-28 02:16:17 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3523119) WARNING 06-28 02:16:17 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3523118) WARNING 06-28 02:16:17 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3523117) WARNING 06-28 02:16:17 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:17 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_f636ee5c'), local_subscribe_addr='ipc:///tmp/1bb8e3a6-99a5-43af-97d6-b5e194c11329', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:17 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:17 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:17 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:17 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:17 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:17 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3523120) WARNING 06-28 02:16:17 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3523119) WARNING 06-28 02:16:17 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:17 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:17 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3523118) WARNING 06-28 02:16:17 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3523117) WARNING 06-28 02:16:17 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:17 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:17 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:17 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:17 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:18 [loader.py:458] Loading weights took 0.65 seconds
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:18 [loader.py:458] Loading weights took 0.74 seconds
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:18 [loader.py:458] Loading weights took 0.71 seconds
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:18 [loader.py:458] Loading weights took 0.75 seconds
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:19 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.934303 seconds
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:19 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.928185 seconds
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:19 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.946575 seconds
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:19 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.890070 seconds
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:24 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:24 [backends.py:430] Dynamo bytecode transform time: 5.66 s
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:24 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:24 [backends.py:430] Dynamo bytecode transform time: 5.69 s
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:25 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:25 [backends.py:430] Dynamo bytecode transform time: 5.85 s
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:25 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:25 [backends.py:430] Dynamo bytecode transform time: 5.88 s
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:29 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.407 s
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:30 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.415 s
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:30 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.419 s
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:30 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.421 s
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:16:36 [monitor.py:33] torch.compile takes 5.66 s in total
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:16:36 [monitor.py:33] torch.compile takes 5.88 s in total
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:16:36 [monitor.py:33] torch.compile takes 5.85 s in total
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:16:36 [monitor.py:33] torch.compile takes 5.69 s in total
+ INFO 06-28 02:16:37 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 02:16:37 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 02:16:37 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 02:16:37 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 02:16:37 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 02:16:37 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 02:16:37 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 02:16:37 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=2 pid=3523119) INFO 06-28 02:17:03 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3523118) INFO 06-28 02:17:03 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3523120) INFO 06-28 02:17:03 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3523117) INFO 06-28 02:17:03 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 02:17:03 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.03 seconds
+ INFO 06-28 02:17:03 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 02:29:52 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 02:29:52 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5253|± |0.0279|
+ | | |math_pass@1:1_samples|0.7848|± |0.0420|
+ |mm\|arc_challenge\|0| 0|sem |0.6142|± |0.0250|
+ |mm\|arc_easy\|0 | 0|sem |0.6378|± |0.0156|
+ |mm\|commonsenseqa\|0| 0|sem |0.5188|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7696|± |0.0199|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8000|± |0.0641|
+ |mm\|truthfulqa\|0 | 0|sem |0.3306|± |0.0429|
+
merge_bench/logs/phi_linear_1.log ADDED
@@ -0,0 +1,100 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ INFO 06-27 02:28:22 [__init__.py:239] Automatically detected platform cuda.
2
+ INFO 06-27 02:28:24 [config.py:209] Replacing legacy 'type' key with 'rope_type'
3
+ INFO 06-27 02:28:30 [config.py:717] This model supports multiple tasks: {'score', 'reward', 'generate', 'classify', 'embed'}. Defaulting to 'generate'.
4
+ INFO 06-27 02:28:31 [config.py:1770] Defaulting to use mp for distributed inference
5
+ INFO 06-27 02:28:31 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
6
+ INFO 06-27 02:28:32 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
7
+ WARNING 06-27 02:28:32 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
8
+ INFO 06-27 02:28:32 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_64c2b29a'), local_subscribe_addr='ipc:///tmp/4c604289-52fe-4fbf-9aff-29f47c927adc', remote_subscribe_addr=None, remote_addr_ipv6=False)
9
+ WARNING 06-27 02:28:32 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ba9dc07d30>
10
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:28:32 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_ef238695'), local_subscribe_addr='ipc:///tmp/f8eead83-296e-4a81-beb2-2f42053e6457', remote_subscribe_addr=None, remote_addr_ipv6=False)
11
+ WARNING 06-27 02:28:32 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ba9c2aca90>
12
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:32 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_41841aee'), local_subscribe_addr='ipc:///tmp/9a6df31c-9333-4ead-9b5d-3f7288cd7b00', remote_subscribe_addr=None, remote_addr_ipv6=False)
13
+ WARNING 06-27 02:28:32 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ba9dc07c70>
14
+ WARNING 06-27 02:28:32 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ba9dc079a0>
15
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:28:32 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_7c043b9e'), local_subscribe_addr='ipc:///tmp/360601b1-8515-49b4-955a-b18734598b9d', remote_subscribe_addr=None, remote_addr_ipv6=False)
16
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:28:32 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_d3023a17'), local_subscribe_addr='ipc:///tmp/82620656-c113-4dc5-bd83-39d18cd8bf2d', remote_subscribe_addr=None, remote_addr_ipv6=False)
17
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:28:34 [utils.py:1055] Found nccl from library libnccl.so.2
18
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:28:34 [pynccl.py:69] vLLM is using nccl==2.21.5
19
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:34 [utils.py:1055] Found nccl from library libnccl.so.2
20
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:34 [pynccl.py:69] vLLM is using nccl==2.21.5
21
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:28:34 [utils.py:1055] Found nccl from library libnccl.so.2
22
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:28:34 [utils.py:1055] Found nccl from library libnccl.so.2
23
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:28:34 [pynccl.py:69] vLLM is using nccl==2.21.5
24
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:28:34 [pynccl.py:69] vLLM is using nccl==2.21.5
25
+ (VllmWorker rank=2 pid=3429733) WARNING 06-27 02:28:35 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
26
+ (VllmWorker rank=3 pid=3429734) WARNING 06-27 02:28:35 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
27
+ (VllmWorker rank=1 pid=3429732) WARNING 06-27 02:28:35 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
28
+ (VllmWorker rank=0 pid=3429731) WARNING 06-27 02:28:35 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
29
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:35 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_1e3a61da'), local_subscribe_addr='ipc:///tmp/b4f1a266-7798-40fa-82df-2c25b5062d45', remote_subscribe_addr=None, remote_addr_ipv6=False)
30
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:28:35 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
31
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:28:35 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
32
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:28:35 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
33
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:35 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
34
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:28:35 [cuda.py:221] Using Flash Attention backend on V1 engine.
35
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:28:35 [cuda.py:221] Using Flash Attention backend on V1 engine.
36
+ (VllmWorker rank=3 pid=3429734) WARNING 06-27 02:28:35 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
37
+ (VllmWorker rank=2 pid=3429733) WARNING 06-27 02:28:35 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
38
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:35 [cuda.py:221] Using Flash Attention backend on V1 engine.
39
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:28:35 [cuda.py:221] Using Flash Attention backend on V1 engine.
40
+ (VllmWorker rank=0 pid=3429731) WARNING 06-27 02:28:35 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
41
+ (VllmWorker rank=1 pid=3429732) WARNING 06-27 02:28:35 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
42
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:28:35 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
43
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:28:35 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
44
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:28:35 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
45
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:35 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
46
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:28:36 [loader.py:458] Loading weights took 0.68 seconds
47
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:28:36 [loader.py:458] Loading weights took 0.68 seconds
48
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:28:36 [loader.py:458] Loading weights took 0.67 seconds
49
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:36 [loader.py:458] Loading weights took 0.72 seconds
50
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:28:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.866745 seconds
51
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:28:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.870209 seconds
52
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:28:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.912511 seconds
53
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.957005 seconds
54
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:28:42 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
55
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:28:42 [backends.py:430] Dynamo bytecode transform time: 5.68 s
56
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:28:42 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
57
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:28:42 [backends.py:430] Dynamo bytecode transform time: 5.73 s
58
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:28:42 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
59
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:28:42 [backends.py:430] Dynamo bytecode transform time: 5.80 s
60
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:42 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
61
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:42 [backends.py:430] Dynamo bytecode transform time: 5.88 s
62
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:28:46 [backends.py:136] Cache the graph of shape None for later use
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:28:46 [backends.py:136] Cache the graph of shape None for later use
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:28:46 [backends.py:136] Cache the graph of shape None for later use
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:28:47 [backends.py:136] Cache the graph of shape None for later use
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:29:07 [backends.py:148] Compiling a graph for general shape takes 24.70 s
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:29:07 [backends.py:148] Compiling a graph for general shape takes 24.72 s
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:29:07 [backends.py:148] Compiling a graph for general shape takes 24.83 s
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:29:08 [backends.py:148] Compiling a graph for general shape takes 24.88 s
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:29:29 [monitor.py:33] torch.compile takes 30.76 s in total
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:29:29 [monitor.py:33] torch.compile takes 30.63 s in total
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:29:29 [monitor.py:33] torch.compile takes 30.37 s in total
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:29:29 [monitor.py:33] torch.compile takes 30.45 s in total
+ INFO 06-27 02:29:31 [kv_cache_utils.py:634] GPU KV cache size: 1,999,536 tokens
+ INFO 06-27 02:29:31 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 976.34x
+ INFO 06-27 02:29:31 [kv_cache_utils.py:634] GPU KV cache size: 1,999,280 tokens
+ INFO 06-27 02:29:31 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 976.21x
+ INFO 06-27 02:29:31 [kv_cache_utils.py:634] GPU KV cache size: 1,999,280 tokens
+ INFO 06-27 02:29:31 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 976.21x
+ INFO 06-27 02:29:31 [kv_cache_utils.py:634] GPU KV cache size: 2,000,560 tokens
+ INFO 06-27 02:29:31 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 976.84x
+ (VllmWorker rank=3 pid=3429734) INFO 06-27 02:30:01 [gpu_model_runner.py:1686] Graph capturing finished in 30 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3429733) INFO 06-27 02:30:01 [gpu_model_runner.py:1686] Graph capturing finished in 30 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3429732) INFO 06-27 02:30:01 [gpu_model_runner.py:1686] Graph capturing finished in 30 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3429731) INFO 06-27 02:30:01 [gpu_model_runner.py:1686] Graph capturing finished in 30 secs, took 2.96 GiB
+ INFO 06-27 02:30:01 [core.py:159] init engine (profile, create kv cache, warmup model) took 84.99 seconds
+ INFO 06-27 02:30:01 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-27 02:42:44 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-27 02:42:44 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5197|± |0.0280|
+ | | |math_pass@1:1_samples|0.7193|± |0.0465|
+ |mm\|arc_challenge\|0| 0|sem |0.5906|± |0.0252|
+ |mm\|arc_easy\|0 | 0|sem |0.6367|± |0.0156|
+ |mm\|commonsenseqa\|0| 0|sem |0.5125|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7136|± |0.0214|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.7250|± |0.0715|
+ |mm\|truthfulqa\|0 | 0|sem |0.3388|± |0.0432|
+
merge_bench/logs/phi_linear_2.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-27 02:42:43 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-27 02:42:45 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-27 02:42:52 [config.py:717] This model supports multiple tasks: {'classify', 'reward', 'generate', 'score', 'embed'}. Defaulting to 'generate'.
+ INFO 06-27 02:42:52 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-27 02:42:52 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-27 02:42:54 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-27 02:42:54 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-27 02:42:54 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_f75c7f2e'), local_subscribe_addr='ipc:///tmp/128f64f5-39ca-4318-b6d6-702afc25e764', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 02:42:54 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ea69747d60>
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:42:54 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_8c6a899a'), local_subscribe_addr='ipc:///tmp/bfe1aa7b-3caf-4823-aee3-3b040821364d', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 02:42:54 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ea5bcd0ac0>
+ WARNING 06-27 02:42:54 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ea69747ca0>
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:42:54 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_91075db3'), local_subscribe_addr='ipc:///tmp/fb40d5e0-49c2-414a-b62e-75f1e168124f', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 02:42:54 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ea697479d0>
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:42:54 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_56dad22d'), local_subscribe_addr='ipc:///tmp/bdc6a620-d195-4169-96e6-e1afc95097d1', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:42:54 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_bbd791fe'), local_subscribe_addr='ipc:///tmp/8bc2120a-e48f-4551-be13-deeee6692171', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:42:56 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:42:56 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:42:56 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:42:56 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:42:56 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:42:56 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:42:56 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:42:56 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3438994) WARNING 06-27 02:42:56 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3438993) WARNING 06-27 02:42:56 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3438992) WARNING 06-27 02:42:56 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3438991) WARNING 06-27 02:42:56 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:42:56 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_835e5bd0'), local_subscribe_addr='ipc:///tmp/53797ae2-24e6-41c5-a70a-50496a13bf1a', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:42:56 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:42:56 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:42:56 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3438992) WARNING 06-27 02:42:56 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:42:56 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:42:56 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=0 pid=3438991) WARNING 06-27 02:42:56 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:42:56 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:42:56 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3438994) WARNING 06-27 02:42:56 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:42:56 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3438993) WARNING 06-27 02:42:56 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:42:56 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:42:56 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:42:56 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:42:56 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:42:57 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:42:57 [loader.py:458] Loading weights took 0.66 seconds
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:42:57 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:42:57 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:42:57 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.881809 seconds
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:42:57 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.906570 seconds
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:42:58 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.904886 seconds
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:42:58 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.923940 seconds
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:43:03 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:43:03 [backends.py:430] Dynamo bytecode transform time: 5.62 s
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:43:04 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:43:04 [backends.py:430] Dynamo bytecode transform time: 5.69 s
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:43:04 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:43:04 [backends.py:430] Dynamo bytecode transform time: 5.76 s
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:43:04 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:43:04 [backends.py:430] Dynamo bytecode transform time: 5.94 s
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:43:09 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.426 s
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:43:09 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.417 s
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:43:09 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.500 s
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:43:09 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.542 s
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:43:15 [monitor.py:33] torch.compile takes 5.69 s in total
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:43:15 [monitor.py:33] torch.compile takes 5.94 s in total
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:43:15 [monitor.py:33] torch.compile takes 5.62 s in total
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:43:15 [monitor.py:33] torch.compile takes 5.76 s in total
+ INFO 06-27 02:43:16 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-27 02:43:16 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-27 02:43:16 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 02:43:16 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 02:43:16 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 02:43:16 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 02:43:16 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-27 02:43:16 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3438994) INFO 06-27 02:43:42 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3438993) INFO 06-27 02:43:42 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3438992) INFO 06-27 02:43:42 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3438991) INFO 06-27 02:43:42 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-27 02:43:42 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.34 seconds
+ INFO 06-27 02:43:43 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-27 02:56:22 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-27 02:56:22 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5228|± |0.0278|
+ | | |math_pass@1:1_samples|0.7669|± |0.0425|
+ |mm\|arc_challenge\|0| 0|sem |0.6168|± |0.0249|
+ |mm\|arc_easy\|0 | 0|sem |0.6241|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.5281|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7338|± |0.0209|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8000|± |0.0641|
+ |mm\|truthfulqa\|0 | 0|sem |0.3223|± |0.0427|
+
merge_bench/logs/phi_linear_3.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-27 02:56:21 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-27 02:56:23 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-27 02:56:30 [config.py:717] This model supports multiple tasks: {'classify', 'generate', 'reward', 'embed', 'score'}. Defaulting to 'generate'.
+ INFO 06-27 02:56:30 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-27 02:56:30 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-27 02:56:32 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-27 02:56:32 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-27 02:56:32 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_033fa3da'), local_subscribe_addr='ipc:///tmp/a1051549-2dec-4688-9494-0718440d0c2a', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 02:56:32 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14d51a9dbd60>
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:32 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_6383bb93'), local_subscribe_addr='ipc:///tmp/94832504-1a54-45c4-a0b8-da03a934d306', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 02:56:32 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14d518db4ac0>
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:32 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_4b8e0cc5'), local_subscribe_addr='ipc:///tmp/9db3c5c5-8593-4669-b9ec-1b2dfcd93261', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 02:56:32 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14d51a9db9d0>
+ WARNING 06-27 02:56:32 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14d51a9dbca0>
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:32 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_b6e0d56f'), local_subscribe_addr='ipc:///tmp/3ebc8f6d-94f2-4041-b265-1596b1a7e5c3', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:32 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_43b80aeb'), local_subscribe_addr='ipc:///tmp/33b51447-7fe1-46a4-a147-ba820a37718b', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:34 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:34 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:34 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:34 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:34 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:34 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:34 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:34 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3447778) WARNING 06-27 02:56:34 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3447776) WARNING 06-27 02:56:34 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3447772) WARNING 06-27 02:56:34 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3447773) WARNING 06-27 02:56:34 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:34 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_4f17e707'), local_subscribe_addr='ipc:///tmp/cc1369ad-3d5f-4a77-a9bf-bfb609385bef', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:34 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:34 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:34 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:34 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:34 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:34 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3447776) WARNING 06-27 02:56:34 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3447778) WARNING 06-27 02:56:34 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:34 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:34 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3447772) WARNING 06-27 02:56:34 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3447773) WARNING 06-27 02:56:34 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:34 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:34 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:34 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:34 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:35 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:35 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:35 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:35 [loader.py:458] Loading weights took 0.70 seconds
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.878122 seconds
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.878944 seconds
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.937358 seconds
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.907204 seconds
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:41 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:41 [backends.py:430] Dynamo bytecode transform time: 5.68 s
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:42 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:42 [backends.py:430] Dynamo bytecode transform time: 5.80 s
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:42 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:42 [backends.py:430] Dynamo bytecode transform time: 5.82 s
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:42 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:42 [backends.py:430] Dynamo bytecode transform time: 5.88 s
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:47 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.405 s
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:47 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.381 s
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:47 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.368 s
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:47 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.486 s
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:56:52 [monitor.py:33] torch.compile takes 5.82 s in total
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:56:52 [monitor.py:33] torch.compile takes 5.68 s in total
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:56:52 [monitor.py:33] torch.compile takes 5.80 s in total
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:56:52 [monitor.py:33] torch.compile takes 5.88 s in total
+ INFO 06-27 02:56:54 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-27 02:56:54 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-27 02:56:54 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
73
+ INFO 06-27 02:56:54 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
74
+ INFO 06-27 02:56:54 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
75
+ INFO 06-27 02:56:54 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
76
+ INFO 06-27 02:56:54 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
77
+ INFO 06-27 02:56:54 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
78
+ (VllmWorker rank=3 pid=3447778) INFO 06-27 02:57:20 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
79
+ (VllmWorker rank=2 pid=3447776) INFO 06-27 02:57:20 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
80
+ (VllmWorker rank=0 pid=3447772) INFO 06-27 02:57:20 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
81
+ (VllmWorker rank=1 pid=3447773) INFO 06-27 02:57:20 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
82
+ INFO 06-27 02:57:20 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.10 seconds
83
+ INFO 06-27 02:57:20 [core_client.py:439] Core engine process 0 ready.
84
+ INFO 06-27 03:10:01 [importing.py:53] Triton module has been replaced with a placeholder.
85
+ INFO 06-27 03:10:01 [__init__.py:239] Automatically detected platform cuda.
86
+ | Task |Version| Metric |Value | |Stderr|
87
+ |------------------|------:|---------------------|-----:|---|-----:|
88
+ |all | |sem |0.5275|± |0.0280|
89
+ | | |math_pass@1:1_samples|0.7814|± |0.0421|
90
+ |mm\|arc_challenge\|0| 0|sem |0.6115|± |0.0250|
91
+ |mm\|arc_easy\|0 | 0|sem |0.6315|± |0.0157|
92
+ |mm\|commonsenseqa\|0| 0|sem |0.5281|± |0.0280|
93
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7629|± |0.0201|
94
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8000|± |0.0641|
95
+ |mm\|truthfulqa\|0 | 0|sem |0.3388|± |0.0432|
96
+
merge_bench/logs/phi_linear_4.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-27 03:10:00 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-27 03:10:02 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-27 03:10:08 [config.py:717] This model supports multiple tasks: {'embed', 'classify', 'score', 'generate', 'reward'}. Defaulting to 'generate'.
+ INFO 06-27 03:10:09 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-27 03:10:09 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-27 03:10:10 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-27 03:10:10 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-27 03:10:10 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_8f74b1a6'), local_subscribe_addr='ipc:///tmp/d6a3a804-af20-451f-bbde-88cafad6d09e', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:10:10 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14d5646ffcd0>
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:10 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_b833bb60'), local_subscribe_addr='ipc:///tmp/9b12d81b-f5a0-4a07-abff-2a89c6e7efc3', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:10:10 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14d552c40a30>
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:10 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_df1b06e4'), local_subscribe_addr='ipc:///tmp/5fbbbe13-7f22-462d-806c-b109618e0050', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:10:10 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14d5646ffc10>
+ WARNING 06-27 03:10:10 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14d5646ff940>
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:10 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_fd88f513'), local_subscribe_addr='ipc:///tmp/5519fa92-0b12-4661-87d3-60cc8720f215', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:10 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_b47742a0'), local_subscribe_addr='ipc:///tmp/df1ac424-4f25-4f30-8aa8-4fa48b068a28', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:12 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:12 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:12 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:12 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:12 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:12 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:12 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:12 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3455003) WARNING 06-27 03:10:13 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3455001) WARNING 06-27 03:10:13 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3455004) WARNING 06-27 03:10:13 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3455002) WARNING 06-27 03:10:13 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:13 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_c686f274'), local_subscribe_addr='ipc:///tmp/6acb8797-fe95-4e20-b79c-c3db8cabe649', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:13 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:13 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:13 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:13 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:13 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:13 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:13 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:13 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3455003) WARNING 06-27 03:10:13 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3455004) WARNING 06-27 03:10:13 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3455001) WARNING 06-27 03:10:13 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3455002) WARNING 06-27 03:10:13 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:13 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:13 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:13 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:13 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:14 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:14 [loader.py:458] Loading weights took 0.70 seconds
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:14 [loader.py:458] Loading weights took 0.70 seconds
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:14 [loader.py:458] Loading weights took 0.72 seconds
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:14 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.892547 seconds
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:14 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.893978 seconds
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:14 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.888170 seconds
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:15 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.959422 seconds
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:20 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:20 [backends.py:430] Dynamo bytecode transform time: 5.52 s
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:20 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:20 [backends.py:430] Dynamo bytecode transform time: 5.59 s
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:20 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:20 [backends.py:430] Dynamo bytecode transform time: 5.64 s
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:20 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:20 [backends.py:430] Dynamo bytecode transform time: 5.79 s
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:25 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.333 s
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:25 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.369 s
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:25 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.354 s
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:26 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.428 s
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:31 [monitor.py:33] torch.compile takes 5.59 s in total
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:31 [monitor.py:33] torch.compile takes 5.52 s in total
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:31 [monitor.py:33] torch.compile takes 5.79 s in total
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:31 [monitor.py:33] torch.compile takes 5.64 s in total
+ INFO 06-27 03:10:32 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-27 03:10:32 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-27 03:10:32 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 03:10:32 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 03:10:32 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 03:10:32 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 03:10:32 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-27 03:10:32 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=2 pid=3455003) INFO 06-27 03:10:58 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3455001) INFO 06-27 03:10:58 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3455004) INFO 06-27 03:10:58 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3455002) INFO 06-27 03:10:58 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-27 03:10:59 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.01 seconds
+ INFO 06-27 03:10:59 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-27 03:23:34 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-27 03:23:34 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5022|± |0.0273|
+ | | |math_pass@1:1_samples|0.7850|± |0.0407|
+ |mm\|arc_challenge\|0| 0|sem |0.6142|± |0.0250|
+ |mm\|arc_easy\|0 | 0|sem |0.6283|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.4938|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7450|± |0.0206|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8250|± |0.0608|
+ |mm\|truthfulqa\|0 | 0|sem |0.2727|± |0.0407|
+
merge_bench/logs/phi_linear_5.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-27 03:23:33 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-27 03:23:35 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-27 03:23:42 [config.py:717] This model supports multiple tasks: {'score', 'embed', 'reward', 'generate', 'classify'}. Defaulting to 'generate'.
+ INFO 06-27 03:23:42 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-27 03:23:42 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-27 03:23:43 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-27 03:23:43 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-27 03:23:43 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_f203064f'), local_subscribe_addr='ipc:///tmp/580126a8-c433-4576-8303-f43bfd9ef804', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:23:44 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x15411adc7d60>
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:23:44 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_22ad3f81'), local_subscribe_addr='ipc:///tmp/17f69624-7605-41fa-9a3c-7ef518f34993', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:23:44 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x154119194ac0>
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:44 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_48bc76d6'), local_subscribe_addr='ipc:///tmp/d0301c4f-b794-4755-a83d-b48f0a57b58d', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:23:44 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x15411adc7ca0>
+ WARNING 06-27 03:23:44 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x15411adc79d0>
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:23:44 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_32db5fc4'), local_subscribe_addr='ipc:///tmp/8781eb5d-4016-45ed-85cf-f2e24cd0b7f0', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:23:44 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_34e3bba9'), local_subscribe_addr='ipc:///tmp/f7be10cc-06a3-46b4-ba2f-e37bbc8351e1', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:45 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:23:45 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:45 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:23:45 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:23:45 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:23:45 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:23:45 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:23:45 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3457338) WARNING 06-27 03:23:46 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3457339) WARNING 06-27 03:23:46 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3457335) WARNING 06-27 03:23:46 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3457333) WARNING 06-27 03:23:46 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:46 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_77f60c8a'), local_subscribe_addr='ipc:///tmp/7bc40d89-5a96-45a9-b7ee-9c97d05ab0cb', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:23:46 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:46 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:23:46 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:23:46 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:46 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:23:46 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3457335) WARNING 06-27 03:23:46 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3457338) WARNING 06-27 03:23:46 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3457333) WARNING 06-27 03:23:46 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:23:46 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:23:46 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:23:46 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3457339) WARNING 06-27 03:23:46 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:23:46 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:46 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:23:46 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:23:47 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:23:47 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:47 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:23:47 [loader.py:458] Loading weights took 0.73 seconds
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:23:47 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.863905 seconds
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:23:47 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.873327 seconds
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:47 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.905206 seconds
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:23:47 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.967568 seconds
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:23:53 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:23:53 [backends.py:430] Dynamo bytecode transform time: 5.66 s
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:53 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:53 [backends.py:430] Dynamo bytecode transform time: 5.68 s
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:23:53 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:23:53 [backends.py:430] Dynamo bytecode transform time: 5.69 s
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:23:53 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:23:53 [backends.py:430] Dynamo bytecode transform time: 5.75 s
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:23:58 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.403 s
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:23:58 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.397 s
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:23:58 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.397 s
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:23:58 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.404 s
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:24:04 [monitor.py:33] torch.compile takes 5.66 s in total
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:24:04 [monitor.py:33] torch.compile takes 5.68 s in total
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:24:04 [monitor.py:33] torch.compile takes 5.75 s in total
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:24:04 [monitor.py:33] torch.compile takes 5.69 s in total
+ INFO 06-27 03:24:05 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-27 03:24:05 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-27 03:24:05 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 03:24:05 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 03:24:05 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 03:24:05 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 03:24:05 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-27 03:24:05 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=2 pid=3457338) INFO 06-27 03:24:31 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3457339) INFO 06-27 03:24:31 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3457335) INFO 06-27 03:24:31 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3457333) INFO 06-27 03:24:31 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-27 03:24:31 [core.py:159] init engine (profile, create kv cache, warmup model) took 43.73 seconds
+ INFO 06-27 03:24:31 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-27 03:37:15 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-27 03:37:15 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5198|± |0.0283|
+ | | |math_pass@1:1_samples|0.7397|± |0.0452|
+ |mm\|arc_challenge\|0| 0|sem |0.5748|± |0.0254|
+ |mm\|arc_easy\|0 | 0|sem |0.6220|± |0.0158|
+ |mm\|commonsenseqa\|0| 0|sem |0.5188|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7293|± |0.0210|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.7500|± |0.0693|
+ |mm\|truthfulqa\|0 | 0|sem |0.3636|± |0.0439|
+
merge_bench/logs/phi_linear_6.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-27 03:37:14 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-27 03:37:16 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-27 03:37:23 [config.py:717] This model supports multiple tasks: {'score', 'classify', 'generate', 'reward', 'embed'}. Defaulting to 'generate'.
+ INFO 06-27 03:37:23 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-27 03:37:23 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-27 03:37:25 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-27 03:37:25 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-27 03:37:25 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_580fe053'), local_subscribe_addr='ipc:///tmp/75627ddd-1278-43f9-9357-73bdfd261eb0', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:37:25 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14bbc75b3ca0>
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:25 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_38aa069a'), local_subscribe_addr='ipc:///tmp/27c05aef-fc8a-4305-9b84-771d29610914', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:37:25 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14bbc5994a00>
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:25 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_a6e9872f'), local_subscribe_addr='ipc:///tmp/40a3df37-ef3f-4f32-8162-8a898c97b508', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:37:25 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14bbc75b3be0>
+ WARNING 06-27 03:37:25 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14bbc75b3910>
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:25 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_738e08a9'), local_subscribe_addr='ipc:///tmp/af4d4034-226e-4702-9021-b27ebe87aab7', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:25 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_21fb3f2e'), local_subscribe_addr='ipc:///tmp/fbbf66d1-93ad-4810-be1a-c28e80ccb952', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:27 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:27 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:27 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:27 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:27 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:27 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:27 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:27 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3459154) WARNING 06-27 03:37:27 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3459152) WARNING 06-27 03:37:27 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3459150) WARNING 06-27 03:37:27 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3459149) WARNING 06-27 03:37:27 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:27 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_ff219e32'), local_subscribe_addr='ipc:///tmp/f13b016b-7c21-4dec-9987-b15d524f465b', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:27 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:27 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:27 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:27 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:27 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:27 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3459152) WARNING 06-27 03:37:27 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3459154) WARNING 06-27 03:37:27 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:27 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:27 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3459150) WARNING 06-27 03:37:27 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3459149) WARNING 06-27 03:37:27 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:27 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:27 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:27 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:27 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:28 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:28 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:28 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:28 [loader.py:458] Loading weights took 0.73 seconds
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:28 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.868632 seconds
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:29 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.862858 seconds
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:29 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.945122 seconds
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:29 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.898845 seconds
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:34 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:34 [backends.py:430] Dynamo bytecode transform time: 5.62 s
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:34 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:34 [backends.py:430] Dynamo bytecode transform time: 5.63 s
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:34 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:34 [backends.py:430] Dynamo bytecode transform time: 5.74 s
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:35 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:35 [backends.py:430] Dynamo bytecode transform time: 5.90 s
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:39 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.378 s
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:39 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.414 s
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:40 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.504 s
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:40 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.487 s
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:37:46 [monitor.py:33] torch.compile takes 5.62 s in total
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:37:46 [monitor.py:33] torch.compile takes 5.74 s in total
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:37:46 [monitor.py:33] torch.compile takes 5.63 s in total
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:37:46 [monitor.py:33] torch.compile takes 5.90 s in total
+ INFO 06-27 03:37:47 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-27 03:37:47 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-27 03:37:47 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 03:37:47 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 03:37:47 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 03:37:47 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 03:37:47 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-27 03:37:47 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=2 pid=3459152) INFO 06-27 03:38:13 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3459149) INFO 06-27 03:38:13 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3459154) INFO 06-27 03:38:13 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3459150) INFO 06-27 03:38:13 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-27 03:38:13 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.52 seconds
+ INFO 06-27 03:38:14 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-27 03:50:53 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-27 03:50:53 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5278|± |0.0280|
+ | | |math_pass@1:1_samples|0.7443|± |0.0441|
+ |mm\|arc_challenge\|0| 0|sem |0.6220|± |0.0249|
+ |mm\|arc_easy\|0 | 0|sem |0.6325|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.5094|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7136|± |0.0214|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.7750|± |0.0669|
+ |mm\|truthfulqa\|0 | 0|sem |0.3471|± |0.0435|
+
merge_bench/logs/phi_linear_7.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-27 03:50:52 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-27 03:50:54 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-27 03:51:01 [config.py:717] This model supports multiple tasks: {'embed', 'score', 'generate', 'reward', 'classify'}. Defaulting to 'generate'.
+ INFO 06-27 03:51:01 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-27 03:51:01 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-27 03:51:02 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-27 03:51:02 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-27 03:51:02 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_714b3dea'), local_subscribe_addr='ipc:///tmp/2f2fd7a1-d76f-40e9-9afb-0889889d0481', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:51:02 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1460b75f8b20>
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:02 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_862c6351'), local_subscribe_addr='ipc:///tmp/aea04bd6-bf04-43a9-b63b-7e1d4d38328a', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:51:02 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1460bcf9bdc0>
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:02 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_a03f8b8d'), local_subscribe_addr='ipc:///tmp/4b33314d-7b7a-4a0f-a474-f73a7288f7e5', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 03:51:02 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1460bcf9bac0>
+ WARNING 06-27 03:51:03 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1460bcf9bd00>
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:03 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_334cdaef'), local_subscribe_addr='ipc:///tmp/09f5481c-e720-4801-a905-6174d455376a', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:03 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_f940b541'), local_subscribe_addr='ipc:///tmp/8c21ebc4-9e4a-4abe-9b85-0b5185ba2ca4', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:04 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:04 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:04 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:04 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:04 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:04 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:04 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:04 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3461659) WARNING 06-27 03:51:05 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3461660) WARNING 06-27 03:51:05 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3461658) WARNING 06-27 03:51:05 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3461657) WARNING 06-27 03:51:05 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:05 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_61595aee'), local_subscribe_addr='ipc:///tmp/cf3c5a85-6794-4c17-b25b-e36130b1eef1', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:05 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:05 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:05 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:05 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:05 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:05 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3461660) WARNING 06-27 03:51:05 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3461659) WARNING 06-27 03:51:05 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:05 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:05 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3461657) WARNING 06-27 03:51:05 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3461658) WARNING 06-27 03:51:05 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:05 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:05 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:05 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:05 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:06 [loader.py:458] Loading weights took 0.67 seconds
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:06 [loader.py:458] Loading weights took 0.66 seconds
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:06 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:06 [loader.py:458] Loading weights took 0.71 seconds
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:06 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.882061 seconds
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:06 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.863736 seconds
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:06 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.935206 seconds
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:06 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.874226 seconds
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:12 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:12 [backends.py:430] Dynamo bytecode transform time: 5.56 s
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:12 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:12 [backends.py:430] Dynamo bytecode transform time: 5.64 s
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:12 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:12 [backends.py:430] Dynamo bytecode transform time: 5.68 s
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:12 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:12 [backends.py:430] Dynamo bytecode transform time: 5.71 s
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:17 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.375 s
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:17 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.451 s
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:17 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.414 s
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:17 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.425 s
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:23 [monitor.py:33] torch.compile takes 5.56 s in total
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:23 [monitor.py:33] torch.compile takes 5.64 s in total
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:23 [monitor.py:33] torch.compile takes 5.68 s in total
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:23 [monitor.py:33] torch.compile takes 5.71 s in total
+ INFO 06-27 03:51:24 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-27 03:51:24 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-27 03:51:24 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 03:51:24 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 03:51:24 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 03:51:24 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 03:51:24 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-27 03:51:24 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3461660) INFO 06-27 03:51:50 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3461659) INFO 06-27 03:51:50 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3461658) INFO 06-27 03:51:50 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3461657) INFO 06-27 03:51:50 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-27 03:51:50 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.27 seconds
+ INFO 06-27 03:51:51 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-27 04:04:33 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-27 04:04:33 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5260|± |0.0279|
+ | | |math_pass@1:1_samples|0.7533|± |0.0439|
+ |mm\|arc_challenge\|0| 0|sem |0.6220|± |0.0249|
+ |mm\|arc_easy\|0 | 0|sem |0.6357|± |0.0156|
+ |mm\|commonsenseqa\|0| 0|sem |0.5156|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7315|± |0.0210|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.7750|± |0.0669|
+ |mm\|truthfulqa\|0 | 0|sem |0.3306|± |0.0429|
+
merge_bench/logs/phi_linear_8.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-27 04:04:32 [__init__.py:239] Automatically detected platform cuda.
2
+ INFO 06-27 04:04:34 [config.py:209] Replacing legacy 'type' key with 'rope_type'
3
+ INFO 06-27 04:04:41 [config.py:717] This model supports multiple tasks: {'generate', 'score', 'embed', 'reward', 'classify'}. Defaulting to 'generate'.
4
+ INFO 06-27 04:04:41 [config.py:1770] Defaulting to use mp for distributed inference
5
+ INFO 06-27 04:04:41 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
6
+ INFO 06-27 04:04:43 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-27 04:04:43 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-27 04:04:43 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_99c4f216'), local_subscribe_addr='ipc:///tmp/f947237b-7f6b-4094-8434-f89ab25aa6b2', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 04:04:43 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1534b0fb0af0>
+ WARNING 06-27 04:04:43 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1534b2bd3d90>
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:43 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_b587ff9a'), local_subscribe_addr='ipc:///tmp/6fbaae7a-578f-44bf-b908-cc00996cd192', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:04:43 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_c0f62c48'), local_subscribe_addr='ipc:///tmp/7eb0d6c0-c7f6-4df3-95f8-a9553f4454b3', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 04:04:43 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1534b2bd3cd0>
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:04:43 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_7482f04a'), local_subscribe_addr='ipc:///tmp/7a18c3c1-b1e2-445b-88a8-cd69ddcee828', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 04:04:43 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1534b2bd3a90>
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:04:43 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_90cc6f29'), local_subscribe_addr='ipc:///tmp/24b5aa5a-c1df-4f7e-bf59-5856daa4fa6b', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:45 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:04:45 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:45 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:04:45 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:04:45 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:04:45 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:04:45 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:04:45 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3463534) WARNING 06-27 04:04:45 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3463535) WARNING 06-27 04:04:45 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3463536) WARNING 06-27 04:04:45 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3463537) WARNING 06-27 04:04:45 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:45 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_6ce5769e'), local_subscribe_addr='ipc:///tmp/54247821-49ed-48c0-a116-4078da2aaa21', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:04:45 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:04:45 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:04:45 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:45 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:45 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:04:45 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3463534) WARNING 06-27 04:04:45 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:04:45 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3463535) WARNING 06-27 04:04:45 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:04:45 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3463536) WARNING 06-27 04:04:45 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3463537) WARNING 06-27 04:04:45 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:04:45 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:04:45 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:04:45 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:45 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:04:46 [loader.py:458] Loading weights took 0.66 seconds
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:46 [loader.py:458] Loading weights took 0.65 seconds
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:04:46 [loader.py:458] Loading weights took 0.73 seconds
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:04:46 [loader.py:458] Loading weights took 0.76 seconds
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:04:47 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.856983 seconds
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:47 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.856359 seconds
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:04:47 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.928136 seconds
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:04:47 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.963085 seconds
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:04:52 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:04:52 [backends.py:430] Dynamo bytecode transform time: 5.60 s
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:04:53 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:04:53 [backends.py:430] Dynamo bytecode transform time: 5.66 s
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:04:53 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:04:53 [backends.py:430] Dynamo bytecode transform time: 5.72 s
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:53 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:53 [backends.py:430] Dynamo bytecode transform time: 5.74 s
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:04:57 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.346 s
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:04:58 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.373 s
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:04:58 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.446 s
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:04:58 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.457 s
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:05:03 [monitor.py:33] torch.compile takes 5.60 s in total
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:05:03 [monitor.py:33] torch.compile takes 5.72 s in total
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:05:03 [monitor.py:33] torch.compile takes 5.66 s in total
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:05:03 [monitor.py:33] torch.compile takes 5.74 s in total
+ INFO 06-27 04:05:05 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-27 04:05:05 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-27 04:05:05 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 04:05:05 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 04:05:05 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 04:05:05 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 04:05:05 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-27 04:05:05 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3463537) INFO 06-27 04:05:29 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3463535) INFO 06-27 04:05:29 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3463534) INFO 06-27 04:05:29 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3463536) INFO 06-27 04:05:29 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ INFO 06-27 04:05:29 [core.py:159] init engine (profile, create kv cache, warmup model) took 42.49 seconds
+ INFO 06-27 04:05:30 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-27 04:18:11 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-27 04:18:11 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5092|± |0.0280|
+ | | |math_pass@1:1_samples|0.7633|± |0.0437|
+ |mm\|arc_challenge\|0| 0|sem |0.5748|± |0.0254|
+ |mm\|arc_easy\|0 | 0|sem |0.6283|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.5031|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7517|± |0.0205|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.7750|± |0.0669|
+ |mm\|truthfulqa\|0 | 0|sem |0.3306|± |0.0429|
+
merge_bench/logs/phi_linear_9.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-27 04:18:10 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-27 04:18:12 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-27 04:18:19 [config.py:717] This model supports multiple tasks: {'embed', 'score', 'classify', 'generate', 'reward'}. Defaulting to 'generate'.
+ INFO 06-27 04:18:19 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-27 04:18:19 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-27 04:18:20 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-27 04:18:20 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-27 04:18:20 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_38b9ed00'), local_subscribe_addr='ipc:///tmp/927bf6ed-7718-4102-810c-743be7f346f7', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 04:18:21 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14f57f85fd30>
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:21 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_88552a69'), local_subscribe_addr='ipc:///tmp/d6f5d2c8-7c9b-46ca-a769-431b1606bc1e', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 04:18:21 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14f57def8a90>
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:21 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_a1cdac22'), local_subscribe_addr='ipc:///tmp/1004e10b-1501-42e5-8d81-9e7ba71d12c2', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-27 04:18:21 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14f57f85f9a0>
+ WARNING 06-27 04:18:21 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14f57f85fc70>
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:21 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_2de4b4e3'), local_subscribe_addr='ipc:///tmp/7ceea1d7-a038-4bab-815e-73854524454e', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:21 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_c8ff2cd5'), local_subscribe_addr='ipc:///tmp/b866d7f8-e437-4e63-b251-1bdee58d9c83', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:22 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:22 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:22 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:22 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:22 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:22 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:22 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:22 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3465349) WARNING 06-27 04:18:23 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3465350) WARNING 06-27 04:18:23 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3465347) WARNING 06-27 04:18:23 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3465348) WARNING 06-27 04:18:23 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:23 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_a1278e9a'), local_subscribe_addr='ipc:///tmp/d57b6989-89f2-4332-a268-20cdd1112732', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:23 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:23 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:23 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:23 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:23 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:23 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3465350) WARNING 06-27 04:18:23 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:23 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:23 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3465349) WARNING 06-27 04:18:23 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3465347) WARNING 06-27 04:18:23 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3465348) WARNING 06-27 04:18:23 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:23 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:23 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:23 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:23 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:24 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:24 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:24 [loader.py:458] Loading weights took 0.66 seconds
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:24 [loader.py:458] Loading weights took 0.71 seconds
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:24 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.875188 seconds
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:24 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.871170 seconds
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:24 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.937340 seconds
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:24 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.886571 seconds
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:30 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:30 [backends.py:430] Dynamo bytecode transform time: 5.63 s
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:30 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:30 [backends.py:430] Dynamo bytecode transform time: 5.64 s
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:30 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:30 [backends.py:430] Dynamo bytecode transform time: 5.71 s
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:30 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:30 [backends.py:430] Dynamo bytecode transform time: 5.84 s
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:35 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.371 s
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:35 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.381 s
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:35 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.384 s
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:35 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.415 s
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:18:41 [monitor.py:33] torch.compile takes 5.71 s in total
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:18:41 [monitor.py:33] torch.compile takes 5.63 s in total
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:18:41 [monitor.py:33] torch.compile takes 5.84 s in total
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:18:41 [monitor.py:33] torch.compile takes 5.64 s in total
+ INFO 06-27 04:18:42 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-27 04:18:42 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-27 04:18:42 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 04:18:42 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 04:18:42 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-27 04:18:42 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-27 04:18:42 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-27 04:18:42 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3465350) INFO 06-27 04:19:08 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3465349) INFO 06-27 04:19:08 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3465348) INFO 06-27 04:19:08 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3465347) INFO 06-27 04:19:08 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-27 04:19:08 [core.py:159] init engine (profile, create kv cache, warmup model) took 43.42 seconds
+ INFO 06-27 04:19:08 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-27 04:31:48 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-27 04:31:48 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5194|± |0.0279|
+ | | |math_pass@1:1_samples|0.8031|± |0.0388|
+ |mm\|arc_challenge\|0| 0|sem |0.5906|± |0.0252|
+ |mm\|arc_easy\|0 | 0|sem |0.6272|± |0.0157|
+ |mm\|commonsenseqa\|0| 0|sem |0.5375|± |0.0279|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7562|± |0.0203|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8500|± |0.0572|
+ |mm\|truthfulqa\|0 | 0|sem |0.3223|± |0.0427|
+
merge_bench/logs/phi_ties_1.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 00:04:19 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 00:04:20 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 00:04:28 [config.py:717] This model supports multiple tasks: {'reward', 'generate', 'score', 'classify', 'embed'}. Defaulting to 'generate'.
+ INFO 06-28 00:04:28 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 00:04:28 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 00:04:30 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 00:04:30 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 00:04:30 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_2abe6abe'), local_subscribe_addr='ipc:///tmp/457416df-872d-4317-b106-2134e675d0da', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 00:04:30 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1505d1c4fd90>
+ WARNING 06-28 00:04:30 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1505d1c4fa90>
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:30 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_685504c9'), local_subscribe_addr='ipc:///tmp/2a5ea551-f9c4-451f-adec-163349ef193a', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 00:04:30 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1505d021caf0>
+ WARNING 06-28 00:04:30 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1505d1c4fcd0>
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:30 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_ee5b58f4'), local_subscribe_addr='ipc:///tmp/dcfa2e9f-cb07-43e4-89da-be93880cfb53', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:30 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_d60d37fc'), local_subscribe_addr='ipc:///tmp/27722dbf-c5aa-42c3-a7aa-0f0e9d7aa849', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:30 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_120c3c8a'), local_subscribe_addr='ipc:///tmp/e38be149-f63e-4ee1-b9e2-7857fd7453f4', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:37 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:37 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:37 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:37 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:37 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:37 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:37 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:37 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3498583) WARNING 06-28 00:04:38 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3498581) WARNING 06-28 00:04:38 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3498580) WARNING 06-28 00:04:38 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3498582) WARNING 06-28 00:04:38 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:38 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_99e5fc0d'), local_subscribe_addr='ipc:///tmp/46fa865f-a132-40a8-8b2d-8aa796c96a28', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:38 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:38 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:38 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:38 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:38 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:38 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:38 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:38 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3498580) WARNING 06-28 00:04:38 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3498581) WARNING 06-28 00:04:38 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3498583) WARNING 06-28 00:04:38 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3498582) WARNING 06-28 00:04:38 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:38 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:38 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:38 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:38 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:39 [loader.py:458] Loading weights took 0.73 seconds
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:39 [loader.py:458] Loading weights took 0.76 seconds
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:39 [loader.py:458] Loading weights took 0.76 seconds
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:39 [loader.py:458] Loading weights took 0.77 seconds
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:39 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.962431 seconds
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:39 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.961445 seconds
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:39 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.955792 seconds
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:39 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 1.009121 seconds
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:45 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:45 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:45 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:45 [backends.py:430] Dynamo bytecode transform time: 6.12 s
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:45 [backends.py:430] Dynamo bytecode transform time: 6.12 s
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:45 [backends.py:430] Dynamo bytecode transform time: 6.12 s
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:45 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:45 [backends.py:430] Dynamo bytecode transform time: 6.12 s
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:51 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.659 s
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:51 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.731 s
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:51 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.703 s
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:51 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.765 s
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:04:56 [monitor.py:33] torch.compile takes 6.12 s in total
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:04:56 [monitor.py:33] torch.compile takes 6.12 s in total
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:04:56 [monitor.py:33] torch.compile takes 6.12 s in total
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:04:56 [monitor.py:33] torch.compile takes 6.12 s in total
+ INFO 06-28 00:04:58 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 00:04:58 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 00:04:58 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 00:04:58 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 00:04:58 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 00:04:58 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 00:04:58 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 00:04:58 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=2 pid=3498582) INFO 06-28 00:05:23 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3498580) INFO 06-28 00:05:23 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3498581) INFO 06-28 00:05:23 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3498583) INFO 06-28 00:05:23 [gpu_model_runner.py:1686] Graph capturing finished in 25 secs, took 2.96 GiB
+ INFO 06-28 00:05:23 [core.py:159] init engine (profile, create kv cache, warmup model) took 43.71 seconds
+ INFO 06-28 00:05:23 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 00:18:11 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 00:18:12 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5163|± |0.0281|
+ | | |math_pass@1:1_samples|0.7770|± |0.0422|
+ |mm|arc_challenge|0| 0|sem |0.6010|± |0.0251|
+ |mm|arc_easy|0 | 0|sem |0.6325|± |0.0157|
+ |mm|commonsenseqa|0| 0|sem |0.4844|± |0.0280|
+ |mm|gsm8k|0 | 0|math_pass@1:1_samples|0.7539|± |0.0204|
+ |mm|math_500|0 | 3|math_pass@1:1_samples|0.8000|± |0.0641|
+ |mm|truthfulqa|0 | 0|sem |0.3471|± |0.0435|
+
merge_bench/logs/phi_ties_3.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 00:18:11 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 00:18:12 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 00:18:20 [config.py:717] This model supports multiple tasks: {'reward', 'score', 'generate', 'classify', 'embed'}. Defaulting to 'generate'.
+ INFO 06-28 00:18:20 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 00:18:20 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 00:18:21 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 00:18:21 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 00:18:21 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_64a3cf43'), local_subscribe_addr='ipc:///tmp/48692c60-28a5-46d0-84af-9a1b5e3fde25', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 00:18:22 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1490b810fd90>
+ WARNING 06-28 00:18:22 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1490966e0af0>
+ WARNING 06-28 00:18:22 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1490b810fa90>
+ WARNING 06-28 00:18:22 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x1490b810fcd0>
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:22 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_3b32a873'), local_subscribe_addr='ipc:///tmp/47b03dd2-7cda-4a81-b056-a71be7086532', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:22 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_9031601a'), local_subscribe_addr='ipc:///tmp/4f5a3729-d58c-4c0c-9aff-59cfa0cf970b', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:22 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_0333bbf3'), local_subscribe_addr='ipc:///tmp/f8ecc6d9-14fe-4bb9-9f76-449d44779b1b', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:22 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_d7d0086a'), local_subscribe_addr='ipc:///tmp/6342ef1b-0bd2-437e-a6a8-31352458218f', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:34 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:34 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:34 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:34 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:34 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:34 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:34 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:34 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3502091) WARNING 06-28 00:18:35 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3502090) WARNING 06-28 00:18:35 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3502092) WARNING 06-28 00:18:35 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3502093) WARNING 06-28 00:18:35 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:35 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_d8c777a2'), local_subscribe_addr='ipc:///tmp/36440de9-abb4-4834-bdcd-4771f8e0a2eb', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:35 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:35 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:35 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:35 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:35 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:35 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3502091) WARNING 06-28 00:18:35 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3502090) WARNING 06-28 00:18:35 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:35 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:35 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3502093) WARNING 06-28 00:18:35 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3502092) WARNING 06-28 00:18:35 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:35 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:35 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:35 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:35 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:36 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:36 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:36 [loader.py:458] Loading weights took 0.75 seconds
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:36 [loader.py:458] Loading weights took 0.80 seconds
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.876938 seconds
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.883893 seconds
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.973901 seconds
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:36 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 1.031346 seconds
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:42 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:42 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:42 [backends.py:430] Dynamo bytecode transform time: 6.00 s
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:42 [backends.py:430] Dynamo bytecode transform time: 6.00 s
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:42 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:42 [backends.py:430] Dynamo bytecode transform time: 6.14 s
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:42 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:42 [backends.py:430] Dynamo bytecode transform time: 6.19 s
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:48 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.704 s
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:48 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.755 s
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:48 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 5.082 s
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:48 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 5.240 s
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:18:54 [monitor.py:33] torch.compile takes 6.00 s in total
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:18:54 [monitor.py:33] torch.compile takes 6.00 s in total
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:18:54 [monitor.py:33] torch.compile takes 6.19 s in total
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:18:54 [monitor.py:33] torch.compile takes 6.14 s in total
+ INFO 06-28 00:18:55 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 00:18:55 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 00:18:55 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 00:18:55 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 00:18:55 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 00:18:55 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 00:18:55 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 00:18:55 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=0 pid=3502090) INFO 06-28 00:19:22 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3502091) INFO 06-28 00:19:22 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3502093) INFO 06-28 00:19:23 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3502092) INFO 06-28 00:19:23 [gpu_model_runner.py:1686] Graph capturing finished in 27 secs, took 2.96 GiB
+ INFO 06-28 00:19:23 [core.py:159] init engine (profile, create kv cache, warmup model) took 46.42 seconds
+ INFO 06-28 00:19:23 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 00:31:57 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 00:31:57 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5150|± |0.0277|
+ | | |math_pass@1:1_samples|0.7385|± |0.0452|
+ |mm|arc_challenge|0| 0|sem |0.6220|± |0.0249|
+ |mm|arc_easy|0 | 0|sem |0.6241|± |0.0157|
+ |mm|commonsenseqa|0| 0|sem |0.5000|± |0.0280|
+ |mm|gsm8k|0 | 0|math_pass@1:1_samples|0.7271|± |0.0211|
+ |mm|math_500|0 | 3|math_pass@1:1_samples|0.7500|± |0.0693|
+ |mm|truthfulqa|0 | 0|sem |0.3140|± |0.0424|
+
merge_bench/logs/phi_ties_5.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 02:29:51 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 02:29:53 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 02:30:00 [config.py:717] This model supports multiple tasks: {'generate', 'score', 'embed', 'reward', 'classify'}. Defaulting to 'generate'.
+ INFO 06-28 02:30:00 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 02:30:00 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 02:30:01 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 02:30:01 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 02:30:01 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_604e8e56'), local_subscribe_addr='ipc:///tmp/6a5d04f5-6450-485a-b7da-b2fb9b33f0d9', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:30:02 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14b29162caf0>
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:02 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_8e0bd40b'), local_subscribe_addr='ipc:///tmp/14b598b5-accc-4bf5-a43a-15a15c4b1a6f', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:30:02 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14b293063d90>
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:02 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_937ccfd1'), local_subscribe_addr='ipc:///tmp/e1d4207d-bd2c-4669-bdb5-d2808557c43c', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:30:02 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14b293063cd0>
+ WARNING 06-28 02:30:02 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14b293063a90>
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:02 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_850fb964'), local_subscribe_addr='ipc:///tmp/dfdeb6ab-4cc1-4a1c-b3d0-6a304eeeaadd', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:02 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_9434e564'), local_subscribe_addr='ipc:///tmp/789ee0fa-631c-4e35-a794-701f51a258f1', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:04 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:04 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:04 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:04 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:04 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:04 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:04 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:04 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3525089) WARNING 06-28 02:30:04 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=3 pid=3525090) WARNING 06-28 02:30:04 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3525088) WARNING 06-28 02:30:04 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3525087) WARNING 06-28 02:30:04 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:04 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_8a453fcb'), local_subscribe_addr='ipc:///tmp/3b68802e-dd95-43b0-a83e-ed9aadda8540', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:04 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:04 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:04 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:04 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:04 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:04 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3525089) WARNING 06-28 02:30:04 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3525090) WARNING 06-28 02:30:04 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:04 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:04 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3525088) WARNING 06-28 02:30:04 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3525087) WARNING 06-28 02:30:04 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:04 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:04 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:04 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:04 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:05 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:05 [loader.py:458] Loading weights took 0.73 seconds
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:05 [loader.py:458] Loading weights took 0.73 seconds
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:05 [loader.py:458] Loading weights took 0.74 seconds
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:05 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.918278 seconds
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:06 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.922065 seconds
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:06 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.895514 seconds
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:06 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.973592 seconds
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:11 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:11 [backends.py:430] Dynamo bytecode transform time: 5.65 s
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:11 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:11 [backends.py:430] Dynamo bytecode transform time: 5.73 s
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:11 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:11 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:11 [backends.py:430] Dynamo bytecode transform time: 5.81 s
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:11 [backends.py:430] Dynamo bytecode transform time: 5.81 s
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:16 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.407 s
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:17 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.434 s
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:17 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.482 s
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:17 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.454 s
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:22 [monitor.py:33] torch.compile takes 5.81 s in total
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:22 [monitor.py:33] torch.compile takes 5.81 s in total
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:22 [monitor.py:33] torch.compile takes 5.65 s in total
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:22 [monitor.py:33] torch.compile takes 5.73 s in total
+ INFO 06-28 02:30:24 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 02:30:24 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 02:30:24 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 02:30:24 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 02:30:24 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 02:30:24 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 02:30:24 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 02:30:24 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3525090) INFO 06-28 02:30:50 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3525087) INFO 06-28 02:30:50 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3525089) INFO 06-28 02:30:50 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3525088) INFO 06-28 02:30:50 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 02:30:50 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.44 seconds
+ INFO 06-28 02:30:50 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 02:43:29 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 02:43:29 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5091|± |0.0279|
+ | | |math_pass@1:1_samples|0.7928|± |0.0405|
+ |mm\|arc_challenge\|0| 0|sem |0.5827|± |0.0253|
+ |mm\|arc_easy\|0 | 0|sem |0.6410|± |0.0156|
+ |mm\|commonsenseqa\|0| 0|sem |0.4906|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7606|± |0.0202|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8250|± |0.0608|
+ |mm\|truthfulqa\|0 | 0|sem |0.3223|± |0.0427|
+
merge_bench/logs/phi_ties_7.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 02:43:28 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 02:43:30 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 02:43:37 [config.py:717] This model supports multiple tasks: {'classify', 'generate', 'reward', 'embed', 'score'}. Defaulting to 'generate'.
+ INFO 06-28 02:43:37 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 02:43:37 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 02:43:39 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 02:43:39 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 02:43:39 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_901a0879'), local_subscribe_addr='ipc:///tmp/087980f4-6715-439e-ade5-e490bb2ff57e', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:43:39 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x152887198a30>
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:39 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_a12311a9'), local_subscribe_addr='ipc:///tmp/25170eb0-7a2f-4e8d-ad5a-bf695aad19fd', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:43:39 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x152890c43c10>
+ WARNING 06-28 02:43:39 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x152890c43cd0>
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:43:39 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_e733a3eb'), local_subscribe_addr='ipc:///tmp/5e2ea1ec-c5f6-4d6f-90a6-4c58def244aa', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:43:39 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x152890c43940>
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:43:39 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_65524c3c'), local_subscribe_addr='ipc:///tmp/66934d46-2cd6-48fd-861e-d91c8468b582', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:43:39 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_96394c91'), local_subscribe_addr='ipc:///tmp/ef2160b0-e13f-4fbf-bc5a-3a12e6da0e19', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:43:41 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:41 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:43:41 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:41 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:43:41 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:43:41 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:43:41 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:43:41 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3527443) WARNING 06-28 02:43:41 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3527442) WARNING 06-28 02:43:41 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3527441) WARNING 06-28 02:43:41 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3527440) WARNING 06-28 02:43:41 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:41 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_91ddd545'), local_subscribe_addr='ipc:///tmp/79bc8596-ad99-4743-9101-657250aa290c', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:43:41 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:41 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:43:41 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:43:41 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3527442) WARNING 06-28 02:43:41 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:43:41 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:43:41 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:41 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:43:41 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3527443) WARNING 06-28 02:43:41 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3527440) WARNING 06-28 02:43:41 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3527441) WARNING 06-28 02:43:41 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:43:41 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:43:41 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:43:41 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:41 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:43:42 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:43:42 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:43:42 [loader.py:458] Loading weights took 0.70 seconds
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:42 [loader.py:458] Loading weights took 0.73 seconds
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:43:42 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.878173 seconds
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:43:43 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.875533 seconds
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:43 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.944981 seconds
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:43:43 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.922516 seconds
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:43:48 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:43:48 [backends.py:430] Dynamo bytecode transform time: 5.56 s
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:43:49 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:43:49 [backends.py:430] Dynamo bytecode transform time: 5.58 s
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:43:49 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:43:49 [backends.py:430] Dynamo bytecode transform time: 5.78 s
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:49 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:49 [backends.py:430] Dynamo bytecode transform time: 5.86 s
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:43:53 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.346 s
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:43:54 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.360 s
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:43:54 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.368 s
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:43:54 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.479 s
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:44:00 [monitor.py:33] torch.compile takes 5.56 s in total
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:44:00 [monitor.py:33] torch.compile takes 5.78 s in total
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:44:00 [monitor.py:33] torch.compile takes 5.86 s in total
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:44:00 [monitor.py:33] torch.compile takes 5.58 s in total
+ INFO 06-28 02:44:01 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 02:44:01 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 02:44:01 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 02:44:01 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 02:44:01 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 02:44:01 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 02:44:01 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 02:44:01 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=2 pid=3527442) INFO 06-28 02:44:27 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3527440) INFO 06-28 02:44:27 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=3 pid=3527443) INFO 06-28 02:44:27 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3527441) INFO 06-28 02:44:27 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 02:44:27 [core.py:159] init engine (profile, create kv cache, warmup model) took 43.92 seconds
+ INFO 06-28 02:44:27 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 02:57:13 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 02:57:13 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5215|± |0.0275|
+ | | |math_pass@1:1_samples|0.7805|± |0.0409|
+ |mm\|arc_challenge\|0| 0|sem |0.6352|± |0.0247|
+ |mm\|arc_easy\|0 | 0|sem |0.6410|± |0.0156|
+ |mm\|commonsenseqa\|0| 0|sem |0.5125|± |0.0280|
+ |mm\|gsm8k\|0 | 0|math_pass@1:1_samples|0.7360|± |0.0209|
+ |mm\|math_500\|0 | 3|math_pass@1:1_samples|0.8250|± |0.0608|
+ |mm\|truthfulqa\|0 | 0|sem |0.2975|± |0.0417|
+
merge_bench/logs/phi_ties_9.log ADDED
@@ -0,0 +1,96 @@
+ INFO 06-28 02:57:12 [__init__.py:239] Automatically detected platform cuda.
+ INFO 06-28 02:57:14 [config.py:209] Replacing legacy 'type' key with 'rope_type'
+ INFO 06-28 02:57:21 [config.py:717] This model supports multiple tasks: {'reward', 'embed', 'generate', 'classify', 'score'}. Defaulting to 'generate'.
+ INFO 06-28 02:57:21 [config.py:1770] Defaulting to use mp for distributed inference
+ INFO 06-28 02:57:21 [config.py:2003] Chunked prefill is enabled with max_num_batched_tokens=16384.
+ INFO 06-28 02:57:22 [core.py:58] Initializing a V1 LLM engine (v0.8.5.post1) with config: model='./models/R-Phi4', speculative_config=None, tokenizer='./models/R-Phi4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='auto', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=./models/R-Phi4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
+ WARNING 06-28 02:57:22 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 128 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
+ INFO 06-28 02:57:22 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3], buffer_handle=(4, 10485760, 10, 'psm_e5197895'), local_subscribe_addr='ipc:///tmp/b0a44ef9-bda3-4b70-a828-3d8153cb25e8', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:57:23 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ff9a4afd00>
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:23 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_66e46478'), local_subscribe_addr='ipc:///tmp/3efa8e16-e41b-4f7c-94f9-95ea91664c1f', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:57:23 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ff98a78a60>
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:23 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_b98a5dc9'), local_subscribe_addr='ipc:///tmp/2870b470-eb7a-4d36-bd20-881fd3cf2c8c', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ WARNING 06-28 02:57:23 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ff9a4afc40>
+ WARNING 06-28 02:57:23 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x14ff9a4af970>
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:23 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_263260cf'), local_subscribe_addr='ipc:///tmp/833452ad-0120-435e-b553-3cb039a6a2c9', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:23 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_e741f63c'), local_subscribe_addr='ipc:///tmp/6de270e3-27d7-4282-8278-117949320a93', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:24 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:24 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:24 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:24 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:24 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:24 [utils.py:1055] Found nccl from library libnccl.so.2
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:24 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:24 [pynccl.py:69] vLLM is using nccl==2.21.5
+ (VllmWorker rank=3 pid=3529696) WARNING 06-28 02:57:25 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=2 pid=3529695) WARNING 06-28 02:57:25 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=1 pid=3529694) WARNING 06-28 02:57:25 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3529693) WARNING 06-28 02:57:25 [custom_all_reduce.py:136] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:25 [shm_broadcast.py:266] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3], buffer_handle=(3, 4194304, 6, 'psm_aafaa12e'), local_subscribe_addr='ipc:///tmp/67dc1f4e-7410-4b14-99a6-9177125c8985', remote_subscribe_addr=None, remote_addr_ipv6=False)
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:25 [parallel_state.py:1004] rank 3 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 3
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:25 [parallel_state.py:1004] rank 2 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 2
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:25 [parallel_state.py:1004] rank 0 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 0
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:25 [parallel_state.py:1004] rank 1 in world size 4 is assigned as DP rank 0, PP rank 0, TP rank 1
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:25 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:25 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=2 pid=3529695) WARNING 06-28 02:57:25 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=3 pid=3529696) WARNING 06-28 02:57:25 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:25 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:25 [cuda.py:221] Using Flash Attention backend on V1 engine.
+ (VllmWorker rank=0 pid=3529693) WARNING 06-28 02:57:25 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=1 pid=3529694) WARNING 06-28 02:57:25 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:25 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:25 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:25 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:25 [gpu_model_runner.py:1329] Starting to load model ./models/R-Phi4...
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:26 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:26 [loader.py:458] Loading weights took 0.69 seconds
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:26 [loader.py:458] Loading weights took 0.68 seconds
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:26 [loader.py:458] Loading weights took 0.72 seconds
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:26 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.878988 seconds
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:26 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.886175 seconds
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:26 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.919960 seconds
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:26 [gpu_model_runner.py:1347] Model loading took 1.8196 GiB and 0.959284 seconds
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:32 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_0_0 for vLLM's torch.compile
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:32 [backends.py:430] Dynamo bytecode transform time: 5.59 s
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:32 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_1_0 for vLLM's torch.compile
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:32 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_2_0 for vLLM's torch.compile
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:32 [backends.py:430] Dynamo bytecode transform time: 5.76 s
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:32 [backends.py:430] Dynamo bytecode transform time: 5.76 s
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:32 [backends.py:420] Using cache directory: /home/jiangli/.cache/vllm/torch_compile_cache/bc6735f00d/rank_3_0 for vLLM's torch.compile
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:32 [backends.py:430] Dynamo bytecode transform time: 5.80 s
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:37 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.381 s
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:37 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.387 s
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:37 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.395 s
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:37 [backends.py:118] Directly load the compiled graph(s) for shape None from the cache, took 4.436 s
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:57:43 [monitor.py:33] torch.compile takes 5.76 s in total
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:57:43 [monitor.py:33] torch.compile takes 5.59 s in total
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:57:43 [monitor.py:33] torch.compile takes 5.76 s in total
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:57:43 [monitor.py:33] torch.compile takes 5.80 s in total
+ INFO 06-28 02:57:44 [kv_cache_utils.py:634] GPU KV cache size: 2,007,088 tokens
+ INFO 06-28 02:57:44 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.02x
+ INFO 06-28 02:57:44 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 02:57:44 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 02:57:44 [kv_cache_utils.py:634] GPU KV cache size: 2,006,832 tokens
+ INFO 06-28 02:57:44 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 979.90x
+ INFO 06-28 02:57:44 [kv_cache_utils.py:634] GPU KV cache size: 2,008,112 tokens
+ INFO 06-28 02:57:44 [kv_cache_utils.py:637] Maximum concurrency for 2,048 tokens per request: 980.52x
+ (VllmWorker rank=3 pid=3529696) INFO 06-28 02:58:10 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=2 pid=3529695) INFO 06-28 02:58:10 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=1 pid=3529694) INFO 06-28 02:58:10 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ (VllmWorker rank=0 pid=3529693) INFO 06-28 02:58:11 [gpu_model_runner.py:1686] Graph capturing finished in 26 secs, took 2.96 GiB
+ INFO 06-28 02:58:11 [core.py:159] init engine (profile, create kv cache, warmup model) took 44.11 seconds
+ INFO 06-28 02:58:11 [core_client.py:439] Core engine process 0 ready.
+ INFO 06-28 03:10:49 [importing.py:53] Triton module has been replaced with a placeholder.
+ INFO 06-28 03:10:49 [__init__.py:239] Automatically detected platform cuda.
+ | Task |Version| Metric |Value | |Stderr|
+ |------------------|------:|---------------------|-----:|---|-----:|
+ |all | |sem |0.5092|± |0.0276|
+ | | |math_pass@1:1_samples|0.8303|± |0.0341|
+ |mm|arc_challenge|0| 0|sem |0.6352|± |0.0247|
+ |mm|arc_easy|0 | 0|sem |0.6177|± |0.0158|
+ |mm|commonsenseqa|0| 0|sem |0.4781|± |0.0280|
+ |mm|gsm8k|0 | 0|math_pass@1:1_samples|0.7606|± |0.0202|
+ |mm|math_500|0 | 3|math_pass@1:1_samples|0.9000|± |0.0480|
+ |mm|truthfulqa|0 | 0|sem |0.3058|± |0.0421|
+
merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|arc_challenge|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce3917ead928e2c0b135b0486913789ebce9bf887ab523b118e4dbf44b3d98fb
+ size 3529231
merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|arc_easy|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66410b03c3e808718cb27f85ed11686da011eb81f1b30ce185527d6cb428735e
+ size 8156073
merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|commonsenseqa|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7377a29f21f91b2bfaf75bbf93a1b2f128fa753f1cefc634829bcb1583732bbd
+ size 2862028
merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|gsm8k|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbf37c3148192aa67066977a11a4870049c30d3f71317e44e3e1c65c4cc03dcf
+ size 3039563
merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|math_500|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b84257797190eecdbc0ef3e6cb8d4b4e1140e78fbe200889ef405bcc876f98e1
+ size 316667
merge_bench/outputs/._merged1_llama_darelinear_1/2025-06-23T01-52-10.258150/outputs_mm|truthfulqa|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:613a394111e16909114efc1a671aa7916329532d45f0eb62d35997e3219189a9
+ size 1148077
merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|arc_challenge|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66e45619e88ade20bf58ac95773dd63dff38803210a4384bab83d8b889231794
+ size 3518921
merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|arc_easy|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7d59b8f7c561bc7d2e58d99395c5b50d9d7f0b8c7c590003c1f92a8db017111
+ size 8166781
merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|commonsenseqa|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a67f6187601b7140b54906bb296894958bc31d84dbcf18c4923a62d11105c33e
+ size 2859764
merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|gsm8k|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:081b137da3e9752b00de835a085365af6036fb0b27cfafcc2f495f3fbdad462a
+ size 3039375
merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|math_500|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:589a4bb3ac1b5db19871cb67d76a9a3d636887d0ab07c1ceefb7cee55fab7ced
+ size 317874
merge_bench/outputs/._merged1_llama_darelinear_3/2025-06-23T01-52-10.258150/outputs_mm|truthfulqa|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4937003c6389d8dddc60b3dcefcfeb410aae72eb792fac7116d4a94913d2230f
+ size 1146204
merge_bench/outputs/._merged1_llama_darelinear_5/2025-06-23T01-52-10.258150/outputs_mm|arc_challenge|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b66af07142cf41943266bfde37ff44bc060dccd2e6b9b05f85db2e7c4884e33f
+ size 3524830
merge_bench/outputs/._merged1_llama_darelinear_5/2025-06-23T01-52-10.258150/outputs_mm|arc_easy|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb24ecc922da45dc457507d7d08ac984d0b5c371bba77756a5e951bda588ee0c
+ size 8167635
merge_bench/outputs/._merged1_llama_darelinear_5/2025-06-23T01-52-10.258150/outputs_mm|commonsenseqa|0_2025-06-23T01-52-10.258150.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b2be5f6739cba12997dc33b547ae73dae4b91e66747ba2198768b883683cea7d
+ size 2860274