INFO 10-26 08:02:52 [__init__.py:235] Automatically detected platform cuda.
[2025-10-26 08:02:53,803] [    INFO]: --- INIT SEEDS --- (pipeline.py:249)
[2025-10-26 08:02:53,804] [    INFO]: --- LOADING TASKS --- (pipeline.py:210)
[2025-10-26 08:02:53,807] [ WARNING]: Careful, the task aime25 is using evaluation data to build the few shot examples. (lighteval_task.py:269)
[2025-10-26 08:02:59,213] [    INFO]: --- LOADING MODEL --- (pipeline.py:177)
`torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-26 08:03:06,080] [    INFO]: Using max model len 32768 (config.py:1604)
[2025-10-26 08:03:06,785] [    INFO]: Chunked prefill is enabled with max_num_batched_tokens=2048. (config.py:2434)
INFO 10-26 08:03:11 [__init__.py:235] Automatically detected platform cuda.
INFO 10-26 08:03:13 [core.py:572] Waiting for init message from front-end.
INFO 10-26 08:03:13 [core.py:71] Initializing a V1 LLM engine (v0.10.0) with config: model='/mnt/public/wucanhui/outputs/Qwen3-4B-math-reasoning/checkpoint-2562', speculative_config=None, tokenizer='/mnt/public/wucanhui/outputs/Qwen3-4B-math-reasoning/checkpoint-2562', skip_tokenizer_init=False, tokenizer_mode=auto, revision=main, override_neuron_config={}, tokenizer_revision=main, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto,  device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=1234, served_model_name=/mnt/public/wucanhui/outputs/Qwen3-4B-math-reasoning/checkpoint-2562, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":[],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":0,"local_cache_dir":null}
INFO 10-26 08:03:17 [parallel_state.py:1102] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
WARNING 10-26 08:03:17 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
INFO 10-26 08:03:17 [gpu_model_runner.py:1843] Starting to load model /mnt/public/wucanhui/outputs/Qwen3-4B-math-reasoning/checkpoint-2562...
INFO 10-26 08:03:18 [gpu_model_runner.py:1875] Loading model from scratch...
INFO 10-26 08:03:18 [cuda.py:290] Using Flash Attention backend on V1 engine.

Loading safetensors checkpoint shards:   0% Completed | 0/2 [00:00<?, ?it/s]

Loading safetensors checkpoint shards:  50% Completed | 1/2 [00:31<00:31, 31.41s/it]

Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:52<00:00, 26.37s/it]

INFO 10-26 08:04:11 [default_loader.py:262] Loading weights took 53.17 seconds
INFO 10-26 08:04:11 [gpu_model_runner.py:1892] Model loading took 7.5552 GiB and 53.295208 seconds
INFO 10-26 08:04:12 [gpu_worker.py:255] Available KV cache memory: 117.60 GiB
INFO 10-26 08:04:12 [kv_cache_utils.py:833] GPU KV cache size: 856,336 tokens
INFO 10-26 08:04:12 [kv_cache_utils.py:837] Maximum concurrency for 32,768 tokens per request: 26.13x
INFO 10-26 08:04:13 [core.py:193] init engine (profile, create kv cache, warmup model) took 1.40 seconds
[2025-10-26 08:04:13,651] [    INFO]: [CACHING] Initializing data cache (cache_management.py:105)
[2025-10-26 08:04:13,659] [    INFO]: --- RUNNING MODEL --- (pipeline.py:330)
[2025-10-26 08:04:13,659] [    INFO]: Running SamplingMethod.GENERATIVE requests (pipeline.py:313)
[2025-10-26 08:04:14,650] [    INFO]: Cache: Starting to process 30/30 samples (not found in cache) for tasks lighteval|aime25|0 (824021a82e1c701e, GENERATIVE) (cache_management.py:399)
[2025-10-26 08:04:14,652] [ WARNING]: You cannot select the number of dataset splits for a generative evaluation at the moment. Automatically inferring. (data.py:206)

Splits:   0%|          | 0/1 [00:00<?, ?it/s][2025-10-26 08:04:14,687] [ WARNING]: context_size + max_new_tokens=33622 which is greater than self.max_length=32768. Truncating context to 0 tokens. (vllm_model.py:367)


Adding requests:   0%|          | 0/30 [00:00<?, ?it/s]
Adding requests: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 30/30 [00:00<00:00, 2662.26it/s]


Processed prompts:   0%|          | 0/480 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]

Processed prompts:   3%|β–Ž         | 16/480 [00:31<15:04,  1.95s/it, est. speed input: 100.59 toks/s, output: 600.28 toks/s]

Processed prompts:   7%|β–‹         | 32/480 [02:27<37:51,  5.07s/it, est. speed input: 56.72 toks/s, output: 381.04 toks/s] 

Processed prompts:  10%|β–ˆ         | 48/480 [05:36<58:44,  8.16s/it, est. speed input: 57.95 toks/s, output: 402.16 toks/s]

Processed prompts:  13%|β–ˆβ–Ž        | 64/480 [13:22<1:54:00, 16.44s/it, est. speed input: 41.31 toks/s, output: 398.09 toks/s]

Processed prompts:  17%|β–ˆβ–‹        | 80/480 [19:16<2:03:12, 18.48s/it, est. speed input: 31.03 toks/s, output: 297.18 toks/s]

Processed prompts:  20%|β–ˆβ–ˆ        | 96/480 [19:34<1:20:29, 12.58s/it, est. speed input: 35.71 toks/s, output: 492.45 toks/s]

Processed prompts:  23%|β–ˆβ–ˆβ–Ž       | 112/480 [19:35<52:04,  8.49s/it, est. speed input: 40.49 toks/s, output: 722.20 toks/s] 

Processed prompts:  27%|β–ˆβ–ˆβ–‹       | 128/480 [19:35<33:59,  5.80s/it, est. speed input: 45.17 toks/s, output: 913.72 toks/s]

Processed prompts:  30%|β–ˆβ–ˆβ–ˆ       | 144/480 [19:38<22:34,  4.03s/it, est. speed input: 48.94 toks/s, output: 1125.63 toks/s]

Processed prompts:  30%|β–ˆβ–ˆβ–ˆ       | 144/480 [19:50<22:34,  4.03s/it, est. speed input: 48.94 toks/s, output: 1125.63 toks/s]

Processed prompts:  33%|β–ˆβ–ˆβ–ˆβ–Ž      | 160/480 [19:59<17:04,  3.20s/it, est. speed input: 50.80 toks/s, output: 1227.06 toks/s]

Processed prompts:  37%|β–ˆβ–ˆβ–ˆβ–‹      | 176/480 [20:05<11:49,  2.33s/it, est. speed input: 53.00 toks/s, output: 1238.89 toks/s]

Processed prompts:  40%|β–ˆβ–ˆβ–ˆβ–ˆ      | 192/480 [20:06<07:50,  1.63s/it, est. speed input: 55.39 toks/s, output: 1290.57 toks/s]

Processed prompts:  40%|β–ˆβ–ˆβ–ˆβ–ˆ      | 192/480 [20:20<07:50,  1.63s/it, est. speed input: 55.39 toks/s, output: 1290.57 toks/s]

Processed prompts:  43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž     | 208/480 [20:56<09:27,  2.09s/it, est. speed input: 55.62 toks/s, output: 1443.63 toks/s]

Processed prompts:  47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹     | 224/480 [21:27<08:44,  2.05s/it, est. speed input: 56.47 toks/s, output: 1568.26 toks/s]

Processed prompts:  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ     | 240/480 [21:32<06:03,  1.52s/it, est. speed input: 57.75 toks/s, output: 1577.05 toks/s]

Processed prompts:  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ     | 240/480 [21:44<06:03,  1.52s/it, est. speed input: 57.75 toks/s, output: 1577.05 toks/s]

Processed prompts:  53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž    | 256/480 [21:49<05:10,  1.39s/it, est. speed input: 58.42 toks/s, output: 1568.61 toks/s]

Processed prompts:  57%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹    | 272/480 [21:53<03:35,  1.04s/it, est. speed input: 60.40 toks/s, output: 1658.71 toks/s]

Processed prompts:  57%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹    | 272/480 [22:04<03:35,  1.04s/it, est. speed input: 60.40 toks/s, output: 1658.71 toks/s]

Processed prompts:  60%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ    | 288/480 [22:10<03:21,  1.05s/it, est. speed input: 63.05 toks/s, output: 1727.32 toks/s]

Processed prompts:  63%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž   | 304/480 [22:24<02:55,  1.00it/s, est. speed input: 64.07 toks/s, output: 1735.45 toks/s]

Processed prompts:  67%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹   | 320/480 [23:26<04:59,  1.87s/it, est. speed input: 63.77 toks/s, output: 1844.10 toks/s]

Processed prompts:  70%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ   | 336/480 [25:13<07:57,  3.32s/it, est. speed input: 61.00 toks/s, output: 1793.67 toks/s]

Processed prompts:  73%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž  | 352/480 [25:26<05:26,  2.55s/it, est. speed input: 62.88 toks/s, output: 1887.97 toks/s]

Processed prompts:  77%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹  | 368/480 [25:42<03:54,  2.09s/it, est. speed input: 64.04 toks/s, output: 1935.60 toks/s]

Processed prompts:  80%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ  | 384/480 [25:51<02:36,  1.63s/it, est. speed input: 65.85 toks/s, output: 2000.84 toks/s]

Processed prompts:  83%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 400/480 [26:01<01:46,  1.33s/it, est. speed input: 66.89 toks/s, output: 2071.80 toks/s]

Processed prompts:  87%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 416/480 [26:20<01:22,  1.30s/it, est. speed input: 67.95 toks/s, output: 2220.24 toks/s]

Processed prompts:  90%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 432/480 [26:35<00:56,  1.17s/it, est. speed input: 68.87 toks/s, output: 2249.41 toks/s]

Processed prompts:  93%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž| 448/480 [26:54<00:37,  1.18s/it, est. speed input: 69.58 toks/s, output: 2321.91 toks/s]

Processed prompts:  97%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹| 464/480 [27:55<00:31,  1.98s/it, est. speed input: 68.62 toks/s, output: 2367.29 toks/s]

Processed prompts: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 480/480 [28:41<00:00,  3.59s/it, est. speed input: 68.25 toks/s, output: 2368.48 toks/s]

Splits: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [28:41<00:00, 1721.99s/it]

Creating parquet from Arrow format:   0%|          | 0/1 [00:00<?, ?ba/s]
Creating parquet from Arrow format: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00,  5.67ba/s]
[2025-10-26 08:33:01,888] [    INFO]: Cached 30 samples of lighteval|aime25|0 (824021a82e1c701e, GENERATIVE) at /mnt/public/wucanhui/outputs/Qwen3-4B-math-reasoning/checkpoint-2562/0619260e1176b049/lighteval|aime25|0/824021a82e1c701e/GENERATIVE.parquet. (cache_management.py:345)

Generating train split: 0 examples [00:00, ? examples/s]
Generating train split: 30 examples [00:00, 230.88 examples/s]
[rank0]:[W1026 08:33:06.472513423 ProcessGroupNCCL.cpp:1479] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[2025-10-26 08:33:07,407] [    INFO]: --- POST-PROCESSING MODEL RESPONSES --- (pipeline.py:344)
[2025-10-26 08:33:07,416] [    INFO]: --- COMPUTING METRICS --- (pipeline.py:371)
[2025-10-26 08:33:07,417] [ WARNING]: n undefined in the pass@k. We assume it's the same as the sample's number of predictions. (metrics_sample.py:1302)
[2025-10-26 08:33:09,279] [    INFO]: --- DISPLAYING RESULTS --- (pipeline.py:432)
[2025-10-26 08:33:09,291] [    INFO]: --- SAVING AND PUSHING RESULTS --- (pipeline.py:422)
[2025-10-26 08:33:09,292] [    INFO]: Saving experiment tracker (evaluation_tracker.py:246)
[2025-10-26 08:33:11,624] [    INFO]: Saving results to /mnt/public/wucanhui/lighteval/results/results/mnt/public/wucanhui/outputs/Qwen3-4B-math-reasoning/checkpoint-2562/results_2025-10-26T08-33-09.292915.json (evaluation_tracker.py:310)
|       Task       |Version|   Metric    |Value |   |Stderr|
|------------------|-------|-------------|-----:|---|-----:|
|all               |       |pass@k_with_k|0.5333|Β±  |0.0926|
|                  |       |avg@k_with_k |0.2750|Β±  |0.0672|
|lighteval:aime25:0|       |pass@k_with_k|0.5333|Β±  |0.0926|
|                  |       |avg@k_with_k |0.2750|Β±  |0.0672|