Hamerlate committed on
Commit
c3680e4
·
verified ·
1 Parent(s): d957f76

Upload folder using huggingface_hub

traverse_bs_util_sim_prefill.log ADDED
@@ -0,0 +1,386 @@
+ WARNING: CPU IP/backtrace sampling not supported, disabling.
+ Try the 'nsys status --environment' command to learn more.
+
+ WARNING: CPU context switch tracing not supported, disabling.
+ Try the 'nsys status --environment' command to learn more.
+
+ INFO 08-13 20:12:19 [__init__.py:235] Automatically detected platform cuda.
+ CUDA_VISIBLE_DEVICES = 3
+ --- vLLM V1 benchmark (with NVTX markers) ---
+ Model: Qwen/Qwen2-1.5B
+ Batch sizes: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
+ Scenarios: ['prefill640_decode1']
+ ------------------------------------------------------------
+ Loading tokenizer/model...
+ INFO 08-13 20:12:29 [config.py:1604] Using max model len 4096
+ INFO 08-13 20:12:30 [config.py:2434] Chunked prefill is enabled with max_num_batched_tokens=8192.
+ INFO 08-13 20:12:35 [__init__.py:235] Automatically detected platform cuda.
+ INFO 08-13 20:12:36 [core.py:572] Waiting for init message from front-end.
+ INFO 08-13 20:12:37 [core.py:71] Initializing a V1 LLM engine (v0.10.0) with config: model='Qwen/Qwen2-1.5B', speculative_config=None, tokenizer='Qwen/Qwen2-1.5B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen/Qwen2-1.5B, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null}
+ INFO 08-13 20:12:38 [parallel_state.py:1102] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
+ WARNING 08-13 20:12:38 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ INFO 08-13 20:12:38 [gpu_model_runner.py:1843] Starting to load model Qwen/Qwen2-1.5B...
+ INFO 08-13 20:12:39 [gpu_model_runner.py:1875] Loading model from scratch...
+ INFO 08-13 20:12:39 [cuda.py:290] Using Flash Attention backend on V1 engine.
+ INFO 08-13 20:12:39 [weight_utils.py:296] Using model weights format ['*.safetensors']
+ INFO 08-13 20:12:40 [weight_utils.py:349] No model.safetensors.index.json found in remote.
+
+
+
+
+ INFO 08-13 20:12:41 [default_loader.py:262] Loading weights took 0.71 seconds
+ INFO 08-13 20:12:41 [gpu_model_runner.py:1892] Model loading took 2.9105 GiB and 2.167169 seconds
+ INFO 08-13 20:12:48 [backends.py:530] Using cache directory: /home/cy/.cache/vllm/torch_compile_cache/40b61c71e9/rank_0_0/backbone for vLLM's torch.compile
+ INFO 08-13 20:12:48 [backends.py:541] Dynamo bytecode transform time: 6.30 s
+ INFO 08-13 20:12:53 [backends.py:161] Directly load the compiled graph(s) for dynamic shape from the cache, took 4.762 s
+ INFO 08-13 20:12:54 [monitor.py:34] torch.compile takes 6.30 s in total
+ INFO 08-13 20:12:55 [gpu_worker.py:255] Available KV cache memory: 12.81 GiB
+ INFO 08-13 20:12:55 [kv_cache_utils.py:833] GPU KV cache size: 479,536 tokens
+ INFO 08-13 20:12:55 [kv_cache_utils.py:837] Maximum concurrency for 4,096 tokens per request: 117.07x
+
+ INFO 08-13 20:12:57 [gpu_model_runner.py:2485] Graph capturing finished in 2 secs, took 0.49 GiB
+ INFO 08-13 20:12:57 [core.py:193] init engine (profile, create kv cache, warmup model) took 15.90 seconds
+ Model loading complete.
+
+ ===== Scenario: prefill640_decode1 | prefill=640, decode=1 =====
+
+ --- Batch size bs=1 ---
+ Warming up...
+
+
+
+
+ Execution time: 0.0157 s
+ Actual mean input tokens: 640.00 (target 640)
+ Total generated tokens: 1
+ Throughput (generated tokens/s): 63.80
+ TTFT (V1 metrics): 0.0145 s
+ Decode throughput (V1 metrics): nan tok/s
+
+ --- Batch size bs=2 ---
+
+
+ Execution time: 0.0228 s
+ Actual mean input tokens: 640.00 (target 640)
+ Total generated tokens: 2
+ Throughput (generated tokens/s): 87.74
+ TTFT (V1 metrics): 0.0166 s
+ Decode throughput (V1 metrics): nan tok/s
+
+ --- Batch size bs=4 ---
+
+
+ Execution time: 0.0285 s
+ Actual mean input tokens: 640.00 (target 640)
+ Total generated tokens: 4
+ Throughput (generated tokens/s): 140.41
+ TTFT (V1 metrics): 0.0219 s
+ Decode throughput (V1 metrics): nan tok/s
+
+ --- Batch size bs=8 ---
+
+
+ Execution time: 0.0351 s
+ Actual mean input tokens: 640.00 (target 640)
+ Total generated tokens: 8
+ Throughput (generated tokens/s): 227.96
+ TTFT (V1 metrics): 0.0199 s
+ Decode throughput (V1 metrics): nan tok/s
+
+ --- Batch size bs=16 ---
+
+
+ Execution time: 0.0422 s
+ Actual mean input tokens: 640.00 (target 640)
+ Total generated tokens: 16
+ Throughput (generated tokens/s): 379.60
+ TTFT (V1 metrics): 0.0205 s
+ Decode throughput (V1 metrics): nan tok/s
+
+ --- Batch size bs=32 ---
+
+
+ Execution time: 0.0695 s
+ Actual mean input tokens: 640.00 (target 640)
+ Total generated tokens: 32
+ Throughput (generated tokens/s): 460.19
+ TTFT (V1 metrics): 0.0328 s
+ Decode throughput (V1 metrics): nan tok/s
+
+ --- Batch size bs=64 ---
+
+
+ Execution time: 0.1352 s
+ Actual mean input tokens: 640.00 (target 640)
+ Total generated tokens: 64
+ Throughput (generated tokens/s): 473.22
+ TTFT (V1 metrics): 0.0642 s
+ Decode throughput (V1 metrics): nan tok/s
+
+ --- Batch size bs=128 ---
+
+
+ Execution time: 0.2528 s
+ Actual mean input tokens: 640.00 (target 640)
+ Total generated tokens: 128
+ Throughput (generated tokens/s): 506.39
+ TTFT (V1 metrics): 0.1242 s
+ Decode throughput (V1 metrics): nan tok/s
+
+ --- Batch size bs=256 ---
+
+
+ Execution time: 0.4944 s
+ Actual mean input tokens: 640.00 (target 640)
+ Total generated tokens: 256
+ Throughput (generated tokens/s): 517.82
+ TTFT (V1 metrics): 0.2468 s
+ Decode throughput (V1 metrics): nan tok/s
+
+ --- Batch size bs=512 ---
+
+
+ Execution time: 1.0340 s
+ Actual mean input tokens: 640.00 (target 640)
+ Total generated tokens: 512
+ Throughput (generated tokens/s): 495.15
+ TTFT (V1 metrics): 0.3919 s
+ Decode throughput (V1 metrics): nan tok/s
+
+ --- Batch size bs=1024 ---
+
+
+ [rank0]:[W813 20:13:03.090062170 ProcessGroupNCCL.cpp:1479] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ Execution time: 1.4186 s
+ Actual mean input tokens: 640.00 (target 640)
+ Total generated tokens: 1024
+ Throughput (generated tokens/s): 721.83
+ TTFT (V1 metrics): 0.7074 s
+ Decode throughput (V1 metrics): nan tok/s
+
+ Done. Tip: in Nsight Systems, the NVTX ranges make it easy to locate the calls for each scenario/batch size.
+ GPU 3: General Metrics for NVIDIA AD10x (any frequency)
+ Generating '/tmp/nsys-report-1cac.qdstrm'
+
+
+ [3/8] Executing 'nvtx_sum' stats report
+
+ Time (%) Total Time (ns) Instances Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Style Range
+ -------- --------------- --------- ---------------- ---------------- -------------- -------------- ----------- ------- --------------------------------------
+ 91.1 36,315,249,574 1 36,315,249,574.0 36,315,249,574.0 36,315,249,574 36,315,249,574 0.0 PushPop :LLM_init
+ 3.6 1,418,534,991 1 1,418,534,991.0 1,418,534,991.0 1,418,534,991 1,418,534,991 0.0 PushPop :generate [prefill640_decode1] bs=1024
+ 2.6 1,033,939,782 1 1,033,939,782.0 1,033,939,782.0 1,033,939,782 1,033,939,782 0.0 PushPop :generate [prefill640_decode1] bs=512
+ 1.2 494,241,378 1 494,241,378.0 494,241,378.0 494,241,378 494,241,378 0.0 PushPop :generate [prefill640_decode1] bs=256
+ 0.6 252,646,178 1 252,646,178.0 252,646,178.0 252,646,178 252,646,178 0.0 PushPop :generate [prefill640_decode1] bs=128
+ 0.3 135,127,951 1 135,127,951.0 135,127,951.0 135,127,951 135,127,951 0.0 PushPop :generate [prefill640_decode1] bs=64
+ 0.2 69,450,854 1 69,450,854.0 69,450,854.0 69,450,854 69,450,854 0.0 PushPop :generate [prefill640_decode1] bs=32
+ 0.1 42,089,250 1 42,089,250.0 42,089,250.0 42,089,250 42,089,250 0.0 PushPop :generate [prefill640_decode1] bs=16
+ 0.1 35,031,755 1 35,031,755.0 35,031,755.0 35,031,755 35,031,755 0.0 PushPop :generate [prefill640_decode1] bs=8
+ 0.1 28,423,503 1 28,423,503.0 28,423,503.0 28,423,503 28,423,503 0.0 PushPop :generate [prefill640_decode1] bs=4
+ 0.1 22,718,985 1 22,718,985.0 22,718,985.0 22,718,985 22,718,985 0.0 PushPop :generate [prefill640_decode1] bs=2
+ 0.0 15,558,545 1 15,558,545.0 15,558,545.0 15,558,545 15,558,545 0.0 PushPop :generate [prefill640_decode1] bs=1
+ 0.0 92,480 2 46,240.0 46,240.0 41,971 50,509 6,037.3 PushPop CCCL:cub::DeviceSegmentedRadixSort
+
+ [4/8] Executing 'osrt_sum' stats report
+
+ Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
+ -------- --------------- --------- ---------------- ------------ --------- -------------- ---------------- ----------------------
+ 31.8 335,061,049,950 15,462 21,669,968.3 31,508.0 1,027 25,080,207,331 453,913,796.5 pthread_cond_timedwait
+ 26.2 276,007,486,268 19,416 14,215,465.9 10,062,861.0 1,005 27,603,601,776 347,038,894.4 epoll_wait
+ 23.8 250,490,546,334 23 10,890,893,318.9 7,400,821.0 21,569 25,081,558,163 12,694,153,539.1 pthread_cond_wait
+ 7.8 82,525,279,637 11,230 7,348,644.7 1,389.0 1,000 10,009,801,326 154,269,901.7 poll
+ 6.3 66,301,652,297 28 2,367,916,153.5 116,250.0 9,100 10,000,153,340 3,927,350,696.2 sem_timedwait
+ 3.5 37,381,463,529 31,270 1,195,441.8 2,579.0 1,000 23,088,226,482 140,605,330.4 read
+ 0.5 4,755,984,407 320 14,862,451.3 8,706,984.0 41,870 540,473,836 47,731,413.7 sem_wait
+ 0.0 478,453,393 193,303 2,475.1 1,408.0 1,000 91,223,270 207,494.9 munmap
+ 0.0 296,385,099 8,486 34,926.4 10,283.5 1,000 29,216,321 398,282.7 ioctl
+ 0.0 228,905,507 369 620,340.1 2,490.0 1,043 19,768,552 3,236,619.5 fopen
+ 0.0 121,659,919 24 5,069,163.3 5,067,371.0 5,054,626 5,082,287 7,538.1 nanosleep
+ 0.0 102,379,027 30,621 3,343.4 2,440.0 1,001 16,541,117 94,528.8 open64
+ 0.0 77,771,458 96 810,119.4 3,456.5 1,028 21,709,807 3,907,740.9 open
+ 0.0 77,071,843 56 1,376,282.9 4,193.5 1,028 70,804,314 9,450,971.2 waitpid
+ 0.0 68,778,895 37 1,858,889.1 519,525.0 4,726 10,412,403 3,419,046.0 pthread_join
+ 0.0 63,993,249 10 6,399,324.9 19,014.0 8,828 63,772,333 20,158,832.6 connect
+ 0.0 56,861,984 14,196 4,005.5 2,503.5 1,000 1,630,225 20,174.2 mmap64
+ 0.0 38,142,478 12,937 2,948.3 1,889.0 1,000 258,955 8,506.9 pthread_cond_signal
+ 0.0 27,257,986 2,242 12,157.9 7,923.5 1,008 688,168 26,040.8 pthread_mutex_lock
+ 0.0 25,726,228 2,377 10,823.0 4,749.0 1,035 3,376,076 73,155.9 recv
+ 0.0 22,153,880 2,375 9,327.9 5,963.0 1,158 82,833 7,747.7 send
+ 0.0 17,565,390 4,709 3,730.2 2,384.0 1,010 92,100 5,593.0 write
+ 0.0 16,255,517 147 110,581.7 69,738.0 54,944 6,267,917 511,352.5 sleep
+ 0.0 7,020,386 131 53,590.7 49,634.0 20,946 126,412 20,394.3 pthread_create
+ 0.0 5,924,147 868 6,825.1 3,907.0 1,002 91,544 9,198.1 fgets
+ 0.0 5,826,134 3,268 1,782.8 1,385.5 1,000 21,218 986.4 epoll_ctl
+ 0.0 1,923,455 344 5,591.4 5,442.0 1,935 27,773 1,722.2 fopen64
+ 0.0 1,826,241 66 27,670.3 3,438.5 1,071 474,401 79,094.6 futex
+ 0.0 1,732,688 18 96,260.4 26,899.5 8,748 245,240 92,237.3 pthread_rwlock_wrlock
+ 0.0 1,636,056 1,312 1,247.0 1,038.0 1,000 78,550 2,200.8 fclose
+ 0.0 1,090,174 196 5,562.1 3,604.0 1,133 103,634 10,241.7 mmap
+ 0.0 883,856 12 73,654.7 39,475.5 13,506 187,069 62,084.1 pthread_rwlock_rdlock
+ 0.0 670,975 1 670,975.0 670,975.0 670,975 670,975 0.0 fork
+ 0.0 328,651 65 5,056.2 4,415.0 1,970 16,278 2,760.0 pipe2
+ 0.0 234,867 41 5,728.5 4,534.0 1,841 16,082 3,571.0 socket
+ 0.0 151,976 22 6,908.0 2,651.5 1,042 44,144 11,362.2 bind
+ 0.0 150,615 34 4,429.9 3,884.0 2,207 18,131 2,997.2 pthread_cond_broadcast
+ 0.0 70,015 5 14,003.0 13,251.0 8,981 18,627 4,207.5 accept4
+ 0.0 67,093 7 9,584.7 10,547.0 3,522 18,926 5,410.0 fread
+ 0.0 53,245 36 1,479.0 1,196.5 1,004 5,228 814.6 fcntl
+ 0.0 38,075 15 2,538.3 1,920.0 1,258 6,023 1,250.5 stat
+ 0.0 32,372 18 1,798.4 1,452.5 1,020 3,364 691.4 dup2
+ 0.0 29,899 17 1,758.8 1,732.0 1,108 2,551 344.6 sigaction
+ 0.0 24,531 8 3,066.4 2,926.5 1,071 5,140 1,984.2 fflush
+ 0.0 21,953 5 4,390.6 3,382.0 1,359 8,558 2,918.8 fwrite
+ 0.0 21,120 4 5,280.0 5,377.5 4,532 5,833 656.3 lstat
+ 0.0 13,601 3 4,533.7 4,541.0 4,490 4,570 40.5 fputs_unlocked
+ 0.0 13,455 4 3,363.8 3,777.5 1,414 4,486 1,439.2 flock
+ 0.0 13,202 9 1,466.9 1,336.0 1,004 2,245 447.3 listen
+ 0.0 12,638 8 1,579.8 1,733.0 1,181 1,860 283.5 pread
+ 0.0 12,388 5 2,477.6 2,624.0 1,900 3,051 527.8 mprotect
+ 0.0 8,401 4 2,100.3 1,824.5 1,477 3,275 813.7 flockfile
+ 0.0 6,483 1 6,483.0 6,483.0 6,483 6,483 0.0 kill
+ 0.0 5,192 2 2,596.0 2,596.0 1,381 3,811 1,718.3 openat64
+ 0.0 4,401 3 1,467.0 1,539.0 1,208 1,654 231.6 fstat
+ 0.0 3,753 1 3,753.0 3,753.0 3,753 3,753 0.0 fputs
+ 0.0 1,248 1 1,248.0 1,248.0 1,248 1,248 0.0 pthread_mutex_trylock
+
+ [5/8] Executing 'cuda_api_sum' stats report
+
+ Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
+ -------- --------------- --------- ----------- ----------- -------- ----------- ----------- ------------------------------------------
+ 30.0 688,049,735 2,906 236,768.7 8,227.0 3,284 97,196,278 1,983,071.9 cudaMemcpyAsync
+ 23.8 546,797,585 37,585 14,548.3 7,631.0 728 58,754,473 436,031.2 cudaLaunchKernel
+ 9.5 218,365,129 1,943 112,385.6 72,360.0 41,222 1,905,369 188,235.0 cudaGraphInstantiateWithFlags_v11040
+ 8.8 202,327,312 1,168 173,225.4 3,764.0 1,976 143,725,891 4,227,605.7 cudaStreamSynchronize
+ 8.4 193,015,994 2,136 90,363.3 31,226.5 3,978 125,181,717 2,707,930.0 cudaDeviceSynchronize
+ 6.1 140,823,050 7,801 18,051.9 16,707.0 7,793 254,831 6,795.6 cudaGraphLaunch_v10000
+ 4.4 101,659,976 32,732 3,105.8 3,705.0 626 160,065 2,282.5 cuLaunchKernel
+ 2.4 54,229,097 222 244,275.2 116,418.0 61,902 2,337,113 356,888.6 cudaFree
+ 1.8 40,398,624 348 116,088.0 108,948.5 4,252 1,002,111 54,455.3 cudaMalloc
+ 1.1 25,567,332 10 2,556,733.2 2,555,021.5 59,735 4,537,949 1,470,380.5 cuLibraryLoadData
+ 0.7 15,924,550 5,590 2,848.8 1,741.5 189 44,915 2,406.6 cudaMemsetAsync
+ 0.6 13,631,570 169 80,660.2 77,741.0 27,403 384,409 48,768.2 cuModuleLoadData
+ 0.5 10,472,511 10,102 1,036.7 994.0 293 8,915 282.5 cudaStreamIsCapturing_v10000
+ 0.4 10,278,562 9,579 1,073.0 409.0 260 4,198,916 44,483.9 cuKernelGetFunction
+ 0.4 9,003,125 18,895 476.5 456.0 312 7,386 124.7 cudaStreamGetCaptureInfo_v2_v11030
+ 0.4 8,175,802 1,943 4,207.8 4,132.0 3,120 32,465 920.7 cudaStreamBeginCapture_v10000
+ 0.3 7,331,918 1,943 3,773.5 3,720.0 2,347 8,633 532.6 cudaGraphDestroy_v10000
+ 0.1 2,783,662 128 21,747.4 1,996.5 1,241 908,222 111,366.7 cudaStreamCreateWithPriority
+ 0.1 2,509,452 1,943 1,291.5 1,269.0 997 6,815 171.9 cudaStreamEndCapture_v10000
+ 0.1 1,590,997 1,943 818.8 748.0 637 2,707 253.3 cudaGraphGetNodes_v10000
+ 0.1 1,320,258 14 94,304.1 4,836.0 3,895 1,252,140 333,251.7 cudaHostAlloc
+ 0.0 250,092 8 31,261.5 26,792.0 13,197 75,448 20,443.5 cudaMemGetInfo
+ 0.0 137,127 810 169.3 137.0 76 1,886 115.3 cuGetProcAddress_v2
+ 0.0 16,894 19 889.2 387.0 317 3,628 963.6 cudaEventCreateWithFlags
+ 0.0 14,298 16 893.6 714.0 417 1,921 492.4 cuLibraryGetKernel
+ 0.0 8,063 14 575.9 546.0 353 1,051 174.0 cudaThreadExchangeStreamCaptureMode_v10010
+ 0.0 5,360 1 5,360.0 5,360.0 5,360 5,360 0.0 cudaEventRecord
+ 0.0 4,649 3 1,549.7 1,354.0 1,253 2,042 429.4 cuInit
+ 0.0 4,525 4 1,131.3 1,033.0 150 2,309 1,115.1 cuModuleGetLoadingMode
+ 0.0 4,284 1 4,284.0 4,284.0 4,284 4,284 0.0 cudaStreamWaitEvent
+ 0.0 1,471 1 1,471.0 1,471.0 1,471 1,471 0.0 cudaEventDestroy
+ 0.0 1,202 2 601.0 601.0 362 840 338.0 cudaGetDriverEntryPoint_v11030
+
+ [6/8] Executing 'cuda_gpu_kern_sum' stats report
+
+ Time (%) Total Time (ns) Instances Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
+ -------- --------------- --------- ----------- ----------- --------- --------- ----------- ----------------------------------------------------------------------------------------------------
+ 17.5 191,550,599 3,839 49,896.0 22,047.0 7,648 543,459 41,385.4 void cutlass::Kernel2<cutlass_80_tensorop_bf16_s16816gemm_relu_bf16_64x64_32x6_tn_align8>(T1::Param…
+ 14.4 157,558,295 924 170,517.6 162,449.0 40,128 1,415,014 225,561.8 ampere_bf16_s1688gemm_bf16_128x64_sliced1x2_ldg8_f2f_tn
+ 13.1 142,668,106 699 204,103.2 60,672.0 10,624 522,435 230,912.6 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_16x16_128x2_tn_align8>(T1::Par…
+ 12.3 134,474,867 7,560 17,787.7 22,592.0 6,816 25,952 6,457.5 void flash::flash_fwd_splitkv_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (int)4, (…
+ 7.2 78,464,100 28 2,802,289.3 2,802,252.5 2,789,388 2,809,485 5,079.5 ampere_bf16_s16816gemm_bf16_128x64_ldg8_f2f_tn
+ 6.3 68,341,424 3,192 21,410.2 21,376.0 21,121 22,496 109.2 void flash::flash_fwd_splitkv_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (int)4, (…
+ 3.6 39,126,959 8 4,890,869.9 4,845,366.0 4,793,013 5,095,799 116,592.1 void at_cuda_detail::cub::DeviceSegmentedRadixSortKernel<at_cuda_detail::cub::DeviceRadixSortPolicy…
+ 3.0 32,440,875 140 231,720.5 224,001.0 115,904 378,306 85,161.7 ampere_bf16_s1688gemm_bf16_64x128_sliced1x2_ldg8_f2f_tn
+ 2.1 22,649,605 784 28,889.8 12,512.0 11,552 62,369 20,645.7 ampere_bf16_s16816gemm_bf16_64x64_ldg8_f2f_stages_64x5_tn
+ 2.0 21,990,678 1,960 11,219.7 4,448.0 1,055 461,538 54,153.2 triton_poi_fused_mul_silu_1
+ 1.9 20,366,013 4 5,091,503.3 5,084,263.5 4,905,142 5,292,344 198,808.4 void at_cuda_detail::cub::DeviceSegmentedRadixSortKernel<at_cuda_detail::cub::DeviceRadixSortPolicy…
+ 1.3 14,291,843 28 510,423.0 511,890.5 469,219 512,834 8,092.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::FillFunctor<signed char>, std::a…
+ 1.2 12,987,958 142 91,464.5 51,584.5 50,432 3,032,238 335,523.7 ampere_bf16_s1688gemm_bf16_128x128_ldg8_f2f_stages_32x1_tn
+ 1.0 11,329,312 7,560 1,498.6 1,472.0 1,023 2,112 139.1 void vllm::reshape_and_cache_flash_kernel<__nv_bfloat16, __nv_bfloat16, (vllm::Fp8KVCacheDataType)0…
+ 0.9 10,243,695 1,960 5,226.4 3,488.0 1,536 112,129 12,589.0 triton_red_fused__to_copy_add_mean_mul_pow_rsqrt_2
+ 0.9 9,732,588 4 2,433,147.0 2,433,035.0 2,388,459 2,478,059 48,024.9 void at::native::<unnamed>::cunn_SoftMaxForward<(int)4, float, float, float, at::native::<unnamed>:…
+ 0.9 9,428,040 99 95,232.7 8,896.0 7,008 498,530 168,715.0 std::enable_if<!T7, void>::type internal::gemvx::kernel<int, int, __nv_bfloat16, __nv_bfloat16, __n…
+ 0.8 9,145,605 28 326,628.8 326,593.0 324,641 330,145 1,002.2 ampere_bf16_s1688gemm_bf16_128x128_ldg8_relu_f2f_stages_32x1_tn
+ 0.8 8,423,244 224 37,603.8 37,568.0 36,384 39,808 495.2 void cutlass::Kernel2<cutlass_80_tensorop_bf16_s16816gemm_relu_bf16_64x128_32x6_tn_align8>(T1::Para…
+ 0.7 7,775,362 2 3,887,681.0 3,887,681.0 3,703,472 4,071,890 260,510.9 void at::native::_scatter_gather_elementwise_kernel<(int)128, (int)8, void at::native::_cuda_scatte…
+ 0.7 7,229,615 336 21,516.7 21,440.0 21,056 22,592 348.9 ampere_bf16_s16816gemm_bf16_128x64_ldg8_relu_f2f_stages_64x3_tn
+ 0.6 6,376,996 1,960 3,253.6 2,017.0 1,535 79,521 9,041.8 triton_red_fused__to_copy_add_mean_mul_pow_rsqrt_0
+ 0.6 6,119,897 476 12,856.9 12,800.0 11,616 14,433 722.8 ampere_bf16_s16816gemm_bf16_64x64_ldg8_relu_f2f_stages_64x5_tn
+ 0.5 5,903,642 4 1,475,910.5 1,475,478.5 1,472,550 1,480,135 3,165.9 void at::native::vectorized_elementwise_kernel<(int)4, at::native::<unnamed>::masked_fill_kernel(at…
+ 0.5 4,930,279 3,192 1,544.6 1,536.0 1,407 1,985 66.0 void vllm::merge_attn_states_kernel<__nv_bfloat16, (unsigned int)128>(T1 *, float *, const T1 *, co…
+ 0.4 4,568,542 274 16,673.5 6,720.0 5,376 716,035 84,783.1 void at::native::reduce_kernel<(int)512, (int)1, at::native::ReduceOp<float, at::native::ArgMaxOps<…
+ 0.4 4,386,095 140 31,329.3 27,200.0 26,048 49,792 8,289.1 ampere_bf16_s1688gemm_bf16_128x64_sliced1x2_ldg8_relu_f2f_tn
+ 0.4 4,084,074 1,890 2,160.9 1,824.0 1,344 22,432 2,262.9 triton_poi_fused_cat_3
+ 0.4 4,000,436 2 2,000,218.0 2,000,218.0 1,999,786 2,000,650 610.9 void at::native::vectorized_elementwise_kernel<(int)4, at::native::BinaryFunctor<float, float, floa…
+ 0.3 3,592,081 56 64,144.3 64,128.0 62,784 65,216 457.4 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_32x32_64x1_tn_align8>(T1::Para…
+ 0.3 3,433,776 4 858,444.0 858,692.0 855,844 860,548 2,369.3 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.3 3,426,307 4,161 823.4 800.0 767 1,632 92.0 void at::native::vectorized_elementwise_kernel<(int)2, at::native::FillFunctor<long>, std::array<ch…
+ 0.3 3,364,329 272 12,368.9 5,023.0 2,112 1,005,604 85,629.0 void at::native::unrolled_elementwise_kernel<at::native::direct_copy_kernel_cuda(at::TensorIterator…
+ 0.3 3,187,886 2 1,593,943.0 1,593,943.0 1,564,999 1,622,887 40,933.0 void at::native::tensor_kernel_scan_innermost_dim<float, std::plus<float>>(T1 *, const T1 *, unsign…
+ 0.3 2,857,125 1,512 1,889.6 1,728.0 1,311 2,752 447.3 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo…
+ 0.2 2,694,784 1,890 1,425.8 1,151.5 863 16,864 1,794.7 triton_poi_fused_view_5
+ 0.2 2,645,324 1,890 1,399.6 1,344.0 1,215 6,367 543.7 triton_poi_fused_cat_4
+ 0.2 2,582,316 2 1,291,158.0 1,291,158.0 1,290,918 1,291,398 339.4 at::native::<unnamed>::fill_reverse_indices_kernel(long *, int, at::cuda::detail::IntDivider<unsign…
+ 0.2 2,580,331 2 1,290,165.5 1,290,165.5 1,288,614 1,291,717 2,194.2 void at::native::elementwise_kernel<(int)128, (int)2, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.2 2,424,966 112 21,651.5 21,616.0 9,503 34,592 12,040.5 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_32x32_128x2_tn_align8>(T1::Par…
+ 0.1 1,372,550 2 686,275.0 686,275.0 675,171 697,379 15,703.4 void at::native::<unnamed>::distribution_elementwise_grid_stride_kernel<float, (int)4, void at::nat…
+ 0.1 1,325,372 406 3,264.5 3,040.0 2,848 7,424 489.7 void at::native::index_elementwise_kernel<(int)128, (int)4, void at::native::gpu_index_kernel<void …
+ 0.1 1,145,318 28 40,904.2 40,864.0 40,160 42,624 439.5 ampere_bf16_s1688gemm_bf16_64x64_sliced1x4_ldg8_f2f_tn
+ 0.1 956,516 28 34,161.3 34,768.0 17,696 35,200 3,233.6 std::enable_if<!T7, void>::type internal::gemvx::kernel<int, int, __nv_bfloat16, float, float, floa…
+ 0.1 641,700 28 22,917.9 22,880.0 22,783 23,296 126.2 ampere_bf16_s16816gemm_bf16_128x64_ldg8_f2f_stages_32x6_tn
+ 0.0 524,995 1 524,995.0 524,995.0 524,995 524,995 0.0 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_16x16_128x1_tn_align8>(T1::Par…
+ 0.0 368,104 270 1,363.3 1,408.0 1,183 1,697 154.1 void at::native::unrolled_elementwise_kernel<at::native::direct_copy_kernel_cuda(at::TensorIterator…
+ 0.0 340,132 272 1,250.5 1,248.0 1,056 1,696 57.8 void at::native::unrolled_elementwise_kernel<at::native::direct_copy_kernel_cuda(at::TensorIterator…
+ 0.0 294,849 168 1,755.1 1,760.0 1,535 2,016 115.8 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo…
+ 0.0 262,177 70 3,745.4 3,072.0 1,984 36,032 3,980.2 triton_red_fused__to_copy_add_embedding_mean_mul_pow_rsqrt_0
+ 0.0 245,535 270 909.4 896.0 895 1,024 29.1 void at::native::unrolled_elementwise_kernel<at::native::CUDAFunctorOnSelf_add<int>, std::array<cha…
+ 0.0 238,140 270 882.0 864.0 863 993 32.2 void at::native::unrolled_elementwise_kernel<at::native::FillFunctor<int>, std::array<char *, (unsi…
+ 0.0 190,396 70 2,719.9 2,095.5 1,632 39,775 4,505.8 triton_poi_fused_cat_1
+ 0.0 156,865 1 156,865.0 156,865.0 156,865 156,865 0.0 void at::native::<unnamed>::CatArrayBatchedCopy_aligned16_contig<at::native::<unnamed>::OpaqueType<…
+ 0.0 147,485 165 893.8 896.0 863 993 33.4 void at::native::vectorized_elementwise_kernel<(int)2, at::native::FillFunctor<int>, std::array<cha…
+ 0.0 101,117 70 1,444.5 1,344.0 1,215 8,960 913.9 triton_poi_fused_cat_2
+ 0.0 99,524 70 1,421.8 1,184.0 863 14,304 1,590.7 triton_poi_fused_view_3
+ 0.0 97,406 109 893.6 895.0 863 1,823 94.2 void at::native::vectorized_elementwise_kernel<(int)4, at::native::FillFunctor<int>, std::array<cha…
+ 0.0 80,256 1 80,256.0 80,256.0 80,256 80,256 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::bfloat16_copy_kernel_cuda(at::Te…
+ 0.0 63,841 58 1,100.7 896.0 864 11,328 1,368.6 void at::native::vectorized_elementwise_kernel<(int)4, at::native::FillFunctor<c10::BFloat16>, std:…
+ 0.0 43,169 1 43,169.0 43,169.0 43,169 43,169 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::sin_kernel_cuda(at::TensorIterat…
+ 0.0 36,609 28 1,307.5 1,312.0 1,280 1,344 16.8 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, float, __nv_bfloat16, float, (bool)0, __n…
+ 0.0 26,656 1 26,656.0 26,656.0 26,656 26,656 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::cos_kernel_cuda(at::TensorIterat…
+ 0.0 19,521 1 19,521.0 19,521.0 19,521 19,521 0.0 void at::native::elementwise_kernel<(int)128, (int)2, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.0 11,649 11 1,059.0 896.0 768 1,600 318.6 void at::native::vectorized_elementwise_kernel<(int)4, at::native::FillFunctor<float>, std::array<c…
+ 0.0 10,592 2 5,296.0 5,296.0 4,960 5,632 475.2 void at::native::_scatter_gather_elementwise_kernel<(int)128, (int)8, void at::native::_cuda_scatte…
+ 0.0 8,801 2 4,400.5 4,400.5 4,288 4,513 159.1 void at::native::<unnamed>::distribution_elementwise_grid_stride_kernel<float, (int)4, void at::nat…
+ 0.0 3,616 2 1,808.0 1,808.0 1,600 2,016 294.2 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.0 3,392 2 1,696.0 1,696.0 1,664 1,728 45.3 void at::native::vectorized_elementwise_kernel<(int)2, at::native::CUDAFunctorOnOther_add<long>, st…
+ 0.0 3,200 2 1,600.0 1,600.0 1,536 1,664 90.5 void at::native::vectorized_elementwise_kernel<(int)2, at::native::<unnamed>::where_kernel_impl(at:…
+ 0.0 2,975 2 1,487.5 1,487.5 991 1,984 702.2 void <unnamed>::elementwise_kernel_with_index<int, at::native::arange_cuda_out(const c10::Scalar &,…
+ 0.0 2,943 2 1,471.5 1,471.5 1,376 1,567 135.1 void at::native::vectorized_elementwise_kernel<(int)4, at::native::CUDAFunctorOnOther_add<float>, s…
+ 0.0 2,912 2 1,456.0 1,456.0 1,344 1,568 158.4 void at::native::vectorized_elementwise_kernel<(int)4, void at::native::compare_scalar_kernel<float…
+ 0.0 2,304 1 2,304.0 2,304.0 2,304 2,304 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl<at::native::…
+ 0.0 1,184 1 1,184.0 1,184.0 1,184 1,184 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::reciprocal_kernel_cuda(at::Tenso…
+ 0.0 1,024 1 1,024.0 1,024.0 1,024 1,024 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::AUnaryFunctor<float, float, floa…
+ 0.0 993 1 993.0 993.0 993 993 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::BUnaryFunctor<float, float, floa…
+ 0.0 896 1 896.0 896.0 896 896 0.0 void at::native::vectorized_elementwise_kernel<(int)2, at::native::FillFunctor<double>, std::array<…
+
+ [7/8] Executing 'cuda_gpu_mem_time_sum' stats report
+
+ Time (%) Total Time (ns) Count Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Operation
+ -------- --------------- ----- ----------- ----------- --------- ---------- ----------- ------------------------------
+ 98.7 534,414,500 2,632 203,045.0 352.0 287 96,903,049 2,062,487.3 [CUDA memcpy Host-to-Device]
+ 1.0 5,441,946 4 1,360,486.5 1,360,806.5 1,358,310 1,362,023 1,577.8 [CUDA memcpy Device-to-Device]
+ 0.3 1,549,951 2,678 578.8 417.0 288 2,144 251.5 [CUDA memset]
+ 0.1 318,143 270 1,178.3 1,120.0 864 1,632 102.7 [CUDA memcpy Device-to-Host]
+
+ [8/8] Executing 'cuda_gpu_mem_size_sum' stats report
+
+ Total (MB) Count Avg (MB) Med (MB) Min (MB) Max (MB) StdDev (MB) Operation
+ ---------- ----- -------- -------- -------- -------- ----------- ------------------------------
+ 3,090.501 2,632 1.174 0.001 0.000 466.747 10.300 [CUDA memcpy Host-to-Device]
+ 2,489.319 4 622.330 622.330 622.330 622.330 0.000 [CUDA memcpy Device-to-Device]
+ 2.029 2,678 0.001 0.001 0.000 0.006 0.001 [CUDA memset]
+ 0.008 270 0.000 0.000 0.000 0.000 0.000 [CUDA memcpy Device-to-Host]
+
+ Generated:
+ /data/cy/kv_cache_vs_util/sim_traverse_bs/traverse_bs_util_sim_prefill.nsys-rep
+ /data/cy/kv_cache_vs_util/sim_traverse_bs/traverse_bs_util_sim_prefill.sqlite
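
The per-batch "Throughput (generated tokens/s)" figures in the log are simply the total generated tokens divided by wall-clock time (each request decodes exactly one token in this scenario, so the total equals the batch size). A minimal sketch recomputing them from the logged execution times; the times are copied from the log as printed, so the recomputed values can differ from the logged throughput in the last digits due to rounding:

```python
# Execution times (seconds) per batch size, copied from the log above.
results = {
    1: 0.0157, 2: 0.0228, 4: 0.0285, 8: 0.0351,
    16: 0.0422, 32: 0.0695, 64: 0.1352, 128: 0.2528,
    256: 0.4944, 512: 1.0340, 1024: 1.4186,
}

def gen_throughput(batch_size: int, elapsed_s: float, tokens_per_seq: int = 1) -> float:
    """Generated tokens per second across the whole batch."""
    return batch_size * tokens_per_seq / elapsed_s

for bs, t in results.items():
    print(f"bs={bs:5d}  {gen_throughput(bs, t):8.2f} tok/s")
```

For bs=1024 this gives roughly 721.8 tok/s, in line with the logged 721.83; note the curve saturates around 500 tok/s once the 640-token prefills fill the batch budget.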
traverse_bs_util_sim_prefill.nsys-rep ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf588d28adfe4b9e4d230140ddd563f53a2fce6a502eff4aeab63fd94e10b820
+ size 17796402
traverse_bs_util_sim_prefill.py ADDED
@@ -0,0 +1,267 @@
+ import os
+ import time
+ import statistics
+ from typing import List, Tuple, Dict
+
+ import torch
+ import torch.cuda.nvtx as nvtx
+
+ from vllm import LLM, SamplingParams
+ from transformers import AutoTokenizer
+
+ # ========= Force vLLM V1 =========
+ os.environ.setdefault("VLLM_USE_V1", "1")
+ os.environ.setdefault("VLLM_WORKER_MULTIPROC_METHOD", "spawn")
+
+ # Optional: enable V1 metrics logging
+ os.environ.setdefault("VLLM_LOGGING_LEVEL", "INFO")
+
+ # ========= Try to import the V1 metrics types (compatible across versions) =========
+ try:
+     from vllm.v1.metrics.reader import Counter, Gauge, Histogram, Vector  # type: ignore
+ except Exception:
+     Counter = Gauge = Histogram = Vector = type("X", (), {})  # dummy fallback
+
+ # ========= Configuration =========
+ MODEL_NAME = "Qwen/Qwen2-1.5B"
+ DTYPE = "bfloat16"
+ TP = 1
+ GPU_MEM_UTIL = 0.90
+ TRUST_REMOTE_CODE = True
+
+ # Scenarios: prefill = input tokens, decode = output tokens
+ SCENARIOS = [
+     {"name": "prefill640_decode1", "prompt_tokens": 640, "max_new_tokens": 1},
+     # {"name": "prefill1_decode512", "prompt_tokens": 1, "max_new_tokens": 512},
+     # {"name": "prefill640_decode512", "prompt_tokens": 640, "max_new_tokens": 512},
+ ]
+
+ BATCH_SIZES = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
+
+ SEED = 1234
+ TEMPERATURE = 0.0
+ TOP_P = 1.0
+ WARMUP_PER_BS = 1  # one warmup run per batch-size sweep
+
+ # ========= Build a prompt with an exact token count =========
+ def build_exact_token_prompt(tokenizer, target_len: int) -> str:
+     if target_len <= 1:
+         # Minimal prompt: use one simple token (avoid an empty string yielding 0 tokens)
+         ids = tokenizer("A", add_special_tokens=False)["input_ids"]
+         if len(ids) >= 1:
+             return tokenizer.decode(ids[:1], skip_special_tokens=True, clean_up_tokenization_spaces=False)
+
+     base_text = (
+         "You are a helpful assistant. "
+         "Please analyze the following input and respond succinctly. "
+     )
+     chunk = " ".join(["data"] * 100) + ". "
+     text = base_text + chunk * 200  # sufficiently long text
+
+     # Binary-search a character prefix whose tokenization is exactly target_len tokens
+     lo, hi = 0, len(text)
+     target_ids = None
+     while lo <= hi:
+         mid = (lo + hi) // 2
+         ids = tokenizer(text[:mid], add_special_tokens=False)["input_ids"]
+         if len(ids) == target_len:
+             target_ids = ids
+             break
+         if len(ids) < target_len:
+             lo = mid + 1
+         else:
+             hi = mid - 1
+
+     if target_ids is None:
+         # Fall back to truncating or padding the tokenized text
+         ids = tokenizer(text[:lo], add_special_tokens=False)["input_ids"]
+         if len(ids) > target_len:
+             target_ids = ids[:target_len]
+         else:
+             filler = " data"
+             while len(ids) < target_len:
+                 ids = tokenizer(tokenizer.decode(ids) + filler, add_special_tokens=False)["input_ids"]
+             target_ids = ids[:target_len]
+
+     prompt = tokenizer.decode(target_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
+     # Assert the exact length
+     assert len(tokenizer(prompt, add_special_tokens=False)["input_ids"]) == target_len
+     return prompt
+
+ # ========= V1 metrics extraction helpers =========
+ TTFT_METRIC_NAME = "vllm:time_to_first_token_seconds"
+ TPOT_METRIC_NAME = "vllm:time_per_output_token_seconds"  # per-output-token latency
+
+ def _iter_children_of_vector(vec_obj):
+     for attr in ("children", "metrics", "series", "values", "samples", "items"):
+         if hasattr(vec_obj, attr):
+             val = getattr(vec_obj, attr)
+             if isinstance(val, dict):
+                 for v in val.values():
+                     yield v
+             else:
+                 try:
+                     for v in val:
+                         yield v
+                 except TypeError:
+                     pass
+
+ def _collect_hist_sum_count(metrics, metric_name: str):
+     total_sum = 0.0
+     total_count = 0.0
+     for m in metrics:
+         mname = getattr(m, "name", None)
+         if mname != metric_name:
+             continue
+         # Plain Histogram
+         if isinstance(m, Histogram) or m.__class__.__name__ == "Histogram":
+             total_sum += float(getattr(m, "sum", 0.0))
+             total_count += float(getattr(m, "count", 0.0))
+             continue
+         # Vector[Histogram]
+         if isinstance(m, Vector) or m.__class__.__name__ == "Vector":
+             for child in _iter_children_of_vector(m):
+                 if isinstance(child, Histogram) or child.__class__.__name__ == "Histogram":
+                     total_sum += float(getattr(child, "sum", 0.0))
+                     total_count += float(getattr(child, "count", 0.0))
+     return total_sum, total_count
+
+ def _metrics_snapshot(llm) -> Dict[str, float]:
+     try:
+         mets = llm.get_metrics()  # V1: returns a list of Metric objects (Histogram/Vector/...)
+     except Exception:
+         return {"ttft_sum": 0.0, "ttft_cnt": 0.0, "tpot_sum": 0.0, "tpot_cnt": 0.0}
+     ttft_sum, ttft_cnt = _collect_hist_sum_count(mets, TTFT_METRIC_NAME)
+     tpot_sum, tpot_cnt = _collect_hist_sum_count(mets, TPOT_METRIC_NAME)
+     return {"ttft_sum": ttft_sum, "ttft_cnt": ttft_cnt, "tpot_sum": tpot_sum, "tpot_cnt": tpot_cnt}
+
+ def _metrics_delta(before: dict, after: dict):
+     return {
+         "ttft_sum": after["ttft_sum"] - before["ttft_sum"],
+         "ttft_cnt": after["ttft_cnt"] - before["ttft_cnt"],
+         "tpot_sum": after["tpot_sum"] - before["tpot_sum"],
+         "tpot_cnt": after["tpot_cnt"] - before["tpot_cnt"],
+     }
+
+ # ========= generate wrapper (kept as an NVTX anchor point) =========
+ def decorated_generate(llm: LLM, prompts: List[str], params: SamplingParams):
+     return llm.generate(prompts, params)
+
+ # ========= Stats formatting =========
+ def fmt_stats(x: List[float]) -> Tuple[float, float, float]:
+     xs = [v for v in x if (v == v)]  # filter out NaN
+     if not xs:
+         return (float("nan"), float("nan"), float("nan"))
+     return (statistics.mean(xs), statistics.median(xs), statistics.quantiles(xs, n=10)[-1])  # mean, median, p90
+
+ def main():
+     print("--- vLLM V1 benchmark (with NVTX markers) ---")
+     print(f"Model: {MODEL_NAME}")
+     print(f"Batch sizes: {BATCH_SIZES}")
+     print(f"Scenarios: {[s['name'] for s in SCENARIOS]}")
+     print("-" * 60)
+
+     if not torch.cuda.is_available():
+         print("Error: a CUDA GPU is required.")
+         return
+
+     print("Loading tokenizer/model...")
+     tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True, trust_remote_code=TRUST_REMOTE_CODE)
+
+     # Mark the model-loading phase with NVTX
+     nvtx.range_push("LLM_init")
+     llm = LLM(
+         model=MODEL_NAME,
+         tensor_parallel_size=TP,
+         dtype=DTYPE,
+         trust_remote_code=TRUST_REMOTE_CODE,
+         gpu_memory_utilization=GPU_MEM_UTIL,
+         max_num_seqs=1024,  # large enough to cover this sweep
+         max_model_len=4096,
+         disable_log_stats=False,  # keep V1 metrics collection enabled
+     )
+     nvtx.range_pop()
+     print("Model loaded.")
+
+     for sc in SCENARIOS:
+         name = sc["name"]
+         prompt_tokens = sc["prompt_tokens"]
+         max_new_tokens = sc["max_new_tokens"]
+
+         print(f"\n===== Scenario: {name} | prefill={prompt_tokens}, decode={max_new_tokens} =====")
+
+         # Build a prompt with the exact target length
+         prompt_text = build_exact_token_prompt(tokenizer, prompt_tokens)
+
+         # Sampling parameters (greedy)
+         sampling_params = SamplingParams(
+             max_tokens=max_new_tokens,
+             temperature=TEMPERATURE,
+             top_p=TOP_P,
+             seed=SEED,
+             n=1,
+         )
+
+         # Run each batch size; results are printed for later parsing
+         for bs in BATCH_SIZES:
+             print(f"\n--- Batch size bs={bs} ---")
+
+             prompts = [prompt_text] * bs
+
+             # Warmup once, at the first (bs=1) run of the sweep
+             if bs == 1:
+                 print("Warming up...")
+                 for _ in range(WARMUP_PER_BS):
+                     _ = decorated_generate(llm, [prompts[0]], sampling_params)
+                 torch.cuda.synchronize()
+
+             # Timed run, bracketed by V1 metrics snapshots
+             torch.cuda.synchronize()
+             snap_before = _metrics_snapshot(llm)
+             t0 = time.perf_counter()
+
+             nvtx.range_push(f"generate [{name}] bs={bs}")
+             outputs = decorated_generate(llm, prompts, sampling_params)
+             nvtx.range_pop()  # generate
+
+             torch.cuda.synchronize()
+             t1 = time.perf_counter()
+             snap_after = _metrics_snapshot(llm)
+
+             duration = t1 - t0
+
+             # Token counts and throughput
+             total_output_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
+             avg_prompt_tokens = sum(len(o.prompt_token_ids) for o in outputs) / bs
+             throughput = total_output_tokens / duration if duration > 0 else float("inf")
+
+             # Derive TTFT / decode throughput from the V1 metrics delta
+             delta = _metrics_delta(snap_before, snap_after)
+             if delta["ttft_cnt"] > 0:
+                 ttft = delta["ttft_sum"] / delta["ttft_cnt"]
+             else:
+                 ttft = float("nan")
+
+             if delta["tpot_cnt"] > 0:
+                 avg_tpot = delta["tpot_sum"] / delta["tpot_cnt"]  # seconds/token
+                 decode_tps = 1.0 / avg_tpot
+             else:
+                 decode_tps = float("nan")
+
+             print(f"Wall time: {duration:.4f} s")
+             print(f"Actual mean input tokens: {avg_prompt_tokens:.2f} (target {prompt_tokens})")
+             print(f"Total generated tokens: {total_output_tokens}")
+             print(f"Throughput (generated tokens/s): {throughput:.2f}")
+             print(f"TTFT (V1 metrics): {ttft:.4f} s")
+             print(f"Decode throughput (V1 metrics): {decode_tps:.2f} tok/s")
+
+     print("\nDone. Tip: in Nsight Systems, use the NVTX ranges to quickly locate each scenario/batch-size call.")
+
+ if __name__ == "__main__":
+     print(f"CUDA_VISIBLE_DEVICES = {os.getenv('CUDA_VISIBLE_DEVICES')}")
+     main()
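
The `build_exact_token_prompt` helper in the uploaded script binary-searches over character-prefix lengths until the tokenized prefix hits the target token count. The idea can be sketched in isolation with a toy whitespace tokenizer (the `tokenize` function below is a stand-in, not the real Qwen tokenizer): token count is monotone non-decreasing in prefix length, so an exact-match binary search finds a prefix of the desired length whenever one exists.

```python
def tokenize(s: str):
    # Toy stand-in tokenizer: whitespace split (assumption for this sketch)
    return s.split()

def exact_token_prefix(text: str, target_len: int) -> str:
    # Binary-search a character prefix whose token count equals target_len,
    # mirroring the search loop in build_exact_token_prompt.
    lo, hi = 0, len(text)
    while lo <= hi:
        mid = (lo + hi) // 2
        n = len(tokenize(text[:mid]))
        if n == target_len:
            return text[:mid]
        if n < target_len:
            lo = mid + 1
        else:
            hi = mid - 1
    raise ValueError("no prefix with that token count")

text = "data " * 50
prefix = exact_token_prefix(text, 7)
assert len(tokenize(prefix)) == 7
```

The real script additionally re-tokenizes the decoded prompt and asserts the length, since decode/encode round-trips with a subword tokenizer are not always exact.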