Hamerlate committed (verified)
Commit 3397006 · 1 parent: fa64c3f

Upload folder using huggingface_hub

Files changed (2):
  1. qwen_util_bs.log +119 -113
  2. qwen_util_bs.nsys-rep +2 -2
qwen_util_bs.log CHANGED
@@ -4,168 +4,174 @@ Try the 'nsys status --environment' command to learn more.
4
  WARNING: CPU context switch tracing not supported, disabling.
5
  Try the 'nsys status --environment' command to learn more.
6
 
7
- INFO 08-11 19:11:41 [__init__.py:244] Automatically detected platform cuda.
8
  --- vLLM performance benchmark (with NVTX ranges) ---
9
  Model: Qwen/Qwen2-1.5B
10
  Batch sizes: [128, 64, 32, 16]
11
  Input/output tokens (approx.): 128 / 512
12
  ---------------------------------------------
13
  Loading model... this may take a while.
14
- INFO 08-11 19:11:51 [config.py:841] This model supports multiple tasks: {'reward', 'generate', 'classify', 'embed'}. Defaulting to 'generate'.
15
- INFO 08-11 19:11:51 [config.py:1472] Using max model len 131072
16
- INFO 08-11 19:11:51 [config.py:2285] Chunked prefill is enabled with max_num_batched_tokens=8192.
17
- INFO 08-11 19:11:53 [core.py:526] Waiting for init message from front-end.
18
- INFO 08-11 19:11:53 [core.py:69] Initializing a V1 LLM engine (v0.9.2) with config: model='Qwen/Qwen2-1.5B', speculative_config=None, tokenizer='Qwen/Qwen2-1.5B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen/Qwen2-1.5B, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null}
19
- INFO 08-11 19:11:54 [parallel_state.py:1076] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
20
- WARNING 08-11 19:11:54 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
21
- INFO 08-11 19:11:54 [gpu_model_runner.py:1770] Starting to load model Qwen/Qwen2-1.5B...
22
- INFO 08-11 19:11:54 [gpu_model_runner.py:1775] Loading model from scratch...
23
- INFO 08-11 19:11:54 [cuda.py:284] Using Flash Attention backend on V1 engine.
24
- INFO 08-11 19:11:55 [weight_utils.py:292] Using model weights format ['*.safetensors']
25
- INFO 08-11 19:11:55 [weight_utils.py:308] Time spent downloading weights for Qwen/Qwen2-1.5B: 0.522589 seconds
26
- INFO 08-11 19:11:56 [weight_utils.py:345] No model.safetensors.index.json found in remote.
27
 
28
-
29
-
30
 
31
- INFO 08-11 19:11:56 [default_loader.py:272] Loading weights took 0.89 seconds
32
- INFO 08-11 19:11:57 [gpu_model_runner.py:1801] Model loading took 2.9110 GiB and 2.424753 seconds
33
- INFO 08-11 19:12:04 [backends.py:508] Using cache directory: /home/cy/.cache/vllm/torch_compile_cache/19cf05a3aa/rank_0_0/backbone for vLLM's torch.compile
34
- INFO 08-11 19:12:04 [backends.py:519] Dynamo bytecode transform time: 6.61 s
35
- INFO 08-11 19:12:08 [backends.py:155] Directly load the compiled graph(s) for shape None from the cache, took 4.174 s
36
- INFO 08-11 19:12:09 [monitor.py:34] torch.compile takes 6.61 s in total
37
- INFO 08-11 19:12:10 [gpu_worker.py:232] Available KV cache memory: 16.88 GiB
38
- INFO 08-11 19:12:10 [kv_cache_utils.py:716] GPU KV cache size: 632,176 tokens
39
- INFO 08-11 19:12:10 [kv_cache_utils.py:720] Maximum concurrency for 131,072 tokens per request: 4.82x
40
-
41
- INFO 08-11 19:12:26 [gpu_model_runner.py:2326] Graph capturing finished in 16 secs, took 0.47 GiB
42
- INFO 08-11 19:12:26 [core.py:172] init engine (profile, create kv cache, warmup model) took 29.36 seconds
43
  Model loading complete.
44
  Using a sample text of length 2760 as the input prompt.
45

46
  ===== Running benchmark with batch size 128 =====
47
  Warming up...
48
-
49
-
50
  Starting timing and profiling...
51
-
52
-
53
  --- Results (batch size: 128) ---
54
- Execution time: 5.142 s
55
  Actual average input tokens: 541.00
56
- Total generated tokens: 65534
57
- Throughput: 12745.89 tokens/second
58
 
59
  ===== Running benchmark with batch size 64 =====
60
  Warming up...
61
-
62
-
63
  Starting timing and profiling...
64
-
65
-
66
  --- Results (batch size: 64) ---
67
- Execution time: 3.979 s
68
  Actual average input tokens: 541.00
69
  Total generated tokens: 32768
70
- Throughput: 8235.59 tokens/second
71
 
72
  ===== Running benchmark with batch size 32 =====
73
  Warming up...
74
-
75
-
76
  Starting timing and profiling...
77
-
78
-
79
  --- Results (batch size: 32) ---
80
- Execution time: 3.408 s
81
  Actual average input tokens: 541.00
82
  Total generated tokens: 16384
83
- Throughput: 4807.24 tokens/second
84
 
85
  ===== Running benchmark with batch size 16 =====
86
  Warming up...
87
-
88
-
89
  Starting timing and profiling...
90
-
91
-
92
  --- Results (batch size: 16) ---
93
- Execution time: 3.066 s
94
  Actual average input tokens: 541.00
95
  Total generated tokens: 8192
96
- Throughput: 2671.95 tokens/second
97
  GPU 3: General Metrics for NVIDIA AD10x (any frequency)
98
- Generating '/tmp/nsys-report-b6a4.qdstrm'
99
-
100
 
101
- SKIPPED: No data available.
102
  [3/8] Executing 'nvtx_sum' stats report

103
  [4/8] Executing 'osrt_sum' stats report
104
 
105
  Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
106
  -------- ----------------- --------- ---------------- ---------------- --------- -------------- --------------- ----------------------
107
- 80.8 1,316,401,531,684 96 13,712,515,955.0 12,735,288,428.5 173,376 15,077,000,078 1,885,845,753.5 pthread_cond_wait
108
- 9.2 149,488,620,646 4,680 31,942,013.0 4,998,638.5 1,080 34,489,398,805 824,807,828.7 epoll_wait
109
- 5.9 95,329,273,419 10,814 8,815,357.3 1,450.0 1,000 10,010,050,788 179,315,085.6 poll
110
- 1.6 26,737,688,064 12 2,228,140,672.0 168,659.5 92,695 10,000,082,423 4,109,907,418.4 sem_timedwait
111
- 1.5 24,256,687,442 4,105 5,909,059.1 5,624,747.0 135,044 19,182,006 1,253,243.9 sem_wait
112
- 1.0 15,508,403,228 70 221,548,617.5 409,454.0 4,172 500,093,813 250,108,104.9 pthread_cond_timedwait
113
- 0.1 993,011,624 20,011 49,623.3 2,522.0 1,000 241,598,899 2,824,985.7 read
114
- 0.0 150,708,163 71,161 2,117.8 1,338.0 1,000 143,323 2,884.1 munmap
115
- 0.0 126,977,808 234 542,640.2 2,318.5 1,065 20,643,834 3,080,898.9 fopen
116
- 0.0 88,254,464 1,461 60,406.9 7,757.0 1,020 31,636,400 864,986.1 ioctl
117
- 0.0 81,004,509 16 5,062,781.8 5,062,915.0 5,053,743 5,074,146 4,767.2 nanosleep
118
- 0.0 75,692,289 18,126 4,175.9 2,564.0 1,009 23,499,766 174,532.7 open64
119
- 0.0 68,718,668 459 149,713.9 1,070.0 1,000 62,152,067 2,902,407.9 waitpid
120
- 0.0 57,403,976 66 869,757.2 3,393.0 1,042 22,370,923 4,028,517.2 open
121
- 0.0 50,385,956 2 25,192,978.0 25,192,978.0 971,165 49,414,791 34,254,816.4 fork
122
- 0.0 37,577,142 98 383,440.2 17,467.5 10,169 9,068,538 1,374,492.3 pthread_join
123
- 0.0 17,645,503 4,166 4,235.6 3,600.0 1,061 28,839 2,210.4 recv
124
- 0.0 14,081,767 3,210 4,386.8 2,402.5 1,000 1,851,335 33,686.7 mmap64
125
- 0.0 11,032,769 4,573 2,412.6 1,938.0 1,006 118,235 2,611.0 write
126
- 0.0 6,630,899 98 67,662.2 69,180.0 49,884 77,867 4,854.4 sleep
127
- 0.0 5,571,075 58 96,053.0 73,668.0 33,776 415,696 73,221.2 pthread_create
128
- 0.0 4,604,598 499 9,227.7 1,912.0 1,000 87,359 12,344.0 fgets
129
- 0.0 1,394,084 255 5,467.0 5,088.0 1,136 26,156 2,139.6 send
130
- 0.0 1,310,419 1,107 1,183.8 1,054.0 1,000 12,416 660.0 fclose
131
- 0.0 708,201 48 14,754.2 2,643.5 1,811 288,844 51,216.4 futex
132
- 0.0 592,906 378 1,568.5 1,378.5 1,000 8,587 1,014.6 epoll_ctl
133
- 0.0 519,985 161 3,229.7 2,675.0 1,092 32,233 2,740.2 pthread_cond_signal
134
- 0.0 253,194 42 6,028.4 4,997.0 1,869 16,252 3,640.3 pipe2
135
- 0.0 197,022 18 10,945.7 5,368.5 1,408 74,228 16,532.4 mmap
136
- 0.0 162,812 6 27,135.3 22,212.0 2,546 71,892 26,887.7 bind
137
- 0.0 123,295 25 4,931.8 3,466.0 2,169 10,744 2,960.7 fopen64
138
- 0.0 98,953 9 10,994.8 10,105.0 4,623 17,643 4,086.1 socket
139
- 0.0 70,577 6 11,762.8 14,542.5 1,165 22,484 8,515.1 pthread_mutex_lock
140
- 0.0 45,925 15 3,061.7 2,004.0 1,695 9,510 1,986.5 stat
141
- 0.0 39,631 24 1,651.3 1,315.0 1,013 5,528 962.1 fcntl
142
- 0.0 38,556 2 19,278.0 19,278.0 15,634 22,922 5,153.4 connect
143
- 0.0 32,708 3 10,902.7 9,384.0 6,028 17,296 5,785.5 accept4
144
- 0.0 29,641 4 7,410.3 6,130.5 3,290 14,090 4,986.6 fread
145
- 0.0 26,997 13 2,076.7 2,340.0 1,145 2,990 651.7 dup2
146
- 0.0 25,520 14 1,822.9 1,896.0 1,058 2,488 348.0 sigaction
147
- 0.0 25,179 7 3,597.0 4,768.0 1,123 5,648 2,033.8 fflush
148
- 0.0 22,308 4 5,577.0 5,378.0 5,067 6,485 638.7 lstat
149
- 0.0 18,002 9 2,000.2 1,722.0 1,006 3,405 803.5 pread
150
- 0.0 17,322 3 5,774.0 4,877.0 4,674 7,771 1,732.4 pthread_cond_broadcast
151
- 0.0 12,954 2 6,477.0 6,477.0 5,866 7,088 864.1 fwrite
152
- 0.0 12,526 1 12,526.0 12,526.0 12,526 12,526 0.0 kill
153
- 0.0 12,478 3 4,159.3 4,097.0 4,061 4,320 140.3 fputs_unlocked
154
- 0.0 11,283 4 2,820.8 2,893.5 2,497 2,999 234.3 mprotect
155
- 0.0 7,550 4 1,887.5 1,868.0 1,615 2,199 239.5 listen
156
- 0.0 5,446 3 1,815.3 1,667.0 1,602 2,177 314.9 fstat
157
- 0.0 3,822 2 1,911.0 1,911.0 1,424 2,398 688.7 openat64
158
- 0.0 3,294 1 3,294.0 3,294.0 3,294 3,294 0.0 fputs

159
 
160
  [5/8] Executing 'cuda_api_sum' stats report
161
 
162
  Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
163
  -------- --------------- --------- ------------ -------- -------- ----------- ------------ ----------------------
164
- 100.0 147,884,600 12 12,323,716.7 21,398.0 3,612 147,706,958 42,634,665.1 cudaDeviceSynchronize
165
- 0.0 2,546 1 2,546.0 2,546.0 2,546 2,546 0.0 cuModuleGetLoadingMode
166
- SKIPPED: /data/cy/qwen_util_bs.sqlite does not contain CUDA kernel data.
167
- SKIPPED: /data/cy/qwen_util_bs.sqlite does not contain GPU memory data.
168
- SKIPPED: /data/cy/qwen_util_bs.sqlite does not contain GPU memory data.
169
 
170
  [6/8] Executing 'cuda_gpu_kern_sum' stats report
171
  [7/8] Executing 'cuda_gpu_mem_time_sum' stats report
 
4
  WARNING: CPU context switch tracing not supported, disabling.
5
  Try the 'nsys status --environment' command to learn more.
6
 
7
+ INFO 08-11 21:41:28 [__init__.py:244] Automatically detected platform cuda.
8
  --- vLLM performance benchmark (with NVTX ranges) ---
9
  Model: Qwen/Qwen2-1.5B
10
  Batch sizes: [128, 64, 32, 16]
11
  Input/output tokens (approx.): 128 / 512
12
  ---------------------------------------------
13
  Loading model... this may take a while.
14
+ INFO 08-11 21:41:38 [config.py:841] This model supports multiple tasks: {'classify', 'embed', 'reward', 'generate'}. Defaulting to 'generate'.
15
+ INFO 08-11 21:41:38 [config.py:1472] Using max model len 131072
16
+ INFO 08-11 21:41:39 [config.py:2285] Chunked prefill is enabled with max_num_batched_tokens=8192.
17
+ INFO 08-11 21:41:40 [core.py:526] Waiting for init message from front-end.
18
+ INFO 08-11 21:41:40 [core.py:69] Initializing a V1 LLM engine (v0.9.2) with config: model='Qwen/Qwen2-1.5B', speculative_config=None, tokenizer='Qwen/Qwen2-1.5B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen/Qwen2-1.5B, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null}
19
+ INFO 08-11 21:41:41 [parallel_state.py:1076] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
20
+ WARNING 08-11 21:41:41 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
21
+ INFO 08-11 21:41:41 [gpu_model_runner.py:1770] Starting to load model Qwen/Qwen2-1.5B...
22
+ INFO 08-11 21:41:41 [gpu_model_runner.py:1775] Loading model from scratch...
23
+ INFO 08-11 21:41:41 [cuda.py:284] Using Flash Attention backend on V1 engine.
24
+ INFO 08-11 21:41:42 [weight_utils.py:292] Using model weights format ['*.safetensors']
25
+ INFO 08-11 21:41:43 [weight_utils.py:345] No model.safetensors.index.json found in remote.
 
26
 
27
+
28
+
29
 
30
+ INFO 08-11 21:41:44 [default_loader.py:272] Loading weights took 0.81 seconds
31
+ INFO 08-11 21:41:44 [gpu_model_runner.py:1801] Model loading took 2.9110 GiB and 2.327024 seconds
32
+ INFO 08-11 21:41:51 [backends.py:508] Using cache directory: /home/cy/.cache/vllm/torch_compile_cache/19cf05a3aa/rank_0_0/backbone for vLLM's torch.compile
33
+ INFO 08-11 21:41:51 [backends.py:519] Dynamo bytecode transform time: 6.51 s
34
+ INFO 08-11 21:41:55 [backends.py:155] Directly load the compiled graph(s) for shape None from the cache, took 4.107 s
35
+ INFO 08-11 21:41:56 [monitor.py:34] torch.compile takes 6.51 s in total
36
+ INFO 08-11 21:41:57 [gpu_worker.py:232] Available KV cache memory: 16.88 GiB
37
+ INFO 08-11 21:41:57 [kv_cache_utils.py:716] GPU KV cache size: 632,176 tokens
38
+ INFO 08-11 21:41:57 [kv_cache_utils.py:720] Maximum concurrency for 131,072 tokens per request: 4.82x
39
+
40
+ INFO 08-11 21:42:14 [gpu_model_runner.py:2326] Graph capturing finished in 17 secs, took 0.47 GiB
41
+ INFO 08-11 21:42:14 [core.py:172] init engine (profile, create kv cache, warmup model) took 29.59 seconds
42
  Model loading complete.
43
  Using a sample text of length 2760 as the input prompt.
44

45
  ===== Running benchmark with batch size 128 =====
46
  Warming up...
47
+
48
+
49
  Starting timing and profiling...
50
+
51
+
52
  --- Results (batch size: 128) ---
53
+ Execution time: 5.116 s
54
  Actual average input tokens: 541.00
55
+ Total generated tokens: 65536
56
+ Throughput: 12811.03 tokens/second
57
 
58
  ===== Running benchmark with batch size 64 =====
59
  Warming up...
60
+
61
+
62
  Starting timing and profiling...
63
+
64
+
65
  --- Results (batch size: 64) ---
66
+ Execution time: 3.964 s
67
  Actual average input tokens: 541.00
68
  Total generated tokens: 32768
69
+ Throughput: 8265.40 tokens/second
70
 
71
  ===== Running benchmark with batch size 32 =====
72
  Warming up...
73
+
74
+
75
  Starting timing and profiling...
76
+
77
+
78
  --- Results (batch size: 32) ---
79
+ Execution time: 3.412 s
80
  Actual average input tokens: 541.00
81
  Total generated tokens: 16384
82
+ Throughput: 4801.29 tokens/second
83
 
84
  ===== Running benchmark with batch size 16 =====
85
  Warming up...
86
+
87
+
88
  Starting timing and profiling...
89
+
90
+
91
  --- Results (batch size: 16) ---
92
+ Execution time: 3.084 s
93
  Actual average input tokens: 541.00
94
  Total generated tokens: 8192
95
+ Throughput: 2656.39 tokens/second
96
  GPU 3: General Metrics for NVIDIA AD10x (any frequency)
97
+ Generating '/tmp/nsys-report-d8d0.qdstrm'
98
+
99

100
  [3/8] Executing 'nvtx_sum' stats report
101
+
102
+ Time (%) Total Time (ns) Instances Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Style Range
103
+ -------- --------------- --------- --------------- --------------- ------------- ------------- ----------- ------- ------------
104
+ 32.8 5,115,252,310 1 5,115,252,310.0 5,115,252,310.0 5,115,252,310 5,115,252,310 0.0 PushPop :bs:128 qwen
105
+ 25.5 3,964,311,307 1 3,964,311,307.0 3,964,311,307.0 3,964,311,307 3,964,311,307 0.0 PushPop :bs:64 qwen
106
+ 21.9 3,412,263,536 1 3,412,263,536.0 3,412,263,536.0 3,412,263,536 3,412,263,536 0.0 PushPop :bs:32 qwen
107
+ 19.8 3,083,737,486 1 3,083,737,486.0 3,083,737,486.0 3,083,737,486 3,083,737,486 0.0 PushPop :bs:16 qwen
108
+
109
  [4/8] Executing 'osrt_sum' stats report
110
 
111
  Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
112
  -------- ----------------- --------- ---------------- ---------------- --------- -------------- --------------- ----------------------
113
+ 81.5 1,370,973,183,949 96 14,280,970,666.1 13,375,999,749.0 133,132 15,604,144,341 1,919,025,045.4 pthread_cond_wait
114
+ 8.8 148,715,278,362 4,676 31,803,951.7 5,002,643.5 1,009 34,336,697,739 821,232,877.1 epoll_wait
115
+ 5.6 94,912,909,337 9,798 9,686,967.7 1,375.5 1,000 10,010,047,625 187,778,851.4 poll
116
+ 1.6 26,570,890,385 12 2,214,240,865.4 165,453.5 99,894 10,000,097,576 4,093,524,018.4 sem_timedwait
117
+ 1.4 24,098,966,884 4,104 5,872,068.0 5,385,563.5 146,730 18,879,015 1,264,152.9 sem_wait
118
+ 0.9 15,505,809,401 65 238,550,913.9 456,999.0 10,476 500,098,586 251,665,440.9 pthread_cond_timedwait
119
+ 0.1 1,245,043,955 17,459 71,312.4 2,384.0 1,000 517,177,937 4,597,714.0 read
120
+ 0.0 215,661,289 234 921,629.4 2,287.0 1,103 34,390,374 4,274,980.9 fopen
121
+ 0.0 154,104,361 73,084 2,108.6 1,267.0 1,006 167,363 3,324.7 munmap
122
+ 0.0 81,045,901 16 5,065,368.8 5,064,039.5 5,053,918 5,080,925 7,510.2 nanosleep
123
+ 0.0 80,555,283 1,475 54,613.8 7,151.0 1,026 28,044,030 753,940.6 ioctl
124
+ 0.0 54,719,358 67 816,706.8 3,552.0 1,029 18,800,673 3,782,575.7 open
125
+ 0.0 51,541,123 2 25,770,561.5 25,770,561.5 1,044,901 50,496,222 34,967,364.4 fork
126
+ 0.0 50,514,209 18,121 2,787.6 2,350.0 1,000 716,512 5,542.5 open64
127
+ 0.0 48,865,146 569 85,879.0 1,055.0 1,000 42,675,079 1,790,964.1 waitpid
128
+ 0.0 39,788,388 97 410,189.6 12,283.0 6,446 6,066,262 1,109,283.3 pthread_join
129
+ 0.0 18,545,605 98 189,240.9 70,024.5 46,406 11,983,727 1,203,717.1 sleep
130
+ 0.0 16,159,369 4,167 3,877.9 3,636.0 1,160 17,449 1,038.1 recv
131
+ 0.0 13,480,816 3,260 4,135.2 2,216.5 1,000 1,471,516 26,612.3 mmap64
132
+ 0.0 10,271,775 4,569 2,248.1 1,946.0 1,027 50,716 1,978.1 write
133
+ 0.0 7,098,308 58 122,384.6 80,090.5 27,747 670,356 135,549.3 pthread_create
134
+ 0.0 4,057,179 538 7,541.2 1,660.0 1,004 93,747 10,727.2 fgets
135
+ 0.0 1,464,194 255 5,741.9 4,939.0 1,487 20,660 2,741.1 send
136
+ 0.0 1,269,204 1,084 1,170.9 1,034.0 1,000 9,944 658.1 fclose
137
+ 0.0 489,450 307 1,594.3 1,400.0 1,003 9,045 749.7 epoll_ctl
138
+ 0.0 425,006 171 2,485.4 2,020.0 1,018 14,308 1,637.7 pthread_cond_signal
139
+ 0.0 306,299 48 6,381.2 2,926.0 2,128 70,259 11,393.8 futex
140
+ 0.0 254,267 42 6,054.0 5,096.0 2,110 16,888 3,347.8 pipe2
141
+ 0.0 201,080 18 11,171.1 5,789.5 2,323 71,972 16,039.3 mmap
142
+ 0.0 144,323 6 24,053.8 18,950.5 5,438 59,553 22,282.8 bind
143
+ 0.0 129,607 5 25,921.4 21,642.0 16,639 48,236 12,693.9 pthread_mutex_lock
144
+ 0.0 124,038 25 4,961.5 3,170.0 2,078 11,898 3,257.1 fopen64
145
+ 0.0 85,284 9 9,476.0 9,089.0 4,110 16,407 3,605.8 socket
146
+ 0.0 48,050 3 16,016.7 17,656.0 9,950 20,444 5,435.7 accept4
147
+ 0.0 41,473 2 20,736.5 20,736.5 14,059 27,414 9,443.4 connect
148
+ 0.0 38,324 15 2,554.9 2,128.0 1,231 5,855 1,264.6 stat
149
+ 0.0 37,561 4 9,390.3 7,437.0 3,187 19,500 7,496.2 fread
150
+ 0.0 29,294 14 2,092.4 2,214.5 1,057 3,087 692.1 dup2
151
+ 0.0 28,412 18 1,578.4 1,291.5 1,031 4,781 850.6 fcntl
152
+ 0.0 27,322 16 1,707.6 1,694.5 1,040 2,489 363.9 sigaction
153
+ 0.0 24,199 4 6,049.8 5,448.0 5,073 8,230 1,480.2 lstat
154
+ 0.0 23,534 7 3,362.0 4,194.0 1,117 5,334 1,850.2 fflush
155
+ 0.0 19,487 3 6,495.7 5,131.0 4,816 9,540 2,641.2 pthread_cond_broadcast
156
+ 0.0 16,536 9 1,837.3 1,786.0 1,096 2,456 385.0 pread
157
+ 0.0 12,881 3 4,293.7 4,369.0 4,063 4,449 203.7 fputs_unlocked
158
+ 0.0 11,145 2 5,572.5 5,572.5 5,504 5,641 96.9 fwrite
159
+ 0.0 10,851 4 2,712.8 2,681.0 2,427 3,062 310.7 mprotect
160
+ 0.0 9,864 1 9,864.0 9,864.0 9,864 9,864 0.0 kill
161
+ 0.0 6,254 4 1,563.5 1,699.5 1,040 1,815 353.3 listen
162
+ 0.0 4,609 3 1,536.3 1,475.0 1,238 1,896 333.3 fstat
163
+ 0.0 3,625 1 3,625.0 3,625.0 3,625 3,625 0.0 fputs
164
+ 0.0 2,994 2 1,497.0 1,497.0 1,125 1,869 526.1 openat64
165
+ SKIPPED: /data/cy/qwen_util_bs.sqlite does not contain CUDA kernel data.
166
+ SKIPPED: /data/cy/qwen_util_bs.sqlite does not contain GPU memory data.
167
+ SKIPPED: /data/cy/qwen_util_bs.sqlite does not contain GPU memory data.
168
 
169
  [5/8] Executing 'cuda_api_sum' stats report
170
 
171
  Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
172
  -------- --------------- --------- ------------ -------- -------- ----------- ------------ ----------------------
173
+ 100.0 151,119,781 12 12,593,315.1 20,533.0 3,744 150,954,978 43,572,624.4 cudaDeviceSynchronize
174
+ 0.0 1,897 1 1,897.0 1,897.0 1,897 1,897 0.0 cuModuleGetLoadingMode

175
 
176
  [6/8] Executing 'cuda_gpu_kern_sum' stats report
177
  [7/8] Executing 'cuda_gpu_mem_time_sum' stats report
qwen_util_bs.nsys-rep CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:23694f022f2361cf5122a4269de279cfd98207e8e557650f4cc51a00e369e7bf
3
- size 18193309
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5d22febf26a5798bf0e477815e2ddcdc40166ff25ed61a7b76260cd5533922c8
3
+ size 18219333