Hamerlate committed
Commit 16c77e6 · verified · 1 Parent(s): 6da2c6c

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -58,3 +58,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
  vllm_tts.nsys-rep filter=lfs diff=lfs merge=lfs -text
+ vllm_tts_N128.nsys-rep filter=lfs diff=lfs merge=lfs -text
+ vllm_tts_N32.nsys-rep filter=lfs diff=lfs merge=lfs -text
+ vllm_tts_N64.nsys-rep filter=lfs diff=lfs merge=lfs -text
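The three added lines follow the standard Git LFS attribute form. As a minimal sketch (assuming the usual Git LFS workflow; the exact commands used to produce this commit are not shown), equivalent entries could be generated like this:

```shell
# Sketch: append LFS tracking rules for the new Nsight Systems reports.
# `git lfs track "vllm_tts_N32.nsys-rep"` (etc.) would write the same lines;
# here they are emitted directly so the attribute format is visible.
for n in 32 64 128; do
  printf 'vllm_tts_N%s.nsys-rep filter=lfs diff=lfs merge=lfs -text\n' "$n" >> .gitattributes
done
```

Each pattern routes the matching file through the LFS clean/smudge filters and marks it binary (`-text`), which is why the diff above only records pointer metadata for these reports.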
output_token_ids.json ADDED
The diff for this file is too large to render.
 
profile_N128.log ADDED
The diff for this file is too large to render.
 
profile_N32.log ADDED
@@ -0,0 +1,347 @@
+ WARNING: CPU IP/backtrace sampling not supported, disabling.
+ Try the 'nsys status --environment' command to learn more.
+
+ WARNING: CPU context switch tracing not supported, disabling.
+ Try the 'nsys status --environment' command to learn more.
+
+ INFO 08-10 22:44:26 [__init__.py:244] Automatically detected platform cuda.
+ INFO:__main__:FastTTS AIME Experiment
+ INFO:__main__:==================================================
+ INFO:__main__:Starting FastTTS AIME experiment
+ INFO:__main__:Parameters: {'num_iterations': 2, 'n': 32, 'temperature': 2, 'beam_width': 4, 'generator_model': 'Qwen/Qwen2.5-Math-1.5B-Instruct', 'verifier_model': 'peiyi9979/math-shepherd-mistral-7b-prm', 'generator_gpu_memory': 0.3, 'verifier_gpu_memory': 0.62, 'offload_enabled': False, 'spec_beam_extension': False, 'prefix_aware_scheduling': False}
+ INFO:__main__:Loaded AIME dataset with 30 samples
+ INFO:__main__:Problem: Every morning Aya goes for a $9$-kilometer-long walk and stops at a coffee shop afterwards. When she walks at a constant speed of $s$ kilometers per hour, the walk takes her 4 hours, including $t$ minutes spent in the coffee shop. When she walks $s+2$ kilometers per hour, the walk takes her 2 hours and 24 minutes, including $t$ minutes spent in the coffee shop. Suppose Aya walks at $s+\frac{1}{2}$ kilometers per hour. Find the number of minutes the walk takes her, including the $t$ minutes spent in the coffee shop.
+ INFO:__main__:Reference answer: 204
+ INFO:__main__:Initializing FastTTS models...
+ INFO:fasttts:Initializing FastTTS models...
+ INFO:models.vllm_wrapper:Initializing generator model: Qwen/Qwen2.5-Math-1.5B-Instruct
+ INFO 08-10 22:44:38 [__init__.py:244] Automatically detected platform cuda.
+ INFO:models.tts_llm:Using V0 engine with speculative beam extension: False
+ INFO:models.tts_llm:Prefix-aware scheduling enabled: False
+ ✅ Process PID: 3736098 | CUDA Context Object: None
+ INFO 08-10 22:44:49 [config.py:841] This model supports multiple tasks: {'embed', 'classify', 'generate', 'reward'}. Defaulting to 'generate'.
+ INFO 08-10 22:44:49 [config.py:1472] Using max model len 4096
+ INFO:models.generator_engine:Using GeneratorLLMEngine with vLLM version 0.9.2
+ INFO 08-10 22:44:49 [llm_engine.py:230] Initializing a V0 LLM engine (v0.9.2) with config: model='Qwen/Qwen2.5-Math-1.5B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2.5-Math-1.5B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='xgrammar', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=42, served_model_name=Qwen/Qwen2.5-Math-1.5B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=False, use_async_output_proc=True, pooler_config=None, compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":[],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":false,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":256,"local_cache_dir":null}, use_cached_outputs=False,
+ INFO 08-10 22:44:51 [cuda.py:363] Using Flash Attention backend.
+ INFO 08-10 22:44:52 [parallel_state.py:1076] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
+ INFO 08-10 22:44:52 [model_runner.py:1171] Starting to load model Qwen/Qwen2.5-Math-1.5B-Instruct...
+ INFO 08-10 22:44:53 [weight_utils.py:292] Using model weights format ['*.safetensors']
+ INFO 08-10 22:44:53 [weight_utils.py:345] No model.safetensors.index.json found in remote.
+
+
+
+
+ INFO 08-10 22:44:54 [default_loader.py:272] Loading weights took 0.77 seconds
+ INFO 08-10 22:44:54 [model_runner.py:1203] Model loading took 2.8798 GiB and 1.928124 seconds
+ INFO 08-10 22:44:55 [worker.py:294] Memory profiling takes 0.92 seconds
+ INFO 08-10 22:44:55 [worker.py:294] the current vLLM instance can use total_gpu_memory (23.64GiB) x gpu_memory_utilization (0.30) = 7.09GiB
+ INFO 08-10 22:44:55 [worker.py:294] model weights take 2.88GiB; non_torch_memory takes 0.08GiB; PyTorch activation peak memory takes 1.40GiB; the rest of the memory reserved for KV Cache is 2.74GiB.
+ INFO 08-10 22:44:56 [executor_base.py:113] # cuda blocks: 6412, # CPU blocks: 9362
+ INFO 08-10 22:44:56 [executor_base.py:118] Maximum concurrency for 4096 tokens per request: 25.05x
+ INFO 08-10 22:44:58 [model_runner.py:1513] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
+
+ INFO 08-10 22:45:13 [model_runner.py:1671] Graph capturing finished in 14 secs, took 0.23 GiB
+ INFO 08-10 22:45:13 [llm_engine.py:428] init engine (profile, create kv cache, warmup model) took 18.36 seconds
+ INFO:models.custom_scheduler:Using CustomScheduler
+ INFO:models.custom_scheduler:CustomScheduler initialized with config: SchedulerConfig(runner_type='generate', max_num_batched_tokens=4096, max_num_seqs=256, max_model_len=4096, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, num_lookahead_slots=0, cuda_graph_sizes=[512], delay_factor=0.0, enable_chunked_prefill=False, is_multimodal_model=False, max_num_encoder_input_tokens=4096, encoder_cache_size=4096, preemption_mode=None, num_scheduler_steps=1, multi_step_stream_outputs=True, send_delta_data=False, policy='fcfs', chunked_prefill_enabled=False, disable_chunked_mm_input=False, scheduler_cls=<class 'models.custom_scheduler.CustomScheduler'>, disable_hybrid_kv_cache_manager=False)
+ INFO:models.vllm_wrapper:Generator model initialized successfully in separate process
+ INFO:models.vllm_wrapper:Initializing verifier model: peiyi9979/math-shepherd-mistral-7b-prm
+ INFO 08-10 22:45:19 [__init__.py:244] Automatically detected platform cuda.
+ INFO:models.tts_llm:Prefix-aware scheduling enabled: False
+ ✅ Process PID: 3736531 | CUDA Context Object: None
+ INFO 08-10 22:45:29 [config.py:1472] Using max model len 4096
+ INFO 08-10 22:45:29 [arg_utils.py:1596] (Disabling) chunked prefill by default
+ INFO 08-10 22:45:30 [config.py:4601] Only "last" pooling supports chunked prefill and prefix caching; disabling both.
+ You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message
+ You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
+ INFO 08-10 22:45:31 [core.py:526] Waiting for init message from front-end.
+ INFO 08-10 22:45:31 [core.py:69] Initializing a V1 LLM engine (v0.9.2) with config: model='peiyi9979/math-shepherd-mistral-7b-prm', speculative_config=None, tokenizer='peiyi9979/math-shepherd-mistral-7b-prm', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='xgrammar', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=42, served_model_name=peiyi9979/math-shepherd-mistral-7b-prm, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=False, pooler_config=PoolerConfig(pooling_type='STEP', normalize=None, softmax=True, step_tag_id=12902, returned_token_ids=[648, 387]), compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null}
+ INFO 08-10 22:45:32 [parallel_state.py:1076] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
+ WARNING 08-10 22:45:32 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ INFO 08-10 22:45:32 [gpu_model_runner.py:1770] Starting to load model peiyi9979/math-shepherd-mistral-7b-prm...
+ INFO 08-10 22:45:32 [gpu_model_runner.py:1775] Loading model from scratch...
+ INFO 08-10 22:45:32 [cuda.py:284] Using Flash Attention backend on V1 engine.
+ INFO 08-10 22:45:33 [weight_utils.py:292] Using model weights format ['*.bin']
+
+
+
+
+
+ INFO 08-10 22:45:44 [default_loader.py:272] Loading weights took 10.28 seconds
+ INFO 08-10 22:45:44 [gpu_model_runner.py:1801] Model loading took 13.2457 GiB and 11.338008 seconds
+ INFO 08-10 22:45:51 [backends.py:508] Using cache directory: /home/cy/.cache/vllm/torch_compile_cache/eae4db4fef/rank_0_0/backbone for vLLM's torch.compile
+ INFO 08-10 22:45:52 [backends.py:519] Dynamo bytecode transform time: 7.17 s
+ INFO 08-10 22:45:57 [backends.py:155] Directly load the compiled graph(s) for shape None from the cache, took 4.807 s
+ INFO 08-10 22:45:58 [monitor.py:34] torch.compile takes 7.17 s in total
+ INFO 08-10 22:45:59 [gpu_worker.py:232] Available KV cache memory: 0.88 GiB
+ INFO 08-10 22:45:59 [kv_cache_utils.py:716] GPU KV cache size: 7,168 tokens
+ INFO 08-10 22:45:59 [kv_cache_utils.py:720] Maximum concurrency for 4,096 tokens per request: 1.75x
+
+ INFO 08-10 22:46:19 [gpu_model_runner.py:2326] Graph capturing finished in 20 secs, took 0.53 GiB
+ INFO 08-10 22:46:19 [core.py:172] init engine (profile, create kv cache, warmup model) took 34.95 seconds
+ INFO 08-10 22:46:20 [config.py:4601] Only "last" pooling supports chunked prefill and prefix caching; disabling both.
+ INFO:models.vllm_wrapper:Verifier model initialized successfully in separate process
+ INFO:fasttts:FastTTS models initialized successfully
+ INFO:__main__:Starting search...
+ INFO:fasttts:Processing 1 problems at once
+ INFO:search.beam_search:Starting beam search iterations
+
+
+ INFO 08-10 22:46:20 [metrics.py:433] Prefix cache hit rate: GPU: 96.88%, CPU: 0.00%
+
+ INFO 08-10 22:46:25 [metrics.py:433] Prefix cache hit rate: GPU: 96.88%, CPU: 0.00%
+
+
+
+ INFO:search.beam_search:----------------------------------------------------------------------------------------------------
+ INFO:search.beam_search:Iteration 0 completed beams: 0, skipped beams: 0, extended beams: 0, verifier beams: 0, total latency: 8.03s, length of agg_scores: [1, 1, 1, 1, 1, 1, 1, 1], num_steps: [1, 1, 1, 1, 1, 1, 1, 1], stop reasons: ['\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n']
+
+
+ INFO 08-10 22:46:30 [metrics.py:433] Prefix cache hit rate: GPU: 86.75%, CPU: 0.00%
+ INFO 08-10 22:46:35 [metrics.py:417] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 3955.6 tokens/s, Running: 32 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 31.8%, CPU KV cache usage: 0.0%.
+ INFO 08-10 22:46:35 [metrics.py:433] Prefix cache hit rate: GPU: 86.75%, CPU: 0.00%
+ INFO 08-10 22:46:40 [metrics.py:417] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 3675.8 tokens/s, Running: 32 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 49.8%, CPU KV cache usage: 0.0%.
+ INFO 08-10 22:46:40 [metrics.py:433] Prefix cache hit rate: GPU: 86.75%, CPU: 0.00%
+ INFO 08-10 22:46:45 [metrics.py:417] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 3413.7 tokens/s, Running: 32 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 66.4%, CPU KV cache usage: 0.0%.
+ INFO 08-10 22:46:45 [metrics.py:433] Prefix cache hit rate: GPU: 86.75%, CPU: 0.00%
+
+
+
+ INFO:search.beam_search:Early exit: 0 active, 32 completed
+
+ INFO:__main__:
+ ==================================================
+ INFO:__main__:RESULTS
+ INFO:__main__:==================================================
+ INFO:__main__:Total num tokens: 68030
+ INFO:__main__:Effective num tokens: 85322
+ INFO:__main__:Effective num tokens per step: 2666.3125
+ INFO:__main__:Number of tokens in 1 completion: 2666.3125
+ INFO:__main__:N completion tokens: 68030
+ INFO:__main__:Total generator latency: 23.24s
+ INFO:__main__:Total verifier latency: 16.29s
+ INFO:__main__:N generator latency: 23.24s
+ INFO:__main__:N verifier latency: 16.29s
+ INFO:__main__:Goodput: 2158.48
+ INFO:__main__:Per-token generator goodput: 67.45
+ INFO:__main__:Completions: 32
+ INFO:__main__:Completion time: 25.93s
+ INFO:__main__:Number of steps in 1 completion: 10.25
+ INFO:__main__:Extended tokens: [[], []]
+ INFO:__main__:Cleaning up...
+ [rank0]:[W810 22:47:01.495434270 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ INFO:models.vllm_wrapper:Generator model shutdown complete
+ INFO:models.vllm_wrapper:Verifier model shutdown complete
+ INFO:fasttts:FastTTS shutdown complete
+ INFO:__main__:Experiment completed successfully!
+ GPU 3: General Metrics for NVIDIA AD10x (any frequency)
+ Generating '/tmp/nsys-report-e5f4.qdstrm'
+
+
+ [3/8] Executing 'nvtx_sum' stats report
+
+ Time (%) Total Time (ns) Instances Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Style Range
+ -------- --------------- --------- ---------------- ---------------- -------------- -------------- --------------- ------- ----------------------------------
+ 50.5 39,843,350,248 1 39,843,350,248.0 39,843,350,248.0 39,843,350,248 39,843,350,248 0.0 PushPop :Total
+ 29.4 23,243,493,104 2 11,621,746,552.0 11,621,746,552.0 5,340,266,592 17,903,226,512 8,883,354,151.2 PushPop :generate
+ 20.1 15,877,275,147 2 7,938,637,573.5 7,938,637,573.5 2,602,685,677 13,274,589,470 7,546,175,540.2 PushPop :encode
+ 0.0 91,012 1 91,012.0 91,012.0 91,012 91,012 0.0 PushPop CCCL:cub::DeviceSegmentedRadixSort
+
+ [4/8] Executing 'osrt_sum' stats report
+
+ Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
+ -------- ----------------- --------- ---------------- ---------------- --------- -------------- --------------- ----------------------
+ 32.0 1,495,279,673,407 100 14,952,796,734.1 12,598,416,785.0 34,960 52,079,469,971 5,701,768,597.0 pthread_cond_wait
+ 21.6 1,006,154,375,582 64,808 15,525,157.0 10,062,301.0 1,012 48,579,597,860 373,003,994.1 epoll_wait
+ 21.1 986,064,713,572 8,269 119,248,363.0 100,065,882.0 1,007 1,000,133,427 124,515,794.8 pthread_cond_timedwait
+ 10.5 491,657,664,329 61 8,059,961,710.3 10,000,070,982.0 24,009 10,000,129,572 3,765,291,149.8 sem_timedwait
+ 8.9 415,835,725,164 40,707 10,215,337.0 3,225.0 1,000 72,458,789,164 680,428,756.8 read
+ 5.3 249,470,841,708 2,131 117,067,499.6 100,117,330.0 1,000 18,888,613,964 658,468,072.7 poll
+ 0.4 16,443,801,227 66 249,148,503.4 403,743,051.0 18,476 593,598,899 204,107,189.1 sem_wait
+ 0.1 2,550,459,210 3,533 721,896.2 10,914.0 1,003 128,136,713 7,428,082.5 ioctl
+ 0.0 1,263,893,529 665 1,900,591.8 1,086.0 1,000 1,191,076,865 46,235,235.3 waitpid
+ 0.0 392,832,052 148,765 2,640.6 1,273.0 1,000 124,952,899 323,970.7 munmap
+ 0.0 328,065,758 523 627,276.8 2,427.0 1,092 23,223,237 3,321,108.6 fopen
+ 0.0 202,592,204 40 5,064,805.1 5,065,262.5 5,024,292 5,081,925 10,309.1 nanosleep
+ 0.0 147,053,034 46,544 3,159.4 2,713.0 1,001 115,114 2,016.9 open64
+ 0.0 126,438,937 150 842,926.2 3,891.5 1,000 19,663,699 3,811,776.5 open
+ 0.0 61,549,970 374 164,572.1 5,616.5 1,850 22,166,135 1,774,807.5 fopen64
+ 0.0 61,114,705 3 20,371,568.3 1,056,842.0 619,147 59,438,716 33,833,850.1 fork
+ 0.0 58,617,528 10 5,861,752.8 32,102.0 13,217 58,204,036 18,391,244.7 connect
+ 0.0 45,066,004 99 455,212.2 13,634.0 1,084 9,496,692 1,501,872.6 pthread_join
+ 0.0 44,401,764 245 181,231.7 68,648.0 48,867 11,982,775 1,051,434.0 sleep
+ 0.0 30,431,961 8,135 3,740.9 2,148.0 1,000 1,518,411 17,964.8 mmap64
+ 0.0 25,558,768 187 136,677.9 140,934.0 1,001 3,096,670 233,027.5 recv
+ 0.0 16,935,966 215 78,771.9 56,155.0 19,092 989,161 89,946.1 pthread_create
+ 0.0 15,974,670 793 20,144.6 7,053.0 1,018 622,768 37,588.5 write
+ 0.0 9,532,074 1,514 6,296.0 1,978.5 1,020 87,696 9,142.7 fgets
+ 0.0 8,650,201 238 36,345.4 47,056.0 1,457 134,118 28,801.4 send
+ 0.0 5,570,086 31 179,680.2 183,032.0 10,664 908,382 171,310.9 pthread_rwlock_wrlock
+ 0.0 2,570,739 2,113 1,216.6 1,055.0 1,000 10,784 649.9 fclose
+ 0.0 2,112,103 147 14,368.0 3,024.0 1,849 221,897 31,740.0 futex
+ 0.0 1,542,364 26 59,321.7 12,597.0 1,374 563,122 137,182.8 pthread_mutex_lock
+ 0.0 1,523,160 15 101,544.0 2,748.0 1,015 1,460,981 376,101.0 pthread_cond_broadcast
+ 0.0 1,360,776 190 7,162.0 4,237.0 1,303 73,378 6,786.7 mmap
+ 0.0 1,283,451 11 116,677.4 119,867.0 18,712 230,389 72,308.1 pthread_rwlock_rdlock
+ 0.0 1,210,115 302 4,007.0 2,848.0 1,000 20,865 3,440.8 pthread_cond_signal
+ 0.0 563,295 102 5,522.5 4,212.0 1,861 20,178 3,431.6 pipe2
+ 0.0 536,232 225 2,383.3 2,204.0 1,002 8,327 1,151.1 epoll_ctl
+ 0.0 290,402 42 6,914.3 6,196.5 1,853 19,061 4,749.6 socket
+ 0.0 227,097 26 8,734.5 3,393.5 1,051 59,346 15,397.1 bind
+ 0.0 124,839 16 7,802.4 8,189.0 1,879 13,015 3,637.4 pthread_mutex_trylock
+ 0.0 82,322 35 2,352.1 1,836.0 1,012 21,734 3,393.4 sigaction
+ 0.0 79,333 30 2,644.4 2,229.5 1,279 6,465 1,321.7 stat
+ 0.0 58,542 37 1,582.2 1,282.0 1,008 5,536 863.9 fcntl
+ 0.0 54,922 29 1,893.9 1,720.0 1,017 3,558 709.3 dup2
+ 0.0 54,037 14 3,859.8 4,785.0 1,007 6,784 2,052.3 fflush
+ 0.0 47,143 5 9,428.6 11,702.0 3,898 12,930 3,965.2 accept4
+ 0.0 43,631 8 5,453.9 5,369.0 5,179 5,818 242.8 lstat
+ 0.0 40,639 17 2,390.5 1,871.0 1,594 4,295 855.5 pread
+ 0.0 34,683 5 6,936.6 3,809.0 3,476 12,818 4,547.1 fread
+ 0.0 29,930 7 4,275.7 4,172.0 3,831 5,338 494.0 fputs_unlocked
+ 0.0 28,898 8 3,612.3 3,124.5 2,387 6,342 1,352.4 flock
+ 0.0 28,431 2 14,215.5 14,215.5 12,558 15,873 2,344.1 socketpair
+ 0.0 22,947 8 2,868.4 2,979.0 2,227 3,507 458.5 mprotect
+ 0.0 22,068 3 7,356.0 9,216.0 3,348 9,504 3,474.0 fwrite
+ 0.0 18,837 10 1,883.7 1,568.5 1,426 3,658 691.4 listen
+ 0.0 14,260 6 2,376.7 1,796.5 1,343 5,789 1,703.9 fstat
+ 0.0 10,338 1 10,338.0 10,338.0 10,338 10,338 0.0 kill
+ 0.0 7,673 2 3,836.5 3,836.5 3,715 3,958 171.8 fputs
+ 0.0 5,214 3 1,738.0 1,301.0 1,138 2,775 901.8 openat64
+
+ [5/8] Executing 'cuda_api_sum' stats report
+
+ Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
+ -------- --------------- --------- ------------ ----------- --------- ----------- ------------ ------------------------------------------
+ 74.3 12,046,823,992 66,082 182,301.1 4,631.5 2,799 111,454,677 977,047.3 cudaMemcpyAsync
+ 14.7 2,379,107,086 75 31,721,427.8 29,347.0 5,174 131,066,461 41,844,560.4 cudaHostAlloc
+ 4.9 800,407,537 64,088 12,489.2 5,221.0 780 90,081,889 526,384.9 cudaLaunchKernel
+ 2.3 377,844,910 2,846 132,763.5 143,195.5 62,177 1,174,251 46,493.0 cudaGraphLaunch_v10000
+ 1.0 166,969,818 10 16,696,981.8 52,678.0 12,587 166,609,772 52,673,989.8 cudaMemGetInfo
+ 0.5 74,585,652 35 2,131,018.6 1,951,594.0 1,511,828 3,134,805 550,682.4 cudaGraphInstantiateWithFlags_v11040
+ 0.3 54,658,594 45,617 1,198.2 1,019.0 582 52,369 682.2 cudaEventRecord
+ 0.3 51,032,214 10,794 4,727.8 4,867.5 723 67,369 2,294.2 cuLaunchKernel
+ 0.3 48,625,718 10 4,862,571.8 4,999,197.5 95,993 8,683,040 2,834,704.8 cuLibraryLoadData
+ 0.3 47,375,954 45,610 1,038.7 724.0 358 50,565 920.5 cudaEventQuery
+ 0.2 27,029,614 59 458,129.1 229,518.0 68,670 2,793,349 554,293.7 cudaFree
+ 0.2 25,635,826 171 149,917.1 132,375.0 9,293 573,128 60,368.7 cudaMalloc
+ 0.2 25,364,405 5,427 4,673.7 5,592.0 243 272,681 4,409.3 cudaMemsetAsync
+ 0.2 24,554,137 35 701,546.8 657,789.0 591,504 852,369 87,341.2 cudaGraphExecDestroy_v10000
+ 0.1 14,089,228 3,389 4,157.3 3,036.0 2,035 57,472 4,827.1 cudaStreamSynchronize
+ 0.1 13,994,409 10,794 1,296.5 626.0 285 4,529,506 45,772.1 cuKernelGetFunction
+ 0.0 6,696,473 8,753 765.0 860.0 279 10,572 425.1 cudaStreamIsCapturing_v10000
+ 0.0 5,283,719 35 150,963.4 151,300.0 121,832 178,715 14,502.3 cudaGraphDestroy_v10000
+ 0.0 4,944,481 8,785 562.8 565.0 307 7,193 198.0 cudaStreamGetCaptureInfo_v2_v11030
+ 0.0 4,215,616 35 120,446.2 114,207.0 97,930 226,023 22,037.3 cudaStreamEndCapture_v10000
+ 0.0 3,557,693 128 27,794.5 3,109.5 2,153 1,183,201 142,814.7 cudaStreamCreateWithPriority
+ 0.0 2,006,040 106 18,924.9 19,578.5 2,904 112,999 16,487.0 cudaDeviceSynchronize
+ 0.0 895,780 35 25,593.7 26,580.0 12,710 30,798 4,335.7 cudaGraphGetNodes_v10000
+ 0.0 419,839 35 11,995.4 9,618.0 8,019 20,040 3,910.5 cudaStreamBeginCapture_v10000
+ 0.0 211,197 810 260.7 210.0 117 3,128 178.6 cuGetProcAddress_v2
+ 0.0 57,364 26 2,206.3 524.0 435 20,886 4,374.1 cudaEventCreateWithFlags
+ 0.0 31,380 16 1,961.3 1,257.5 717 5,553 1,541.0 cuLibraryGetKernel
+ 0.0 7,970 3 2,656.7 2,459.0 2,287 3,224 498.8 cuInit
+ 0.0 4,914 8 614.3 575.0 448 1,081 202.7 cudaThreadExchangeStreamCaptureMode_v10010
+ 0.0 3,667 1 3,667.0 3,667.0 3,667 3,667 0.0 cudaStreamWaitEvent
+ 0.0 2,298 3 766.0 327.0 199 1,772 873.6 cuModuleGetLoadingMode
+ 0.0 1,642 1 1,642.0 1,642.0 1,642 1,642 0.0 cudaEventDestroy
+ 0.0 1,399 2 699.5 699.5 368 1,031 468.8 cudaGetDriverEntryPoint_v11030
+
251
+ [6/8] Executing 'cuda_gpu_kern_sum' stats report
252
+
253
+ Time (%) Total Time (ns) Instances Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
254
+ -------- --------------- --------- ----------- ----------- --------- --------- ----------- ----------------------------------------------------------------------------------------------------
255
+ 52.6 1,342,892,585 5,896 227,763.3 83,280.0 7,808 544,962 235,015.8 void cutlass::Kernel2<cutlass_80_tensorop_bf16_s16816gemm_relu_bf16_64x64_32x6_tn_align8>(T1::Param…
256
+ 11.6 297,264,596 1,434 207,297.5 59,872.0 10,528 525,698 227,493.4 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_16x16_128x2_tn_align8>(T1::Par…
257
+ 4.9 126,014,763 392 321,466.2 54,240.0 53,184 1,411,300 517,177.2 ampere_bf16_s1688gemm_bf16_128x128_ldg8_f2f_stages_32x1_tn
258
+ 4.0 100,936,151 644 156,733.2 43,231.0 41,249 711,713 225,724.7 ampere_bf16_s1688gemm_bf16_128x64_sliced1x2_ldg8_f2f_tn
259
+ 3.5 88,267,611 2,852 30,949.4 30,593.0 29,536 630,241 15,075.7 void at::native::<unnamed>::cunn_SoftMaxForward<(int)4, float, float, float, at::native::<unnamed>:…
260
+ 3.0 75,370,782 2,851 26,436.6 26,401.0 25,088 590,721 10,586.3 void at::native::<unnamed>::cunn_SoftMaxForward<(int)4, float, float, float, at::native::<unnamed>:…
261
+ 1.7 44,056,554 2,851 15,453.0 19,232.0 2,304 179,072 7,459.1 void at::native::<unnamed>::distribution_elementwise_grid_stride_kernel<float, (int)4, void at::nat…
262
+ 1.7 44,049,142 2,851 15,450.4 18,528.0 2,752 331,169 8,172.3 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl<at::native::…
263
+ 1.7 42,645,188 2,851 14,958.0 17,888.0 2,848 334,464 8,106.7 void at::native::index_elementwise_kernel<(int)128, (int)4, void at::native::gpu_index_kernel<void …
264
+ 1.4 36,655,978 2,851 12,857.2 14,976.0 3,392 223,968 5,585.0 void at::native::unrolled_elementwise_kernel<at::native::direct_copy_kernel_cuda(at::TensorIterator…
265
+ 1.3 33,924,224 2,851 11,899.1 14,112.0 1,440 496,385 10,086.3 void at::native::vectorized_elementwise_kernel<(int)4, at::native::BinaryFunctor<float, float, floa…
266
+ 1.3 33,906,506 2,100 16,146.0 3,648.0 3,232 249,121 48,742.4 void vllm::act_and_mul_kernel<c10::BFloat16, &vllm::silu_kernel<c10::BFloat16>, (bool)1>(T1 *, cons…
267
+ 1.3 31,906,187 28 1,139,506.7 1,139,189.0 1,136,325 1,143,653 2,082.4 ampere_bf16_s16816gemm_bf16_128x64_ldg8_f2f_tn
268
+ 1.0 26,279,717 2,851 9,217.7 9,952.0 5,088 204,641 4,018.6 void at::native::reduce_kernel<(int)512, (int)1, at::native::ReduceOp<float, at::native::ArgMaxOps<…
269
+ 1.0 24,390,271 48 508,130.6 507,570.0 506,242 534,498 3,950.2 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_16x16_128x1_tn_align8>(T1::Par…
270
+ 0.9 21,729,849 204 106,518.9 8,640.0 6,944 488,386 178,137.0 std::enable_if<!T7, void>::type internal::gemvx::kernel<int, int, __nv_bfloat16, __nv_bfloat16, __n…
271
+ 0.8 20,311,799 1,120 18,135.5 17,088.0 11,808 23,104 3,857.8 void flash::flash_fwd_splitkv_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (int)4, (…
272
+ 0.7 16,762,358 448 37,416.0 37,376.0 35,840 39,073 518.7 void cutlass::Kernel2<cutlass_80_tensorop_bf16_s16816gemm_relu_bf16_64x128_32x6_tn_align8>(T1::Para…
273
+ 0.6 14,301,394 700 20,430.6 13,408.0 13,056 36,960 10,333.1 ampere_bf16_s16816gemm_bf16_64x64_ldg8_f2f_stages_64x5_tn
274
+ 0.6 14,176,146 4,200 3,375.3 2,432.0 1,664 32,416 3,903.7 std::enable_if<T2>(int)0&&vllm::_typeConvert<T1>::exists, void>::type vllm::fused_add_rms_norm_kern…
275
+ 0.5 13,284,608 980 13,555.7 13,504.0 12,287 15,808 960.9 ampere_bf16_s16816gemm_bf16_64x64_ldg8_relu_f2f_stages_64x5_tn
276
+ 0.4 10,345,042 84 123,155.3 134,289.0 29,184 210,881 71,576.9 void flash::flash_fwd_splitkv_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (int)4, (…
277
+ 0.4 9,234,753 56 164,906.3 164,800.0 163,968 168,289 674.4 ampere_bf16_s1688gemm_bf16_128x128_ldg8_relu_f2f_stages_32x1_tn
278
+ 0.3 8,082,552 840 9,622.1 8,128.0 6,303 15,456 2,982.3 void flash::flash_fwd_splitkv_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (int)4, (…
279
+ 0.3 7,169,562 112 64,013.9 63,872.5 62,688 66,048 690.9 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_32x32_64x1_tn_align8>(T1::Para…
280
+ 0.3 6,493,611 2,100 3,092.2 2,176.0 1,695 32,768 4,378.0 void vllm::rotary_embedding_kernel<c10::BFloat16, (bool)1>(const long *, T1 *, T1 *, const T1 *, in…
281
+ 0.2 6,253,223 3,052 2,048.9 1,888.0 1,344 3,073 551.2 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo…
282
+ 0.2 6,085,232 2,294 2,652.7 2,592.0 2,048 32,480 1,151.5 void at::native::<unnamed>::indexSelectLargeIndex<c10::BFloat16, long, unsigned int, (int)2, (int)2…
283
+ 0.2 5,583,604 2,186 2,554.3 960.0 832 109,312 11,893.2 void at::native::vectorized_elementwise_kernel<(int)8, at::native::FillFunctor<c10::BFloat16>, std:…
284
+ 0.2 5,469,605 2,852 1,917.8 1,920.0 1,343 2,593 225.4 void at::native::unrolled_elementwise_kernel<at::native::direct_copy_kernel_cuda(at::TensorIterator…
285
+ 0.2 4,931,788 4 1,232,947.0 1,224,579.0 1,207,907 1,274,723 30,803.4 void at_cuda_detail::cub::DeviceSegmentedRadixSortKernel<at_cuda_detail::cub::DeviceRadixSortPolicy…
286
+ 0.2 4,843,622 224 21,623.3 21,456.5 9,440 34,528 11,938.8 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_32x32_128x2_tn_align8>(T1::Par…
287
+ 0.2 3,840,021 28 137,143.6 136,897.0 136,193 139,552 843.2 ampere_bf16_s1688gemm_bf16_128x64_sliced1x2_ldg8_relu_f2f_tn
288
+ 0.1 3,682,796 2,846 1,294.0 1,280.0 1,120 1,473 37.1 void at::native::elementwise_kernel<(int)128, (int)2, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.1 3,022,648 2,072 1,458.8 1,120.0 960 11,329 1,680.7 void vllm::reshape_and_cache_flash_kernel<__nv_bfloat16, __nv_bfloat16, (vllm::Fp8KVCacheDataType)0…
+ 0.1 2,880,329 56 51,434.4 51,408.5 49,728 54,049 679.7 void flash::flash_fwd_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)64, (int)4, (bool)0, (…
+ 0.1 2,551,910 2 1,275,955.0 1,275,955.0 1,236,547 1,315,363 55,731.3 void at_cuda_detail::cub::DeviceSegmentedRadixSortKernel<at_cuda_detail::cub::DeviceRadixSortPolicy…
+ 0.1 2,327,648 632 3,683.0 3,424.0 1,535 6,336 1,080.0 void at::native::<unnamed>::indexSelectSmallIndex<c10::BFloat16, long, unsigned int, (int)2, (int)2…
+ 0.1 1,938,021 56 34,607.5 34,960.0 17,408 35,681 2,377.2 std::enable_if<!T7, void>::type internal::gemvx::kernel<int, int, __nv_bfloat16, float, float, floa…
+ 0.0 1,099,453 168 6,544.4 6,528.0 6,432 6,688 73.3 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (…
+ 0.0 1,012,450 1 1,012,450.0 1,012,450.0 1,012,450 1,012,450 0.0 void at::native::_scatter_gather_elementwise_kernel<(int)128, (int)8, void at::native::_cuda_scatte…
+ 0.0 881,129 280 3,146.9 3,136.0 2,785 3,520 213.5 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (…
+ 0.0 803,650 1 803,650.0 803,650.0 803,650 803,650 0.0 ampere_bf16_s1688gemm_bf16_64x128_sliced1x2_ldg8_f2f_tn
+ 0.0 740,486 224 3,305.7 3,297.0 3,104 3,488 88.0 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (…
+ 0.0 679,490 2 339,745.0 339,745.0 339,361 340,129 543.1 void at::native::vectorized_elementwise_kernel<(int)4, at::native::<unnamed>::masked_fill_kernel(at…
+ 0.0 607,311 336 1,807.5 1,792.0 1,631 2,113 119.0 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo…
+ 0.0 359,360 1 359,360.0 359,360.0 359,360 359,360 0.0 void at::native::tensor_kernel_scan_innermost_dim<float, std::plus<float>>(T1 *, const T1 *, unsign…
+ 0.0 318,145 1 318,145.0 318,145.0 318,145 318,145 0.0 at::native::<unnamed>::fill_reverse_indices_kernel(long *, int, at::cuda::detail::IntDivider<unsign…
+ 0.0 316,805 112 2,828.6 2,817.0 2,720 2,944 36.0 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (…
+ 0.0 315,236 75 4,203.1 2,624.0 1,920 33,184 6,087.8 void vllm::rms_norm_kernel<c10::BFloat16>(T1 *, const T1 *, const T1 *, float, int, int)
+ 0.0 251,010 56 4,482.3 4,480.0 4,448 4,513 12.0 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (…
+ 0.0 231,873 1 231,873.0 231,873.0 231,873 231,873 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.0 223,136 1 223,136.0 223,136.0 223,136 223,136 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl<at::native::…
+ 0.0 74,820 56 1,336.1 1,344.0 1,311 1,345 14.0 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, float, __nv_bfloat16, float, (bool)0, __n…
+ 0.0 65,347 73 895.2 896.0 831 1,408 75.0 void at::native::vectorized_elementwise_kernel<(int)2, at::native::FillFunctor<long>, std::array<ch…
+ 0.0 3,232 1 3,232.0 3,232.0 3,232 3,232 0.0 void at::native::<unnamed>::CatArrayBatchedCopy_aligned16_contig<at::native::<unnamed>::OpaqueType<…
+ 0.0 2,369 2 1,184.5 1,184.5 1,089 1,280 135.1 void <unnamed>::elementwise_kernel_with_index<int, at::native::arange_cuda_out(const c10::Scalar &,…
+ 0.0 2,336 1 2,336.0 2,336.0 2,336 2,336 0.0 void at::native::_scatter_gather_elementwise_kernel<(int)128, (int)8, void at::native::_cuda_scatte…
+ 0.0 2,208 1 2,208.0 2,208.0 2,208 2,208 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl<at::native::…
+ 0.0 2,208 1 2,208.0 2,208.0 2,208 2,208 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::cos_kernel_cuda(at::TensorIterat…
+ 0.0 2,049 1 2,049.0 2,049.0 2,049 2,049 0.0 void at::native::elementwise_kernel<(int)128, (int)2, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.0 1,855 1 1,855.0 1,855.0 1,855 1,855 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::sin_kernel_cuda(at::TensorIterat…
+ 0.0 1,697 1 1,697.0 1,697.0 1,697 1,697 0.0 void at::native::vectorized_elementwise_kernel<(int)8, at::native::bfloat16_copy_kernel_cuda(at::Te…
+ 0.0 1,536 1 1,536.0 1,536.0 1,536 1,536 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.0 1,505 1 1,505.0 1,505.0 1,505 1,505 0.0 void at::native::vectorized_elementwise_kernel<(int)8, at::native::CUDAFunctorOnOther_add<c10::BFlo…
+ 0.0 1,472 1 1,472.0 1,472.0 1,472 1,472 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::BUnaryFunctor<float, float, floa…
+ 0.0 1,344 1 1,344.0 1,344.0 1,344 1,344 0.0 void at::native::vectorized_elementwise_kernel<(int)2, at::native::CUDAFunctorOnOther_add<long>, st…
+ 0.0 1,216 1 1,216.0 1,216.0 1,216 1,216 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::reciprocal_kernel_cuda(at::Tenso…
+ 0.0 1,024 1 1,024.0 1,024.0 1,024 1,024 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::AUnaryFunctor<float, float, floa…
+ 0.0 896 1 896.0 896.0 896 896 0.0 void at::native::vectorized_elementwise_kernel<(int)2, at::native::FillFunctor<double>, std::array<…
+ 0.0 896 1 896.0 896.0 896 896 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::FillFunctor<int>, std::array<cha…
+
+ [7/8] Executing 'cuda_gpu_mem_time_sum' stats report
+
+ Time (%) Total Time (ns) Count Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Operation
+ -------- --------------- ------ -------- -------- -------- ----------- ----------- ------------------------------
+ 97.1 588,870,084 49,000 12,017.8 353.0 288 110,979,833 539,121.1 [CUDA memcpy Host-to-Device]
+ 2.2 13,046,684 14,231 916.8 896.0 832 343,425 2,871.5 [CUDA memcpy Device-to-Device]
+ 0.5 3,261,860 2,851 1,144.1 1,120.0 863 1,664 70.8 [CUDA memcpy Device-to-Host]
+ 0.2 1,509,526 3,971 380.1 352.0 288 1,280 123.5 [CUDA memset]
+
+ [8/8] Executing 'cuda_gpu_mem_size_sum' stats report
+
+ Total (MB) Count Avg (MB) Med (MB) Min (MB) Max (MB) StdDev (MB) Operation
+ ---------- ------ -------- -------- -------- -------- ----------- ------------------------------
+ 3,170.710 49,000 0.065 0.000 0.000 466.747 2.401 [CUDA memcpy Host-to-Device]
+ 235.229 14,231 0.017 0.000 0.000 155.582 1.304 [CUDA memcpy Device-to-Device]
+ 1.731 3,971 0.000 0.000 0.000 0.003 0.001 [CUDA memset]
+ 0.593 2,851 0.000 0.000 0.000 0.002 0.000 [CUDA memcpy Device-to-Host]
+
+ Generated:
+ /data/cy/vllm_tts_N32.nsys-rep
+ /data/cy/vllm_tts_N32.sqlite
profile_N64.log ADDED
@@ -0,0 +1,356 @@
+ WARNING: CPU IP/backtrace sampling not supported, disabling.
+ Try the 'nsys status --environment' command to learn more.
+
+ WARNING: CPU context switch tracing not supported, disabling.
+ Try the 'nsys status --environment' command to learn more.
+
+ INFO 08-10 22:49:36 [__init__.py:244] Automatically detected platform cuda.
+ INFO:__main__:FastTTS AIME Experiment
+ INFO:__main__:==================================================
+ INFO:__main__:Starting FastTTS AIME experiment
+ INFO:__main__:Parameters: {'num_iterations': 2, 'n': 64, 'temperature': 2, 'beam_width': 4, 'generator_model': 'Qwen/Qwen2.5-Math-1.5B-Instruct', 'verifier_model': 'peiyi9979/math-shepherd-mistral-7b-prm', 'generator_gpu_memory': 0.3, 'verifier_gpu_memory': 0.62, 'offload_enabled': False, 'spec_beam_extension': False, 'prefix_aware_scheduling': False}
+ INFO:__main__:Loaded AIME dataset with 30 samples
+ INFO:__main__:Problem: Every morning Aya goes for a $9$-kilometer-long walk and stops at a coffee shop afterwards. When she walks at a constant speed of $s$ kilometers per hour, the walk takes her 4 hours, including $t$ minutes spent in the coffee shop. When she walks $s+2$ kilometers per hour, the walk takes her 2 hours and 24 minutes, including $t$ minutes spent in the coffee shop. Suppose Aya walks at $s+\frac{1}{2}$ kilometers per hour. Find the number of minutes the walk takes her, including the $t$ minutes spent in the coffee shop.
+ INFO:__main__:Reference answer: 204
+ INFO:__main__:Initializing FastTTS models...
+ INFO:fasttts:Initializing FastTTS models...
+ INFO:models.vllm_wrapper:Initializing generator model: Qwen/Qwen2.5-Math-1.5B-Instruct
+ INFO 08-10 22:49:48 [__init__.py:244] Automatically detected platform cuda.
+ INFO:models.tts_llm:Using V0 engine with speculative beam extension: False
+ INFO:models.tts_llm:Prefix-aware scheduling enabled: False
+ ✅ Process PID: 3738074 | CUDA Context Object: None
+ INFO 08-10 22:49:57 [config.py:841] This model supports multiple tasks: {'classify', 'generate', 'embed', 'reward'}. Defaulting to 'generate'.
+ INFO 08-10 22:49:57 [config.py:1472] Using max model len 4096
+ INFO:models.generator_engine:Using GeneratorLLMEngine with vLLM version 0.9.2
+ INFO 08-10 22:49:58 [llm_engine.py:230] Initializing a V0 LLM engine (v0.9.2) with config: model='Qwen/Qwen2.5-Math-1.5B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2.5-Math-1.5B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='xgrammar', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=42, served_model_name=Qwen/Qwen2.5-Math-1.5B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=False, use_async_output_proc=True, pooler_config=None, compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":[],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":false,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":256,"local_cache_dir":null}, use_cached_outputs=False,
+ INFO 08-10 22:49:59 [cuda.py:363] Using Flash Attention backend.
+ INFO 08-10 22:50:00 [parallel_state.py:1076] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
+ INFO 08-10 22:50:00 [model_runner.py:1171] Starting to load model Qwen/Qwen2.5-Math-1.5B-Instruct...
+ INFO 08-10 22:50:01 [weight_utils.py:292] Using model weights format ['*.safetensors']
+ INFO 08-10 22:50:01 [weight_utils.py:345] No model.safetensors.index.json found in remote.
+
+
+
+
+ INFO 08-10 22:50:02 [default_loader.py:272] Loading weights took 0.78 seconds
+ INFO 08-10 22:50:03 [model_runner.py:1203] Model loading took 2.8798 GiB and 1.909023 seconds
+ INFO 08-10 22:50:04 [worker.py:294] Memory profiling takes 0.77 seconds
+ INFO 08-10 22:50:04 [worker.py:294] the current vLLM instance can use total_gpu_memory (23.64GiB) x gpu_memory_utilization (0.30) = 7.09GiB
+ INFO 08-10 22:50:04 [worker.py:294] model weights take 2.88GiB; non_torch_memory takes 0.08GiB; PyTorch activation peak memory takes 1.40GiB; the rest of the memory reserved for KV Cache is 2.74GiB.
+ INFO 08-10 22:50:04 [executor_base.py:113] # cuda blocks: 6412, # CPU blocks: 9362
+ INFO 08-10 22:50:04 [executor_base.py:118] Maximum concurrency for 4096 tokens per request: 25.05x
+ INFO 08-10 22:50:06 [model_runner.py:1513] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
+
+ INFO 08-10 22:50:21 [model_runner.py:1671] Graph capturing finished in 14 secs, took 0.23 GiB
+ INFO 08-10 22:50:21 [llm_engine.py:428] init engine (profile, create kv cache, warmup model) took 18.10 seconds
+ INFO:models.custom_scheduler:Using CustomScheduler
+ INFO:models.custom_scheduler:CustomScheduler initialized with config: SchedulerConfig(runner_type='generate', max_num_batched_tokens=4096, max_num_seqs=256, max_model_len=4096, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, num_lookahead_slots=0, cuda_graph_sizes=[512], delay_factor=0.0, enable_chunked_prefill=False, is_multimodal_model=False, max_num_encoder_input_tokens=4096, encoder_cache_size=4096, preemption_mode=None, num_scheduler_steps=1, multi_step_stream_outputs=True, send_delta_data=False, policy='fcfs', chunked_prefill_enabled=False, disable_chunked_mm_input=False, scheduler_cls=<class 'models.custom_scheduler.CustomScheduler'>, disable_hybrid_kv_cache_manager=False)
+ INFO:models.vllm_wrapper:Generator model initialized successfully in separate process
+ INFO:models.vllm_wrapper:Initializing verifier model: peiyi9979/math-shepherd-mistral-7b-prm
+ INFO 08-10 22:50:25 [__init__.py:244] Automatically detected platform cuda.
+ INFO:models.tts_llm:Prefix-aware scheduling enabled: False
+ ✅ Process PID: 3738452 | CUDA Context Object: None
+ INFO 08-10 22:50:37 [config.py:1472] Using max model len 4096
+ INFO 08-10 22:50:37 [arg_utils.py:1596] (Disabling) chunked prefill by default
+ INFO 08-10 22:50:37 [config.py:4601] Only "last" pooling supports chunked prefill and prefix caching; disabling both.
+ You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message
+ You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
+ INFO 08-10 22:50:39 [core.py:526] Waiting for init message from front-end.
+ INFO 08-10 22:50:39 [core.py:69] Initializing a V1 LLM engine (v0.9.2) with config: model='peiyi9979/math-shepherd-mistral-7b-prm', speculative_config=None, tokenizer='peiyi9979/math-shepherd-mistral-7b-prm', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='xgrammar', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=42, served_model_name=peiyi9979/math-shepherd-mistral-7b-prm, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=False, pooler_config=PoolerConfig(pooling_type='STEP', normalize=None, softmax=True, step_tag_id=12902, returned_token_ids=[648, 387]), compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null}
+ INFO 08-10 22:50:40 [parallel_state.py:1076] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
+ WARNING 08-10 22:50:40 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
+ INFO 08-10 22:50:40 [gpu_model_runner.py:1770] Starting to load model peiyi9979/math-shepherd-mistral-7b-prm...
+ INFO 08-10 22:50:40 [gpu_model_runner.py:1775] Loading model from scratch...
+ INFO 08-10 22:50:40 [cuda.py:284] Using Flash Attention backend on V1 engine.
+ INFO 08-10 22:50:41 [weight_utils.py:292] Using model weights format ['*.bin']
+
+
+
+
+
+ INFO 08-10 22:50:51 [default_loader.py:272] Loading weights took 9.82 seconds
+ INFO 08-10 22:50:52 [gpu_model_runner.py:1801] Model loading took 13.2457 GiB and 11.067265 seconds
+ INFO 08-10 22:50:59 [backends.py:508] Using cache directory: /home/cy/.cache/vllm/torch_compile_cache/eae4db4fef/rank_0_0/backbone for vLLM's torch.compile
+ INFO 08-10 22:50:59 [backends.py:519] Dynamo bytecode transform time: 6.83 s
+ INFO 08-10 22:51:04 [backends.py:155] Directly load the compiled graph(s) for shape None from the cache, took 4.844 s
+ INFO 08-10 22:51:05 [monitor.py:34] torch.compile takes 6.83 s in total
+ INFO 08-10 22:51:06 [gpu_worker.py:232] Available KV cache memory: 0.88 GiB
+ INFO 08-10 22:51:06 [kv_cache_utils.py:716] GPU KV cache size: 7,168 tokens
+ INFO 08-10 22:51:06 [kv_cache_utils.py:720] Maximum concurrency for 4,096 tokens per request: 1.75x
+
+ INFO 08-10 22:51:26 [gpu_model_runner.py:2326] Graph capturing finished in 20 secs, took 0.53 GiB
+ INFO 08-10 22:51:26 [core.py:172] init engine (profile, create kv cache, warmup model) took 34.31 seconds
+ INFO 08-10 22:51:27 [config.py:4601] Only "last" pooling supports chunked prefill and prefix caching; disabling both.
+ INFO:models.vllm_wrapper:Verifier model initialized successfully in separate process
+ INFO:fasttts:FastTTS models initialized successfully
+ INFO:__main__:Starting search...
+ INFO:fasttts:Processing 1 problems at once
+ INFO:search.beam_search:Starting beam search iterations
+
+
+ INFO 08-10 22:51:27 [metrics.py:433] Prefix cache hit rate: GPU: 98.44%, CPU: 0.00%
+
+ INFO 08-10 22:51:32 [metrics.py:433] Prefix cache hit rate: GPU: 98.44%, CPU: 0.00%
+
+ INFO 08-10 22:51:37 [metrics.py:433] Prefix cache hit rate: GPU: 98.44%, CPU: 0.00%
+
+
+
+ INFO:search.beam_search:----------------------------------------------------------------------------------------------------
+ INFO:search.beam_search:Iteration 0 completed beams: 0, skipped beams: 0, extended beams: 0, verifier beams: 0, total latency: 16.77s, length of agg_scores: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], num_steps: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], stop reasons: ['\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n']
+
+
+ INFO 08-10 22:51:44 [metrics.py:433] Prefix cache hit rate: GPU: 70.82%, CPU: 0.00%
+ INFO 08-10 22:51:49 [metrics.py:417] Avg prompt throughput: 12003.1 tokens/s, Avg generation throughput: 5077.1 tokens/s, Running: 64 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 37.6%, CPU KV cache usage: 0.0%.
+ INFO 08-10 22:51:49 [metrics.py:433] Prefix cache hit rate: GPU: 85.39%, CPU: 0.00%
+ INFO 08-10 22:51:54 [metrics.py:417] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 4980.1 tokens/s, Running: 64 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 62.1%, CPU KV cache usage: 0.0%.
+ INFO 08-10 22:51:54 [metrics.py:433] Prefix cache hit rate: GPU: 85.39%, CPU: 0.00%
+
+ INFO 08-10 22:51:59 [metrics.py:433] Prefix cache hit rate: GPU: 85.39%, CPU: 0.00%
+ WARNING 08-10 22:52:03 [scheduler.py:1834] Sequence group 127 is preempted by PreemptionMode.RECOMPUTE mode because there is not enough KV cache space. This can affect the end-to-end performance. Increase gpu_memory_utilization or tensor_parallel_size to provide more KV cache memory. total_num_cumulative_preemption=1
+
+ INFO 08-10 22:52:04 [metrics.py:433] Prefix cache hit rate: GPU: 85.39%, CPU: 0.00%
+ INFO 08-10 22:52:09 [metrics.py:417] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 3901.7 tokens/s, Running: 46 reqs, Swapped: 0 reqs, Pending: 15 reqs, GPU KV cache usage: 99.3%, CPU KV cache usage: 0.0%.
+ INFO 08-10 22:52:09 [metrics.py:433] Prefix cache hit rate: GPU: 85.39%, CPU: 0.00%
+
+ INFO 08-10 22:52:14 [metrics.py:433] Prefix cache hit rate: GPU: 68.29%, CPU: 0.00%
+
+
+
+ INFO:search.beam_search:Early exit: 0 active, 64 completed
+
+ INFO:__main__:
+ ==================================================
+ INFO:__main__:RESULTS
+ INFO:__main__:==================================================
+ INFO:__main__:Total num tokens: 133754
+ INFO:__main__:Effective num tokens: 181430
+ INFO:__main__:Effective num tokens per step: 2834.84375
+ INFO:__main__:Number of tokens in 1 completion: 2834.84375
+ INFO:__main__:N completion tokens: 133754
+ INFO:__main__:Total generator latency: 42.39s
+ INFO:__main__:Total verifier latency: 33.06s
+ INFO:__main__:N generator latency: 42.39s
+ INFO:__main__:N verifier latency: 33.06s
+ INFO:__main__:Goodput: 2404.47
+ INFO:__main__:Per-token generator goodput: 37.57
+ INFO:__main__:Completions: 64
+ INFO:__main__:Completion time: 48.29s
+ INFO:__main__:Number of steps in 1 completion: 9.375
+ INFO:__main__:Extended tokens: [[], []]
+ INFO:__main__:Cleaning up...
+ [rank0]:[W810 22:52:45.451462166 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
+ INFO:models.vllm_wrapper:Generator model shutdown complete
+ INFO:models.vllm_wrapper:Verifier model shutdown complete
+ INFO:fasttts:FastTTS shutdown complete
+ INFO:__main__:Experiment completed successfully!
+ GPU 3: General Metrics for NVIDIA AD10x (any frequency)
+ Generating '/tmp/nsys-report-5cae.qdstrm'
+
+
+ [3/8] Executing 'nvtx_sum' stats report
+
+ Time (%) Total Time (ns) Instances Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Style Range
+ -------- --------------- --------- ---------------- ---------------- -------------- -------------- ---------------- ------- ----------------------------------
+ 50.4 76,066,555,665 1 76,066,555,665.0 76,066,555,665.0 76,066,555,665 76,066,555,665 0.0 PushPop :Total
+ 28.1 42,391,722,869 2 21,195,861,434.5 21,195,861,434.5 10,864,510,578 31,527,212,291 14,610,736,498.9 PushPop :generate
+ 21.4 32,329,219,625 2 16,164,609,812.5 16,164,609,812.5 5,818,750,726 26,510,468,899 14,631,254,234.5 PushPop :encode
+ 0.0 82,132 1 82,132.0 82,132.0 82,132 82,132 0.0 PushPop CCCL:cub::DeviceSegmentedRadixSort
+
+ [4/8] Executing 'osrt_sum' stats report
+
+ Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
+ -------- ----------------- --------- ---------------- ---------------- --------- -------------- --------------- ----------------------
+ 27.5 1,570,586,069,664 99 15,864,505,754.2 14,152,497,026.0 33,152 51,050,498,094 4,865,754,614.8 pthread_cond_wait
+ 23.3 1,331,651,323,026 82,813 16,080,220.8 10,062,125.0 1,003 81,342,415,477 427,988,334.0 epoll_wait
+ 22.0 1,258,791,723,921 10,509 119,782,255.6 100,067,472.0 1,038 1,000,133,283 124,620,087.5 pthread_cond_timedwait
+ 11.7 666,354,533,375 80 8,329,431,667.2 10,000,073,731.5 23,706 10,000,174,961 3,631,147,056.3 sem_timedwait
+ 9.1 521,879,778,889 40,788 12,794,934.3 3,130.0 1,000 76,842,750,440 782,674,995.6 read
+ 5.7 322,689,387,587 2,637 122,369,885.3 100,121,935.0 1,000 33,116,788,050 816,777,744.8 poll
+ 0.6 32,628,206,240 107 304,936,506.9 412,945,471.0 26,513 435,645,358 176,842,057.7 sem_wait
+ 0.0 2,606,030,057 3,612 721,492.3 10,308.0 1,003 120,495,189 7,540,681.0 ioctl
+ 0.0 1,334,118,106 780 1,710,407.8 1,049.5 1,000 1,258,687,140 45,116,549.7 waitpid
+ 0.0 399,770,775 148,773 2,687.1 1,293.0 1,022 126,997,526 329,275.0 munmap
+ 0.0 274,510,787 523 524,877.2 2,582.0 1,056 19,487,046 2,944,214.1 fopen
+ 0.0 202,659,851 40 5,066,496.3 5,064,858.0 5,040,807 5,084,659 8,872.3 nanosleep
+ 0.0 142,055,986 46,554 3,051.4 2,617.0 1,008 1,093,341 5,374.5 open64
+ 0.0 132,038,923 151 874,430.0 3,960.0 1,030 23,199,301 3,983,645.6 open
+ 0.0 59,492,133 10 5,949,213.3 35,000.0 10,107 59,133,413 18,687,056.3 connect
+ 0.0 58,594,767 374 156,670.5 6,240.0 1,943 19,325,814 1,674,531.7 fopen64
+ 0.0 47,992,541 3 15,997,513.7 1,075,287.0 858,312 46,058,942 26,034,186.7 fork
+ 0.0 34,761,473 98 354,708.9 12,929.5 5,636 6,322,592 980,721.1 pthread_join
+ 0.0 29,564,332 8,054 3,670.8 2,208.0 1,000 1,671,051 19,459.1 mmap64
+ 0.0 28,837,476 263 109,648.2 104,471.0 1,067 2,318,041 161,232.9 recv
+ 0.0 24,293,672 245 99,157.8 68,660.0 54,570 8,049,966 510,062.1 sleep
+ 0.0 19,254,665 1,147 16,787.0 6,284.0 1,006 766,716 39,843.0 write
+ 0.0 16,341,237 215 76,005.8 48,286.0 15,491 902,649 92,370.5 pthread_create
+ 0.0 11,318,320 34 332,891.8 234,941.0 1,834 2,012,743 358,161.4 pthread_rwlock_wrlock
+ 0.0 10,021,688 370 27,085.6 8,884.0 1,164 88,422 25,476.1 send
+ 0.0 9,822,488 1,521 6,457.9 2,124.0 1,017 84,090 9,727.8 fgets
+ 0.0 3,193,803 2,637 1,211.2 1,074.0 1,005 14,780 567.6 fclose
+ 0.0 2,057,750 147 13,998.3 3,040.0 1,928 320,866 40,740.9 futex
+ 0.0 1,512,017 192 7,875.1 5,237.0 1,324 68,866 6,729.6 mmap
+ 0.0 1,396,293 38 36,744.6 11,798.0 1,055 573,815 92,333.3 pthread_mutex_lock
+ 0.0 839,678 287 2,925.7 2,195.0 1,001 27,000 2,738.7 pthread_cond_signal
+ 0.0 643,114 297 2,165.4 2,010.0 1,038 14,434 1,221.5 epoll_ctl
+ 0.0 600,231 5 120,046.2 117,218.0 20,631 271,670 104,219.5 pthread_rwlock_rdlock
+ 0.0 595,385 102 5,837.1 4,637.0 1,804 18,764 3,609.1 pipe2
+ 0.0 275,162 42 6,551.5 5,607.5 1,893 17,150 4,372.9 socket
+ 0.0 202,253 25 8,090.1 2,869.0 1,003 48,784 13,218.2 bind
+ 0.0 80,087 30 2,669.6 2,208.5 1,234 6,650 1,331.9 stat
+ 0.0 65,894 37 1,780.9 1,792.0 1,026 2,928 397.2 sigaction
+ 0.0 62,954 31 2,030.8 2,247.0 1,033 3,429 752.2 dup2
+ 0.0 58,502 18 3,250.1 3,191.5 1,029 6,493 2,107.7 fflush
+ 0.0 54,988 12 4,582.3 3,752.0 1,048 10,593 3,547.6 pthread_cond_broadcast
+ 0.0 51,591 5 10,318.2 10,246.0 4,922 16,428 4,094.9 accept4
+ 0.0 49,265 16 3,079.1 3,066.5 1,181 6,938 1,812.4 pthread_mutex_trylock
+ 0.0 45,034 8 5,629.3 5,133.5 4,623 7,254 974.5 lstat
+ 0.0 40,338 27 1,494.0 1,379.0 1,007 3,796 546.5 fcntl
+ 0.0 39,769 17 2,339.4 1,856.0 1,297 4,201 947.9 pread
+ 0.0 33,378 5 6,675.6 4,399.0 3,287 12,984 4,228.1 fread
+ 0.0 32,642 2 16,321.0 16,321.0 13,781 18,861 3,592.1 socketpair
+ 0.0 30,642 7 4,377.4 4,157.0 3,805 5,445 557.9 fputs_unlocked
+ 0.0 23,634 8 2,954.3 2,793.5 2,094 3,949 800.7 flock
+ 0.0 21,622 8 2,702.8 2,848.0 2,045 3,320 511.2 mprotect
+ 0.0 21,600 3 7,200.0 8,286.0 2,582 10,732 4,182.1 fwrite
+ 0.0 16,405 10 1,640.5 1,410.0 1,136 3,238 659.5 listen
+ 0.0 14,137 6 2,356.2 2,069.0 1,320 3,888 1,010.6 fstat
+ 0.0 11,133 2 5,566.5 5,566.5 3,293 7,840 3,215.2 fputs
+ 0.0 10,336 1 10,336.0 10,336.0 10,336 10,336 0.0 kill
+ 0.0 5,429 3 1,809.7 1,576.0 1,219 2,634 735.9 openat64
+
+ [5/8] Executing 'cuda_api_sum' stats report
+
+ Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
+ -------- --------------- --------- ------------ ----------- --------- ----------- ------------ ------------------------------------------
+ 80.2 18,766,359,081 99,630 188,360.5 4,711.0 2,729 112,239,810 1,085,804.8 cudaMemcpyAsync
+ 10.4 2,436,385,485 82 29,712,018.1 25,880.5 4,761 123,460,724 42,117,123.4 cudaHostAlloc
+ 4.0 933,341,749 83,708 11,150.0 5,442.0 703 86,332,503 446,024.2 cudaLaunchKernel
+ 2.4 570,293,680 4,296 132,749.9 144,270.0 62,066 1,200,614 68,348.1 cudaGraphLaunch_v10000
+ 0.7 153,413,229 10 15,341,322.9 42,661.0 12,778 153,071,203 48,393,352.9 cudaMemGetInfo
+ 0.3 79,659,549 68,993 1,154.6 971.0 586 13,267 612.1 cudaEventRecord
+ 0.3 75,454,580 68,986 1,093.8 753.0 356 38,725 963.9 cudaEventQuery
+ 0.3 74,196,238 35 2,119,892.5 1,916,761.0 1,486,917 3,065,726 570,084.3 cudaGraphInstantiateWithFlags_v11040
+ 0.3 59,565,421 12,077 4,932.1 4,906.0 665 75,452 2,446.1 cuLaunchKernel
+ 0.2 46,938,500 10 4,693,850.0 4,834,651.0 97,836 8,421,697 2,733,918.5 cuLibraryLoadData
+ 0.2 41,018,775 35 1,171,965.0 1,116,417.0 961,297 1,524,129 151,276.2 cudaGraphExecDestroy_v10000
+ 0.2 39,595,262 7,476 5,296.3 5,792.5 203 548,097 7,905.1 cudaMemsetAsync
+ 0.1 29,204,824 60 486,747.1 260,572.5 69,492 2,709,139 565,666.9 cudaFree
+ 0.1 26,170,937 172 152,156.6 133,220.0 5,708 512,837 64,461.0 cudaMalloc
+ 0.1 18,529,952 4,861 3,812.0 2,903.0 2,264 57,006 4,085.6 cudaStreamSynchronize
+ 0.1 14,956,888 12,077 1,238.5 637.0 262 4,521,834 42,565.8 cuKernelGetFunction
+ 0.0 10,178,076 13,126 775.4 863.0 273 9,371 398.0 cudaStreamIsCapturing_v10000
243
+ 0.0 5,365,861 35 153,310.3 153,464.0 126,092 193,415 18,094.2 cudaGraphDestroy_v10000
244
+ 0.0 4,661,222 8,785 530.6 522.0 310 7,041 169.7 cudaStreamGetCaptureInfo_v2_v11030
245
+ 0.0 4,308,195 35 123,091.3 115,015.0 99,059 281,475 30,290.5 cudaStreamEndCapture_v10000
246
+ 0.0 3,506,035 128 27,390.9 2,592.0 2,152 1,178,422 142,976.4 cudaStreamCreateWithPriority
247
+ 0.0 2,098,596 106 19,798.1 19,822.5 2,884 114,410 18,203.1 cudaDeviceSynchronize
248
+ 0.0 896,814 35 25,623.3 25,859.0 17,784 29,201 2,132.8 cudaGraphGetNodes_v10000
249
+ 0.0 441,272 35 12,607.8 10,140.0 8,217 26,212 4,497.7 cudaStreamBeginCapture_v10000
250
+ 0.0 196,352 810 242.4 207.0 116 2,275 149.1 cuGetProcAddress_v2
251
+ 0.0 53,188 26 2,045.7 560.0 456 23,040 4,516.5 cudaEventCreateWithFlags
252
+ 0.0 25,620 16 1,601.3 1,262.5 792 4,560 1,049.8 cuLibraryGetKernel
253
+ 0.0 5,911 3 1,970.3 1,866.0 1,700 2,345 334.9 cuInit
254
+ 0.0 4,571 8 571.4 539.0 358 1,033 199.0 cudaThreadExchangeStreamCaptureMode_v10010
255
+ 0.0 3,626 1 3,626.0 3,626.0 3,626 3,626 0.0 cudaStreamWaitEvent
256
+ 0.0 2,567 3 855.7 192.0 176 2,199 1,163.4 cuModuleGetLoadingMode
257
+ 0.0 1,605 1 1,605.0 1,605.0 1,605 1,605 0.0 cudaEventDestroy
258
+ 0.0 1,429 2 714.5 714.5 447 982 378.3 cudaGetDriverEntryPoint_v11030
259
+
+ [6/8] Executing 'cuda_gpu_kern_sum' stats report
+
+ Time (%) Total Time (ns) Instances Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name
+ -------- --------------- --------- ----------- ----------- --------- --------- ----------- ----------------------------------------------------------------------------------------------------
+ 33.6 1,570,853,979 6,432 244,224.8 88,272.5 7,777 567,619 241,154.7 void cutlass::Kernel2<cutlass_80_tensorop_bf16_s16816gemm_relu_bf16_64x64_32x6_tn_align8>(T1::Param…
+ 13.1 612,987,478 2,064 296,990.1 494,082.0 10,528 528,355 232,908.9 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_16x16_128x2_tn_align8>(T1::Par…
+ 6.2 289,595,232 252 1,149,187.4 1,209,432.5 812,902 1,347,464 159,957.4 ampere_bf16_s16816gemm_bf16_128x64_ldg8_f2f_tn
+ 4.4 204,289,529 578 353,442.1 488,002.0 6,976 488,803 210,884.6 std::enable_if<!T7, void>::type internal::gemvx::kernel<int, int, __nv_bfloat16, __nv_bfloat16, __n…
+ 4.1 190,810,056 4,312 44,250.9 28,672.0 25,120 587,042 22,871.1 void at::native::<unnamed>::cunn_SoftMaxForward<(int)4, float, float, float, at::native::<unnamed>:…
+ 4.0 188,066,965 448 419,792.3 54,464.0 53,184 1,411,525 547,428.7 ampere_bf16_s1688gemm_bf16_128x128_ldg8_f2f_stages_32x1_tn
+ 4.0 186,125,023 896 207,728.8 72,385.0 41,024 712,803 248,592.0 ampere_bf16_s1688gemm_bf16_128x64_sliced1x2_ldg8_f2f_tn
+ 3.7 171,144,010 4,312 39,690.2 32,385.0 2,848 332,800 34,299.9 void at::native::index_elementwise_kernel<(int)128, (int)4, void at::native::gpu_index_kernel<void …
+ 3.7 170,737,521 4,313 39,586.7 30,913.0 29,505 630,018 16,208.8 void at::native::<unnamed>::cunn_SoftMaxForward<(int)4, float, float, float, at::native::<unnamed>:…
+ 3.4 159,493,240 4,312 36,988.2 15,888.0 1,440 496,097 35,956.8 void at::native::vectorized_elementwise_kernel<(int)4, at::native::BinaryFunctor<float, float, floa…
+ 2.4 111,459,459 392 284,335.4 304,210.0 70,784 452,515 98,250.5 void flash::flash_fwd_splitkv_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (int)4, (…
+ 2.3 107,164,887 281 381,369.7 508,612.0 96,704 802,403 224,737.7 ampere_bf16_s1688gemm_bf16_64x128_sliced1x2_ldg8_f2f_tn
+ 2.0 93,672,078 4,312 21,723.6 23,840.0 2,304 184,257 17,543.1 void at::native::<unnamed>::distribution_elementwise_grid_stride_kernel<float, (int)4, void at::nat…
+ 1.9 89,824,056 2,408 37,302.3 5,920.0 3,232 248,609 74,450.8 void vllm::act_and_mul_kernel<c10::BFloat16, &vllm::silu_kernel<c10::BFloat16>, (bool)1>(T1 *, cons…
+ 1.9 88,588,532 4,312 20,544.7 21,280.0 2,783 333,761 15,078.8 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl<at::native::…
+ 1.8 83,806,723 165 507,919.5 507,620.0 506,275 534,243 2,945.0 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_16x16_128x1_tn_align8>(T1::Par…
+ 1.6 74,337,186 4,312 17,239.6 17,056.0 3,391 220,417 11,406.2 void at::native::unrolled_elementwise_kernel<at::native::direct_copy_kernel_cuda(at::TensorIterator…
+ 1.0 47,165,472 4,312 10,938.2 12,256.0 5,057 203,296 5,182.9 void at::native::reduce_kernel<(int)512, (int)1, at::native::ReduceOp<float, at::native::ArgMaxOps<…
+ 0.7 32,899,821 224 146,874.2 151,841.0 113,505 168,449 18,546.6 ampere_bf16_s1688gemm_bf16_128x128_ldg8_relu_f2f_stages_32x1_tn
+ 0.5 24,957,468 4,816 5,182.2 2,815.0 1,664 38,368 6,580.9 std::enable_if<T2>(int)0&&vllm::_typeConvert<T1>::exists, void>::type vllm::fused_add_rms_norm_kern…
+ 0.4 20,348,051 1,120 18,167.9 17,120.0 11,840 23,232 3,859.0 void flash::flash_fwd_splitkv_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (int)4, (…
+ 0.4 16,753,720 448 37,396.7 37,344.0 35,968 39,456 541.1 void cutlass::Kernel2<cutlass_80_tensorop_bf16_s16816gemm_relu_bf16_64x128_32x6_tn_align8>(T1::Para…
+ 0.3 15,993,907 728 21,969.7 13,440.0 12,416 63,169 12,711.9 ampere_bf16_s16816gemm_bf16_64x64_ldg8_f2f_stages_64x5_tn
+ 0.3 12,886,452 952 13,536.2 13,440.0 12,256 15,648 965.3 ampere_bf16_s16816gemm_bf16_64x64_ldg8_relu_f2f_stages_64x5_tn
+ 0.2 11,441,342 2,408 4,751.4 2,272.0 1,727 37,216 6,403.4 void vllm::rotary_embedding_kernel<c10::BFloat16, (bool)1>(const long *, T1 *, T1 *, const T1 *, in…
+ 0.2 10,702,121 84 127,406.2 140,497.0 98,496 145,793 19,683.1 ampere_bf16_s1688gemm_bf16_128x64_sliced1x2_ldg8_relu_f2f_tn
+ 0.2 8,093,660 840 9,635.3 8,128.0 6,304 15,520 2,991.7 void flash::flash_fwd_splitkv_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (int)4, (…
+ 0.2 7,630,192 2,645 2,884.8 2,784.0 2,048 33,856 1,930.4 void at::native::<unnamed>::indexSelectLargeIndex<c10::BFloat16, long, unsigned int, (int)2, (int)2…
+ 0.2 7,285,319 4,313 1,689.2 1,664.0 1,280 2,624 201.7 void at::native::unrolled_elementwise_kernel<at::native::direct_copy_kernel_cuda(at::TensorIterator…
+ 0.2 7,175,073 112 64,063.2 64,017.0 62,881 66,337 696.0 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_32x32_64x1_tn_align8>(T1::Para…
+ 0.2 7,038,404 2,494 2,822.1 992.0 831 108,833 11,162.2 void at::native::vectorized_elementwise_kernel<(int)8, at::native::FillFunctor<c10::BFloat16>, std:…
+ 0.1 6,179,131 3,024 2,043.4 1,888.0 1,343 3,071 548.3 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo…
+ 0.1 6,016,394 1,753 3,432.1 3,520.0 1,535 6,368 1,488.2 void at::native::<unnamed>::indexSelectSmallIndex<c10::BFloat16, long, unsigned int, (int)2, (int)2…
+ 0.1 5,847,636 4,296 1,361.2 1,312.0 1,120 1,697 137.2 void at::native::elementwise_kernel<(int)128, (int)2, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.1 5,772,666 2,380 2,425.5 1,120.0 960 13,472 3,192.2 void vllm::reshape_and_cache_flash_kernel<__nv_bfloat16, __nv_bfloat16, (vllm::Fp8KVCacheDataType)0…
+ 0.1 4,930,196 4 1,232,549.0 1,221,797.0 1,205,157 1,281,445 35,608.2 void at_cuda_detail::cub::DeviceSegmentedRadixSortKernel<at_cuda_detail::cub::DeviceRadixSortPolicy…
+ 0.1 4,844,152 224 21,625.7 21,520.0 9,504 34,560 11,940.6 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_32x32_128x2_tn_align8>(T1::Par…
+ 0.1 2,881,101 56 51,448.2 51,425.0 49,728 53,665 740.6 void flash::flash_fwd_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)64, (int)4, (bool)0, (…
+ 0.1 2,556,265 2 1,278,132.5 1,278,132.5 1,238,660 1,317,605 55,822.5 void at_cuda_detail::cub::DeviceSegmentedRadixSortKernel<at_cuda_detail::cub::DeviceRadixSortPolicy…
+ 0.0 1,935,364 56 34,560.1 34,880.0 17,376 35,809 2,367.3 std::enable_if<!T7, void>::type internal::gemvx::kernel<int, int, __nv_bfloat16, float, float, floa…
+ 0.0 1,099,587 168 6,545.2 6,528.0 6,432 6,752 80.1 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (…
+ 0.0 1,012,484 1 1,012,484.0 1,012,484.0 1,012,484 1,012,484 0.0 void at::native::_scatter_gather_elementwise_kernel<(int)128, (int)8, void at::native::_cuda_scatte…
+ 0.0 882,409 280 3,151.5 3,136.0 2,815 3,457 214.9 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (…
+ 0.0 741,117 224 3,308.6 3,328.0 3,104 3,457 88.1 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (…
+ 0.0 679,138 2 339,569.0 339,569.0 339,553 339,585 22.6 void at::native::vectorized_elementwise_kernel<(int)4, at::native::<unnamed>::masked_fill_kernel(at…
+ 0.0 606,622 336 1,805.4 1,792.0 1,632 2,112 116.2 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo…
+ 0.0 588,585 86 6,844.0 3,296.5 1,888 33,248 9,270.3 void vllm::rms_norm_kernel<c10::BFloat16>(T1 *, const T1 *, const T1 *, float, int, int)
+ 0.0 358,113 1 358,113.0 358,113.0 358,113 358,113 0.0 void at::native::tensor_kernel_scan_innermost_dim<float, std::plus<float>>(T1 *, const T1 *, unsign…
+ 0.0 318,273 1 318,273.0 318,273.0 318,273 318,273 0.0 at::native::<unnamed>::fill_reverse_indices_kernel(long *, int, at::cuda::detail::IntDivider<unsign…
+ 0.0 317,793 112 2,837.4 2,848.0 2,688 2,944 35.8 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (…
+ 0.0 251,040 56 4,482.9 4,480.0 4,448 4,544 16.4 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (…
+ 0.0 232,033 1 232,033.0 232,033.0 232,033 232,033 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.0 222,112 1 222,112.0 222,112.0 222,112 222,112 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl<at::native::…
+ 0.0 74,914 56 1,337.8 1,344.0 1,311 1,345 12.8 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, float, __nv_bfloat16, float, (bool)0, __n…
+ 0.0 65,439 73 896.4 895.0 832 1,408 74.9 void at::native::vectorized_elementwise_kernel<(int)2, at::native::FillFunctor<long>, std::array<ch…
+ 0.0 3,295 1 3,295.0 3,295.0 3,295 3,295 0.0 void at::native::<unnamed>::CatArrayBatchedCopy_aligned16_contig<at::native::<unnamed>::OpaqueType<…
+ 0.0 2,304 2 1,152.0 1,152.0 1,087 1,217 91.9 void <unnamed>::elementwise_kernel_with_index<int, at::native::arange_cuda_out(const c10::Scalar &,…
+ 0.0 2,272 1 2,272.0 2,272.0 2,272 2,272 0.0 void at::native::_scatter_gather_elementwise_kernel<(int)128, (int)8, void at::native::_cuda_scatte…
+ 0.0 2,208 1 2,208.0 2,208.0 2,208 2,208 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl<at::native::…
+ 0.0 2,176 1 2,176.0 2,176.0 2,176 2,176 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::cos_kernel_cuda(at::TensorIterat…
+ 0.0 2,048 1 2,048.0 2,048.0 2,048 2,048 0.0 void at::native::elementwise_kernel<(int)128, (int)2, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.0 1,856 1 1,856.0 1,856.0 1,856 1,856 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::sin_kernel_cuda(at::TensorIterat…
+ 0.0 1,696 1 1,696.0 1,696.0 1,696 1,696 0.0 void at::native::vectorized_elementwise_kernel<(int)8, at::native::bfloat16_copy_kernel_cuda(at::Te…
+ 0.0 1,536 1 1,536.0 1,536.0 1,536 1,536 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl_nocast<at::n…
+ 0.0 1,503 1 1,503.0 1,503.0 1,503 1,503 0.0 void at::native::vectorized_elementwise_kernel<(int)8, at::native::CUDAFunctorOnOther_add<c10::BFlo…
+ 0.0 1,472 1 1,472.0 1,472.0 1,472 1,472 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::BUnaryFunctor<float, float, floa…
+ 0.0 1,376 1 1,376.0 1,376.0 1,376 1,376 0.0 void at::native::vectorized_elementwise_kernel<(int)2, at::native::CUDAFunctorOnOther_add<long>, st…
+ 0.0 1,216 1 1,216.0 1,216.0 1,216 1,216 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::reciprocal_kernel_cuda(at::Tenso…
+ 0.0 1,024 1 1,024.0 1,024.0 1,024 1,024 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::AUnaryFunctor<float, float, floa…
+ 0.0 896 1 896.0 896.0 896 896 0.0 void at::native::vectorized_elementwise_kernel<(int)2, at::native::FillFunctor<double>, std::array<…
+ 0.0 896 1 896.0 896.0 896 896 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::FillFunctor<int>, std::array<cha…
+
+ [7/8] Executing 'cuda_gpu_mem_time_sum' stats report
+
+ Time (%) Total Time (ns) Count Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Operation
+ -------- --------------- ------ -------- -------- -------- ----------- ----------- ------------------------------
+ 95.9 626,119,802 73,837 8,479.8 353.0 287 111,812,244 443,931.4 [CUDA memcpy Host-to-Device]
+ 3.0 19,606,353 21,481 912.7 896.0 831 342,626 2,331.9 [CUDA memcpy Device-to-Device]
+ 0.8 5,012,393 4,312 1,162.4 1,152.0 863 1,760 109.9 [CUDA memcpy Device-to-Host]
+ 0.4 2,420,816 5,936 407.8 352.0 320 1,792 179.9 [CUDA memset]
+
+ [8/8] Executing 'cuda_gpu_mem_size_sum' stats report
+
+ Total (MB) Count Avg (MB) Med (MB) Min (MB) Max (MB) StdDev (MB) Operation
+ ---------- ------ -------- -------- -------- -------- ----------- ------------------------------
+ 3,259.930 73,837 0.044 0.000 0.000 466.747 1.957 [CUDA memcpy Host-to-Device]
+ 320.544 21,481 0.015 0.000 0.000 155.582 1.062 [CUDA memcpy Device-to-Device]
+ 3.171 5,936 0.001 0.000 0.000 0.003 0.001 [CUDA memset]
+ 1.190 4,312 0.000 0.000 0.000 0.002 0.000 [CUDA memcpy Device-to-Host]
+
+ Generated:
+ /data/cy/vllm_tts_N64.nsys-rep
+ /data/cy/vllm_tts_N64.sqlite
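The derived columns of the `cuda_gpu_mem_time_sum` report above (Time (%) and Avg (ns)) follow directly from the Total Time (ns) and Count columns. As a quick sanity check, they can be recomputed with a few lines of Python; the figures below are copied from the N32 table above, and the dictionary layout is just an illustrative choice, not profiler output:

```python
# Recompute Time (%) and Avg (ns) for the 'cuda_gpu_mem_time_sum' report
# from its Total Time (ns) and Count columns (values copied from the
# N32 table above).
rows = {
    "[CUDA memcpy Host-to-Device]": (626_119_802, 73_837),
    "[CUDA memcpy Device-to-Device]": (19_606_353, 21_481),
    "[CUDA memcpy Device-to-Host]": (5_012_393, 4_312),
    "[CUDA memset]": (2_420_816, 5_936),
}

# Time (%) is relative to the sum over all memcpy/memset operations.
grand_total_ns = sum(total for total, _ in rows.values())

for op, (total_ns, count) in rows.items():
    time_pct = 100.0 * total_ns / grand_total_ns
    avg_ns = total_ns / count
    print(f"{op}: {time_pct:.1f}% avg={avg_ns:,.1f} ns")
```

Rounded to one decimal place, these reproduce the table exactly (e.g. Host-to-Device copies account for 95.9% of transfer time at 8,479.8 ns per call), confirming that the summary is dominated by Host-to-Device traffic.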
vllm_tts_N128.nsys-rep ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7c2c3112bfd6193d3c3b3157a42f2bb96901d52754ac728700bc73512ad4688
+ size 84758844
vllm_tts_N32.nsys-rep ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a9141a53becbdb5b9991f04765ad3c43dde19418a2583d5a1f8b1037553007b
+ size 44215112
vllm_tts_N64.nsys-rep ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c202c775406b3900eeba8a002d600cc0e943c9c1e4ac881cf75d16136c06775
+ size 59513798