| WARNING: CPU IP/backtrace sampling not supported, disabling. |
| Try the 'nsys status --environment' command to learn more. |
|
|
| WARNING: CPU context switch tracing not supported, disabling. |
| Try the 'nsys status --environment' command to learn more. |
|
|
| INFO 08-10 22:44:26 [__init__.py:244] Automatically detected platform cuda. |
| INFO:__main__:FastTTS AIME Experiment |
| INFO:__main__:================================================== |
| INFO:__main__:Starting FastTTS AIME experiment |
| INFO:__main__:Parameters: {'num_iterations': 2, 'n': 32, 'temperature': 2, 'beam_width': 4, 'generator_model': 'Qwen/Qwen2.5-Math-1.5B-Instruct', 'verifier_model': 'peiyi9979/math-shepherd-mistral-7b-prm', 'generator_gpu_memory': 0.3, 'verifier_gpu_memory': 0.62, 'offload_enabled': False, 'spec_beam_extension': False, 'prefix_aware_scheduling': False} |
| INFO:__main__:Loaded AIME dataset with 30 samples |
| INFO:__main__:Problem: Every morning Aya goes for a $9$-kilometer-long walk and stops at a coffee shop afterwards. When she walks at a constant speed of $s$ kilometers per hour, the walk takes her 4 hours, including $t$ minutes spent in the coffee shop. When she walks $s+2$ kilometers per hour, the walk takes her 2 hours and 24 minutes, including $t$ minutes spent in the coffee shop. Suppose Aya walks at $s+\frac{1}{2}$ kilometers per hour. Find the number of minutes the walk takes her, including the $t$ minutes spent in the coffee shop. |
| INFO:__main__:Reference answer: 204 |
| INFO:__main__:Initializing FastTTS models... |
| INFO:fasttts:Initializing FastTTS models... |
| INFO:models.vllm_wrapper:Initializing generator model: Qwen/Qwen2.5-Math-1.5B-Instruct |
| INFO 08-10 22:44:38 [__init__.py:244] Automatically detected platform cuda. |
| INFO:models.tts_llm:Using V0 engine with speculative beam extension: False |
| INFO:models.tts_llm:Prefix-aware scheduling enabled: False |
| ✅ Process PID: 3736098 | CUDA Context Object: None |
| INFO 08-10 22:44:49 [config.py:841] This model supports multiple tasks: {'embed', 'classify', 'generate', 'reward'}. Defaulting to 'generate'. |
| INFO 08-10 22:44:49 [config.py:1472] Using max model len 4096 |
| INFO:models.generator_engine:Using GeneratorLLMEngine with vLLM version 0.9.2 |
| INFO 08-10 22:44:49 [llm_engine.py:230] Initializing a V0 LLM engine (v0.9.2) with config: model='Qwen/Qwen2.5-Math-1.5B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2.5-Math-1.5B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='xgrammar', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=42, served_model_name=Qwen/Qwen2.5-Math-1.5B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=False, use_async_output_proc=True, pooler_config=None, compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":[],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":false,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":256,"local_cache_dir":null}, use_cached_outputs=False, |
| INFO 08-10 22:44:51 [cuda.py:363] Using Flash Attention backend. |
| INFO 08-10 22:44:52 [parallel_state.py:1076] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0 |
| INFO 08-10 22:44:52 [model_runner.py:1171] Starting to load model Qwen/Qwen2.5-Math-1.5B-Instruct... |
| INFO 08-10 22:44:53 [weight_utils.py:292] Using model weights format ['*.safetensors'] |
| INFO 08-10 22:44:53 [weight_utils.py:345] No model.safetensors.index.json found in remote. |
|
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  1.56it/s] |
|
|
| INFO 08-10 22:44:54 [default_loader.py:272] Loading weights took 0.77 seconds |
| INFO 08-10 22:44:54 [model_runner.py:1203] Model loading took 2.8798 GiB and 1.928124 seconds |
| INFO 08-10 22:44:55 [worker.py:294] Memory profiling takes 0.92 seconds |
| INFO 08-10 22:44:55 [worker.py:294] the current vLLM instance can use total_gpu_memory (23.64GiB) x gpu_memory_utilization (0.30) = 7.09GiB |
| INFO 08-10 22:44:55 [worker.py:294] model weights take 2.88GiB; non_torch_memory takes 0.08GiB; PyTorch activation peak memory takes 1.40GiB; the rest of the memory reserved for KV Cache is 2.74GiB. |
| INFO 08-10 22:44:56 [executor_base.py:113] |
| INFO 08-10 22:44:56 [executor_base.py:118] Maximum concurrency for 4096 tokens per request: 25.05x |
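The memory figures above follow from simple arithmetic, and the concurrency figure can be roughly reproduced from them. A minimal sketch; the model-geometry numbers (28 layers, 2 KV heads, head dim 128, bf16 KV cache) are assumptions about Qwen2.5-1.5B taken from its model card, not from this log:

```python
GIB = 1024 ** 3

# Values reported by vLLM in the lines above.
total_gpu = 23.64   # GiB, total_gpu_memory
util = 0.30         # gpu_memory_utilization
weights = 2.88      # GiB, model weights
non_torch = 0.08    # GiB, non_torch_memory
activation = 1.40   # GiB, PyTorch activation peak

budget = total_gpu * util                             # ~7.09 GiB usable
kv_cache = budget - weights - non_torch - activation  # ~2.74 GiB left for KV cache

# Hypothetical per-token KV footprint for the assumed Qwen2.5-1.5B geometry:
# (K + V) * layers * kv_heads * head_dim * bf16 bytes.
bytes_per_token = 2 * 28 * 2 * 128 * 2

capacity_tokens = kv_cache * GIB / bytes_per_token
concurrency = capacity_tokens / 4096  # max_model_len = 4096

print(f"budget={budget:.2f} GiB, kv_cache={kv_cache:.2f} GiB, "
      f"concurrency={concurrency:.2f}x")
```

The result lands within rounding error of the reported 7.09 GiB / 2.74 GiB / 25.05x.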
| INFO 08-10 22:44:58 [model_runner.py:1513] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage. |
|
Capturing CUDA graph shapes: 100%|██████████| 35/35 [00:14<00:00,  2.42it/s] |
| INFO 08-10 22:45:13 [model_runner.py:1671] Graph capturing finished in 14 secs, took 0.23 GiB |
| INFO 08-10 22:45:13 [llm_engine.py:428] init engine (profile, create kv cache, warmup model) took 18.36 seconds |
| INFO:models.custom_scheduler:Using CustomScheduler |
| INFO:models.custom_scheduler:CustomScheduler initialized with config: SchedulerConfig(runner_type='generate', max_num_batched_tokens=4096, max_num_seqs=256, max_model_len=4096, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, num_lookahead_slots=0, cuda_graph_sizes=[512], delay_factor=0.0, enable_chunked_prefill=False, is_multimodal_model=False, max_num_encoder_input_tokens=4096, encoder_cache_size=4096, preemption_mode=None, num_scheduler_steps=1, multi_step_stream_outputs=True, send_delta_data=False, policy='fcfs', chunked_prefill_enabled=False, disable_chunked_mm_input=False, scheduler_cls=<class 'models.custom_scheduler.CustomScheduler'>, disable_hybrid_kv_cache_manager=False) |
| INFO:models.vllm_wrapper:Generator model initialized successfully in separate process |
| INFO:models.vllm_wrapper:Initializing verifier model: peiyi9979/math-shepherd-mistral-7b-prm |
| INFO 08-10 22:45:19 [__init__.py:244] Automatically detected platform cuda. |
| INFO:models.tts_llm:Prefix-aware scheduling enabled: False |
| ✅ Process PID: 3736531 | CUDA Context Object: None |
| INFO 08-10 22:45:29 [config.py:1472] Using max model len 4096 |
| INFO 08-10 22:45:29 [arg_utils.py:1596] (Disabling) chunked prefill by default |
| INFO 08-10 22:45:30 [config.py:4601] Only "last" pooling supports chunked prefill and prefix caching; disabling both. |
| You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message |
| You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message. |
| INFO 08-10 22:45:31 [core.py:526] Waiting for init message from front-end. |
| INFO 08-10 22:45:31 [core.py:69] Initializing a V1 LLM engine (v0.9.2) with config: model='peiyi9979/math-shepherd-mistral-7b-prm', speculative_config=None, tokenizer='peiyi9979/math-shepherd-mistral-7b-prm', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='xgrammar', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=42, served_model_name=peiyi9979/math-shepherd-mistral-7b-prm, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=False, pooler_config=PoolerConfig(pooling_type='STEP', normalize=None, softmax=True, step_tag_id=12902, returned_token_ids=[648, 387]), compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null} |
| INFO 08-10 22:45:32 [parallel_state.py:1076] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0 |
| WARNING 08-10 22:45:32 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer. |
| INFO 08-10 22:45:32 [gpu_model_runner.py:1770] Starting to load model peiyi9979/math-shepherd-mistral-7b-prm... |
| INFO 08-10 22:45:32 [gpu_model_runner.py:1775] Loading model from scratch... |
| INFO 08-10 22:45:32 [cuda.py:284] Using Flash Attention backend on V1 engine. |
| INFO 08-10 22:45:33 [weight_utils.py:292] Using model weights format ['*.bin'] |
|
Loading pt checkpoint shards: 100% Completed | 2/2 [00:10<00:00,  5.14s/it] |
| INFO 08-10 22:45:44 [default_loader.py:272] Loading weights took 10.28 seconds |
| INFO 08-10 22:45:44 [gpu_model_runner.py:1801] Model loading took 13.2457 GiB and 11.338008 seconds |
| INFO 08-10 22:45:51 [backends.py:508] Using cache directory: /home/cy/.cache/vllm/torch_compile_cache/eae4db4fef/rank_0_0/backbone for vLLM's torch.compile |
| INFO 08-10 22:45:52 [backends.py:519] Dynamo bytecode transform time: 7.17 s |
| INFO 08-10 22:45:57 [backends.py:155] Directly load the compiled graph(s) for shape None from the cache, took 4.807 s |
| INFO 08-10 22:45:58 [monitor.py:34] torch.compile takes 7.17 s in total |
| INFO 08-10 22:45:59 [gpu_worker.py:232] Available KV cache memory: 0.88 GiB |
| INFO 08-10 22:45:59 [kv_cache_utils.py:716] GPU KV cache size: 7,168 tokens |
| INFO 08-10 22:45:59 [kv_cache_utils.py:720] Maximum concurrency for 4,096 tokens per request: 1.75x |
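The verifier's concurrency figure is exactly the KV-cache capacity in tokens divided by the per-request context limit, both reported just above:

```python
# Values from the two kv_cache_utils lines above.
kv_cache_tokens = 7_168  # "GPU KV cache size: 7,168 tokens"
max_model_len = 4_096    # "Using max model len 4096"

concurrency = kv_cache_tokens / max_model_len
print(f"{concurrency:.2f}x")  # 1.75x, as reported
```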
|
Capturing CUDA graph shapes: 100%|██████████| 67/67 [00:20<00:00,  3.32it/s] |
| INFO 08-10 22:46:19 [gpu_model_runner.py:2326] Graph capturing finished in 20 secs, took 0.53 GiB |
| INFO 08-10 22:46:19 [core.py:172] init engine (profile, create kv cache, warmup model) took 34.95 seconds |
| INFO 08-10 22:46:20 [config.py:4601] Only "last" pooling supports chunked prefill and prefix caching; disabling both. |
| INFO:models.vllm_wrapper:Verifier model initialized successfully in separate process |
| INFO:fasttts:FastTTS models initialized successfully |
| INFO:__main__:Starting search... |
| INFO:fasttts:Processing 1 problems at once |
| INFO:search.beam_search:Starting beam search iterations |
|
Beam search iterations: 0%| | 0/2 [00:00<?, ?it/s]
Adding requests: 100%|██████████| 32/32 [00:00<00:00, 1056.47it/s] |
|
| INFO 08-10 22:46:20 [metrics.py:417] Avg prompt throughput: 60.3 tokens/s, Avg generation throughput: 0.2 tokens/s, Running: 32 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.7%, CPU KV cache usage: 0.0%. |
| INFO 08-10 22:46:20 [metrics.py:433] Prefix cache hit rate: GPU: 96.88%, CPU: 0.00% |
|
| INFO 08-10 22:46:25 [metrics.py:417] Avg prompt throughput: 917.8 tokens/s, Avg generation throughput: 1648.9 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.0%, CPU KV cache usage: 0.0%. |
| INFO 08-10 22:46:25 [metrics.py:433] Prefix cache hit rate: GPU: 96.88%, CPU: 0.00% |
|
Processed prompts: 100%|██████████| 32/32 [00:05<00:00,  6.03it/s, est. speed input: 1628.93 toks/s, output: 1563.69 toks/s] |
|
Adding requests: 0%| | 0/32 [00:00<?, ?it/s]
Adding requests: 100%|██████████| 32/32 [00:00<00:00, 10764.98it/s] |
|
Processed prompts: 100%|██████████| 32/32 [00:02<00:00, 12.33it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s] |
| INFO:search.beam_search:---------------------------------------------------------------------------------------------------- |
| INFO:search.beam_search:Iteration 0 completed beams: 0, skipped beams: 0, extended beams: 0, verifier beams: 0, total latency: 8.03s, length of agg_scores: [1, 1, 1, 1, 1, 1, 1, 1], num_steps: [1, 1, 1, 1, 1, 1, 1, 1], stop reasons: ['\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n', '\n\n'] |
|
Beam search iterations: 50%|█████ | 1/2 [00:08<00:08, 8.08s/it]
Adding requests: 100%|██████████| 32/32 [00:00<00:00, 578.50it/s] |
|
| INFO 08-10 22:46:30 [metrics.py:417] Avg prompt throughput: 5184.4 tokens/s, Avg generation throughput: 1555.0 tokens/s, Running: 32 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 12.5%, CPU KV cache usage: 0.0%. |
| INFO 08-10 22:46:30 [metrics.py:433] Prefix cache hit rate: GPU: 86.75%, CPU: 0.00% |
| INFO 08-10 22:46:35 [metrics.py:417] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 3955.6 tokens/s, Running: 32 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 31.8%, CPU KV cache usage: 0.0%. |
| INFO 08-10 22:46:35 [metrics.py:433] Prefix cache hit rate: GPU: 86.75%, CPU: 0.00% |
| INFO 08-10 22:46:40 [metrics.py:417] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 3675.8 tokens/s, Running: 32 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 49.8%, CPU KV cache usage: 0.0%. |
| INFO 08-10 22:46:40 [metrics.py:433] Prefix cache hit rate: GPU: 86.75%, CPU: 0.00% |
| INFO 08-10 22:46:45 [metrics.py:417] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 3413.7 tokens/s, Running: 32 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 66.4%, CPU KV cache usage: 0.0%. |
| INFO 08-10 22:46:45 [metrics.py:433] Prefix cache hit rate: GPU: 86.75%, CPU: 0.00% |
|
Processed prompts: 100%|██████████| 32/32 [00:17<00:00, 1.80it/s, est. speed input: 1454.44 toks/s, output: 3676.27 toks/s] |
|
Adding requests: 100%|██████████| 32/32 [00:00<00:00, 6209.47it/s] |
|
Processed prompts: 100%|██████████| 32/32 [00:13<00:00, 2.41it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s] |
| INFO:search.beam_search:Early exit: 0 active, 32 completed |
|
Beam search iterations: 50%|█████ | 1/2 [00:39<00:39, 39.82s/it] |
| INFO:__main__: |
| ================================================== |
| INFO:__main__:RESULTS |
| INFO:__main__:================================================== |
| INFO:__main__:Total num tokens: 68030 |
| INFO:__main__:Effective num tokens: 85322 |
| INFO:__main__:Effective num tokens per step: 2666.3125 |
| INFO:__main__:Number of tokens in 1 completion: 2666.3125 |
| INFO:__main__:N completion tokens: 68030 |
| INFO:__main__:Total generator latency: 23.24s |
| INFO:__main__:Total verifier latency: 16.29s |
| INFO:__main__:N generator latency: 23.24s |
| INFO:__main__:N verifier latency: 16.29s |
| INFO:__main__:Goodput: 2158.48 |
| INFO:__main__:Per-token generator goodput: 67.45 |
| INFO:__main__:Completions: 32 |
| INFO:__main__:Completion time: 25.93s |
| INFO:__main__:Number of steps in 1 completion: 10.25 |
| INFO:__main__:Extended tokens: [[], []] |
| INFO:__main__:Cleaning up... |
| [rank0]:[W810 22:47:01.495434270 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) |
| INFO:models.vllm_wrapper:Generator model shutdown complete |
| INFO:models.vllm_wrapper:Verifier model shutdown complete |
| INFO:fasttts:FastTTS shutdown complete |
| INFO:__main__:Experiment completed successfully! |
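The RESULTS block above can be sanity-checked arithmetically. The formulas below are inferred from the logged numbers alone (the experiment's own code is not shown here), so treat them as a plausible reconstruction rather than the authoritative definitions:

```python
# Reconstructing the logged metrics from the RESULTS block.
# NOTE: these formulas are inferred, not taken from the source code.
effective_tokens = 85322      # "Effective num tokens"
gen_latency_s = 23.24         # "Total generator latency"
ver_latency_s = 16.29         # "Total verifier latency"
completions = 32              # "Completions"

# "Goodput" matches effective tokens over combined generator+verifier time.
goodput = effective_tokens / (gen_latency_s + ver_latency_s)

# "Per-token generator goodput" matches goodput divided by completions.
per_completion_goodput = goodput / completions

# "Number of tokens in 1 completion" matches effective tokens / completions.
tokens_per_completion = effective_tokens / completions
```

Under these definitions the computed values agree with the logged 2158.48, 67.45, and 2666.3125 to within the rounding of the reported latencies.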
| GPU 3: General Metrics for NVIDIA AD10x (any frequency) |
| Generating '/tmp/nsys-report-e5f4.qdstrm' |
|
[1/8] [========================100%] vllm_tts_N32.nsys-rep |
|
[2/8] [========================100%] vllm_tts_N32.sqlite |
| [3/8] Executing 'nvtx_sum' stats report |
| |
| Time (%) Total Time (ns) Instances Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Style Range |
| -------- --------------- --------- ---------------- ---------------- -------------- -------------- --------------- ------- ---------------------------------- |
| 50.5 39,843,350,248 1 39,843,350,248.0 39,843,350,248.0 39,843,350,248 39,843,350,248 0.0 PushPop :Total |
| 29.4 23,243,493,104 2 11,621,746,552.0 11,621,746,552.0 5,340,266,592 17,903,226,512 8,883,354,151.2 PushPop :generate |
| 20.1 15,877,275,147 2 7,938,637,573.5 7,938,637,573.5 2,602,685,677 13,274,589,470 7,546,175,540.2 PushPop :encode |
| 0.0 91,012 1 91,012.0 91,012.0 91,012 91,012 0.0 PushPop CCCL:cub::DeviceSegmentedRadixSort |
| |
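The `:generate` and `:encode` ranges in the `nvtx_sum` report above come from NVTX annotations in the application. A minimal sketch of how such ranges are typically emitted in a PyTorch/vLLM stack, using `torch.cuda.nvtx` with a no-op fallback so it runs even where torch is absent (the range names are taken from the report; the wrapper itself is an assumption, not the experiment's actual code):

```python
from contextlib import contextmanager

try:
    from torch.cuda import nvtx  # real NVTX bindings when torch is installed
except ImportError:
    nvtx = None  # fall back to a no-op so the sketch runs anywhere

@contextmanager
def nvtx_range(name: str):
    """Bracket a code region so nsys attributes its time under `name`."""
    if nvtx is not None:
        nvtx.range_push(name)
    try:
        yield
    finally:
        if nvtx is not None:
            nvtx.range_pop()

with nvtx_range("generate"):
    result = sum(range(100))  # stand-in for the generator call
```

Each `with` block shows up as one PushPop instance in `nvtx_sum`, which is why `:generate` and `:encode` each report 2 instances for the 2 beam-search iterations.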
| [4/8] Executing 'osrt_sum' stats report |
| |
| Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name |
| -------- ----------------- --------- ---------------- ---------------- --------- -------------- --------------- ---------------------- |
| 32.0 1,495,279,673,407 100 14,952,796,734.1 12,598,416,785.0 34,960 52,079,469,971 5,701,768,597.0 pthread_cond_wait |
| 21.6 1,006,154,375,582 64,808 15,525,157.0 10,062,301.0 1,012 48,579,597,860 373,003,994.1 epoll_wait |
| 21.1 986,064,713,572 8,269 119,248,363.0 100,065,882.0 1,007 1,000,133,427 124,515,794.8 pthread_cond_timedwait |
| 10.5 491,657,664,329 61 8,059,961,710.3 10,000,070,982.0 24,009 10,000,129,572 3,765,291,149.8 sem_timedwait |
| 8.9 415,835,725,164 40,707 10,215,337.0 3,225.0 1,000 72,458,789,164 680,428,756.8 read |
| 5.3 249,470,841,708 2,131 117,067,499.6 100,117,330.0 1,000 18,888,613,964 658,468,072.7 poll |
| 0.4 16,443,801,227 66 249,148,503.4 403,743,051.0 18,476 593,598,899 204,107,189.1 sem_wait |
| 0.1 2,550,459,210 3,533 721,896.2 10,914.0 1,003 128,136,713 7,428,082.5 ioctl |
| 0.0 1,263,893,529 665 1,900,591.8 1,086.0 1,000 1,191,076,865 46,235,235.3 waitpid |
| 0.0 392,832,052 148,765 2,640.6 1,273.0 1,000 124,952,899 323,970.7 munmap |
| 0.0 328,065,758 523 627,276.8 2,427.0 1,092 23,223,237 3,321,108.6 fopen |
| 0.0 202,592,204 40 5,064,805.1 5,065,262.5 5,024,292 5,081,925 10,309.1 nanosleep |
| 0.0 147,053,034 46,544 3,159.4 2,713.0 1,001 115,114 2,016.9 open64 |
| 0.0 126,438,937 150 842,926.2 3,891.5 1,000 19,663,699 3,811,776.5 open |
| 0.0 61,549,970 374 164,572.1 5,616.5 1,850 22,166,135 1,774,807.5 fopen64 |
| 0.0 61,114,705 3 20,371,568.3 1,056,842.0 619,147 59,438,716 33,833,850.1 fork |
| 0.0 58,617,528 10 5,861,752.8 32,102.0 13,217 58,204,036 18,391,244.7 connect |
| 0.0 45,066,004 99 455,212.2 13,634.0 1,084 9,496,692 1,501,872.6 pthread_join |
| 0.0 44,401,764 245 181,231.7 68,648.0 48,867 11,982,775 1,051,434.0 sleep |
| 0.0 30,431,961 8,135 3,740.9 2,148.0 1,000 1,518,411 17,964.8 mmap64 |
| 0.0 25,558,768 187 136,677.9 140,934.0 1,001 3,096,670 233,027.5 recv |
| 0.0 16,935,966 215 78,771.9 56,155.0 19,092 989,161 89,946.1 pthread_create |
| 0.0 15,974,670 793 20,144.6 7,053.0 1,018 622,768 37,588.5 write |
| 0.0 9,532,074 1,514 6,296.0 1,978.5 1,020 87,696 9,142.7 fgets |
| 0.0 8,650,201 238 36,345.4 47,056.0 1,457 134,118 28,801.4 send |
| 0.0 5,570,086 31 179,680.2 183,032.0 10,664 908,382 171,310.9 pthread_rwlock_wrlock |
| 0.0 2,570,739 2,113 1,216.6 1,055.0 1,000 10,784 649.9 fclose |
| 0.0 2,112,103 147 14,368.0 3,024.0 1,849 221,897 31,740.0 futex |
| 0.0 1,542,364 26 59,321.7 12,597.0 1,374 563,122 137,182.8 pthread_mutex_lock |
| 0.0 1,523,160 15 101,544.0 2,748.0 1,015 1,460,981 376,101.0 pthread_cond_broadcast |
| 0.0 1,360,776 190 7,162.0 4,237.0 1,303 73,378 6,786.7 mmap |
| 0.0 1,283,451 11 116,677.4 119,867.0 18,712 230,389 72,308.1 pthread_rwlock_rdlock |
| 0.0 1,210,115 302 4,007.0 2,848.0 1,000 20,865 3,440.8 pthread_cond_signal |
| 0.0 563,295 102 5,522.5 4,212.0 1,861 20,178 3,431.6 pipe2 |
| 0.0 536,232 225 2,383.3 2,204.0 1,002 8,327 1,151.1 epoll_ctl |
| 0.0 290,402 42 6,914.3 6,196.5 1,853 19,061 4,749.6 socket |
| 0.0 227,097 26 8,734.5 3,393.5 1,051 59,346 15,397.1 bind |
| 0.0 124,839 16 7,802.4 8,189.0 1,879 13,015 3,637.4 pthread_mutex_trylock |
| 0.0 82,322 35 2,352.1 1,836.0 1,012 21,734 3,393.4 sigaction |
| 0.0 79,333 30 2,644.4 2,229.5 1,279 6,465 1,321.7 stat |
| 0.0 58,542 37 1,582.2 1,282.0 1,008 5,536 863.9 fcntl |
| 0.0 54,922 29 1,893.9 1,720.0 1,017 3,558 709.3 dup2 |
| 0.0 54,037 14 3,859.8 4,785.0 1,007 6,784 2,052.3 fflush |
| 0.0 47,143 5 9,428.6 11,702.0 3,898 12,930 3,965.2 accept4 |
| 0.0 43,631 8 5,453.9 5,369.0 5,179 5,818 242.8 lstat |
| 0.0 40,639 17 2,390.5 1,871.0 1,594 4,295 855.5 pread |
| 0.0 34,683 5 6,936.6 3,809.0 3,476 12,818 4,547.1 fread |
| 0.0 29,930 7 4,275.7 4,172.0 3,831 5,338 494.0 fputs_unlocked |
| 0.0 28,898 8 3,612.3 3,124.5 2,387 6,342 1,352.4 flock |
| 0.0 28,431 2 14,215.5 14,215.5 12,558 15,873 2,344.1 socketpair |
| 0.0 22,947 8 2,868.4 2,979.0 2,227 3,507 458.5 mprotect |
| 0.0 22,068 3 7,356.0 9,216.0 3,348 9,504 3,474.0 fwrite |
| 0.0 18,837 10 1,883.7 1,568.5 1,426 3,658 691.4 listen |
| 0.0 14,260 6 2,376.7 1,796.5 1,343 5,789 1,703.9 fstat |
| 0.0 10,338 1 10,338.0 10,338.0 10,338 10,338 0.0 kill |
| 0.0 7,673 2 3,836.5 3,836.5 3,715 3,958 171.8 fputs |
| 0.0 5,214 3 1,738.0 1,301.0 1,138 2,775 901.8 openat64 |
| |
| [5/8] Executing 'cuda_api_sum' stats report |
| |
| Time (%) Total Time (ns) Num Calls Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name |
| -------- --------------- --------- ------------ ----------- --------- ----------- ------------ ------------------------------------------ |
| 74.3 12,046,823,992 66,082 182,301.1 4,631.5 2,799 111,454,677 977,047.3 cudaMemcpyAsync |
| 14.7 2,379,107,086 75 31,721,427.8 29,347.0 5,174 131,066,461 41,844,560.4 cudaHostAlloc |
| 4.9 800,407,537 64,088 12,489.2 5,221.0 780 90,081,889 526,384.9 cudaLaunchKernel |
| 2.3 377,844,910 2,846 132,763.5 143,195.5 62,177 1,174,251 46,493.0 cudaGraphLaunch_v10000 |
| 1.0 166,969,818 10 16,696,981.8 52,678.0 12,587 166,609,772 52,673,989.8 cudaMemGetInfo |
| 0.5 74,585,652 35 2,131,018.6 1,951,594.0 1,511,828 3,134,805 550,682.4 cudaGraphInstantiateWithFlags_v11040 |
| 0.3 54,658,594 45,617 1,198.2 1,019.0 582 52,369 682.2 cudaEventRecord |
| 0.3 51,032,214 10,794 4,727.8 4,867.5 723 67,369 2,294.2 cuLaunchKernel |
| 0.3 48,625,718 10 4,862,571.8 4,999,197.5 95,993 8,683,040 2,834,704.8 cuLibraryLoadData |
| 0.3 47,375,954 45,610 1,038.7 724.0 358 50,565 920.5 cudaEventQuery |
| 0.2 27,029,614 59 458,129.1 229,518.0 68,670 2,793,349 554,293.7 cudaFree |
| 0.2 25,635,826 171 149,917.1 132,375.0 9,293 573,128 60,368.7 cudaMalloc |
| 0.2 25,364,405 5,427 4,673.7 5,592.0 243 272,681 4,409.3 cudaMemsetAsync |
| 0.2 24,554,137 35 701,546.8 657,789.0 591,504 852,369 87,341.2 cudaGraphExecDestroy_v10000 |
| 0.1 14,089,228 3,389 4,157.3 3,036.0 2,035 57,472 4,827.1 cudaStreamSynchronize |
| 0.1 13,994,409 10,794 1,296.5 626.0 285 4,529,506 45,772.1 cuKernelGetFunction |
| 0.0 6,696,473 8,753 765.0 860.0 279 10,572 425.1 cudaStreamIsCapturing_v10000 |
| 0.0 5,283,719 35 150,963.4 151,300.0 121,832 178,715 14,502.3 cudaGraphDestroy_v10000 |
| 0.0 4,944,481 8,785 562.8 565.0 307 7,193 198.0 cudaStreamGetCaptureInfo_v2_v11030 |
| 0.0 4,215,616 35 120,446.2 114,207.0 97,930 226,023 22,037.3 cudaStreamEndCapture_v10000 |
| 0.0 3,557,693 128 27,794.5 3,109.5 2,153 1,183,201 142,814.7 cudaStreamCreateWithPriority |
| 0.0 2,006,040 106 18,924.9 19,578.5 2,904 112,999 16,487.0 cudaDeviceSynchronize |
| 0.0 895,780 35 25,593.7 26,580.0 12,710 30,798 4,335.7 cudaGraphGetNodes_v10000 |
| 0.0 419,839 35 11,995.4 9,618.0 8,019 20,040 3,910.5 cudaStreamBeginCapture_v10000 |
| 0.0 211,197 810 260.7 210.0 117 3,128 178.6 cuGetProcAddress_v2 |
| 0.0 57,364 26 2,206.3 524.0 435 20,886 4,374.1 cudaEventCreateWithFlags |
| 0.0 31,380 16 1,961.3 1,257.5 717 5,553 1,541.0 cuLibraryGetKernel |
| 0.0 7,970 3 2,656.7 2,459.0 2,287 3,224 498.8 cuInit |
| 0.0 4,914 8 614.3 575.0 448 1,081 202.7 cudaThreadExchangeStreamCaptureMode_v10010 |
| 0.0 3,667 1 3,667.0 3,667.0 3,667 3,667 0.0 cudaStreamWaitEvent |
| 0.0 2,298 3 766.0 327.0 199 1,772 873.6 cuModuleGetLoadingMode |
| 0.0 1,642 1 1,642.0 1,642.0 1,642 1,642 0.0 cudaEventDestroy |
| 0.0 1,399 2 699.5 699.5 368 1,031 468.8 cudaGetDriverEntryPoint_v11030 |
| |
| [6/8] Executing 'cuda_gpu_kern_sum' stats report |
| |
| Time (%) Total Time (ns) Instances Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Name |
| -------- --------------- --------- ----------- ----------- --------- --------- ----------- ---------------------------------------------------------------------------------------------------- |
| 52.6 1,342,892,585 5,896 227,763.3 83,280.0 7,808 544,962 235,015.8 void cutlass::Kernel2<cutlass_80_tensorop_bf16_s16816gemm_relu_bf16_64x64_32x6_tn_align8>(T1::Param… |
| 11.6 297,264,596 1,434 207,297.5 59,872.0 10,528 525,698 227,493.4 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_16x16_128x2_tn_align8>(T1::Par… |
| 4.9 126,014,763 392 321,466.2 54,240.0 53,184 1,411,300 517,177.2 ampere_bf16_s1688gemm_bf16_128x128_ldg8_f2f_stages_32x1_tn |
| 4.0 100,936,151 644 156,733.2 43,231.0 41,249 711,713 225,724.7 ampere_bf16_s1688gemm_bf16_128x64_sliced1x2_ldg8_f2f_tn |
| 3.5 88,267,611 2,852 30,949.4 30,593.0 29,536 630,241 15,075.7 void at::native::<unnamed>::cunn_SoftMaxForward<(int)4, float, float, float, at::native::<unnamed>:… |
| 3.0 75,370,782 2,851 26,436.6 26,401.0 25,088 590,721 10,586.3 void at::native::<unnamed>::cunn_SoftMaxForward<(int)4, float, float, float, at::native::<unnamed>:… |
| 1.7 44,056,554 2,851 15,453.0 19,232.0 2,304 179,072 7,459.1 void at::native::<unnamed>::distribution_elementwise_grid_stride_kernel<float, (int)4, void at::nat… |
| 1.7 44,049,142 2,851 15,450.4 18,528.0 2,752 331,169 8,172.3 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl<at::native::… |
| 1.7 42,645,188 2,851 14,958.0 17,888.0 2,848 334,464 8,106.7 void at::native::index_elementwise_kernel<(int)128, (int)4, void at::native::gpu_index_kernel<void … |
| 1.4 36,655,978 2,851 12,857.2 14,976.0 3,392 223,968 5,585.0 void at::native::unrolled_elementwise_kernel<at::native::direct_copy_kernel_cuda(at::TensorIterator… |
| 1.3 33,924,224 2,851 11,899.1 14,112.0 1,440 496,385 10,086.3 void at::native::vectorized_elementwise_kernel<(int)4, at::native::BinaryFunctor<float, float, floa… |
| 1.3 33,906,506 2,100 16,146.0 3,648.0 3,232 249,121 48,742.4 void vllm::act_and_mul_kernel<c10::BFloat16, &vllm::silu_kernel<c10::BFloat16>, (bool)1>(T1 *, cons… |
| 1.3 31,906,187 28 1,139,506.7 1,139,189.0 1,136,325 1,143,653 2,082.4 ampere_bf16_s16816gemm_bf16_128x64_ldg8_f2f_tn |
| 1.0 26,279,717 2,851 9,217.7 9,952.0 5,088 204,641 4,018.6 void at::native::reduce_kernel<(int)512, (int)1, at::native::ReduceOp<float, at::native::ArgMaxOps<… |
| 1.0 24,390,271 48 508,130.6 507,570.0 506,242 534,498 3,950.2 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_16x16_128x1_tn_align8>(T1::Par… |
| 0.9 21,729,849 204 106,518.9 8,640.0 6,944 488,386 178,137.0 std::enable_if<!T7, void>::type internal::gemvx::kernel<int, int, __nv_bfloat16, __nv_bfloat16, __n… |
| 0.8 20,311,799 1,120 18,135.5 17,088.0 11,808 23,104 3,857.8 void flash::flash_fwd_splitkv_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (int)4, (… |
| 0.7 16,762,358 448 37,416.0 37,376.0 35,840 39,073 518.7 void cutlass::Kernel2<cutlass_80_tensorop_bf16_s16816gemm_relu_bf16_64x128_32x6_tn_align8>(T1::Para… |
| 0.6 14,301,394 700 20,430.6 13,408.0 13,056 36,960 10,333.1 ampere_bf16_s16816gemm_bf16_64x64_ldg8_f2f_stages_64x5_tn |
| 0.6 14,176,146 4,200 3,375.3 2,432.0 1,664 32,416 3,903.7 std::enable_if<T2>(int)0&&vllm::_typeConvert<T1>::exists, void>::type vllm::fused_add_rms_norm_kern… |
| 0.5 13,284,608 980 13,555.7 13,504.0 12,287 15,808 960.9 ampere_bf16_s16816gemm_bf16_64x64_ldg8_relu_f2f_stages_64x5_tn |
| 0.4 10,345,042 84 123,155.3 134,289.0 29,184 210,881 71,576.9 void flash::flash_fwd_splitkv_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (int)4, (… |
| 0.4 9,234,753 56 164,906.3 164,800.0 163,968 168,289 674.4 ampere_bf16_s1688gemm_bf16_128x128_ldg8_relu_f2f_stages_32x1_tn |
| 0.3 8,082,552 840 9,622.1 8,128.0 6,303 15,456 2,982.3 void flash::flash_fwd_splitkv_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (int)4, (… |
| 0.3 7,169,562 112 64,013.9 63,872.5 62,688 66,048 690.9 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_32x32_64x1_tn_align8>(T1::Para… |
| 0.3 6,493,611 2,100 3,092.2 2,176.0 1,695 32,768 4,378.0 void vllm::rotary_embedding_kernel<c10::BFloat16, (bool)1>(const long *, T1 *, T1 *, const T1 *, in… |
| 0.2 6,253,223 3,052 2,048.9 1,888.0 1,344 3,073 551.2 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo… |
| 0.2 6,085,232 2,294 2,652.7 2,592.0 2,048 32,480 1,151.5 void at::native::<unnamed>::indexSelectLargeIndex<c10::BFloat16, long, unsigned int, (int)2, (int)2… |
| 0.2 5,583,604 2,186 2,554.3 960.0 832 109,312 11,893.2 void at::native::vectorized_elementwise_kernel<(int)8, at::native::FillFunctor<c10::BFloat16>, std:… |
| 0.2 5,469,605 2,852 1,917.8 1,920.0 1,343 2,593 225.4 void at::native::unrolled_elementwise_kernel<at::native::direct_copy_kernel_cuda(at::TensorIterator… |
| 0.2 4,931,788 4 1,232,947.0 1,224,579.0 1,207,907 1,274,723 30,803.4 void at_cuda_detail::cub::DeviceSegmentedRadixSortKernel<at_cuda_detail::cub::DeviceRadixSortPolicy… |
| 0.2 4,843,622 224 21,623.3 21,456.5 9,440 34,528 11,938.8 void cutlass::Kernel2<cutlass_80_wmma_tensorop_bf16_s161616gemm_bf16_32x32_128x2_tn_align8>(T1::Par… |
| 0.2 3,840,021 28 137,143.6 136,897.0 136,193 139,552 843.2 ampere_bf16_s1688gemm_bf16_128x64_sliced1x2_ldg8_relu_f2f_tn |
| 0.1 3,682,796 2,846 1,294.0 1,280.0 1,120 1,473 37.1 void at::native::elementwise_kernel<(int)128, (int)2, void at::native::gpu_kernel_impl_nocast<at::n… |
| 0.1 3,022,648 2,072 1,458.8 1,120.0 960 11,329 1,680.7 void vllm::reshape_and_cache_flash_kernel<__nv_bfloat16, __nv_bfloat16, (vllm::Fp8KVCacheDataType)0… |
| 0.1 2,880,329 56 51,434.4 51,408.5 49,728 54,049 679.7 void flash::flash_fwd_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)64, (int)4, (bool)0, (… |
| 0.1 2,551,910 2 1,275,955.0 1,275,955.0 1,236,547 1,315,363 55,731.3 void at_cuda_detail::cub::DeviceSegmentedRadixSortKernel<at_cuda_detail::cub::DeviceRadixSortPolicy… |
| 0.1 2,327,648 632 3,683.0 3,424.0 1,535 6,336 1,080.0 void at::native::<unnamed>::indexSelectSmallIndex<c10::BFloat16, long, unsigned int, (int)2, (int)2… |
| 0.1 1,938,021 56 34,607.5 34,960.0 17,408 35,681 2,377.2 std::enable_if<!T7, void>::type internal::gemvx::kernel<int, int, __nv_bfloat16, float, float, floa… |
| 0.0 1,099,453 168 6,544.4 6,528.0 6,432 6,688 73.3 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (… |
| 0.0 1,012,450 1 1,012,450.0 1,012,450.0 1,012,450 1,012,450 0.0 void at::native::_scatter_gather_elementwise_kernel<(int)128, (int)8, void at::native::_cuda_scatte… |
| 0.0 881,129 280 3,146.9 3,136.0 2,785 3,520 213.5 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (… |
| 0.0 803,650 1 803,650.0 803,650.0 803,650 803,650 0.0 ampere_bf16_s1688gemm_bf16_64x128_sliced1x2_ldg8_f2f_tn |
| 0.0 740,486 224 3,305.7 3,297.0 3,104 3,488 88.0 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (… |
| 0.0 679,490 2 339,745.0 339,745.0 339,361 340,129 543.1 void at::native::vectorized_elementwise_kernel<(int)4, at::native::<unnamed>::masked_fill_kernel(at… |
| 0.0 607,311 336 1,807.5 1,792.0 1,631 2,113 119.0 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo… |
| 0.0 359,360 1 359,360.0 359,360.0 359,360 359,360 0.0 void at::native::tensor_kernel_scan_innermost_dim<float, std::plus<float>>(T1 *, const T1 *, unsign… |
| 0.0 318,145 1 318,145.0 318,145.0 318,145 318,145 0.0 at::native::<unnamed>::fill_reverse_indices_kernel(long *, int, at::cuda::detail::IntDivider<unsign… |
| 0.0 316,805 112 2,828.6 2,817.0 2,720 2,944 36.0 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (… |
| 0.0 315,236 75 4,203.1 2,624.0 1,920 33,184 6,087.8 void vllm::rms_norm_kernel<c10::BFloat16>(T1 *, const T1 *, const T1 *, float, int, int) |
| 0.0 251,010 56 4,482.3 4,480.0 4,448 4,513 12.0 void flash::flash_fwd_splitkv_combine_kernel<Flash_fwd_kernel_traits<(int)128, (int)64, (int)128, (… |
| 0.0 231,873 1 231,873.0 231,873.0 231,873 231,873 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl_nocast<at::n… |
| 0.0 223,136 1 223,136.0 223,136.0 223,136 223,136 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl<at::native::… |
| 0.0 74,820 56 1,336.1 1,344.0 1,311 1,345 14.0 void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, float, __nv_bfloat16, float, (bool)0, __n… |
| 0.0 65,347 73 895.2 896.0 831 1,408 75.0 void at::native::vectorized_elementwise_kernel<(int)2, at::native::FillFunctor<long>, std::array<ch… |
| 0.0 3,232 1 3,232.0 3,232.0 3,232 3,232 0.0 void at::native::<unnamed>::CatArrayBatchedCopy_aligned16_contig<at::native::<unnamed>::OpaqueType<… |
| 0.0 2,369 2 1,184.5 1,184.5 1,089 1,280 135.1 void <unnamed>::elementwise_kernel_with_index<int, at::native::arange_cuda_out(const c10::Scalar &,… |
| 0.0 2,336 1 2,336.0 2,336.0 2,336 2,336 0.0 void at::native::_scatter_gather_elementwise_kernel<(int)128, (int)8, void at::native::_cuda_scatte… |
| 0.0 2,208 1 2,208.0 2,208.0 2,208 2,208 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl<at::native::… |
| 0.0 2,208 1 2,208.0 2,208.0 2,208 2,208 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::cos_kernel_cuda(at::TensorIterat… |
| 0.0 2,049 1 2,049.0 2,049.0 2,049 2,049 0.0 void at::native::elementwise_kernel<(int)128, (int)2, void at::native::gpu_kernel_impl_nocast<at::n… |
| 0.0 1,855 1 1,855.0 1,855.0 1,855 1,855 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::sin_kernel_cuda(at::TensorIterat… |
| 0.0 1,697 1 1,697.0 1,697.0 1,697 1,697 0.0 void at::native::vectorized_elementwise_kernel<(int)8, at::native::bfloat16_copy_kernel_cuda(at::Te… |
| 0.0 1,536 1 1,536.0 1,536.0 1,536 1,536 0.0 void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl_nocast<at::n… |
| 0.0 1,505 1 1,505.0 1,505.0 1,505 1,505 0.0 void at::native::vectorized_elementwise_kernel<(int)8, at::native::CUDAFunctorOnOther_add<c10::BFlo… |
| 0.0 1,472 1 1,472.0 1,472.0 1,472 1,472 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::BUnaryFunctor<float, float, floa… |
| 0.0 1,344 1 1,344.0 1,344.0 1,344 1,344 0.0 void at::native::vectorized_elementwise_kernel<(int)2, at::native::CUDAFunctorOnOther_add<long>, st… |
| 0.0 1,216 1 1,216.0 1,216.0 1,216 1,216 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::reciprocal_kernel_cuda(at::Tenso… |
| 0.0 1,024 1 1,024.0 1,024.0 1,024 1,024 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::AUnaryFunctor<float, float, floa… |
| 0.0 896 1 896.0 896.0 896 896 0.0 void at::native::vectorized_elementwise_kernel<(int)2, at::native::FillFunctor<double>, std::array<… |
| 0.0 896 1 896.0 896.0 896 896 0.0 void at::native::vectorized_elementwise_kernel<(int)4, at::native::FillFunctor<int>, std::array<cha… |
| |
| [7/8] Executing 'cuda_gpu_mem_time_sum' stats report |
| |
| Time (%) Total Time (ns) Count Avg (ns) Med (ns) Min (ns) Max (ns) StdDev (ns) Operation |
| -------- --------------- ------ -------- -------- -------- ----------- ----------- ------------------------------ |
| 97.1 588,870,084 49,000 12,017.8 353.0 288 110,979,833 539,121.1 [CUDA memcpy Host-to-Device] |
| 2.2 13,046,684 14,231 916.8 896.0 832 343,425 2,871.5 [CUDA memcpy Device-to-Device] |
| 0.5 3,261,860 2,851 1,144.1 1,120.0 863 1,664 70.8 [CUDA memcpy Device-to-Host] |
| 0.2 1,509,526 3,971 380.1 352.0 288 1,280 123.5 [CUDA memset] |
| |
| [8/8] Executing 'cuda_gpu_mem_size_sum' stats report |
| |
| Total (MB) Count Avg (MB) Med (MB) Min (MB) Max (MB) StdDev (MB) Operation |
| ---------- ------ -------- -------- -------- -------- ----------- ------------------------------ |
| 3,170.710 49,000 0.065 0.000 0.000 466.747 2.401 [CUDA memcpy Host-to-Device] |
| 235.229 14,231 0.017 0.000 0.000 155.582 1.304 [CUDA memcpy Device-to-Device] |
| 1.731 3,971 0.000 0.000 0.000 0.003 0.001 [CUDA memset] |
| 0.593 2,851 0.000 0.000 0.000 0.002 0.000 [CUDA memcpy Device-to-Host] |
| |
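Combining the two memcpy tables above gives a back-of-envelope effective host-to-device bandwidth (total bytes moved over total GPU-side copy time; the numbers are copied from the reports, the ratio is our own derivation):

```python
# Effective H2D bandwidth implied by the memcpy time/size reports.
h2d_total_mb = 3170.710        # cuda_gpu_mem_size_sum, H2D row
h2d_total_ns = 588_870_084     # cuda_gpu_mem_time_sum, H2D row

h2d_bandwidth_mb_s = h2d_total_mb / (h2d_total_ns * 1e-9)  # ~5.4 GB/s average
```

The low average relative to PCIe peak is consistent with the size table's median of 0.000 MB: most copies are tiny and latency-bound, with a few huge outliers (max 466.747 MB, max 110 ms).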
| Generated: |
| /data/cy/vllm_tts_N32.nsys-rep |
| /data/cy/vllm_tts_N32.sqlite |
| |
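The exported `vllm_tts_N32.sqlite` can be queried directly instead of re-running `nsys stats`. A sketch under assumptions: recent nsys exports store NVTX ranges in an `NVTX_EVENTS` table with `start`/`end`/`text` columns, which we mimic here with an in-memory database since the real file is not part of this log. The aggregate mirrors what the `nvtx_sum` report computes:

```python
import sqlite3

# Toy stand-in for the nsys sqlite export ("end" is a reserved word in
# SQLite, hence the quoting). Times are nanoseconds, as in nsys exports.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE NVTX_EVENTS (start INTEGER, "end" INTEGER, text TEXT)')
con.executemany(
    "INSERT INTO NVTX_EVENTS VALUES (?, ?, ?)",
    [(0, 100, "generate"), (100, 150, "encode"), (150, 400, "generate")],
)

# Same shape as the nvtx_sum report: per-range instance count and total time.
rows = con.execute(
    'SELECT text, COUNT(*) AS instances, SUM("end" - start) AS total_ns '
    "FROM NVTX_EVENTS GROUP BY text ORDER BY total_ns DESC"
).fetchall()
```

Against the real export, the same query reproduces the Instances and Total Time columns of the `nvtx_sum` report (table names can be confirmed with `SELECT name FROM sqlite_master`).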