WARNING: CPU IP/backtrace sampling not supported, disabling. Try the 'nsys status --environment' command to learn more.
WARNING: CPU context switch tracing not supported, disabling. Try the 'nsys status --environment' command to learn more.
INFO 08-10 18:10:22 [__init__.py:244] Automatically detected platform cuda.
INFO:__main__:FastTTS AIME Experiment
INFO:__main__:==================================================
INFO:__main__:Starting FastTTS AIME experiment
INFO:__main__:Parameters: {'num_iterations': 10, 'n': 128, 'temperature': 2, 'beam_width': 4, 'generator_model': 'Qwen/Qwen2.5-Math-1.5B-Instruct', 'verifier_model': 'peiyi9979/math-shepherd-mistral-7b-prm', 'generator_gpu_memory': 0.28, 'verifier_gpu_memory': 0.62, 'offload_enabled': False, 'spec_beam_extension': False, 'prefix_aware_scheduling': False}
INFO:__main__:Loaded AIME dataset with 30 samples
INFO:__main__:Problem: Every morning Aya goes for a $9$-kilometer-long walk and stops at a coffee shop afterwards. When she walks at a constant speed of $s$ kilometers per hour, the walk takes her 4 hours, including $t$ minutes spent in the coffee shop. When she walks $s+2$ kilometers per hour, the walk takes her 2 hours and 24 minutes, including $t$ minutes spent in the coffee shop. Suppose Aya walks at $s+\frac{1}{2}$ kilometers per hour. Find the number of minutes the walk takes her, including the $t$ minutes spent in the coffee shop.
INFO:__main__:Reference answer: 204
INFO:__main__:Initializing FastTTS models...
INFO:fasttts:Initializing FastTTS models...
INFO:models.vllm_wrapper:Initializing generator model: Qwen/Qwen2.5-Math-1.5B-Instruct
INFO 08-10 18:10:34 [__init__.py:244] Automatically detected platform cuda.
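The logged reference answer of 204 can be verified by hand from the problem's two timing equations, 9/s + t/60 = 4 and 9/(s+2) + t/60 = 2.4. A short exact-arithmetic check (not part of the experiment code):

```python
from fractions import Fraction

# Subtracting the two equations eliminates t:
#   9/s - 9/(s+2) = 1.6  =>  s^2 + 2s - 11.25 = 0  =>  s = 2.5 km/h.
s = Fraction(5, 2)
assert Fraction(9) / s - Fraction(9) / (s + 2) == Fraction(8, 5)

# Coffee-shop time: t/60 = 4 - 9/s = 0.4 h, i.e. t = 24 minutes.
t = 60 * (4 - Fraction(9) / s)
assert t == 24

# At s + 1/2 = 3 km/h the walk takes 180 minutes, plus t in the shop.
total_minutes = 60 * Fraction(9) / (s + Fraction(1, 2)) + t
print(total_minutes)  # 204, matching the logged reference answer
```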
INFO:models.tts_llm:Using V0 engine with speculative beam extension: False
INFO:models.tts_llm:Prefix-aware scheduling enabled: False
✅ Process PID: 3674655 | CUDA Context Object: None
INFO 08-10 18:10:44 [config.py:841] This model supports multiple tasks: {'classify', 'embed', 'reward', 'generate'}. Defaulting to 'generate'.
INFO 08-10 18:10:44 [config.py:1472] Using max model len 4096
INFO:models.generator_engine:Using GeneratorLLMEngine with vLLM version 0.9.2
INFO 08-10 18:10:45 [llm_engine.py:230] Initializing a V0 LLM engine (v0.9.2) with config: model='Qwen/Qwen2.5-Math-1.5B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2.5-Math-1.5B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='xgrammar', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=42, served_model_name=Qwen/Qwen2.5-Math-1.5B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=False, use_async_output_proc=True, pooler_config=None,
compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":[],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":false,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":256,"local_cache_dir":null}, use_cached_outputs=False,
INFO 08-10 18:10:46 [cuda.py:363] Using Flash Attention backend.
INFO 08-10 18:10:47 [parallel_state.py:1076] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
INFO 08-10 18:10:47 [model_runner.py:1171] Starting to load model Qwen/Qwen2.5-Math-1.5B-Instruct...
INFO 08-10 18:10:47 [weight_utils.py:292] Using model weights format ['*.safetensors']
INFO 08-10 18:10:48 [weight_utils.py:345] No model.safetensors.index.json found in remote.
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00 [...] disable_hybrid_kv_cache_manager=False)
INFO:models.vllm_wrapper:Generator model initialized successfully in separate process
INFO:models.vllm_wrapper:Initializing verifier model: peiyi9979/math-shepherd-mistral-7b-prm
INFO 08-10 18:11:12 [__init__.py:244] Automatically detected platform cuda.
INFO:models.tts_llm:Prefix-aware scheduling enabled: False
✅ Process PID: 3675033 | CUDA Context Object: None
INFO 08-10 18:11:23 [config.py:1472] Using max model len 4096
INFO 08-10 18:11:23 [arg_utils.py:1596] (Disabling) chunked prefill by default
INFO 08-10 18:11:23 [config.py:4601] Only "last" pooling supports chunked prefill and prefix caching; disabling both.
You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`.
This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
INFO 08-10 18:11:25 [core.py:526] Waiting for init message from front-end.
INFO 08-10 18:11:25 [core.py:69] Initializing a V1 LLM engine (v0.9.2) with config: model='peiyi9979/math-shepherd-mistral-7b-prm', speculative_config=None, tokenizer='peiyi9979/math-shepherd-mistral-7b-prm', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='xgrammar', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=42, served_model_name=peiyi9979/math-shepherd-mistral-7b-prm, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=False, pooler_config=PoolerConfig(pooling_type='STEP', normalize=None,
softmax=True, step_tag_id=12902, returned_token_ids=[648, 387]), compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null}
INFO 08-10 18:11:25 [parallel_state.py:1076] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
WARNING 08-10 18:11:26 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
INFO 08-10 18:11:26 [gpu_model_runner.py:1770] Starting to load model peiyi9979/math-shepherd-mistral-7b-prm...
INFO 08-10 18:11:26 [gpu_model_runner.py:1775] Loading model from scratch...
INFO 08-10 18:11:26 [cuda.py:284] Using Flash Attention backend on V1 engine.
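The verifier's pooler config above (pooling_type='STEP', step_tag_id=12902, returned_token_ids=[648, 387], softmax=True) suggests Math-Shepherd-style step scoring: at each position holding the step-tag token, the logits of the two candidate tokens ('+' for a good step, '-' for a bad one) are softmaxed into a correctness probability. A hypothetical sketch of that computation, with toy stand-ins for the logit rows (not the actual vLLM pooler code):

```python
import math

STEP_TAG_ID = 12902
GOOD, BAD = 648, 387  # the returned_token_ids from the pooler config

def step_scores(token_ids, logits):
    """logits[i] is a dict token_id -> logit for position i (a toy
    stand-in for a full vocab-sized logit row)."""
    scores = []
    for i, tok in enumerate(token_ids):
        if tok == STEP_TAG_ID:
            g, b = logits[i][GOOD], logits[i][BAD]
            m = max(g, b)                      # stabilise the softmax
            eg, eb = math.exp(g - m), math.exp(b - m)
            scores.append(eg / (eg + eb))      # P(step is correct)
    return scores

# Two tagged steps: one confidently good, one confidently bad.
toks = [1, 5, STEP_TAG_ID, 9, STEP_TAG_ID]
lgts = [{}, {}, {GOOD: 4.0, BAD: 0.0}, {}, {GOOD: -3.0, BAD: 1.0}]
print([round(s, 3) for s in step_scores(toks, lgts)])  # [0.982, 0.018]
```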
INFO 08-10 18:11:27 [weight_utils.py:292] Using model weights format ['*.bin']
Loading pt checkpoint shards: 0% Completed | 0/2 [00:00 [...]
Traceback (most recent call last):
  [...]
    main()
    ^^^^^^
  File "/home/cy/hmarkc/FastTTS/run_aime_fasttts.py", line 222, in main
    results = run_aime_fasttts(args)
              ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cy/hmarkc/FastTTS/run_aime_fasttts.py", line 173, in run_aime_fasttts
    results = fasttts.search([problem], search_config=search_config)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cy/hmarkc/FastTTS/fasttts.py", line 107, in search
    return self._process_batch(problems, search_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cy/hmarkc/FastTTS/fasttts.py", line 75, in _process_batch
    return beam_search(examples, search_config, self.generator, self.verifier)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cy/hmarkc/FastTTS/search/beam_search.py", line 501, in beam_search
    completed_beams, total_generator_latency_s, total_verifier_latency_s, n_generator_latency_s, n_verifier_latency_s, total_num_tokens, n_completion_tokens, extended_tokens_list = _beam_search(problems, search_config, generator, verifier)
                                                                                                                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cy/hmarkc/FastTTS/search/beam_search.py", line 349, in _beam_search
    gen_results, gen_time = generate_beam(
                            ^^^^^^^^^^^^^^
  File "/home/cy/hmarkc/FastTTS/search/beam_search.py", line 121, in generate_beam
    llm_outputs = generator.generate(gen_prompts, sampling_params=current_sampling_params, priority=prefix_priorities)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cy/hmarkc/FastTTS/models/vllm_wrapper.py", line 289, in generate
    raise RuntimeError(f"Failed to generate: {result['error']}")
RuntimeError: Failed to generate: The decoder prompt (length 4121) is longer than the maximum model length of 4096. Make sure that `max_model_len` is no smaller than the number of text tokens.
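The run aborts because a beam's accumulated prompt (4121 tokens) exceeds the engine's `max_model_len` of 4096. One possible guard (a hypothetical helper, not FastTTS code) is to check each request's token budget before calling generate(), pruning beams that no longer fit and clamping the generation length for the rest:

```python
# Hedged sketch: pre-flight length check for a fixed-context engine.
# MAX_MODEL_LEN mirrors the "Using max model len 4096" lines in this log.
MAX_MODEL_LEN = 4096

def fit_request(prompt_token_ids, requested_new_tokens, min_new_tokens=1):
    """Return (prompt, max_tokens) that fits the context window, or None
    if the prompt alone already exceeds it (the failure in this log)."""
    budget = MAX_MODEL_LEN - len(prompt_token_ids)
    if budget < min_new_tokens:
        return None  # beam must be pruned, or the prompt truncated upstream
    return prompt_token_ids, min(requested_new_tokens, budget)

print(fit_request(list(range(4121)), 256))     # None: 4121 > 4096, as logged
print(fit_request(list(range(4000)), 256)[1])  # 96: clamped to the headroom
```

Raising `max_model_len` at engine construction is the other obvious fix, at the cost of a larger KV cache.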
GPU 3: General Metrics for NVIDIA AD10x (any frequency)
Generating '/tmp/nsys-report-d5d5.qdstrm'
[1/8] [========================100%] vllm_tts.nsys-rep
[2/8] [========================100%] vllm_tts.sqlite

[3/8] Executing 'nvtx_sum' stats report

Time (%)  Total Time (ns)  Instances  Avg (ns)  Med (ns)  Min (ns)  Max (ns)  StdDev (ns)  Style  Range
50.4  393,853,695,354  1  393,853,695,354.0  393,853,695,354.0  393,853,695,354  393,853,695,354  0.0  PushPop  :Total
35.8  279,622,144,381  7  39,946,020,625.9  45,828,385,121.0  11,745,200,369  53,173,117,619  15,447,291,549.1  PushPop  :encode
13.7  107,321,220,075  8  13,415,152,509.4  13,699,450,799.5  8,606,313,131  16,197,341,448  2,353,657,800.2  PushPop  :generate
0.0  50,608  1  50,608.0  50,608.0  50,608  50,608  0.0  PushPop  CCCL:cub::DeviceSegmentedRadixSort

[4/8] Executing 'osrt_sum' stats report

Time (%)  Total Time (ns)  Num Calls  Avg (ns)  Med (ns)  Min (ns)  Max (ns)  StdDev (ns)  Name
29.0  4,107,957,333,358  238,517  17,222,912.1  10,061,224.0  1,000  391,167,152,042  872,427,793.8  epoll_wait
25.0  3,547,920,544,846  29,625  119,761,031.0  100,063,686.0  1,012  1,000,142,487  122,948,044.4  pthread_cond_timedwait
15.6  2,207,991,379,444  233  9,476,357,851.7  10,000,075,945.0  17,770  10,000,137,880  2,178,640,805.2  sem_timedwait
11.6  1,644,165,870,679  108  15,223,758,061.8  13,321,191,303.0  9,317  50,085,818,176  7,164,644,071.4  pthread_cond_wait
10.2  1,447,295,467,592  44,957  32,192,883.6  3,377.0  1,000  373,076,199,364  2,027,875,088.0  read
6.6  942,233,911,978  10,134  92,977,492.8  100,114,125.0  1,000  17,768,547,837  478,817,554.3  poll
2.0  279,272,511,042  749  372,860,495.4  413,610,654.0  18,760  451,398,921  85,966,967.2  sem_wait
0.0  2,652,802,802  5,038  526,558.7  3,349.5  1,002  116,838,296  6,549,869.1  ioctl
0.0  1,126,710,062  682  1,652,067.5  1,040.0  1,000  1,070,734,083  41,026,003.0  waitpid
0.0  386,640,419  148,864  2,597.3  1,427.0  1,015  99,489,865  257,890.3  munmap
0.0  307,226,614  522  588,556.7  2,081.0  1,037  20,005,618  3,204,254.5  fopen
0.0  202,534,023  40  5,063,350.6  5,063,053.0  5,054,017  5,073,909  5,967.5  nanosleep
0.0  154,008,949  46,557  3,308.0  2,602.0  1,000  15,029,894  69,665.7  open64
0.0  131,449,169  150  876,327.8  3,599.0  1,056  22,776,981  3,972,654.1  open
0.0  94,621,674  88  1,075,246.3  695,819.0  3,494  5,783,134  1,252,692.1  pthread_rwlock_wrlock
0.0  88,975,648  5,494  16,195.1  4,706.0  1,000  3,228,393  103,824.5  write
0.0  78,944,702  1,215  64,975.1  12,599.0  1,176  2,800,486  108,421.8  recv
0.0  67,032,431  3  22,344,143.7  1,039,256.0  858,936  65,134,239  37,057,419.3  fork
0.0  56,511,786  374  151,101.0  4,617.0  1,920  19,021,663  1,628,648.9  fopen64
0.0  51,350,583  10  5,135,058.3  28,593.5  9,434  51,031,956  16,126,540.3  connect
0.0  43,716,326  100  437,163.3  16,647.5  5,708  6,100,438  1,174,093.4  pthread_join
0.0  33,555,772  1,935  17,341.5  7,007.0  1,344  117,177  21,053.3  send
0.0  29,627,225  7,800  3,798.4  2,307.5  1,000  1,214,464  15,284.6  mmap64
0.0  28,227,618  245  115,214.8  68,764.0  41,609  11,870,902  754,141.7  sleep
0.0  11,071,018  167  66,293.5  47,982.0  16,115  788,254  74,165.5  pthread_create
0.0  8,394,435  1,289  6,512.4  2,201.0  1,000  89,202  8,381.1  fgets
0.0  2,708,647  2,234  1,212.5  1,073.0  1,000  9,581  530.5  fclose
0.0  2,649,927  1,556  1,703.0  1,418.0  1,000  18,772  876.0  epoll_ctl
0.0  1,827,104  37  49,381.2  21,969.0  1,258  601,537  99,552.8  pthread_mutex_lock
0.0  1,525,531  202  7,552.1  3,538.0  1,229  126,121  10,757.8  mmap
0.0  1,442,219  12  120,184.9  135,708.5  14,897  260,157  84,664.1  pthread_rwlock_rdlock
0.0  1,333,421  99  13,468.9  3,214.0  1,903  308,857  40,342.7  futex
0.0  1,003,222  308  3,257.2  2,488.0  1,000  15,414  2,304.6  pthread_cond_signal
0.0  560,112  102  5,491.3  4,417.0  1,963  16,270  3,102.1  pipe2
0.0  274,277  42  6,530.4  4,651.5  1,583  19,600  5,032.2  socket
0.0  248,295  18  13,794.2  3,429.0  1,029  91,048  23,483.4  bind
0.0  195,940  26  7,536.2  6,595.5  1,012  23,328  6,365.3  pthread_cond_broadcast
0.0  79,849  30  2,661.6  2,095.0  1,378  7,850  1,486.4  stat
0.0  73,833  15  4,922.2  4,711.0  2,539  7,902  1,359.0  pthread_mutex_trylock
0.0  69,957  5  13,991.4  13,146.0  6,404  25,009  7,666.4  accept4
0.0  68,843  41  1,679.1  1,763.0  1,003  2,540  377.6  sigaction
0.0  58,806  30  1,960.2  2,100.5  1,003  3,452  650.6  dup2
0.0  57,466  19  3,024.5  1,752.0  1,039  5,810  1,941.1  fflush
0.0  55,461  34  1,631.2  1,186.0  1,015  6,102  1,037.6  fcntl
0.0  45,396  8  5,674.5  5,648.0  4,969  6,340  494.9  lstat
0.0  41,433  19  2,180.7  1,753.0  1,013  3,492  895.5  pread
0.0  30,122  7  4,303.1  4,200.0  3,657  5,216  488.9  fputs_unlocked
0.0  28,059  2  14,029.5  14,029.5  12,230  15,829  2,544.9  socketpair
0.0  24,818  8  3,102.3  2,990.5  2,170  4,144  716.0  flock
0.0  22,207  5  4,441.4  3,489.0  3,095  6,196  1,544.4  fread
0.0  18,935  8  2,366.9  2,367.5  1,963  3,009  327.7  mprotect
0.0  16,694  3  5,564.7  3,980.0  2,934  9,780  3,687.9  fwrite
0.0  16,115  6  2,685.8  2,184.0  1,685  4,768  1,172.4  fstat
0.0  12,684  8  1,585.5  1,347.0  1,026  2,482  576.4  listen
0.0  10,302  1  10,302.0  10,302.0  10,302  10,302  0.0  kill
0.0  8,365  2  4,182.5  4,182.5  3,963  4,402  310.4  fputs
0.0  6,903  3  2,301.0  1,391.0  1,246  4,266  1,703.3  openat64

[5/8] Executing 'cuda_api_sum' stats report

Time (%)  Total Time (ns)  Num Calls  Avg (ns)  Med (ns)  Min (ns)  Max (ns)  StdDev (ns)  Name
88.4  46,937,426,386  245,324  191,328.3  4,587.0  2,824  101,810,784  1,048,325.3  cudaMemcpyAsync
4.7  2,493,394,033  88  28,334,023.1  15,760.0  3,229  119,698,802  42,440,625.4  cudaHostAlloc
2.7  1,455,648,101  159,808  9,108.7  5,617.0  714  64,133,488  269,769.6  cudaLaunchKernel
2.3  1,207,731,744  10,618  113,743.8  73,351.0  61,988  1,706,174  82,957.5  cudaGraphLaunch_v10000
0.4  215,017,756  170,401  1,261.8  1,050.0  594  1,859,958  7,635.1  cudaEventRecord
0.3  181,609,586  170,394  1,065.8  769.0  354  4,098,109  9,965.6  cudaEventQuery
0.2  106,372,464  10  10,637,246.4  52,428.0  9,038  105,923,811  33,480,318.7  cudaMemGetInfo
0.2  99,317,189  17,033  5,830.9  6,201.0  650  3,803,345  29,193.0  cuLaunchKernel
0.2  80,614,125  14,374  5,608.3  5,614.0  223  361,174  3,812.7  cudaMemsetAsync
0.1  78,821,783  35  2,252,050.9  2,023,368.0  1,515,806  4,062,118  688,963.2  cudaGraphInstantiateWithFlags_v11040
0.1  40,646,025  35  1,161,315.0  1,113,301.0  799,815  1,434,417  184,273.6  cudaGraphExecDestroy_v10000
0.1  35,877,763  11,215  3,199.1  2,700.0  2,257  60,765  3,068.8  cudaStreamSynchronize
0.1  35,289,694  66  534,692.3  255,452.0  70,038  2,137,121  536,637.0  cudaFree
0.1  27,182,114  10  2,718,211.4  2,817,660.5  65,879  4,870,611  1,541,459.1  cuLibraryLoadData
0.0  24,600,138  32,130  765.6  901.0  272  9,288  365.0  cudaStreamIsCapturing_v10000
0.0  21,862,560  178  122,823.4  107,050.0  4,423  440,515  60,531.1  cudaMalloc
0.0  19,789,871  17,033  1,161.9  862.0  255  4,619,413  37,260.6  cuKernelGetFunction
0.0  5,538,225  35  158,235.0  154,516.0  129,373  219,967  20,588.0  cudaGraphDestroy_v10000
0.0  5,134,115  8,785  584.4  547.0  331  2,421  170.1  cudaStreamGetCaptureInfo_v2_v11030
0.0  4,473,254  35  127,807.3  122,083.0  102,731  341,773  39,401.4  cudaStreamEndCapture_v10000
0.0  3,442,690  128  26,896.0  2,629.0  2,164  1,153,601  139,233.8  cudaStreamCreateWithPriority
0.0  2,151,872  106  20,300.7  20,077.0  3,017  116,731  17,791.3  cudaDeviceSynchronize
0.0  964,089  35  27,545.4  28,536.0  12,939  33,952  4,699.2  cudaGraphGetNodes_v10000
0.0  484,704  35  13,848.7  10,086.0  8,293  49,353  7,892.5  cudaStreamBeginCapture_v10000
0.0  143,228  810  176.8  142.0  78  1,674  124.6  cuGetProcAddress_v2
0.0  41,506  26  1,596.4  405.0  293  20,771  4,030.5  cudaEventCreateWithFlags
0.0  20,185  16  1,261.6  755.0  388  4,976  1,169.2  cuLibraryGetKernel
0.0  5,221  8  652.6  588.5  403  1,269  271.3  cudaThreadExchangeStreamCaptureMode_v10010
0.0  4,220  3  1,406.7  1,266.0  1,194  1,760  308.1  cuInit
0.0  3,891  1  3,891.0  3,891.0  3,891  3,891  0.0  cudaStreamWaitEvent
0.0  1,600  1  1,600.0  1,600.0  1,600  1,600  0.0  cudaEventDestroy
0.0  1,415  3  471.7  181.0  112  1,122  564.3  cuModuleGetLoadingMode
0.0  1,042  2  521.0  521.0  266  776  360.6  cudaGetDriverEntryPoint_v11030

[6/8] Executing 'cuda_gpu_kern_sum' stats report
(Kernel names were mangled in capture — template arguments inside angle brackets were stripped — and are reproduced as logged.)

Time (%)  Total Time (ns)  Instances  Avg (ns)  Med (ns)  Min (ns)  Max (ns)  StdDev (ns)  Name
22.0  2,238,853,597  5,322  420,679.0  497,094.0  10,464  530,758  175,343.2  void cutlass::Kernel2(T1::Par…
21.3  2,162,330,143  7,723  279,985.8  137,569.0  7,808  572,614  241,368.9  void cutlass::Kernel2(T1::Param…
6.6  675,302,986  1,542  437,939.7  488,389.0  6,976  489,318  144,690.4  std::enable_if::type internal::gemvx::kernel::cunn_SoftMaxForward<(int)4, float, float, float, at::native:::…
4.5  456,691,515  1,624  281,214.0  123,697.5  41,857  714,468  267,999.6  ampere_bf16_s1688gemm_bf16_128x64_sliced1x2_ldg8_f2f_tn
4.4  443,295,899  700  633,279.9  82,161.0  50,336  1,413,417  610,178.0  ampere_bf16_s1688gemm_bf16_128x128_ldg8_f2f_stages_32x1_tn
4.3  439,409,336  10,651  41,255.2  30,272.0  29,472  624,867  30,275.9  void at::native::::cunn_SoftMaxForward<(int)4, float, float, float, at::native:::…
3.4  347,517,575  840  413,711.4  402,820.5  131,585  994,316  214,545.4  void flash::flash_fwd_splitkv_kernel(T1::Par…
2.8  287,234,571  10,650  26,970.4  5,600.0  1,440  496,643  50,376.8  void at::native::vectorized_elementwise_kernel<(int)4, at::native::BinaryFunctor, (bool)1>(T1 *, cons…
1.6  158,104,871  10,650  14,845.5  3,872.0  2,303  177,889  19,981.1  void at::native::::distribution_elementwise_grid_stride_kernel(int)0&&vllm::_typeConvert::exists, void>::type vllm::fused_add_rms_norm_kern…
0.2  21,779,316  6,382  3,412.6  2,880.0  1,695  6,432  1,249.9  void at::native::::indexSelectSmallIndex(const long *, T1 *, T1 *, const T1 *, in…
0.2  17,761,784  10,651  1,667.6  1,632.0  1,248  2,721  172.3  void at::native::unrolled_elementwise_kernel(T1::Para…
0.2  15,315,251  140  109,394.7  137,313.0  26,753  145,378  43,660.0  ampere_bf16_s1688gemm_bf16_128x64_sliced1x2_ldg8_relu_f2f_tn
0.1  13,952,650  10,618  1,314.1  1,280.0  1,120  1,728  92.3  void at::native::elementwise_kernel<(int)128, (int)2, void at::native::gpu_kernel_impl_nocast::indexSelectLargeIndex, std:…
0.1  8,103,531  840  9,647.1  8,160.0  6,305  15,520  2,996.4  void flash::flash_fwd_splitkv_kernel(T1::Para…
0.1  6,334,831  3,080  2,056.8  1,888.0  1,343  3,103  552.3  void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo…
0.0  4,928,376  4  1,232,094.0  1,222,582.0  1,208,582  1,274,630  31,240.7  void at_cuda_detail::cub::DeviceSegmentedRadixSortKernel(T1::Par…
0.0  2,897,013  56  51,732.4  51,664.0  49,761  54,177  837.6  void flash::flash_fwd_kernel::type internal::gemvx::kernel(T1 *, const T1 *, const T1 *, float, int, int)
0.0  883,038  280  3,153.7  3,168.0  2,784  3,584  212.3  void flash::flash_fwd_splitkv_combine_kernel::masked_fill_kernel(at…
0.0  607,545  336  1,808.2  1,792.0  1,600  2,112  115.6  void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo…
0.0  603,943  28  21,569.4  21,568.0  21,440  21,664  45.8  ampere_bf16_s16816gemm_bf16_128x64_ldg8_f2f_stages_32x6_tn
0.0  362,114  1  362,114.0  362,114.0  362,114  362,114  0.0  void at::native::tensor_kernel_scan_innermost_dim>(T1 *, const T1 *, unsign…
0.0  317,665  1  317,665.0  317,665.0  317,665  317,665  0.0  at::native::::fill_reverse_indices_kernel(long *, int, at::cuda::detail::IntDivider, std::array::CatArrayBatchedCopy_aligned16_contig::OpaqueType<…
0.0  2,559  2  1,279.5  1,279.5  1,056  1,503  316.1  void ::elementwise_kernel_with_index, st…
0.0  1,184  1  1,184.0  1,184.0  1,184  1,184  0.0  void at::native::vectorized_elementwise_kernel<(int)4, at::native::reciprocal_kernel_cuda(at::Tenso…
0.0  992  1  992.0  992.0  992  992  0.0  void at::native::vectorized_elementwise_kernel<(int)4, at::native::AUnaryFunctor, std::array, std::array<…

[7/8] Executing 'cuda_gpu_mem_time_sum' stats report

Time (%)  Total Time (ns)  Count  Avg (ns)  Med (ns)  Min (ns)  Max (ns)  StdDev (ns)  Operation
90.3  615,980,705  181,583  3,392.3  352.0  287  101,392,877  253,405.4  [CUDA memcpy Host-to-Device]
7.2  48,846,175  53,091  920.0  896.0  831  344,258  1,491.0  [CUDA memcpy Device-to-Device]
1.8  12,138,408  10,650  1,139.8  1,120.0  832  1,760  117.2  [CUDA memcpy Device-to-Host]
0.7  5,068,956  12,722  398.4  320.0  288  1,536  188.1  [CUDA memset]

[8/8] Executing 'cuda_gpu_mem_size_sum' stats report

Total (MB)  Count  Avg (MB)  Med (MB)  Min (MB)  Max (MB)  StdDev (MB)  Operation
3,401.339  181,583  0.019  0.000  0.000  466.747  1.248  [CUDA memcpy Host-to-Device]
455.651  53,091  0.009  0.000  0.000  155.582  0.675  [CUDA memcpy Device-to-Device]
4.763  12,722  0.000  0.000  0.000  0.003  0.001  [CUDA memset]
2.082  10,650  0.000  0.000  0.000  0.002  0.000  [CUDA memcpy Device-to-Host]

Generated:
    /data/cy/vllm_tts.nsys-rep
    /data/cy/vllm_tts.sqlite
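One way to read the nvtx_sum report above, assuming the :encode (verifier scoring) and :generate (generator decoding) ranges are nested inside :Total so the Time (%) column double-counts them: normalising each phase against the :Total range gives its share of the run, and verification dominates by roughly 2.6x.

```python
# Totals taken verbatim from the nvtx_sum report above (nanoseconds).
total    = 393_853_695_354  # :Total range
encode   = 279_622_144_381  # :encode  (verifier / PRM scoring)
generate = 107_321_220_075  # :generate (generator decoding)

print(round(100 * encode / total, 1))    # 71.0 -> ~71% of the run is verification
print(round(100 * generate / total, 1))  # 27.2 -> ~27% is generation
```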