WARNING: CPU IP/backtrace sampling not supported, disabling. Try the 'nsys status --environment' command to learn more.
WARNING: CPU context switch tracing not supported, disabling. Try the 'nsys status --environment' command to learn more.
INFO 08-13 19:02:19 [__init__.py:235] Automatically detected platform cuda.
CUDA_VISIBLE_DEVICES = 3
--- vLLM V1 benchmark (with NVTX markers) ---
Model: Qwen/Qwen2-1.5B
Batch sizes: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
Scenarios: ['prefill640_decode512']
------------------------------------------------------------
Loading tokenizer/model...
INFO 08-13 19:02:29 [config.py:1604] Using max model len 4096
INFO 08-13 19:02:29 [config.py:2434] Chunked prefill is enabled with max_num_batched_tokens=8192.
INFO 08-13 19:02:35 [__init__.py:235] Automatically detected platform cuda.
INFO 08-13 19:02:37 [core.py:572] Waiting for init message from front-end.
INFO 08-13 19:02:37 [core.py:71] Initializing a V1 LLM engine (v0.10.0) with config: model='Qwen/Qwen2-1.5B', speculative_config=None, tokenizer='Qwen/Qwen2-1.5B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen/Qwen2-1.5B, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null}
INFO 08-13 19:02:40 [parallel_state.py:1102] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
WARNING 08-13 19:02:40 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
INFO 08-13 19:02:40 [gpu_model_runner.py:1843] Starting to load model Qwen/Qwen2-1.5B...
INFO 08-13 19:02:40 [gpu_model_runner.py:1875] Loading model from scratch...
INFO 08-13 19:02:40 [cuda.py:290] Using Flash Attention backend on V1 engine.
INFO 08-13 19:02:40 [weight_utils.py:296] Using model weights format ['*.safetensors']
INFO 08-13 19:02:41 [weight_utils.py:349] No model.safetensors.index.json found in remote.
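For context, the header and engine configuration above come from the benchmark driver, not from vLLM itself. Below is a minimal sketch of how such a run could be expressed, assuming the driver simply wraps LLM.generate() in NVTX ranges so each batch size appears as a labelled region in the nsys timeline. The constant names, the range-name format, and the crude prompt construction are illustrative assumptions, not the original script.

# Minimal sketch (assumed, not the original script): vLLM V1 run with NVTX
# ranges per batch size for the 'prefill640_decode512' scenario.
import torch
from vllm import LLM, SamplingParams

BATCH_SIZES = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]   # illustrative name
PREFILL_LEN, DECODE_LEN = 640, 512                             # 'prefill640_decode512'

llm = LLM(
    model="Qwen/Qwen2-1.5B",   # matches the engine config logged above
    dtype="bfloat16",
    max_model_len=4096,
    trust_remote_code=True,
    seed=0,
)
params = SamplingParams(max_tokens=DECODE_LEN, ignore_eos=True)
# Crude fixed-length prompt; a real driver would build exactly PREFILL_LEN
# prompt tokens through the tokenizer.
prompt = "hello " * PREFILL_LEN

for bs in BATCH_SIZES:
    # Each iteration shows up as one NVTX range in the nsys report.
    torch.cuda.nvtx.range_push(f"prefill640_decode512_bs{bs}")
    llm.generate([prompt] * bs, params)
    torch.cuda.nvtx.range_pop()

The profiler output continues below.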
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00, …
… std::array(T1::Param…

Time (%)  Total Time (ns)  Instances     Avg (ns)     Med (ns)   Min (ns)   Max (ns)  StdDev (ns)  Name
--------  ---------------  ---------  -----------  -----------  ---------  ---------  -----------  ------------------------------------------------------------
     2.5      940,467,368      5,865    160,352.5     13,312.0      1,984  1,008,298    288,283.2  void at::native::unrolled_elementwise_kernel(T1::Par…
     1.9      720,826,362      5,867    122,861.1      9,824.0      5,120    714,183    210,401.9  void at::native::reduce_kernel<(int)512, (int)1, at::native::ReduceOp(T1::Par…
     0.7      258,705,730        610    424,107.8    487,972.0      6,976    500,003    159,920.1  std::enable_if::type internal::gemvx::kernel(T1 *, float *, const T1 *, co…
     0.3       95,480,145     22,086      4,323.1      1,376.0      1,215     15,104      5,013.1  triton_poi_fused_cat_4
     0.2       88,249,892     16,968      5,201.0      5,473.0      1,535     80,096      3,526.8  triton_red_fused__to_copy_add_mean_mul_pow_rsqrt_0
     0.2       58,167,224     15,232      3,818.8      3,840.0      3,711      4,032         31.4  void flash::flash_fwd_splitkv_combine_kernel, std::a…
     0.0        9,813,089          4  2,453,272.3  2,468,672.5  2,391,504  2,484,240     42,132.0  void at::native::::cunn_SoftMaxForward<(int)4, float, float, float, at::native:::…
     0.0        9,653,448        448     21,547.9     21,345.0     21,120     24,928        817.3  ampere_bf16_s16816gemm_bf16_128x64_ldg8_f2f_stages_32x6_tn
     0.0        9,408,780      5,863      1,604.8      1,376.0      1,120      2,752        455.9  void at::native::unrolled_elementwise_kernel(T1::Para…
     0.0        8,311,916         28    296,854.1    294,593.5    293,505    332,482      7,194.4  ampere_bf16_s1688gemm_bf16_128x128_ldg8_relu_f2f_stages_32x1_tn
     0.0        7,845,574      9,023        869.5        864.0        767      1,281         77.2  void at::native::vectorized_elementwise_kernel<(int)2, at::native::FillFunctor, std::array::masked_fill_kernel(at…
     0.0        5,501,036      5,863        938.3        928.0        895      1,312         71.9  void at::native::unrolled_elementwise_kernel, std::array, std::array(T1::Para…
     0.0        3,541,818        818      4,329.9      1,376.0      1,215     14,815      5,022.4  triton_poi_fused_cat_2
     0.0        3,434,102          4    858,525.5    858,069.5    855,685    862,278      2,939.8  void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl_nocast>(T1 *, const T1 *, unsign…
     0.0        2,897,906      1,512      1,916.6      1,824.0      1,312      2,976        438.1  void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo…
     0.0        2,593,403        606      4,279.5      4,384.0      1,984     35,904      1,445.1  triton_red_fused__to_copy_add_embedding_mean_mul_pow_rsqrt_0
     0.0        2,582,288          2  1,291,144.0  1,291,144.0  1,290,536  1,291,752        859.8  at::native::::fill_reverse_indices_kernel(long *, int, at::cuda::detail::IntDivider(T1::Par…
     0.0        1,377,256          2    688,628.0    688,628.0    682,820    694,436      8,213.8  void at::native::::distribution_elementwise_grid_stride_kernel, std::array::type internal::gemvx::kernel, std::array::CatArrayBatchedCopy_aligned16_contig::OpaqueType<…
     0.0           90,491         86      1,052.2        927.5        895     11,488      1,139.9  void at::native::vectorized_elementwise_kernel<(int)4, at::native::FillFunctor, std:…
     0.0           78,785          1     78,785.0     78,785.0     78,785     78,785          0.0  void at::native::vectorized_elementwise_kernel<(int)4, at::native::bfloat16_copy_kernel_cuda(at::Te…
     0.0           43,232          1     43,232.0     43,232.0     43,232     43,232          0.0  void at::native::vectorized_elementwise_kernel<(int)4, at::native::sin_kernel_cuda(at::TensorIterat…
     0.0           36,737         28      1,312.0      1,312.0      1,280      1,344         17.4  void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, float, __nv_bfloat16, float, (bool)0, __n…
     0.0           26,432          1     26,432.0     26,432.0     26,432     26,432          0.0  void at::native::vectorized_elementwise_kernel<(int)4, at::native::cos_kernel_cuda(at::TensorIterat…
     0.0           19,520          1     19,520.0     19,520.0     19,520     19,520          0.0  void at::native::elementwise_kernel<(int)128, (int)2, void at::native::gpu_kernel_impl_nocast, std::array::distribution_elementwise_grid_stride_kernel, st…
     0.0            3,103          2      1,551.5      1,551.5      1,503      1,600         68.6  void at::native::vectorized_elementwise_kernel<(int)2, at::native::::where_kernel_impl(at:…
     0.0            2,976          2      1,488.0      1,488.0      1,376      1,600        158.4  void at::native::vectorized_elementwise_kernel<(int)4, void at::native::compare_scalar_kernel::elementwise_kernel_with_index, s…
     0.0            2,400          1      2,400.0      2,400.0      2,400      2,400          0.0  void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl, std::array<…

[7/8] Executing 'cuda_gpu_mem_time_sum' stats report

Time (%)  Total Time (ns)   Count  Avg (ns)  Med (ns)  Min (ns)     Max (ns)  StdDev (ns)  Operation
--------  ---------------  ------  --------  --------  --------  -----------  -----------  ------------------------------
    93.8      627,226,731  42,463  14,771.1     352.0       320  112,333,155    587,539.7  [CUDA memcpy Host-to-Device]
     2.8       18,735,373  14,448   1,296.7     928.0       895    1,362,505     22,615.1  [CUDA memcpy Device-to-Device]
     2.4       16,204,705  24,393     664.3     768.0       320        8,224        282.8  [CUDA memset]
     1.0        6,719,471   5,919   1,135.2   1,120.0       863        1,920        102.9  [CUDA memcpy Device-to-Host]

[8/8] Executing 'cuda_gpu_mem_size_sum' stats report

Total (MB)   Count  Avg (MB)  Med (MB)  Min (MB)  Max (MB)  StdDev (MB)  Operation
----------  ------  --------  --------  --------  --------  -----------  ------------------------------
 4,194.770  42,463     0.099     0.000     0.000   466.747        2.582  [CUDA memcpy Host-to-Device]
 2,533.618  14,448     0.175     0.003     0.000   622.330       10.354  [CUDA memcpy Device-to-Device]
    17.613  24,393     0.001     0.001     0.000     0.006        0.000  [CUDA memset]
     4.192   5,919     0.001     0.000     0.000     0.004        0.001  [CUDA memcpy Device-to-Host]

Generated:
    /data/cy/kv_cache_vs_util/std_traverse_bs/traverse_bs_util_std.nsys-rep
    /data/cy/kv_cache_vs_util/std_traverse_bs/traverse_bs_util_std.sqlite
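The two generated files can be re-analyzed without rerunning the benchmark: the same summaries come from something like `nsys stats -r cuda_gpu_kern_sum,cuda_gpu_mem_time_sum,cuda_gpu_mem_size_sum traverse_bs_util_std.nsys-rep`. Alternatively, the exported SQLite database can be queried directly, as in the sketch below. The table and column names used there (CUPTI_ACTIVITY_KIND_KERNEL, StringIds, shortName) follow recent nsys SQLite exports and are an assumption; they may differ across nsys versions.

# Sketch (assumed schema): re-derive the per-kernel totals in the table above
# from the exported SQLite file.
import sqlite3

db = sqlite3.connect("/data/cy/kv_cache_vs_util/std_traverse_bs/traverse_bs_util_std.sqlite")
rows = db.execute("""
    SELECT s.value                  AS kernel,
           COUNT(*)                 AS instances,
           SUM(k."end" - k.start)   AS total_ns,
           AVG(k."end" - k.start)   AS avg_ns
    FROM CUPTI_ACTIVITY_KIND_KERNEL AS k
    JOIN StringIds AS s ON s.id = k.shortName
    GROUP BY s.value
    ORDER BY total_ns DESC
    LIMIT 20
""").fetchall()
for kernel, instances, total_ns, avg_ns in rows:
    # Columns: total time (ns), instance count, average duration (ns), kernel name.
    print(f"{total_ns:>15,d}  {instances:>9,d}  {avg_ns:>12,.1f}  {kernel}")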