WARNING: CPU IP/backtrace sampling not supported, disabling. Try the 'nsys status --environment' command to learn more.
WARNING: CPU context switch tracing not supported, disabling. Try the 'nsys status --environment' command to learn more.
INFO 08-13 19:21:37 [__init__.py:235] Automatically detected platform cuda.
CUDA_VISIBLE_DEVICES = 3

--- vLLM V1 Benchmark (with NVTX markers) ---
Model: Qwen/Qwen2-1.5B
Batch sizes: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
Scenarios: ['prefill1_decode512']
------------------------------------------------------------
Loading tokenizer/model...
INFO 08-13 19:21:46 [config.py:1604] Using max model len 4096
INFO 08-13 19:21:47 [config.py:2434] Chunked prefill is enabled with max_num_batched_tokens=8192.
INFO 08-13 19:21:52 [__init__.py:235] Automatically detected platform cuda.
INFO 08-13 19:21:54 [core.py:572] Waiting for init message from front-end.
INFO 08-13 19:21:54 [core.py:71] Initializing a V1 LLM engine (v0.10.0) with config: model='Qwen/Qwen2-1.5B', speculative_config=None, tokenizer='Qwen/Qwen2-1.5B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen/Qwen2-1.5B, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null}
INFO 08-13 19:21:56 [parallel_state.py:1102] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
WARNING 08-13 19:21:56 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
INFO 08-13 19:21:56 [gpu_model_runner.py:1843] Starting to load model Qwen/Qwen2-1.5B...
INFO 08-13 19:21:56 [gpu_model_runner.py:1875] Loading model from scratch...
INFO 08-13 19:21:56 [cuda.py:290] Using Flash Attention backend on V1 engine.
INFO 08-13 19:21:57 [weight_utils.py:296] Using model weights format ['*.safetensors']
INFO 08-13 19:21:57 [weight_utils.py:349] No model.safetensors.index.json found in remote.
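For reference, below is a minimal sketch of how a benchmark run like the one logged here could wrap vLLM generation in NVTX ranges so that nsys can attribute GPU time to each (scenario, batch size) pair. Only the model name, GPU index, seed, max model length, batch-size list, and the scenario label come from the log; the script structure, the dummy prompts, and the reading of 'prefill1_decode512' as "1 prompt token, 512 decode tokens" are assumptions, not the original benchmark code.

import os
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "3")   # GPU index used in the logged run

import torch
from vllm import LLM, SamplingParams

# Engine settings mirrored from the logged config; everything else is illustrative.
llm = LLM(model="Qwen/Qwen2-1.5B", trust_remote_code=True, max_model_len=4096, seed=0)
sampling = SamplingParams(max_tokens=512, ignore_eos=True)   # force exactly 512 decode steps

for batch_size in [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]:
    prompts = ["Hi"] * batch_size                     # roughly one prompt token per request
    torch.cuda.nvtx.range_push(f"prefill1_decode512_bs{batch_size}")
    llm.generate(prompts, sampling)                   # prefill + decode for the whole batch
    torch.cuda.synchronize()                          # close the range only after GPU work drains
    torch.cuda.nvtx.range_pop()

When a script like this is profiled with NVTX tracing enabled, the ranges appear in the nsys timeline under the names pushed above, which makes it straightforward to slice the kernel and memcpy summaries per batch size.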
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00(T1::Param…

[6/8] Executing 'cuda_gpu_kern_sum' stats report

 Time (%)  Total Time (ns)  Instances  Avg (ns)  Med (ns)  Min (ns)  Max (ns)  StdDev (ns)  Name
 --------  ---------------  ---------  --------  --------  --------  --------  -----------  ----
 2.8  781,701,830  1,991  392,617.7  496,547.0  10,528  506,724  194,713.8  void cutlass::Kernel2(T1::Par…
 2.5  718,309,736  5,756  124,793.2  9,920.0  5,151  716,420  213,447.3  void at::native::reduce_kernel<(int)512, (int)1, at::native::ReduceOp(T1::Par…
 0.9  255,622,286  604  423,215.7  487,970.0  7,008  488,866  160,519.3  std::enable_if::type internal::gemvx::kernel, std::a…
 0.0  9,733,874  4  2,433,468.5  2,435,244.5  2,367,692  2,495,693  60,628.3  void at::native::::cunn_SoftMaxForward<(int)4, float, float, float, at::native:::…
 0.0  9,136,049  28  326,287.5  326,210.0  324,514  329,538  982.3  ampere_bf16_s1688gemm_bf16_128x128_ldg8_relu_f2f_stages_32x1_tn
 0.0  8,425,678  224  37,614.6  37,568.5  36,608  38,880  409.5  void cutlass::Kernel2(T1::Para…
 0.0  7,777,256  2  3,888,628.0  3,888,628.0  3,705,715  4,071,541  258,678.0  void at::native::_scatter_gather_elementwise_kernel<(int)128, (int)8, void at::native::_cuda_scatte…
 0.0  7,758,429  8,970  864.9  864.0  767  1,280  77.7  void at::native::vectorized_elementwise_kernel<(int)2, at::native::FillFunctor, std::array::masked_fill_kernel(at…
 0.0  5,380,367  5,752  935.4  896.0  863  1,344  76.3  void at::native::unrolled_elementwise_kernel, std::array, std::array, std::array(T1::Para…
 0.0  3,433,971  4  858,492.8  858,901.0  855,877  860,292  2,168.9  void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl_nocast>(T1 *, const T1 *, unsign…
 0.0  2,871,002  1,512  1,898.8  1,760.0  1,312  2,912  445.3  void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo…
 0.0  2,643,071  581  4,549.2  4,671.0  1,984  36,256  1,427.2  triton_red_fused__to_copy_add_embedding_mean_mul_pow_rsqrt_0
 0.0  2,581,742  2  1,290,871.0  1,290,871.0  1,290,663  1,291,079  294.2  at::native::::fill_reverse_indices_kernel(long *, int, at::cuda::detail::IntDivider(T1::Par…
 0.0  1,835,794  581  3,159.7  3,200.0  1,632  39,200  1,543.9  triton_poi_fused_cat_1
 0.0  1,365,128  2  682,564.0  682,564.0  677,764  687,364  6,788.2  void at::native::::distribution_elementwise_grid_stride_kernel, std::array::type internal::gemvx::kernel, std::array(T1 *, float *, const T1 *, co…
 0.0  295,335  168  1,757.9  1,760.0  1,535  2,080  119.1  void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, __nv_bfloat16, __nv_bfloat16, float, (boo…
 0.0  155,841  1  155,841.0  155,841.0  155,841  155,841  0.0  void at::native::::CatArrayBatchedCopy_aligned16_contig::OpaqueType<…
 0.0  78,880  1  78,880.0  78,880.0  78,880  78,880  0.0  void at::native::vectorized_elementwise_kernel<(int)4, at::native::bfloat16_copy_kernel_cuda(at::Te…
 0.0  63,740  58  1,099.0  896.0  864  11,360  1,372.6  void at::native::vectorized_elementwise_kernel<(int)4, at::native::FillFunctor, std:…
 0.0  43,936  1  43,936.0  43,936.0  43,936  43,936  0.0  void at::native::vectorized_elementwise_kernel<(int)4, at::native::sin_kernel_cuda(at::TensorIterat…
 0.0  36,570  28  1,306.1  1,312.0  1,280  1,376  19.5  void cublasLt::splitKreduce_kernel<(int)32, (int)16, int, float, __nv_bfloat16, float, (bool)0, __n…
 0.0  26,816  1  26,816.0  26,816.0  26,816  26,816  0.0  void at::native::vectorized_elementwise_kernel<(int)4, at::native::cos_kernel_cuda(at::TensorIterat…
 0.0  19,520  1  19,520.0  19,520.0  19,520  19,520  0.0  void at::native::elementwise_kernel<(int)128, (int)2, void at::native::gpu_kernel_impl_nocast, std::array::distribution_elementwise_grid_stride_kernel, st…
 0.0  3,136  2  1,568.0  1,568.0  1,504  1,632  90.5  void at::native::vectorized_elementwise_kernel<(int)2, at::native::::where_kernel_impl(at:…
 0.0  3,104  2  1,552.0  1,552.0  1,344  1,760  294.2  void at::native::vectorized_elementwise_kernel<(int)4, void at::native::compare_scalar_kernel::elementwise_kernel_with_index, s…
 0.0  2,336  1  2,336.0  2,336.0  2,336  2,336  0.0  void at::native::elementwise_kernel<(int)128, (int)4, void at::native::gpu_kernel_impl, std::array<…

[7/8] Executing 'cuda_gpu_mem_time_sum' stats report

 Time (%)  Total Time (ns)   Count    Avg (ns)   Med (ns)  Min (ns)    Max (ns)  StdDev (ns)  Operation
 --------  ---------------  ------  ----------  ---------  --------  ----------  -----------  ------------------------------
     93.2      540,571,743  41,277    13,096.2      352.0       287  97,068,545    513,408.1  [CUDA memcpy Host-to-Device]
      3.2       18,710,334  14,564     1,284.7      896.0       864   1,362,855     22,521.7  [CUDA memcpy Device-to-Device]
      2.5       14,536,294  21,760       668.0      768.0       287       7,744        311.5  [CUDA memset]
      1.1        6,503,130   5,752     1,130.6    1,120.0       863       1,760         95.6  [CUDA memcpy Device-to-Host]

[8/8] Executing 'cuda_gpu_mem_size_sum' stats report

 Total (MB)   Count  Avg (MB)  Med (MB)  Min (MB)  Max (MB)  StdDev (MB)  Operation
 ----------  ------  --------  --------  --------  --------  -----------  ------------------------------
  4,190.741  41,277     0.102     0.000     0.000   466.747        2.619  [CUDA memcpy Host-to-Device]
  2,534.048  14,564     0.174     0.003     0.003   622.330       10.312  [CUDA memcpy Device-to-Device]
     14.589  21,760     0.001     0.001     0.000     0.006        0.000  [CUDA memset]
      4.192   5,752     0.001     0.000     0.000     0.004        0.001  [CUDA memcpy Device-to-Host]

Generated:
    /data/cy/kv_cache_vs_util/sim_traverse_bs/traverse_bs_util_sim_decoding.nsys-rep
    /data/cy/kv_cache_vs_util/sim_traverse_bs/traverse_bs_util_sim_decoding.sqlite
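The .nsys-rep and .sqlite files listed above are the raw outputs; the summary tables printed in this log are derived from them. As a rough cross-check, the kernel summary can be recomputed directly from the .sqlite export with a short script such as the sketch below. The table and column names (CUPTI_ACTIVITY_KIND_KERNEL, StringIds, demangledName) match recent Nsight Systems exports, but the schema varies between versions, so treat them as assumptions and inspect the database before relying on the query.

import sqlite3

# Path taken from the "Generated:" lines above.
db = sqlite3.connect("/data/cy/kv_cache_vs_util/sim_traverse_bs/traverse_bs_util_sim_decoding.sqlite")

# Sum GPU kernel durations per demangled kernel name, mirroring 'cuda_gpu_kern_sum'.
rows = db.execute(
    """
    SELECT s.value              AS kernel,
           COUNT(*)             AS instances,
           SUM(k.end - k.start) AS total_ns,
           AVG(k.end - k.start) AS avg_ns
    FROM CUPTI_ACTIVITY_KIND_KERNEL AS k
    JOIN StringIds AS s ON s.id = k.demangledName
    GROUP BY s.value
    ORDER BY total_ns DESC
    LIMIT 10
    """
).fetchall()

for kernel, instances, total_ns, avg_ns in rows:
    print(f"{total_ns:>15,d}  {instances:>9,d}  {avg_ns:>12,.1f}  {kernel[:80]}")

Totals computed this way should agree with the nsys-generated 'cuda_gpu_kern_sum' table up to rounding, since both are derived from the same recorded kernel events.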