confused response

#8
by jiangyizhi - opened

startup command:
vllm serve /models/GLM-4.7-Flash-NVFP4 --host 0.0.0.0 --port 80 --served-model-name GLM-4.7-Flash-NVFP4 --max-model-len 4096 --max_num_batched_tokens 2048 --max-num-seqs 4 --disable-log-requests --gpu-memory-utilization 0.5

NVIDIA L20

inference command:
curl -X POST http://localhost:11436/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "GLM-4.7-Flash-NVFP4",
"messages": [
{"role": "user", "content": "hello"}
],
"temperature": 1.0,
"top_p": 0.95,
"thinking": {
"type": "disabled"
},
"stream": false
}'

answer:
{"id":"chatcmpl-a37f4c14acc2afd4","object":"chat.completion","created":1770118127,"model":"GLM-4.7-Flash-NVFP4","choices":[{"index":0,"message":{"role":"assistant","content":"multimultmultimultmultimultimultimultimultimultmultmultimult <!--[multmultmultmulti |multmultmultmultimultmultmult |\nmultmult <!--[multmultimultmultimultmultmultimultimultimultmultmultmultimultimult||multmultmultmultmultmultimultmultmultimultmultmultimultmultmultmultimultimultmultmultmultmultmultmultimultmultimultmultmultmultmultimultmultmultmultmultmultmultmultmultmultimultimultimultmultmultmultmultmultimultmultmultimultmultmultmultmultmultimultmultimultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultimultimultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultimultmultmultmultimultmultimultmultimultmultmultimultmultmultmultmultmultimultimultmultimultmultmultmultmultmultmultimultimultmultmultmultmultimultmultmultmultmultmultimultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultimultmultmultmultmultmultmultmultmultimultmultimultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultimultimultimultmultmultmultmultmultmultmultimultmultmultmultmultimultmultimultmultmultmultmultimultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultimultmultmultmultmultimultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultimultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultimultmultimultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultimultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultimultmultimultmultmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultmultimultmultmultmultmultmultmultmultmultimultmultmultmultmultimultmultmultmultmultmultimultimultmultmultmultmultmultmultmultmultmultmultmultmultimultmultimultmultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultimultmultmultmultmultmultmultmultimultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultimultmultmultmultmultmultmultimultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmult |\nmultimultmultimultmultmultimultmultmultmultimultmultmultmultmalmultmultmultmultmult |multmultmultmultmultmultmultmultmultmultimultimultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmult 
*[multimultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmult`]multmultmultmultmultmult snowymultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultaryamultmultmultmultimultmultimultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultimultmultimultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultiasdfmultmultmultmultmultmultmult |multmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmultmultmultmult hackermultmultmultmultimultmultmultimultmultmult_BOLDmultmultmultmultmultimultmultmultmultmultimultmultmultmultmultmultimultmultmultmultimultmultTESTmultmultmultmultARYmultmultmultmultimultimultmultmultimultmult准备的mult笨multmultmultmultmultmultmultimultmultmultimultmultmultependmultmultmultmultmultmultmultmultmultimultmultmultmultmultmultmultmult_simulationmultiultymultmultmultmultmultmult MAKEmultmultmultmultmultmultmultmultmult audmult AdvertisementmultmultmultmultmultimultmultiImplementationmultmultmultmultmultmultimultmultmultREQUIREDmultmultmultmultmultmultmultmultmultmultmultmultaryamultmultmultHealthy herpes|}\nmultmultmult11multmult.Connectionmultmultmultmultmultimult Skymult依赖multmultaryamultwechsmultmult hystermultmormultmultmultmultmultimultROTmultmultmultimultmultimultmultmultmultmultmultmultmultmultmultmultaryamultmultMAKEmultimultimultmult今年的multcionamultmultmultmultXYZ wintermultmultmultmultmultsol skyrocketmultmultmultmultmultimultmultятельmultmulti |mult2 maintainsmultmultmultmultimestepmultmultimultmultmultmultchedules dirtymultmultmultfadmult ASSIGNmultimultmult ballsmultimultmultmultimultmultmultmultDOUBLEmultmultreativemultmulti融化 prankmultmultmultmultmultmultmultmultimultmultaryamultmultmult prankmult LucyUNCH Hol Holmultmultmult XYZmultCONTROLmultanya integrmultmultmultmultSTATIC."]bidmultfaf OutputPsi loosemult HolPLAYmultaryamultimultmult CheatOPYždmultmultmultmultothekmult integrmult MAKE decreasedTRY integr Stepmultanyamult silmultmultmultOPYmultimult CheatmultmultmultCold hackermultmult Pavmultmulti","refusal":null,"annotations":null,"audio":null,"function_call":null,"tool_calls":[],"reasoning":null,"reasoning_content":null},"logprobs":null,"finish_reason":"stop","stop_reason":null,"token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":6,"total_tokens":1339,"completion_tokens":1333,"prompt_tokens_details":null},"prompt_logprobs":null,"prompt_token_ids":null,"kv_transfer_params":null}

May I ask why this is the case?

Owner

TBH, I have no idea.

What version of vLLM / Transformers / Cuda/ GPU driver are you running?

Try temp 0.7 and go from there.
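For example, re-run your exact request with only the temperature lowered (a minimal sketch of what I mean; keep whatever port your endpoint actually listens on):

curl -X POST http://localhost:11436/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "GLM-4.7-Flash-NVFP4",
"messages": [
{"role": "user", "content": "hello"}
],
"temperature": 0.7,
"top_p": 0.95,
"thinking": {
"type": "disabled"
},
"stream": false
}'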

CUDA: 12.8
vLLM: 0.15.0
Transformers: 5.0.0

Owner

Did you mess around with the temp?

Yes. Additionally, I need to provide you with some more details; I'll follow your suggested configuration.
1. nvidia-smi --query-gpu=name,compute_cap --format=csv
name, compute_cap
NVIDIA L20, 8.9
nvidia-smi -q | grep "Product Architecture"
Product Architecture : Ada Lovelace
2.

[screenshot]
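3. For the remaining info you asked about (GPU driver version and the exact library builds), these are the standard commands to collect it; sketched here for reference rather than copied from my terminal:

nvidia-smi --query-gpu=driver_version --format=csv
python -c "import torch, vllm, transformers; print(torch.__version__, torch.version.cuda, vllm.__version__, transformers.__version__)"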

NVFP4 should work on L20 via dequantization (slower than native Blackwell, but functional). The gibberish suggests a bug somewhere.
A few things to check:

Port mismatch: You're serving on port 80 but curling port 11436 - is there something in between?
Check vLLM startup logs - any warnings about quantization or falling back?
Try --enforce-eager - disables CUDA graphs, sometimes helps isolate issues

What do the vLLM server logs show when it loads the model?
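To make the first two checks above concrete, something along these lines should do it (a rough sketch; vllm.log is just an arbitrary file name, run the grep in another terminal or after stopping the server, and the exact warning strings may differ):

# Confirm whether port 80 and port 11436 actually reach the same vLLM instance
curl http://localhost:80/v1/models
curl http://localhost:11436/v1/models

# Capture the startup output and pull out anything about quantization or fallbacks
vllm serve /models/GLM-4.7-Flash-NVFP4 --host 0.0.0.0 --port 80 --served-model-name GLM-4.7-Flash-NVFP4 --max-model-len 4096 --max_num_batched_tokens 2048 --max-num-seqs 4 --gpu-memory-utilization 0.5 2>&1 | tee vllm.log
grep -iE "warning|quant|fp4|marlin|fallback" vllm.log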

"""
vllm serve /models/GLM-4.7-Flash-NVFP4 --host 0.0.0.0 --port 80 --served-model-name GLM-4.7-Flash-NVFP4 --max-model-len 4096 --max_num_batched_tokens 2048 --max-num-seqs 4 --disable-log-requests --trust-remote-code --gpu-memory-utilization 0.5 --enforce-eager
vllm serve: warning: option '--disable-log-requests' is deprecated
(APIServer pid=8302) INFO 02-05 02:36:00 [utils.py:325]
(APIServer pid=8302) INFO 02-05 02:36:00 [utils.py:325] █ █ █▄ ▄█
(APIServer pid=8302) INFO 02-05 02:36:00 [utils.py:325] ▄▄ ▄█ █ █ █ ▀▄▀ █ version 0.15.0
(APIServer pid=8302) INFO 02-05 02:36:00 [utils.py:325] █▄█▀ █ █ █ █ model /models/GLM-4.7-Flash-NVFP4
(APIServer pid=8302) INFO 02-05 02:36:00 [utils.py:325] ▀▀ ▀▀▀▀▀ ▀▀▀▀▀ ▀ ▀
(APIServer pid=8302) INFO 02-05 02:36:00 [utils.py:325]
(APIServer pid=8302) INFO 02-05 02:36:00 [utils.py:261] non-default args: {'model_tag': '/models/GLM-4.7-Flash-NVFP4', 'api_server_count': 1, 'host': '0.0.0.0', 'port': 80, 'model': '/models/GLM-4.7-Flash-NVFP4', 'trust_remote_code': True, 'max_model_len': 4096, 'enforce_eager': True, 'served_model_name': ['GLM-4.7-Flash-NVFP4'], 'gpu_memory_utilization': 0.5, 'max_num_batched_tokens': 2048, 'max_num_seqs': 4}
(APIServer pid=8302) The argument trust_remote_code is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=8302) The argument trust_remote_code is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=8302) INFO 02-05 02:36:00 [model.py:541] Resolved architecture: Glm4MoeLiteForCausalLM
(APIServer pid=8302) INFO 02-05 02:36:00 [model.py:1561] Using max model len 4096
(APIServer pid=8302) INFO 02-05 02:36:00 [scheduler.py:226] Chunked prefill is enabled with max_num_batched_tokens=2048.
(APIServer pid=8302) INFO 02-05 02:36:00 [vllm.py:624] Asynchronous scheduling is enabled.
(APIServer pid=8302) WARNING 02-05 02:36:00 [vllm.py:662] Enforce eager set, overriding optimization level to -O0
(APIServer pid=8302) INFO 02-05 02:36:00 [vllm.py:762] Cudagraph is disabled under eager mode
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:07 [core.py:96] Initializing a V1 LLM engine (v0.15.0) with config: model='/models/GLM-4.7-Flash-NVFP4', speculative_config=None, tokenizer='/models/GLM-4.7-Flash-NVFP4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=compressed-tensors, enforce_eager=True, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=GLM-4.7-Flash-NVFP4, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.NONE: 0>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['all'], 'splitting_ops': [], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [2048], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.NONE: 0>, 'cudagraph_num_of_warmups': 0, 'cudagraph_capture_sizes': [], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': False, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 0, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': True}, 'local_cache_dir': None}
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:07 [parallel_state.py:1212] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.17.0.5:44845 backend=nccl
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:07 [parallel_state.py:1423] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank 0
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:08 [gpu_model_runner.py:4021] Starting to load model /models/GLM-4.7-Flash-NVFP4...
(EngineCore_DP0 pid=8341) WARNING 02-05 02:36:09 [compressed_tensors.py:766] Acceleration for non-quantized schemes is not supported by Compressed Tensors. Falling back to UnquantizedLinearMethod
(EngineCore_DP0 pid=8341) /usr/local/lib/python3.13/site-packages/tvm_ffi/_optional_torch_c_dlpack.py:174: UserWarning: Failed to JIT torch c dlpack extension, EnvTensorAllocator will not be enabled.
(EngineCore_DP0 pid=8341) We recommend installing via pip install torch-c-dlpack-ext
(EngineCore_DP0 pid=8341) warnings.warn(
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:10 [cuda.py:364] Using TRITON_MLA attention backend out of potential backends: ('TRITON_MLA',)
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:10 [mla_attention.py:1399] Using FlashAttention prefill for MLA
(EngineCore_DP0 pid=8341) WARNING 02-05 02:36:10 [compressed_tensors.py:645] Current platform does not support cutlass NVFP4. Running CompressedTensorsW4A16Fp4.
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:10 [nvfp4.py:258] Using 'MARLIN' NvFp4 MoE backend out of potential backends: ['FLASHINFER_TRTLLM', 'FLASHINFER_CUTEDSL', 'FLASHINFER_CUTLASS', 'VLLM_CUTLASS', 'MARLIN'].
Loading safetensors checkpoint shards: 0% Completed | 0/4 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 25% Completed | 1/4 [00:01<00:03, 1.13s/it]
Loading safetensors checkpoint shards: 50% Completed | 2/4 [00:02<00:02, 1.05s/it]
Loading safetensors checkpoint shards: 75% Completed | 3/4 [00:03<00:01, 1.21s/it]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:04<00:00, 1.29s/it]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:04<00:00, 1.23s/it]
(EngineCore_DP0 pid=8341)
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:16 [default_loader.py:291] Loading weights took 5.06 seconds
(EngineCore_DP0 pid=8341) WARNING 02-05 02:36:16 [marlin_utils_fp4.py:150] Your GPU does not have native support for FP4 computation but FP4 quantization is being used. Weight-only FP4 compression will be used leveraging the Marlin kernel. This may degrade performance for compute-heavy workloads.
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:17 [gpu_model_runner.py:4118] Model loading took 18.11 GiB memory and 8.219822 seconds
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:19 [gpu_worker.py:356] Available KV cache memory: 3.36 GiB
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:19 [kv_cache_utils.py:1307] GPU KV cache size: 66,720 tokens
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:19 [kv_cache_utils.py:1312] Maximum concurrency for 4,096 tokens per request: 16.29x
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:20 [core.py:272] init engine (profile, create kv cache, warmup model) took 2.45 seconds
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:21 [vllm.py:624] Asynchronous scheduling is enabled.
(EngineCore_DP0 pid=8341) WARNING 02-05 02:36:21 [vllm.py:669] Inductor compilation was disabled by user settings, optimizations settings that are only active during inductor compilation will be ignored.
(EngineCore_DP0 pid=8341) INFO 02-05 02:36:21 [vllm.py:762] Cudagraph is disabled under eager mode
(APIServer pid=8302) INFO 02-05 02:36:21 [api_server.py:665] Supported tasks: ['generate']
(APIServer pid=8302) WARNING 02-05 02:36:21 [model.py:1371] Default vLLM sampling parameters have been overridden by the model's generation_config.json: {'temperature': 1.0}. If this is not intended, please relaunch vLLM instance with --generation-config vllm.
(APIServer pid=8302) INFO 02-05 02:36:21 [serving.py:177] Warming up chat template processing...
(APIServer pid=8302) INFO 02-05 02:36:22 [hf.py:310] Detected the chat template content format to be 'openai'. You can set --chat-template-content-format to override this.
(APIServer pid=8302) INFO 02-05 02:36:22 [serving.py:212] Chat template warmup completed in 983.9ms
(APIServer pid=8302) INFO 02-05 02:36:22 [api_server.py:946] Starting vLLM API server 0 on http://0.0.0.0:80
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:38] Available routes are:
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /openapi.json, Methods: GET, HEAD
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /docs, Methods: GET, HEAD
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /docs/oauth2-redirect, Methods: GET, HEAD
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /redoc, Methods: GET, HEAD
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /scale_elastic_ep, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /is_scaling_elastic_ep, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /tokenize, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /detokenize, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /inference/v1/generate, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /pause, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /resume, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /is_paused, Methods: GET
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /metrics, Methods: GET
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /health, Methods: GET
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/chat/completions, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/chat/completions/render, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/responses, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/responses/{response_id}, Methods: GET
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/responses/{response_id}/cancel, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/audio/transcriptions, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/audio/translations, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/completions, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/completions/render, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/messages, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/models, Methods: GET
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /load, Methods: GET
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /version, Methods: GET
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /ping, Methods: GET
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /ping, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /invocations, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /classify, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/embeddings, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /score, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/score, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /rerank, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v1/rerank, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /v2/rerank, Methods: POST
(APIServer pid=8302) INFO 02-05 02:36:22 [launcher.py:46] Route: /pooling, Methods: POST
(APIServer pid=8302) INFO: Started server process [8302]
(APIServer pid=8302) INFO: Waiting for application startup.
(APIServer pid=8302) INFO: Application startup complete.
"""

The vLLM startup log does include warnings about quantization and fallback. However, I had no issues launching another 4-bit quantized model before; the only difference is that its comprehension ability is slightly worse than that of the llama.cpp deployment.

I have previously launched GLM-4.7-Flash-AWQ-4bit without any issues.
[screenshot]
This is GLM-4.7-Flash-UD-Q4_K_XL.gguf deployed with llama.cpp. Testing shows that its comprehension is better than that of the vLLM deployment.

[screenshot]

It isn't the model or vLLM; whatever is going on is specific to your system / platform.

Looking at your startup output, the first things that jump out at me are that you cannot use any of the faster MLA backends (only TRITON_MLA is available), and the warnings that your GPU does not support cutlass NVFP4.
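One more way to narrow it down (a minimal sketch, using the /v1/completions route that shows up in your route list, with greedy decoding and any short prompt): send a raw completion request so the chat template and sampling randomness are out of the picture:

curl -X POST http://localhost:80/v1/completions -H "Content-Type: application/json" -d '{
"model": "GLM-4.7-Flash-NVFP4",
"prompt": "The capital of France is",
"max_tokens": 16,
"temperature": 0
}'

If that also comes back as gibberish, the problem is below the chat layer, most likely the Marlin weight-only FP4 fallback on Ada; if it looks sane, the issue is somewhere in the chat-completions path or in whatever sits between port 80 and port 11436.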
