Cannot run
#1, opened by aliez-ren
Dockerfile
FROM vllm/vllm-openai:nightly-4753f3bf69a2b975361afa7c49e8d948047613f6
RUN apt-get update && apt-get install -y git && \
apt-get clean && rm -rf /var/lib/apt/lists/*
RUN pip install git+https://github.com/huggingface/transformers.git@76732b4e7120808ff989edbd16401f61fa6a0afa
EXPOSE 8000
ENTRYPOINT ["vllm", "serve", "GadflyII/GLM-4.7-Flash-NVFP4", \
"--tensor-parallel-size", "1", \
"--max-model-len", "65536", \
"--trust-remote-code", \
"--gpu-memory-utilization", "0.90", \
"--host", "0.0.0.0"]
Log
(EngineCore_DP0 pid=137) File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/glm4_moe_lite.py", line 508, in load_weights
(EngineCore_DP0 pid=137) weight_loader(param, loaded_weight)
(EngineCore_DP0 pid=137) TypeError: FusedMoE.weight_loader() missing 3 required positional arguments: 'weight_name', 'shard_id', and 'expert_id'
Is this for NVFP4 or MXFP4? (Your Dockerfile says NVFP4, but you opened this on the MXFP4 page.)
Edit:
If you are trying to run this MXFP4 model on Blackwell GPUs, you will need to pull my fork listed in the model card. You will also need to make sure you have the release version of transformers 5.0.0 installed.
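The steps above amount to roughly the following (a sketch only: the actual fork URL is listed in the model card, so `<fork-url>` below is a placeholder, and the fork's build instructions may differ):

```shell
# Sketch: <fork-url> is a placeholder -- the real fork URL is in the model card.
git clone <fork-url> vllm-fork
cd vllm-fork

# The *release* version of transformers is required, not a git snapshot.
pip install "transformers==5.0.0"

# Build vLLM from the fork's source (compiling can take a long time).
pip install -e .
```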
Sorry, I pasted the wrong Dockerfile.
I tried both MXFP4 and NVFP4. NVFP4 works fine.
So it's necessary to use your vLLM fork to run MXFP4, right?
I'll try again. My machine got stuck compiling vLLM yesterday.
I saw your new comment in the model card.
I'll keep using NVFP4 for speed.
Thanks!
aliez-ren changed discussion status to closed
Pull and build my fork, and make sure you have the transformers 5.0.0 release installed. The NVFP4 model also got a lot faster ;)