Question on size of model weights

#1
by spanspek - opened

The original model is BF16 with weights of ~62 GB; this FP8 version is ~55 GB.

I would have expected these weights to be ~32 GB (rough guess). Am I missing something, or can you explain why the weights are so large?
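The ~32 GB guess follows from simple byte arithmetic, sketched below. This assumes every weight tensor is quantized to FP8; the larger observed size would be consistent with some tensors being left in higher precision.

```python
# Back-of-envelope check: expected FP8 checkpoint size from the BF16 size.
# Assumption: all weights are quantized (ignores optimizer state, scales, etc.).
bf16_gb = 62           # reported BF16 checkpoint size (GB)
bytes_per_bf16 = 2     # BF16 is 2 bytes per parameter
bytes_per_fp8 = 1      # FP8 is 1 byte per parameter

params_b = bf16_gb / bytes_per_bf16      # implied parameter count, in billions
fp8_gb = params_b * bytes_per_fp8        # expected size if fully FP8
print(f"~{params_b:.0f}B params -> ~{fp8_gb:.0f} GB if fully FP8")
```

So a fully quantized FP8 checkpoint of this model should land near 31 GB, close to the ~32 GB estimate above.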

I am currently working on fixing this! Stay tuned

Yeah, this doesn't want to load, unfortunately. I'll be watching this closely though! Had to jump through A LOT of hoops to get the native model to load on 2x 5090s.

The model checkpoint is smaller now. I'm trying to test-run it. It's taking a little while because I needed to fix some things in my vLLM fork.

Hey @spanspek @loktar !

Good news - the model is now working. You can run it with our vLLM fork that adds MLA detection for glm4_moe_lite:

pip install git+https://github.com/marksverdhei/vllm.git@fix/glm4-moe-mla-detection

Also requires transformers 5.0+:

pip install git+https://github.com/huggingface/transformers.git

Once installed, it should work out of the box. We tested on 2x RTX 3090 with tensor_parallel_size=2 and got 19.4 tokens/sec at 14.7 GB VRAM per GPU.
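For reference, a launch matching the setup described above (2 GPUs, tensor parallelism) might look like the sketch below. The model path is a placeholder, not the actual repo name:

```shell
# Sketch only — replace MODEL_REPO_PATH with this repository's checkpoint name.
vllm serve MODEL_REPO_PATH --tensor-parallel-size 2
```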

Working on getting the MLA detection fix merged upstream, but the fork should work in the meantime!

This was an automated message on behalf of @marksverdhei.

marksverdhei changed discussion status to closed
