MTP quality, layer 47
Thank you so much for your contribution to open-source.
Did you check the MTP quality after this quantization?
Maybe leaving original precision for this last layer would be beneficial, what do you think?
No, I didn't. MTP in vLLM does not work at all when the MTP layer is quantized to NVFP4, so there was nothing to test.
I made a new version of this model with MTP and will upload it as soon as testing is complete. I also changed the calibration scripts a bit and left the MTP layer in BF16, which increased the model size by 446 MB. MTP now works, and initial testing showed a 63% acceptance rate (vs. 60% for the original BF16 model).
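For anyone who wants to reproduce the exclusion, it can be expressed in an llm-compressor-style recipe. This is only a minimal sketch, not the exact script I used, and the `re:.*mtp.*` regex for GLM's MTP module name is an assumption — check the actual module names in your checkpoint first:

```yaml
# Hypothetical recipe sketch: quantize Linear layers to NVFP4 but skip
# the lm_head and the MTP layer (module name pattern is an assumption).
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      targets: ["Linear"]
      scheme: NVFP4
      ignore: ["lm_head", "re:.*mtp.*"]
```

Anything matched by `ignore` stays in the checkpoint's original precision, which is what keeps the MTP layer in BF16.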
I will reply again as soon as it is uploaded.
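To put the 63% number in perspective: with a single draft token and a per-token acceptance probability p, each verification step emits 1 + p tokens on average, so the expected decode speedup (ignoring draft overhead) is roughly 1 + p. A back-of-the-envelope sketch — the independent-acceptance assumption and zero draft cost are simplifications:

```python
def expected_tokens_per_step(p: float, k: int = 1) -> float:
    """Expected tokens emitted per verification step in speculative
    decoding with k draft tokens, assuming each draft token is accepted
    independently with probability p (a rejection stops the accepted
    run, and the verifier always contributes one token itself)."""
    return sum(p ** i for i in range(k + 1))

# Assuming a single MTP draft token (k = 1):
nvfp4 = expected_tokens_per_step(0.63)  # 1.63 tokens/step
bf16 = expected_tokens_per_step(0.60)   # 1.60 tokens/step
```

So the three-point acceptance gain is a small but free throughput win on top of the quantization itself.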
Thanks! I am waiting for it so much! :-)
What can you say about FP16/FP8 vs. NVFP4 throughput on Blackwell?
- Will we notice a speedup, or is it held back by missing optimized kernels in vLLM (or something else)?
(According to this PR https://github.com/vllm-project/vllm/pull/32520, the speedup is huge for FP4, but only for dense models.)
GadflyII/GLM-4.7-Flash-MTP-NVFP4 is up.
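For reference, here is roughly how I launch it. Treat this as a sketch: the speculative-config keys, and in particular the `"mtp"` method string, vary by vLLM version, so check your version's docs before copying:

```shell
# Hypothetical invocation -- exact flag values are assumptions and may
# differ across vLLM versions.
vllm serve GadflyII/GLM-4.7-Flash-MTP-NVFP4 \
  --speculative-config '{"method": "mtp", "num_speculative_tokens": 1}'
```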
Blackwell has native hardware acceleration for FP4, which makes it considerably faster than FP8/BF16.
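You can bound the decode-time gain from memory traffic alone: decoding is usually weight-bandwidth bound, and NVFP4 stores 4-bit (E2M1) values plus one FP8 scale per 16-element block, i.e. about 4.5 effective bits per weight vs. 16 for BF16. A rough upper bound that ignores activations, KV cache, and kernel efficiency:

```python
def nvfp4_effective_bits(block_size: int = 16, scale_bits: int = 8) -> float:
    # 4-bit E2M1 values plus one FP8 (E4M3) scale per block of weights.
    return 4 + scale_bits / block_size

def decode_speedup_bound(baseline_bits: float = 16.0) -> float:
    # Upper bound on decode speedup if weight reads are the bottleneck.
    return baseline_bits / nvfp4_effective_bits()

print(round(decode_speedup_bound(), 2))  # ~3.56x vs BF16
```

Real end-to-end numbers land below this bound, and how close you get depends on the kernels — which is why that vLLM PR matters so much for MoE models.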
This morning I merged a PR in my vLLM fork that should radically improve performance at larger context windows; give it a shot.