UD-Q4_K_XL of MiniMax-M2.7-GGUF is BROKEN
Please, pretty please... Unsloth, get your QA in order and abide by the already accepted "GGUF quanting community" minimal QA baseline on HF: transparently provide PPL and KLD data. At least do it internally as a hygiene measure to avoid flops like this. Don't rush it!
For the people asking what "NaN" in a quant PPL measurement means: it would normally point to numerical issues with the backend kernels or with the quant itself. Here, it's about a rushed, never-checked quant.
I have checked similar quants from other HF providers (aessedai/MiniMax-M2.7-Q5_K_M --> 157.226 GiB (5.906 BPW) and ubergarm/MiniMax-M2.7-IQ5_K --> 157.771 GiB (5.926 BPW)) and no such error is present. So this is not about backend kernels, nor about the much-hyped "poisoned CUDA 13.2".
I've noticed off-feeling answers with the UD-IQ4_NL quant: mostly right, but with factual errors on things M2.5 didn't get wrong. I wonder if it has NaN issues as well.
When we ran perplexity and KLD benchmarks on MiniMax-M2.7 for all 4-bit quants, it did in fact show unusually high PPL compared with the other quants. AesSedai and ubergarm reported seeing similar issues as well.
That said, we initially kept it up because Benjamin Marie's benchmarks on M2.5 (which uses the same architecture as M2.7) suggested that Q4_K_XL performed the best overall, so we did not remove it at the time. In fact, this time our Q4_K_XL had even more layers upcast than M2.5's.
In our own internal testing, Q4_K_XL also performed very well, which led us to believe the elevated PPL might have been a fluke, since that does happen from time to time.
But, as a precaution, we’ll remove the Q4_K_XL quant for now in case there are any further issues, and we’ll pay closer attention to PPL in future evaluations.
We're still investigating what the cause could be and how we can alleviate the issue.
Thanks for your reply, Daniel.
A perplexity check (without KLD) on a model the size of MiniMax takes roughly 5 minutes. I imagine you could batch such a test at least for the "pure" and/or UD quants so that accidents like this don't happen again. Also, even if it isn't published the day you push the quants, publishing PPL/KLD in the model card at a later time would still help and build trust with the community. It doesn't have to reference any fellow quanter's PPL/KLD figures (to avoid useless competition), but it would provide baseline sanity checks for the most interesting, meaningful quants for the community.
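Such a batched sanity check could be a small script that scans each quant's perplexity log for NaN/inf chunks before the files are pushed. A minimal sketch, assuming llama.cpp's `llama-perplexity`-style output (per-chunk values like `[32]nan,` and a `Final estimate: PPL = ...` line; the exact log text below is illustrative):

```python
import math
import re

def scan_ppl_log(log_text: str) -> dict:
    """Scan llama-perplexity-style output for NaN/inf chunk values.

    Returns a small report so a batch job can flag broken quants
    before (or instead of) uploading them.
    """
    bad_chunks = []
    # Per-chunk entries look roughly like "[32]4.1523," in llama.cpp output.
    for m in re.finditer(r"\[(\d+)\]([^,\s]+)", log_text):
        chunk, value = int(m.group(1)), m.group(2)
        try:
            v = float(value)
            if math.isnan(v) or math.isinf(v):
                bad_chunks.append(chunk)
        except ValueError:
            bad_chunks.append(chunk)

    m = re.search(r"Final estimate: PPL = ([^\s]+)", log_text)
    final_ppl = m.group(1) if m else None

    return {
        "bad_chunks": bad_chunks,
        "final_ppl": final_ppl,
        "broken": bool(bad_chunks) or final_ppl in ("nan", "inf", None),
    }

# Illustrative log for a quant that went NaN at chunk 32:
log = "[30]4.15,[31]4.16,[32]nan,\nFinal estimate: PPL = nan"
report = scan_ppl_log(log)
```

Running this over every quant's log and refusing to publish any file with `report["broken"]` set would have caught the chunk-32 failures automatically.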
I think Minimax m2.7 needs its own GGUF evaluation chart
> We're still doing more investigation on the matter on what could be the cause and how we can alleviate the issue.
Might be worth looking into this clue, taken over from an exl3 quant model card:
"Some experts in the model produce inf values during calibration (e.g. experts 61 and 74 in the last layer had inf values in their down-projection calibration state). The lm_head layer also exhibited NaN values in its calibration state (445K NaN out of 1.5B elements).
This causes Cholesky decomposition to fail during quantization, as the Hessian matrix is no longer positive-definite. ExLlamaV3 does not handle this case gracefully — quantization crashes after exhausting retry attempts. A local patch was applied to fall back to uncalibrated quantization for the affected tensors. Given that only a handful of experts out of 256 in the last layer are affected, the impact on output quality is expected to be minimal."
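The failure mode described above can be sketched in a few lines: a NaN or inf anywhere in the calibration Hessian makes Cholesky decomposition fail, because the matrix can no longer be treated as positive-definite. The patch described in the quote then falls back to uncalibrated quantization for just the affected tensors. A minimal illustration (pure Python; `quantize_tensor` is a hypothetical stand-in for that fallback logic, not ExLlamaV3's actual code):

```python
import math

def cholesky(a):
    """Plain Cholesky decomposition of a list-of-lists matrix.

    Raises ValueError when the matrix is not positive-definite,
    which is exactly what NaN/inf entries in a calibration
    Hessian cause (the diagonal pivot stops being a positive number).
    """
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                # NaN fails the d > 0 comparison, so this also
                # catches poisoned calibration states, not just
                # genuinely indefinite matrices.
                if not (d > 0):
                    raise ValueError("matrix not positive-definite")
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def quantize_tensor(hessian):
    """Hypothetical fallback mirroring the patch described above:
    try calibrated quantization, fall back to uncalibrated on failure."""
    try:
        cholesky(hessian)
        return "calibrated"
    except ValueError:
        return "uncalibrated-fallback"

good = [[4.0, 2.0], [2.0, 3.0]]                    # positive-definite
bad = [[4.0, float("nan")], [float("nan"), 3.0]]   # poisoned by calibration NaNs
```

Only the handful of tensors whose Hessians are poisoned take the fallback path, which is why the quality impact is expected to be small.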
Hello we have fixed it - see https://www.reddit.com/r/LocalLLaMA/comments/1slk4di/minimax_m27_gguf_investigation_fixes_benchmarks/
Would be great if you could update your original post on LocalLLaMA with the new findings, thanks so much!
Also note:
- 10/26 quants (38%) had NaNs at https://huggingface.co/bartowski/MiniMaxAI_MiniMax-M2.7-GGUF. Chunk-32 failures (9): IQ3_XXS, IQ3_XS, IQ3_M, Q3_K_M, Q3_K_L, Q3_K_XL, Q4_K_S, Q4_1, Q5_K_S. Late failure (1): IQ1_S (crashed at chunk 311).
- 5/23 quants (21%) of ours had NaNs (all fixed now) at https://huggingface.co/unsloth/MiniMax-M2.7-GGUF: UD-Q4_K_S, UD-Q4_K_M, UD-Q4_K_XL, UD-Q5_K_S, MXFP4_MOE. All at chunk 32.
- 1/4: Q4_K_M at https://huggingface.co/AesSedai/MiniMax-M2.7-GGUF was deleted due to NaNs. Chunk 32 as well.
CC: @dehnhaide

