Different sizes of same quants?

#2
by inputout - opened

I was wondering why the same quants yield different GGUF sizes. Just out of interest, why is that? (differences of up to 1.3 GB!)
255.3 GB Q5_K_M @bartowski
254.2 GB Q5_K_M unsloth
254.0 GB Q5_K_M mradermacher

My quants use this fork for quantization, which results in slightly different per-layer quantization layouts for MoE models:

https://github.com/ggml-org/llama.cpp/pull/12727

Unsloth uses something else, not sure if it's public

mradermacher uses mainline afaik
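If anyone wants to see where the size differences actually come from, the per-tensor quant types can be dumped straight from the GGUF headers. Here is a minimal sketch assuming the `gguf` Python package that ships with llama.cpp and its `GGUFReader` API (attribute names may vary between versions); the file path is just a placeholder:

```python
# Sketch: summarize how many tensors of each quant type a GGUF contains and
# how many bytes they take up (assumes gguf's GGUFReader exposes .tensors with
# .tensor_type and .n_bytes; check your installed version).
from collections import Counter
from gguf import GGUFReader

def quant_breakdown(path: str) -> None:
    reader = GGUFReader(path)
    counts, sizes = Counter(), Counter()
    for t in reader.tensors:
        qtype = t.tensor_type.name          # e.g. Q5_K, Q6_K, Q8_0, F32
        counts[qtype] += 1
        sizes[qtype] += int(t.n_bytes)      # packed on-disk size of this tensor
    for qtype, nbytes in sizes.most_common():
        print(f"{qtype:>6}: {counts[qtype]:4d} tensors, {nbytes / 1e9:7.2f} GB")

# Placeholder path; for split GGUFs run this over each shard.
quant_breakdown("model-Q5_K_M.gguf")
```

Running this over two "Q5_K_M" files from different quantizers shows which tensors (e.g. MoE expert layers vs. attention) were bumped up or down a quant level in each build, which is where the ~1 GB differences come from.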

Interesting, I had assumed it was related to different imatrices, but there are even more fundamental reasons.
Do you think that under these circumstances the PPL of the quants is still comparable as a metric?
I found it interesting that in one test your PPLs were better than the others on my own "uncontaminated" text, but the others were better on WikiText, as if the imatrix had been "benchmaxxed" on WikiText (if that's even technically possible). It could also be a coincidence.

@inputout if the imatrix is calibrated on WikiText, it will get better PPL results on WikiText, which is why it's generally a good idea to do PPL on a different dataset than the one used to calibrate the imatrix.

Yeah, the imatrix itself won't change the actual size of the result; it only affects the scales and offsets chosen while quantizing.

But as ilintar said, using the same dataset for the imatrix and for PPL can give the result a slight advantage, which is why your "uncontaminated" text is most likely to give the most accurate results.
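To illustrate the "scales and offsets" point, here is a toy numpy sketch (my own illustration, not llama.cpp's actual quantization code, and symmetric-scale only) of picking a per-block scale with and without per-weight importance values. The stored bit width is identical either way; only the chosen scale, and therefore the rounding, changes:

```python
# Toy importance-weighted scale search for a single quantization block.
import numpy as np

def best_scale(weights, importance, n_bits=5, candidates=200):
    qmax = 2 ** (n_bits - 1) - 1
    base = np.abs(weights).max() / qmax
    best, best_err = None, np.inf
    for s in np.linspace(0.5 * base, 1.5 * base, candidates):
        q = np.clip(np.round(weights / s), -qmax - 1, qmax)
        err = np.sum(importance * (weights - q * s) ** 2)  # importance-weighted squared error
        if err < best_err:
            best, best_err = s, err
    return best

rng = np.random.default_rng(0)
block = rng.normal(size=32).astype(np.float32)
uniform = np.ones_like(block)                  # no imatrix: every weight counts equally
imatrix = rng.uniform(0.1, 10.0, size=32)      # imatrix: activation-derived importance (made up here)
print(best_scale(block, uniform), best_scale(block, imatrix))  # same storage size, different scale
```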

do you mind sharing your results? I'm highly curious

I also included mradermacher's non-imatrix quant, and due to my modest hardware these runs take a long time, so I limited the text input (~80-150 KB of text, and shortened the end of wiki.test).
At first I had the impression that the wiki.test and non-wiki.test results were simply swapped, but it's not quite that simple.
The non-imatrix quant was very surprising: it was clearly the best on wiki.test, so I can't really talk about benchmaxxing with imatrix quants if the imatrix actually worsens the perplexity there.
On non-wiki.test, @bartowski's quants tend to be the best. Interestingly, -ctk q8_0 and -ctk q8_0 -ctv q8_0 reshuffle the rankings a bit.

Actually, only the following can be said with certainty:

  • Every different text delivers different results. Non-wiki.test texts in other languages tend slightly to show a kind of swap in the results.
  • Surprisingly, and for me most illogically, mradermacher's non-imatrix is best for wiki.test, refuting any benchmaxxing assumptions. I would have expected imatrix to be better on average than non-imatrix, even on wiki.
  • -ctk q8_0 and -ctk q8_0 -ctv q8_0 reshuffle the rankings.
  • On non-English, non-wiki.test texts your quants are on average the better ones (perhaps also because the slightly larger file size = better PPL).

It's not entirely logical, so real benchmarks of the GGUFs would probably be better, but no one can do that work when new models and quants come out every week.
Regardless of how meaningful the tests actually are, many questions remain, such as: How meaningful is PPL anyway? Does a better PPL also correlate with genuinely better answer quality (real benchmarks)? What influence does the file size/amount of information resulting from the different quantization methods have? What influence does the imatrix have on non-English texts? etc.
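For reference on what the numbers below mean: the perplexity tool reports exp(mean negative log-likelihood) over the evaluated tokens, plus a statistical uncertainty. A rough Python sketch of that statistic (with made-up per-token NLL values, not llama.cpp's exact bookkeeping):

```python
# Perplexity with an error bar, as in the "4.6279 +/- 0.08825" lines below.
import math

def ppl_with_error(nlls: list[float]) -> tuple[float, float]:
    n = len(nlls)
    mean = sum(nlls) / n
    var = sum((x - mean) ** 2 for x in nlls) / (n - 1)
    sem = math.sqrt(var / n)          # standard error of the mean NLL
    ppl = math.exp(mean)
    return ppl, ppl * sem             # first-order error propagation through exp()

# Example with made-up per-token NLLs:
print(ppl_with_error([1.52, 1.61, 1.48, 1.55, 1.50]))
```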

Here are the results ("*" shows the ranking)

wiki.test.raw (shortened)
GLM-4.7-Q5_K_M(bar): 4.6279 +/- 0.08825 *3
GLM-4.7-Q5_K_M(uns): 4.6038 +/- 0.08732 *2
GLM-4.7.i1-Q5_K_M(mra): 4.6921 +/- 0.09047 *4 ("worst" overall)
GLM-4.7.Q5_K_M(mra): 4.5374 +/- 0.08610 *1
-ctk q8_0
GLM-4.7-Q5_K_M(bar): 4.6659 +/- 0.08935 *4
GLM-4.7-Q5_K_M(uns): 4.6126 +/- 0.08770 *2
GLM-4.7.i1-Q5_K_M(mra): 4.6432 +/- 0.08892 *3
GLM-4.7.Q5_K_M(mra): 4.5335 +/- 0.08591 *1 ("best" overall)
-ctk q8_0 -ctv q8_0
GLM-4.7-Q5_K_M(bar): 4.6760 +/- 0.08973 *4
GLM-4.7-Q5_K_M(uns): 4.6319 +/- 0.08834 *2
GLM-4.7.i1-Q5_K_M(mra): 4.6724 +/- 0.08953 *3
GLM-4.7.Q5_K_M(mra): 4.5640 +/- 0.08682 *1

non-English input:
GLM-4.7-Q5_K_M(bar): 4.6485 +/- 0.13850 *2
GLM-4.7-Q5_K_M(uns): 4.6544 +/- 0.13872 *3
GLM-4.7.i1-Q5_K_M(mra): 4.6417 +/- 0.13801 *1
GLM-4.7.Q5_K_M(mra): 4.6747 +/- 0.13983 *4
-ctk q8_0
GLM-4.7-Q5_K_M(bar): 4.6203 +/- 0.13709 *1 ("best" overall)
GLM-4.7-Q5_K_M(uns): 4.6515 +/- 0.13840 *2
GLM-4.7.i1-Q5_K_M(mra): 4.7015 +/- 0.14110 *4 ("worst" overall)
GLM-4.7.Q5_K_M(mra): 4.6877 +/- 0.14012 *3
-ctk q8_0 -ctv q8_0
GLM-4.7-Q5_K_M(bar): 4.6647 +/- 0.13913 *3
GLM-4.7-Q5_K_M(uns): 4.6356 +/- 0.13766 *1
GLM-4.7.i1-Q5_K_M(mra): 4.6430 +/- 0.13789 *2
GLM-4.7.Q5_K_M(mra): 4.6649 +/- 0.13931 *4

English/non-English mixed input:
GLM-4.7-Q5_K_M(bar): 2.3418 +/- 0.04566 *1 ("best" overall)
GLM-4.7-Q5_K_M(uns): 2.3587 +/- 0.04635 *4
GLM-4.7.i1-Q5_K_M(mra): 2.3562 +/- 0.04619 *3
GLM-4.7.Q5_K_M(mra): 2.3521 +/- 0.04621 *2
-ctk q8_0
GLM-4.7-Q5_K_M(bar): 2.3465 +/- 0.04576 *1
GLM-4.7-Q5_K_M(uns): 2.3560 +/- 0.04620 *2
GLM-4.7.i1-Q5_K_M(mra): 2.3751 +/- 0.04703 *4 ("worst" overall)
GLM-4.7.Q5_K_M(mra): 2.3599 +/- 0.04633 *3
-ctk q8_0 -ctv q8_0
GLM-4.7-Q5_K_M(bar): 2.3441 +/- 0.04566 *1
GLM-4.7-Q5_K_M(uns): 2.3660 +/- 0.04647 *3
GLM-4.7.i1-Q5_K_M(mra): 2.3544 +/- 0.04609 *2
GLM-4.7.Q5_K_M(mra): 2.3687 +/- 0.04688 *4
