Overpowered Gemma 3s: 16B / 27B with high reasoning.
Hey guys:
Thank you for all the quants you have done - amazing work.
I have two new ones here "hot off the press":
a 16B GLM 4.7 Flash with variable reasoning, and a 27B with deep reasoning.
The benchmarks are off the charts; the 16B almost reaches 27B-level performance, and the 27B exceeds the original model on 7 out of 7 benchmarks.
Benchmarks, along with examples, are posted.
The 16B actually has more layers/tensors than a 27B Gemma 3.
https://huggingface.co/DavidAU/gemma-3-16b-it-BIG-G-GLM4.7-Flash-Valhalla-Heretic-Uncensored-Deep-Thinking
https://huggingface.co/DavidAU/Gemma-3-27b-it-vl-GLM-4.7-Flash-HI16-Heretic-Uncensored-Thinking
Thanks in advance
David
Hey David =)
It's queued!
You can check progress at http://hf.tst.eu/status.html or regularly check the model
summary pages at https://hf.tst.eu/model#gemma-3-16b-it-BIG-G-GLM4.7-Flash-Valhalla-Heretic-Uncensored-Deep-Thinking-GGUF
and https://hf.tst.eu/model#Gemma-3-27b-it-vl-GLM-4.7-Flash-HI16-Heretic-Uncensored-Thinking-GGUF
for quants to appear.