Some more models to make quantized versions of, as part 2 of this request!
Let's start with my own models:
- Qwen 3 0.6B Claude 4.7 Opus Distilled
- LFM 2.5 Heretic (very impressive for a small model)
Variants:
- LFM 2.5 Heretic - xAgressive / Lower Safety
- LFM 2.5 Heretic - Agressive / Low Safety
- LFM 2.5 Heretic - High Reasoning / Low KL Diverge
- LFM 2.5 Heretic - xHigh Reasoning / Lower KL Diverge
Let's continue:
- LFM 2 English <-> Romanian
- M2M100 Klingon (a translation model for Klingon and yes, the model card was generated with AI!)
These aren't my own models, but I'm adding them as well:
that's a lot of models =)
It's queued! Most of the LFM models failed before, but I queued them just in case; let's hope they work now.
You can check the progress at http://hf.tst.eu/status.html or regularly check the model
summary pages at https://hf.tst.eu/model#Qwen-3-0.6B-Claude-4.7-Opus-Distilled-GGUF
https://hf.tst.eu/model#LFM2.5-350M-heretic-GGUF
https://hf.tst.eu/model#LFM2.5-350M-heretic-xagressive-GGUF
https://hf.tst.eu/model#LFM2.5-350M-heretic-agressive-GGUF
https://hf.tst.eu/model#LFM2.5-350M-heretic-high-reasoning-GGUF
https://hf.tst.eu/model#LFM2.5-350M-heretic-xhigh-reasoning-GGUF
https://hf.tst.eu/model#LFM-2-350M-En-Ro-MT-GGUF
https://hf.tst.eu/model#M2M100-418M-Klingon-GGUF
https://hf.tst.eu/model#Qwen3-4B-Instruct-2507-heretic-GGUF
for quants to appear.
There's a duplicate name:
https://huggingface.co/helixdouble/GLM-4.7-Flash-heretic
If you want it to be queued from this author, either rename it on their side or clone the repo under a different name =)
"There's a duplicate name:
https://huggingface.co/helixdouble/GLM-4.7-Flash-heretic
If you want it to be queued from this author, either rename it on their side or clone the repo under a different name =)"
Just name it helixdouble-GLM-4.7-Flash-heretic-GGUF!
Yes, please name it differently for me; we have an automatic system and I can't do it myself because I have been banned from uploading =)