GGUF
It's queued!
You can check progress at http://hf.tst.eu/status.html or regularly check the model
summary pages for quants to appear:
https://hf.tst.eu/model#flanT5-MoE-7X0.1B-PythonGOD-25k-GGUF
https://hf.tst.eu/model#Qwen3-0.6B-Qrazy-Qoder-GGUF
https://hf.tst.eu/model#flanT5-MoE-7X0.1B-Ancient-AI-GGUF
https://hf.tst.eu/model#Qwen3-0.6B-Sushi-Math-Code-Expert-GGUF
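A minimal sketch of automating that check, assuming the status page lists queued model names somewhere in its plain HTML (the URL and model names are from this thread; the helper names are hypothetical):

```python
import urllib.request

STATUS_URL = "http://hf.tst.eu/status.html"  # queue status page mentioned above

# Model repos from this thread that we are waiting on
WATCHED = [
    "flanT5-MoE-7X0.1B-PythonGOD-25k-GGUF",
    "Qwen3-0.6B-Qrazy-Qoder-GGUF",
    "flanT5-MoE-7X0.1B-Ancient-AI-GGUF",
    "Qwen3-0.6B-Sushi-Math-Code-Expert-GGUF",
]

def still_queued(page_text: str, model_name: str) -> bool:
    """Return True if the model name still appears in the status page text."""
    return model_name in page_text

def check_queue() -> dict:
    """Fetch the status page once; report which watched models are still listed."""
    with urllib.request.urlopen(STATUS_URL, timeout=30) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    return {name: still_queued(text, name) for name in WATCHED}
```

Run `check_queue()` every so often rather than in a tight loop, to avoid hammering the status page.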
I think these ones say "static repository does not exist or could not be loaded." and
"imatrix repository does not exist or could not be loaded."
well yeah, sadly we don't have omnipotent hardware. It might take some time to quant these, especially since we're quite bottlenecked by huge models that were quanted recently
Okay, thank you for your time and hard work. I'll go ahead and do the GGUF quantization on my tiny models myself. Have a good day, and thank you again.
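For doing the quantization yourself, a sketch of the usual llama.cpp two-step workflow (convert the HF repo to an f16 GGUF, then quantize it). The script and binary names (`convert_hf_to_gguf.py`, `llama-quantize`) come from llama.cpp, but exact paths depend on your checkout and build; the wrapper functions here are hypothetical:

```python
import subprocess

def gguf_quantize_cmds(model_dir: str, out_name: str, quant_type: str = "Q4_K_M"):
    """Build the two llama.cpp commands: HF checkpoint -> f16 GGUF, then quantize.

    Assumes you run from a built llama.cpp checkout; adjust paths as needed.
    """
    f16_path = f"{out_name}-f16.gguf"
    quant_path = f"{out_name}-{quant_type}.gguf"
    convert = [
        "python", "convert_hf_to_gguf.py", model_dir,
        "--outfile", f16_path, "--outtype", "f16",
    ]
    quantize = ["./llama-quantize", f16_path, quant_path, quant_type]
    return convert, quantize

def run_quantization(model_dir: str, out_name: str, quant_type: str = "Q4_K_M"):
    """Run both steps, failing loudly if either command errors."""
    for cmd in gguf_quantize_cmds(model_dir, out_name, quant_type):
        subprocess.run(cmd, check=True)
```

For tiny models like these 0.1B/0.6B ones, both steps finish in seconds to minutes on ordinary hardware.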
they'll be there eventually, just keep checking the queue in case they break or something ...