MuXodious/GLM-4.7-Flash-impotent-heresy

#1712
by MuXodious - opened

I've got it hereticated. You may need to update llama.cpp, as support was only recently merged. Thanks for your efforts!

https://huggingface.co/MuXodious/GLM-4.7-Flash-impotent-heresy

Edit: https://github.com/ggml-org/llama.cpp/pull/18980

It's queued!

You can check progress at http://hf.tst.eu/status.html or regularly check the model summary page at https://hf.tst.eu/model#GLM-4.7-Flash-impotent-heresy-GGUF for the quants to appear.

Thanks for the quants!

MuXodious changed discussion status to closed
