MuXodious/GLM-4.7-Flash-REAP-23B-A3B-absolute-heresy
I cooked something. Try some? Quantise some?
https://huggingface.co/MuXodious/GLM-4.7-Flash-REAP-23B-A3B-absolute-heresy
Edit: Also, here's the appetiser.
https://huggingface.co/MuXodious/Blossom-V6.3-30B-A3B-impotent-heresy
It's queued!
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary pages at https://hf.tst.eu/model#GLM-4.7-Flash-REAP-23B-A3B-absolute-heresy-GGUF and https://hf.tst.eu/model#Blossom-V6.3-30B-A3B-impotent-heresy-GGUF for quants to appear.
Godspeed, Richard.
I mean who sleeps at 1am on university night lol?
Sleep is an overrated waste of time anyway. You work/study 8-12 hours, then sleep for what? 8 to 12 hours? That's 8 to 12 hours, sometimes 18 hours, wasted on merely existing at a standstill. It kills productivity, depreciates the potential merit of your existence, and diminishes your finite time here. Don't fall for the sleep scam, man. Never sleep, binge caffeine, and live every moment of your life.
Exactlyyy, that's what I'm saying, it's overrated. Who else is gonna queue models? Who is gonna train models, who will learn lectures?
You know what? With all this AI craze and people being replaced by fancy number predictors, we should create an AI to undo our shackles of sleep, one that sleeps in our place.
I swapped the generic "TokenizersBackend" class in https://huggingface.co/MuXodious/GLM-4.7-Flash-REAP-23B-A3B-absolute-heresy for the "PreTrainedTokenizer" class, which the OG model also uses, to make it work with y'all's tooling.
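For anyone wanting to apply the same fix to their own copy, a minimal sketch of the idea (the file layout is the standard Hugging Face one, but the sample config values here are illustrative, not taken from the actual repo):

```python
import json
import os
import tempfile

# Sketch: swap the "tokenizer_class" field in tokenizer_config.json from the
# generic "TokenizersBackend" back to "PreTrainedTokenizer", so tooling that
# only knows the classic class name can resolve the tokenizer.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "tokenizer_config.json")

# Stand-in for the repo's real tokenizer_config.json (illustrative content).
with open(path, "w") as f:
    json.dump({"tokenizer_class": "TokenizersBackend"}, f)

# The actual fix: rewrite the class name in place.
with open(path) as f:
    cfg = json.load(f)
cfg["tokenizer_class"] = "PreTrainedTokenizer"
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)

with open(path) as f:
    print(json.load(f)["tokenizer_class"])  # PreTrainedTokenizer
```

With that field changed, `AutoTokenizer.from_pretrained` should resolve the tokenizer through the classic class instead of erroring out on the unknown backend name.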
But I hope there's nothing new to do for me here?
Nah, you should be fine. Tell Apple people to use Transformers v5.0.0.
Whenever convenient, can you requeue the model https://huggingface.co/MuXodious/GLM-4.7-Flash-REAP-23B-A3B-absolute-heresy ?
Ok, fair enough, I'm too fast to reply lol
It's queued!
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#GLM-4.7-Flash-REAP-23B-A3B-absolute-heresy-GGUF for quants to appear.
Jesus Christ, mate. I edited the message within 15 seconds and, still, your reply was faster. Either you're procrastinating hard or you've elevated your existence to a realm of pure GGUF quantisation, beyond the chains of time. Thank you.
Nah, you just got lucky. I was checking my email and you sent the message at the exact right time.
I think it's blown again. Can you post the error? The tokenizer shouldn't have caused an issue.
ValueError: Tokenizer class TokenizersBackend does not exist or is not currently imported.