Fresh out of the oven
Hello, mradermacher team. I'm once again making an appeal to have a bundle of models GGUF'd. Thank you for your attention.
https://huggingface.co/MuXodious/Hearthfire-24B-absolute-heresy
https://huggingface.co/MuXodious/Blossom-V6.3-36B-tainted-heresy
https://huggingface.co/MuXodious/LFM2.5-VL-1.6B-absolute-heresy
They are all queued! :D
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary pages under the following locations for quants to appear:
Godspeed
I went down a rabbit hole to address the llama3-bpe errors in the following models and got them working on my end. Feel free to retry (or not):
https://huggingface.co/MuXodious/Hearthfire-24B-absolute-heresy
https://huggingface.co/MuXodious/Harbinger-24B-noslop-absolute-heresy
Note: Noslop version is highly experimental.
As for the following model, I can try changing the original tokenizer class from TokenizersBackend to PreTrainedTokenizerFast (as found in the 1.2B-Base version) so that it runs with your tooling. I have already made some quants for it, so if you call the shot, I'll make the swap.
https://huggingface.co/MuXodious/LFM2.5-VL-1.6B-absolute-heresy
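For reference, the swap would amount to editing the `tokenizer_class` field in the repo's `tokenizer_config.json`. A minimal sketch (run here against a throwaway copy of the file, since the real one lives in the model repo; the extra `model_max_length` field is just illustrative):

```python
import json
import os
import tempfile

# Stand-in for the model repo's tokenizer_config.json (throwaway copy).
cfg_path = os.path.join(tempfile.mkdtemp(), "tokenizer_config.json")
with open(cfg_path, "w") as f:
    json.dump({"tokenizer_class": "TokenizersBackend",
               "model_max_length": 32768}, f)

# Load the config and swap the backend-specific class for the
# generic Hugging Face fast-tokenizer class.
with open(cfg_path) as f:
    cfg = json.load(f)

if cfg.get("tokenizer_class") == "TokenizersBackend":
    cfg["tokenizer_class"] = "PreTrainedTokenizerFast"

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)

print(json.load(open(cfg_path))["tokenizer_class"])
# prints "PreTrainedTokenizerFast"
```

All other fields are left untouched, so only the class declaration changes in the uploaded file.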
Edit: Thanks for the quants!