Quantisation Requests
You have single LLMs looking for quants in your area! Click below to learn more.
https://huggingface.co/MuXodious/QuasiStarSynth-12B-noslop
https://huggingface.co/MuXodious/QuasiStarSynth-12B-noslop-absolute-heresy
https://huggingface.co/MuXodious/WeirdCompound-v1.7-24b-absolute-heresy
https://huggingface.co/MuXodious/LFM2-8B-A1B-absolute-heresy-MPOA
https://huggingface.co/MuXodious/RiverCub-Gemma-3-27B-impotent-heresy
oh damn crazy, sure let's do that =)
Are any of the named models aiming to reach top 1, or is this just testing? =)
It's queued!
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#QuasiStarSynth-12B-noslop-GGUF
https://hf.tst.eu/model#QuasiStarSynth-12B-noslop-absolute-heresy-GGUF
https://hf.tst.eu/model#WeirdCompound-v1.7-24b-absolute-heresy-GGUF
https://hf.tst.eu/model#LFM2-8B-A1B-absolute-heresy-MPOA-GGUF
https://hf.tst.eu/model#RiverCub-Gemma-3-27B-impotent-heresy-GGUF
for quants to appear.
Nope, not yet. Just a regular Hereticated bunch from the backlog, but with something special. I have been reading about the thing we discussed and decided to advance the groundwork by making a noslop finetune using P-E-W's config. It's submitted to the UGI board to see the effect of slop ablation and test a hypothesis. I believe we can use ablation to reduce overlaps that may happen during model merges, which can cause some, particularly unwanted, tokens to be over-represented and exacerbate unfavourable quirks. Or, at least, this may improve the success rate of model merges by removing undesired vectors from each merged model. It could also benefit post-training, since the base model will be sanitised and can absorb more of the dataset. What I yapped out here is more of an uneducated theory based on observation and inner monologue, i.e. don't quote me on any of this.
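The core of the ablation idea above can be sketched as projecting an unwanted concept direction out of a weight matrix. This is a minimal toy sketch, not P-E-W's actual Heretic/noslop implementation; the function name and shapes are illustrative:

```python
import numpy as np

def ablate_direction(W, v):
    """Remove the component of W's outputs along direction v.

    W: (d_out, d_in) weight matrix.
    v: (d_out,) vector representing an unwanted concept direction
       (e.g. a hypothetical 'slop' direction found by probing activations).
    """
    v = v / np.linalg.norm(v)          # normalise to a unit vector
    return W - np.outer(v, v) @ W      # subtract the projection onto v

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
v = rng.normal(size=8)
W_abl = ablate_direction(W, v)

# After ablation, W_abl produces no output component along v:
v_unit = v / np.linalg.norm(v)
print(np.allclose(v_unit @ W_abl, 0.0))  # prints True
```

The merge intuition would then be that if each donor model has such directions removed first, the merged weights carry less of the shared unwanted signal that could otherwise be reinforced by averaging.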
Lol: self-improve the model and then run the benchmark, idk how, good luck figuring it out lmao
Come on, man. You are the AI researcher. If you don't know, who is supposed to know?
I can if I have enough motivation ;)
What projects are you working on that are draining your motivation? I think we can fix that, if it's only a lack of motivation.
Can we get Gemma 3n 4B Vision + GLM-7-FLASH Heretic?
That's Gemma 3. I'm asking for Gemma 3n, which is more mobile-optimised :)
Btw @RichardErkhov https://huggingface.co/mradermacher/Gemma-3-4B-VL-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking-i1-GGUF doesn't support vision when I load it into LM Studio
Is there a 3n + GLM distill? There should be plenty of GGUFs for them individually. Also, you need the mmproj file found in the static quant repo for vision/multimodal function: https://huggingface.co/mradermacher/Gemma-3-4B-VL-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking-GGUF/blob/main/Gemma-3-4B-VL-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking.mmproj-f16.gguf
I dunno, ask the LM Studio folks. I use llama.cpp, where you simply add the argument --mmproj Gemma-3-4B-VL-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking.mmproj-f16.gguf to make it work.
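A full invocation might look like this (a sketch; assumes a local llama.cpp build with llama-server on PATH and that both GGUF files from the repos above are downloaded; the Q4_K_M quant filename is illustrative):

```shell
# Serve the model with vision support: -m loads the language model GGUF,
# --mmproj loads the multimodal projector needed for image input.
llama-server \
  -m Gemma-3-4B-VL-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q4_K_M.gguf \
  --mmproj Gemma-3-4B-VL-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking.mmproj-f16.gguf
```

Without the --mmproj file the model still loads, but image input silently won't work, which matches the LM Studio symptom described above.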
Can we throw this into the quantisation pit?
always =)
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#HER-32B-absolute-heresy-GGUF for quants to appear.
I'm brewing an idea for a new RichardErkhov-heresy. Thanks for the quants, as always, I'm grateful for your work.
yayyyy, thank you =) let's see how much it cooks =)
let me know if you need anything!
All I need is your name and your promise to look after yourself, drink water, walk, etc.
All I need is your name
publically available information lol
your promise to look after yourself, drink water, walk, etc.
you sound like my girlfriend bro =(
are you secretly my girlfriend that is trying to learn what I am doing ?
yeah, nico1 ran out yesterday and took down imatrix processing with it lmao. I requeued the cause of this to rich1, so we for sure won't run out this time lmao. Didn't realise that for some reason huggingface still sends the email even though I removed it immediately
Woah, I usually monitor your queue for interesting new models and was wondering what got stuck in the quantisation machine. The queue was massive at some point with uploads getting stuck.
The queue was massive at some point
rich1 and nico1 had an outage at the same time, and someone decided to queue 50 more models, and then 13 requests with a total of 20 more (some big) models came in at the same time. A bunch of coincidences, and the queue became big again lol. But before, we were solving a petabyte queue; 7TB is nothing lol
The queue was massive at some point with uploads getting stuck.
huggingface-cli updated and took down the old alias (huggingface-cli became hf)... well... we could process but not upload lmao
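For reference, the rename looks roughly like this (a sketch; assumes a huggingface_hub release where the hf command replaced the huggingface-cli alias; the repo and file names are illustrative, not the actual failing upload):

```shell
# Old invocation (alias removed in newer huggingface_hub releases):
#   huggingface-cli upload mradermacher/SomeModel-GGUF SomeModel.Q4_K_M.gguf
# New invocation with the renamed CLI:
hf upload mradermacher/SomeModel-GGUF SomeModel.Q4_K_M.gguf
# Downloads follow the same pattern:
hf download mradermacher/SomeModel-GGUF SomeModel.Q4_K_M.gguf
```

Scripts hardcoding the old binary name keep quantising fine but fail at the upload step, which is exactly the "could process but not upload" failure mode described above.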
Jesus Christ, say what! A petabyte queue! All the while y'all are putting out blazing fast quants... It sounds like an impossible task to me on standard consumer-grade broadband/fibre and hardware. Yeah, that command changed, and something also broke in 1.4.1 (it was a dependency misbehaving) that troubled my uploads as well. Glad to hear that all was resolved.
well, we cooked everything in like half a year with a few servers lol, I think mradermacher is quite big right now... it was 7PB in September, so I assume we are ~9PB right now lol...