hey ...
you can do uncensored models ...
can you try this one?
https://huggingface.co/mistralai/Ministral-3-8B-Instruct-2512
maybe also the 3B and 14B versions
best small model so far for summaries at 16k context ... the 3B is better than Qwen-4B
I'm not doing anything special besides putting my hardware to work. I'll still give it a try. Right off the start, it refuses everything, 100/100.
Edit: It's down to 6/100 refusals at 0.12 KL divergence, which could probably be improved further. I'm also having an issue with saving the model after heretication, presumably because the base model is FP8.
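In case anyone else hits the same save failure: a minimal sketch of the upcast-to-BF16 idea, shown on a toy torch module rather than the real checkpoint. This is an assumption about the workaround, not Heretic's actual code, and for a genuinely FP8-quantized checkpoint the cleaner fix may be starting from an official BF16 upload instead:

```python
import torch
import torch.nn as nn

def cast_to_bf16(model: nn.Module) -> nn.Module:
    """Cast all floating-point parameters and buffers to bfloat16 in place."""
    return model.to(dtype=torch.bfloat16)

# toy stand-in for the language model; a real run would load the HF checkpoint
toy = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 2))
toy = cast_to_bf16(toy)

# after the cast, every parameter is BF16 and the model saves like any other
assert all(p.dtype == torch.bfloat16 for p in toy.parameters())
```

`Module.to(dtype=...)` only touches floating-point tensors, so integer buffers survive the cast untouched.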
cool! 6/100 is okay ... it's a very restricted model
hmmm always this special stuff ... don't waste too much time ;)
I hope I managed to get everything working. I confirmed that the model works as intended through Heretic's test suite, as well as a generation script with vision support. There you go, lad; https://huggingface.co/MuXodious/Ministral-3-8B-Instruct-2512-tainted-heresy
thx ... !
now I need someone to make a GGUF ... "my-repo" can't do it ...
does unsloth do this, somewhere I can make a request?
I also got you the reasoning version: https://huggingface.co/MuXodious/Ministral-3-8B-Reasoning-2512-absolute-heresy. You can ask mradermacher team for GGUF quants. Also, I changed the model name to "MuXodious/Ministral-3-8B-Instruct-2512-tainted-heresy" to match my naming scheme. You may want to change that on your workspace.
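If you'd rather roll your own quants instead of waiting on a request, llama.cpp ships a conversion script. A rough sketch of the usual steps; the repo name comes from the thread, but the output filenames, quant type, and build paths are just examples and may differ on your setup:

```shell
# clone llama.cpp and install the conversion script's Python deps
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt

# download the checkpoint locally first
huggingface-cli download MuXodious/Ministral-3-8B-Instruct-2512-tainted-heresy \
    --local-dir ministral-8b-heresy

# convert the HF checkpoint to a full-precision GGUF
python llama.cpp/convert_hf_to_gguf.py ministral-8b-heresy \
    --outfile ministral-8b-heresy-f16.gguf --outtype f16

# (after building llama.cpp) quantise down to Q4_K_M for local use;
# the binary path depends on how you built the project
./llama.cpp/build/bin/llama-quantize \
    ministral-8b-heresy-f16.gguf ministral-8b-heresy-Q4_K_M.gguf Q4_K_M
```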
mradermacher, I did ;)
for me it's ok ... the 3B and 14B versions are up to you ... maybe you like the model?
Oh, it's pretty good, actually. It was able to give me detailed instructions for my "How to make...?" questions. I could try the 3B models, but 14B may exceed the VRAM I have available.
Alright, I managed to "acquire" a GPU with >24GB VRAM. Here you go mate: https://huggingface.co/MuXodious/Ministral-3-14B-Instruct-2512-absolute-heresy
I'm also working on the Reasoning version of this model.
Edit: It's here; https://huggingface.co/MuXodious/Ministral-3-14B-Reasoning-2512-absolute-heresy
You're the GOAT, mate!
Thanks for the kind words, lad.
I have hereticated the 3B line of Ministral 3, and with that our collection is complete, excluding the larger 675B model. The instruct model is up, but the reasoning model still has around 15 minutes left to cook. Enjoy!
https://huggingface.co/MuXodious/Ministral-3-3B-Instruct-2512-absolute-heresy
P.S. Let me know if you encounter an error while running inference on or quantising the model.
I had one error... phew, let me think... it was the model that retroactively lost its "absolute heresy" status, but all the "absolute" models have worked so far... maybe this helps.
I had an issue with that model, which was the very first Ministral model I tried. I ended up with a couple of broken models because I was unable to save the non-BF16 version properly after completing the abliteration process. Then I used the one on the unsloth repository, only to realise there was an official BF16 version after uploading the resulting model. I forgot to swap out certain configuration files, such as tokenizer.json, after processing and reuploading the model file based on the original MistralAI model, causing a mismatch.
Anyway, it should be fixed now. Give it a knock.
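For anyone who wants to sanity-check a reupload against its base repo, comparing hashes of the shared config files catches exactly this kind of mismatch. A stdlib-only sketch; the paths in the usage comment are hypothetical local snapshots, not anything from the actual repos:

```python
import hashlib

def sha256_of(path: str) -> str:
    """SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def same_file(a: str, b: str) -> bool:
    """True if both files have byte-identical contents."""
    return sha256_of(a) == sha256_of(b)

# hypothetical usage against local snapshots of both repos:
# same_file("base/tokenizer.json", "heresy/tokenizer.json")
```

Running this over `tokenizer.json`, `tokenizer_config.json`, and `config.json` from both repos would have flagged the stale file before anyone downloaded a broken model.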