McG-221/gemma-3-12b-it-vl-Polaris-GLM-4.7-Flash-VAR-Thinking-Instruct-Heretic-Uncensored-mlx-8Bit Image-Text-to-Text • 12B • Updated 5 days ago • 317 • 2
mradermacher/gemma-3-12b-it-vl-Polaris-GLM-4.7-Flash-VAR-Thinking-Instruct-Heretic-Uncensored-i1-GGUF 12B • Updated 4 days ago • 2.58k • 1
Markobes/gemma-3-12b-it-vl-Polaris-GLM-4.7-Flash-VAR-Thinking-Instruct-Heretic-Uncensored-Q8_0-GGUF Image-Text-to-Text • 12B • Updated 5 days ago • 284 • 1
zelk12/gemma-3-12b-it-vl-Polaris-GLM-4.7-Flash-VAR-Thinking-Instruct-Heretic-Uncensored-Q6_K-GGUF Image-Text-to-Text • 12B • Updated 5 days ago • 119 • 2
mradermacher/gemma-3-12b-it-vl-Polaris-GLM-4.7-Flash-VAR-Thinking-Instruct-Heretic-Uncensored-GGUF 12B • Updated 5 days ago • 1.07k • 1
DavidAU/OpenAI-gpt-oss-20B-INSTRUCT-Heretic-Uncensored-MXFP4 Text Generation • 21B • Updated 4 days ago • 39 • 2
DavidAU/gemma-3-12b-it-vl-Polaris-GLM-4.7-Flash-VAR-Thinking-Instruct-Heretic-Uncensored Image-Text-to-Text • 12B • Updated 5 days ago • 45 • 4
Dark / Evil / NSFW Reasoning Models (gguf/source) Collection Models that are dark/evil/corrupt (and many times NSFW!) to begin with then I add reasoning/thinking to them to make them even... ahh... better. • 115 items • Updated about 3 hours ago • 81
DavidAU/OpenAI-gpt-oss-20B-Claude-4.5-Opus-Heretic-Uncensored Text Generation • 21B • Updated 4 days ago • 49 • 6
Merges Collection Stuff I've cooked myself, for better or for worse. • 14 items • Updated Jun 18, 2025 • 3
The Bestiary Collection Decensored language models made using Heretic (https://github.com/p-e-w/heretic) • 6 items • Updated Nov 16, 2025 • 85
DavidAU/LFM2.5-1.2B-MEGABRAIN2-Thinking-Kimi-V2-DISTILL Text Generation • 1B • Updated 4 days ago • 37 • 2
Post 4631
Tiny but mighty: LFM 1.2B - 11 distills / fine tunes: exceeding all benchmarks at 300-700+ T/S on GPU, 60+ T/S on CPU. Almost all exceed the LFM 1.2B benchmarks, which are already very impressive. All benchmarks posted. A specialized merge of multiple of these fine tunes by @nightmedia FAR exceeds the benchmarks set by the already impressive LFM (LFM2.5-1.2B-MEGABRAIN-Thinking-Polaris-ClaudeHOPUS-Deepseek-GLM). Included are GLM 4.7 Flash, DeepSeek, Claude, Kimi V2 and other distill fine tunes. Here is the collection (quants by mradermacher): https://huggingface.co/collections/DavidAU/lfm-12b-sota-400-700-t-s-enhanced-fine-tunes-distills
Post 5940
Uncensored, Heretic GGUF quants of GLM 4.7 Flash (30B-A3B) with correct Llamacpp and all updates; NEO-CODE Imatrix with 16-bit output tensors. Also specialized quants (balanced for this model), and all quants are NEO-CODE Imatrix with a 16-bit output tensor: DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF
"Reg quants, non-heretic": also 16-bit output tensor, NEO-CODE Imatrix and specialized: DavidAU/GLM-4.7-Flash-NEO-CODE-Imatrix-MAX-GGUF
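The quant suffixes in the listings above (Q8_0, Q6_K, MXFP4) largely determine a GGUF file's disk and memory footprint. As a rough sketch, size can be estimated from parameter count and an approximate bits-per-weight figure for each llama.cpp quant format (the bpw values below are approximations, and real files differ because some tensors, such as the 16-bit output tensors mentioned above, stay at higher precision):

```python
# Rough GGUF size estimate from parameter count and quant bits-per-weight.
# Bits-per-weight values are approximate averages for llama.cpp formats;
# actual files vary with the per-tensor quant mix.
BPW = {
    "F16": 16.0,
    "Q8_0": 8.5,    # blocks of 32 int8 weights + one fp16 scale
    "Q6_K": 6.5625,
    "MXFP4": 4.25,  # 4-bit values + shared block scale
}

def gguf_size_gb(n_params: float, quant: str) -> float:
    """Estimated file size in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * BPW[quant] / 8 / 1e9

for q in ("Q8_0", "Q6_K"):
    print(f"12B @ {q}: ~{gguf_size_gb(12e9, q):.1f} GB")
```

For the 12B models listed above this gives roughly 12.8 GB at Q8_0 versus 9.8 GB at Q6_K, which matches the usual trade-off between the two quant levels offered in the repos.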