Uncensored, Heretic, Qwen 3.6 27B GGUFs - Exceeds all quant metrics and core model metrics too.
Tuned 27B Heretic Uncensored quants from IQ2M to Q8. IQ2M is 83% of BF16 precision, Q6 just under 98%, and Q8 98.47%. NEO/Code DI-Imatrix quants.
Exceeds all 5 metrics for "censored" quants too.
All metrics posted.
The tuned model, from which the quants were built, also exceeds the Qwen 3.6 27B core metrics.
Qwen3.6 27B - NEO-Code Imatrix Max GGUF Quants [exceeds Unsloth in key metrics]:
All quants benchmarked with 5 key metrics. A DAVIDAU vs UNSLOTH Metrics showdown. Quant quality exceeds Unsloth in key metrics. IQ2_M to Q6 available. Standout: IQ4XS at 94% of BF16 precision. Full explainer for Quant metrics.
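The "percent of BF16 precision" figures above can be read as a quant's aggregate benchmark score relative to the BF16 baseline. A minimal sketch of one such computation; the mean-of-ratios aggregation and the metric names are assumptions for illustration, not the author's published method:

```python
def percent_of_bf16(quant_scores, bf16_scores):
    """Average each quant metric as a fraction of its BF16 baseline.

    quant_scores / bf16_scores: dicts mapping metric name -> score.
    NOTE: simple mean-of-ratios; the exact aggregation used for the
    posted numbers is not stated in the announcement.
    """
    ratios = [quant_scores[m] / bf16_scores[m] for m in bf16_scores]
    return 100.0 * sum(ratios) / len(ratios)

# Hypothetical example scores (NOT from the posted benchmarks):
bf16 = {"arc_c": 0.60, "hellaswag": 0.80, "mmlu": 0.70}
q6   = {"arc_c": 0.59, "hellaswag": 0.78, "mmlu": 0.69}
print(round(percent_of_bf16(q6, bf16), 2))  # → 98.13
```

Under this scheme a quant "exceeds" the baseline on a metric whenever its ratio for that metric is above 1.0.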
For the 1 year anniversary of the public release of darkc0de/XortronCriminalComputingConfig I present "XortronOS"
Something I've been tinkering with on and off for a while. It's a semi-functional desktop environment in your browser. You can chat with Xortron, view Xortron's personal bookmarks, and view the Xortron Model Spec.
Still very much a work-in-progress, just a fun toy I thought I'd share...
Open to ideas for improvement
You can visit directly, quickly, and full screen at www.xortron.tech, or via HF at darkc0de/XortronOS.
THREE Gemma 4 31B Uncensored Fine Tunes (via Unsloth, in-house datasets):
Uncensored first, then tuned. Some benchmarks posted, others pending. Examples and detailed instructions posted. Some GGUFs are up; others pending as of this writing.
Power, Freedom and Character: Qwen 3.5 40B Claude Opus Deckard UNCENSORED.
Expanded and trained with a Claude Opus 4.6 dataset, but first it was Heretic'ed and trained with DECKARD: 5 hand-crafted datasets to give the model character, point of view, and intelligence... and a lot more.
Examples posted.
Several quant types available under quantizations:
21 Qwen 3.5 Fine Tunes (thinking and instruct); regular and uncensored (2B to 27B) exceed benchmarks and work better than the original models.
All are benchmarked against the original model. Many exceed all benchmarks of the original model. Claude, GLM, Gemini, and other distills. Thinking AND dedicated Instruct versions.
Core goal: increase benchmark scores and address overly long thinking blocks.
Highlights:
9B and 27B instruct "Claude" versions hit 624 and 675 on the "ARC-C" (hard challenge).
Thinking fine tunes exceed org model performance (in thinking mode).
In many cases there is a drastic reduction in thinking block size.
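One way to quantify the "thinking block size" reduction claimed above is to measure the length of the model's reasoning span. A minimal sketch, assuming the Qwen-style `<think>...</think>` delimiters; the author's exact measurement method is not stated:

```python
import re

def thinking_block_len(text):
    """Return the character length of the <think>...</think> span, if any.

    Assumes the Qwen-style convention of wrapping chain-of-thought in
    <think> tags; returns 0 when no such block is present.
    """
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    return len(m.group(1)) if m else 0

# Hypothetical before/after outputs to illustrate a size reduction:
before = "<think>" + "step " * 200 + "</think>answer"
after  = "<think>" + "step " * 40 + "</think>answer"
print(thinking_block_len(after) / thinking_block_len(before))  # → 0.2
```

Averaging this ratio over a fixed prompt set would give one concrete measure of how much a fine tune shrinks its thinking blocks.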