AI & ML interests

Abliteration, interpretability, training-free model modification, democratization of AI

Recent Activity

DavidAU posted an update 11 days ago
Uncensored Heretic Qwen 3.6 27B GGUFs - exceed all quant metrics and core model metrics too.

Tuned 27B Heretic Uncensored quants from IQ2M to Q8.
IQ2M is 83% of BF16, with Q6 just under 98% of BF16 precision.
Q8: 98.47% of BF16 precision.
NEO/Code DI-Imatrix Quants.
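The "% of BF16" figures above express each quant's benchmark score as a fraction of the full-precision BF16 baseline. A minimal sketch of that calculation - the actual metric behind the posted percentages is not specified here, so the function name and the example scores are illustrative only:

```python
def pct_of_bf16(quant_score: float, bf16_score: float) -> float:
    """Express a quant's benchmark score as a percentage of the BF16 baseline.

    Both scores must come from the same benchmark; the numbers used below
    are hypothetical, not the post's actual raw scores.
    """
    if bf16_score == 0:
        raise ValueError("BF16 baseline score must be non-zero")
    return 100.0 * quant_score / bf16_score

# Hypothetical raw scores: a quant scoring 0.83 where BF16 scores 1.00
print(f"{pct_of_bf16(0.83, 1.00):.2f}% of BF16")  # 83.00% of BF16
```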

Also exceeds the "censored" quants on all 5 metrics.

All metrics posted.

The tuned model from which the quants were built also exceeds the Qwen 3.6 27B core metrics.

DavidAU/Qwen3.6-27B-Heretic-Uncensored-FINETUNE-NEO-CODE-Di-IMatrix-MAX-GGUF
DavidAU posted an update 18 days ago
Qwen3.6 27B - NEO-Code Imatrix Max GGUF Quants [exceeds Unsloth in key metrics]:

All quants benchmarked with 5 key metrics.
A DAVIDAU vs UNSLOTH Metrics showdown.
Quant quality exceeds Unsloth in key metrics.
IQ2_M to Q6 available.
Standout: IQ4XS at 94% of BF16 precision.
Full explainer for Quant metrics.

DavidAU/Qwen3.6-27B-NEO-CODE-Di-IMatrix-MAX-GGUF
darkc0de posted an update 23 days ago
For the 1 year anniversary of the public release of darkc0de/XortronCriminalComputingConfig I present "XortronOS"

Something I've been tinkering with on and off for a while. It's a semi-functional desktop environment in your browser. You can chat with Xortron, view Xortron's personal bookmarks, and view the Xortron Model Spec.

Still very much a work-in-progress, just a fun toy I thought I'd share...

Open to ideas for improvement

You can visit directly, quickly, and full screen at www.xortron.tech
Or via HF at darkc0de/XortronOS

DavidAU posted an update about 1 month ago
THREE Gemma 4 31B Uncensored Fine-Tunes (via Unsloth, in-house datasets):

Uncensored first, then tuned.
Some benchmarks posted, others pending.
Examples posted, detailed instructions.
Some GGUFs are up; others pending as of this writing.

Enjoy:

DavidAU/gemma-4-31B-it-Mystery-Fine-Tune-HERETIC-UNCENSORED-Thinking
DavidAU/gemma-4-31B-it-Grand-Horror-X-INTENSE-HERETIC-UNCENSORED-Thinking
DavidAU/gemma-4-31B-it-The-DECKARD-HERETIC-UNCENSORED-Thinking

UPDATE:
DavidAU/gemma-4-E4B-it-The-DECKARD-Expresso-Universe-HERETIC-UNCENSORED-Thinking

Exceeds Gemma4 26B-A4B in critical benchmarks.
DavidAU posted an update about 2 months ago
Power, Freedom and Character:
Qwen 3.5 40B Claude Opus Deckard UNCENSORED.

Expanded and trained with the Claude Opus 4.6 dataset, but first it was Heretic'ed and trained with DECKARD - 5 hand-crafted datasets that give the model character, point of view, and intelligence... and a lot more.

Examples posted.

Several quant types available under quantizations:

DavidAU/Qwen3.5-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking
DavidAU posted an update 2 months ago
21 Qwen 3.5 Fine-Tunes (thinking and instruct); regular and uncensored (2B to 27B). They exceed benchmarks and work better than the original models.

All are benchmarked against the original model.
Many exceed all benchmarks of the original model.
Claude, GLM, Gemini and other distills.
Thinking AND dedicated Instruct versions.

Core goal: Increase benchmarks, and address long thinking blocks.

Highlights:

9B and 27B instruct "Claude" versions hit 624 and 675 on the "ARC-C" (hard challenge).

Thinking fine tunes exceed org model performance (in thinking mode).

In many cases there is a drastic reduction in thinking block size.

9B Claude Heretic Uncensored, GGUF:
- Neo/Code Imatrix (dual imatrix)
- Updated Jinja template
- Custom tensor enhancements.

DavidAU/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING-MAX-NEOCODE-Imatrix-GGUF

COLLECTION [21 models]:
https://huggingface.co/collections/DavidAU/qwen-35-08-2-4-9-27-35b-regular-uncensored

UPDATE:
Now 31 models, including experimental 21B and new 13B models.