This is a decensored version of gemma-3-4b-it, produced with Heretic v1.2.0.
## KL Divergence
| Metric | This Model | Original Model |
|---|---|---|
| KL divergence | 0.0621 | 0 (by definition) |
| Refusals | 0/108 | 107/108 |
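The KL divergence reported above measures how far the decensored model's output distribution drifts from the original model's (lower means behavior outside refusals is better preserved). As a reference for what is being computed, here is a minimal sketch of KL divergence between two discrete distributions over the same vocabulary; the function name and list-based interface are illustrative, not Heretic's actual code.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete probability distributions.

    p, q: sequences of probabilities over the same support, each summing to 1.
    Terms where p_i == 0 contribute nothing by convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

In practice this is averaged per token over a held-out prompt set, comparing the original and ablated models' next-token distributions; identical distributions give exactly 0, matching the "0 (by definition)" entry for the original model.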
## Benchmark Comparison (GGUF Quantized)
| Benchmark | gemma-3-4b-it-Q8_0.gguf | gemma-3-4b-it-Q4_K_M.gguf | gemma-3-4b-it-heretic-v1.2-Q4_K_M.gguf |
|---|---|---|---|
| Perplexity (Wikitext-2) | 11.3576 | 11.3185 | 11.2203 |
| HellaSwag | 70.75% | 70.50% | 70.75% |
| Winogrande | 69.93% | 69.14% | 68.67% |
| ARC-Challenge | 46.82% | 45.49% | 47.83% |
| MMLU | 43.13% | 41.38% | 40.80% |
*Note: the MMLU benchmark was run with the moral_scenarios, moral_disputes, business_ethics, professional_law, and jurisprudence subjects removed.*
## Relative Perplexity (GGUF Quantized)
| Quant | Filename | PPL ± Error |
|---|---|---|
| Q8_0 | gemma-3-4b-it-Q8_0.gguf (original baseline) | 11.3576 ± 0.10114 |
| Q8_0 | gemma-3-4b-it-heretic-v1.2-Q8_0.gguf | 11.3091 ± 0.10027 |
| Q6_K | gemma-3-4b-it-heretic-v1.2-Q6_K.gguf | 11.2946 ± 0.09996 |
| Q5_K_M | gemma-3-4b-it-heretic-v1.2-Q5_K_M.gguf | 11.2387 ± 0.09859 |
| Q4_K_M | gemma-3-4b-it-heretic-v1.2-Q4_K_M.gguf | 11.2203 ± 0.09740 |
| Q3_K_M | gemma-3-4b-it-heretic-v1.2-Q3_K_M.gguf | 11.8964 ± 0.10414 |
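For readers unfamiliar with the metric: perplexity is the exponential of the average negative log-likelihood the model assigns to each token of the evaluation text (here Wikitext-2), so lower is better. A minimal sketch of the computation, assuming you already have per-token log-probabilities from the model:

```python
import math

def perplexity(token_logprobs):
    """PPL = exp(-mean log-probability) over the evaluated tokens.

    token_logprobs: natural-log probabilities the model assigned to each
    ground-truth token in the evaluation corpus.
    """
    return math.exp(-sum(token_logprobs) / len(token_logprobs))
```

For example, a model that assigns probability 0.5 to every token has a perplexity of exactly 2. The llama.cpp numbers above were produced by its perplexity tool, which applies this formula over sliding windows of the corpus.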
## Abliteration parameters
- Zero refusals (0/108) at a KL divergence of 0.0621
- Gemma 3-focused configuration and dataset
- MPOA (Magnitude-Preserving Orthogonal Ablation) enabled
- Full row renormalization
- Winsorization quantile: 0.995
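The first two weight-editing parameters above can be illustrated with a minimal sketch, assuming the standard directional-ablation formulation: project the estimated refusal direction out of each weight row, then rescale every row back to its original norm ("full row renormalization", which is what keeps magnitudes preserved). The function name, shapes, and API below are illustrative only, not Heretic's actual implementation.

```python
import numpy as np

def mpoa_ablate(W, r):
    """Orthogonally ablate direction r from weight matrix W, preserving row norms.

    W: (out_features, in_features) weight matrix.
    r: refusal direction in the input space (need not be unit-length).
    """
    r = r / np.linalg.norm(r)                          # unit refusal direction
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    W_proj = W - np.outer(W @ r, r)                    # remove component along r
    new_norms = np.linalg.norm(W_proj, axis=1, keepdims=True)
    # Full row renormalization: restore each row to its original magnitude
    return W_proj * (orig_norms / np.maximum(new_norms, 1e-8))
```

Since rescaling a row does not change its direction, the result stays orthogonal to the refusal direction while every row keeps its original norm. Winsorization (here at the 0.995 quantile) is applied separately, clipping extreme activation values so outliers do not dominate the estimate of the refusal direction.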