This is a decensored version of Mars-27B-V.1, made with Heretic v1.2.0 and tuned for zero refusals at low KL divergence.

Note: Mars-27B-V.1 is itself a merge of models that are already abliterated. This is an experiment to see what happens when some additional refusals are removed.

## KL Divergence

| Metric        | This Model | Original Model    |
|---------------|------------|-------------------|
| KL divergence | 0.0159     | 0 (by definition) |
| Refusals      | 0/108      | 9/108             |
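The KL divergence above measures how far the decensored model's next-token distribution drifts from the original's (which is 0 against itself by definition). A minimal numpy sketch of the metric, with an illustrative function name (not Heretic's actual implementation):

```python
import numpy as np

def kl_divergence(logits_p, logits_q):
    """KL(P || Q) between two next-token distributions given as logit vectors."""
    # Numerically stable softmax for each distribution.
    p = np.exp(logits_p - logits_p.max()); p /= p.sum()
    q = np.exp(logits_q - logits_q.max()); q /= q.sum()
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Identical logits give exactly 0 — the "original model" row in the table.
assert kl_divergence(np.array([1.0, 2.0]), np.array([1.0, 2.0])) == 0.0
```

In practice the reported number is an average of this quantity over many token positions on an evaluation set.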

## Relative Perplexity

| Quant  | Filename                              | PPL ± Error      |
|--------|---------------------------------------|------------------|
| Q8_0   | Mars-27B-V.1-heretic-v1.2-Q8_0.gguf   | 6.2202 ± 0.04253 |
| Q4_K_M | Mars-27B-V.1-heretic-v1.2-Q4_K_M.gguf | 6.3112 ± 0.04316 |
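Perplexity itself is just the exponentiated negative mean per-token log-likelihood, so the small gap between Q8_0 and Q4_K_M reflects how little the 4-bit quantization hurts the model's token predictions. A self-contained sketch of the metric:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean log-likelihood per token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model that assigns probability 1/2 to every token has perplexity ~2.
print(perplexity([math.log(0.5)] * 4))  # ≈ 2.0
```

The GGUF figures in the table are typically produced with llama.cpp's perplexity tool over a standard text corpus.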

## Abliteration parameters

- Zero refusals at a KL divergence of 0.0159
- Custom Heretic training dataset
- Abliterated with MPOA (Magnitude-Preserving Orthogonal Ablation) enabled
- Full row renormalization
- Winsorization quantile: 0.997
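To make the parameters above concrete: orthogonal ablation projects a learned "refusal direction" out of the model's weight rows; magnitude preservation (with full row renormalization) rescales each row back to its original norm; winsorization clips extreme weight values at the stated quantile. The sketch below illustrates these steps under that standard formulation — the function, its signature, and the details of how Heretic actually applies them are assumptions for illustration only:

```python
import numpy as np

def ablate_mpoa(W, r, winsor_q=0.997):
    """Illustrative magnitude-preserving orthogonal ablation of direction r from W."""
    r = r / np.linalg.norm(r)
    # Winsorization: clip weight outliers at the given quantile of |W|.
    hi = np.quantile(np.abs(W), winsor_q)
    W = np.clip(W, -hi, hi)
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    # Orthogonal ablation: remove each row's component along r.
    W_abl = W - (W @ r)[:, None] * r[None, :]
    # Full row renormalization: restore each row's original magnitude.
    new_norms = np.linalg.norm(W_abl, axis=1, keepdims=True)
    return W_abl * (orig_norms / np.maximum(new_norms, 1e-12))

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
r = rng.standard_normal(16)
W2 = ablate_mpoa(W, r)
# Every row of W2 is now orthogonal to r, with (clipped) row norms preserved.
assert np.allclose(W2 @ (r / np.linalg.norm(r)), 0.0, atol=1e-8)
```

In a real abliteration run this projection is applied per layer to the relevant weight matrices, with the refusal direction estimated from contrasting harmful and harmless prompts.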