---
tags:
  - heretic
  - uncensored
  - abliterated
  - gguf
license: mit
base_model: microsoft/phi-4
---

# phi-4-heretic

Abliterated (uncensored) version of microsoft/phi-4, created using Heretic and converted to GGUF.

## Abliteration Quality

| Metric        | Value  |
|---------------|--------|
| Refusals      | 4/100  |
| KL Divergence | 0.0499 |
| Rounds        | 2      |

Lower refusals means fewer prompts the model declined to answer; lower KL divergence means the model's output distribution stays closer to the original model's behavior.
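As a rough illustration of the KL-divergence metric above, a minimal sketch of KL divergence between two discrete token distributions (this is for intuition only; the actual metric is computed by Heretic over model outputs):

```python
import numpy as np

def kl_divergence(p, q):
    """KL(P || Q) in nats for discrete probability distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p=0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.7, 0.2, 0.1]
q = [0.68, 0.21, 0.11]  # slightly perturbed distribution

print(kl_divergence(p, p))  # identical distributions -> 0.0
print(kl_divergence(p, q))  # small positive value
```

A value like 0.0499 indicates the abliterated model's behavior on harmless prompts remains very close to the original.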

## Available Quantizations

| Quantization | File                      | Size     |
|--------------|---------------------------|----------|
| Q8_0         | phi-4-heretic-Q8_0.gguf   | 14.51 GB |
| Q6_K         | phi-4-heretic-Q6_K.gguf   | 11.20 GB |
| Q4_K_M       | phi-4-heretic-Q4_K_M.gguf | 8.43 GB  |
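A quick sanity check on the sizes above: dividing file size by parameter count recovers the approximate bits per weight of each format. The 14.7B parameter count for phi-4 is an assumption (the commonly reported figure), sizes are treated as binary gigabytes, and GGUF files carry some metadata overhead, so these are estimates only.

```python
PARAMS = 14.7e9  # assumed parameter count for phi-4

sizes_gib = {"Q8_0": 14.51, "Q6_K": 11.20, "Q4_K_M": 8.43}
for name, gib in sizes_gib.items():
    bits_per_weight = gib * (1024 ** 3) * 8 / PARAMS
    print(f"{name}: ~{bits_per_weight:.1f} bits/weight")
```

The results (roughly 8.5, 6.5, and 4.9 bits per weight) line up with the nominal precision of each quantization type.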

## Usage with Ollama

```shell
ollama run hf.co/ThalisAI/phi-4-heretic:Q8_0
ollama run hf.co/ThalisAI/phi-4-heretic:Q6_K
ollama run hf.co/ThalisAI/phi-4-heretic:Q4_K_M
```

## Full Precision Weights

This repo contains GGUF quantizations only. For full-precision bf16 weights, see the original model at microsoft/phi-4.

## About

This model was processed by the Apostate automated abliteration pipeline:

  1. The source model was loaded in bf16
  2. Heretic's optimization-based abliteration was applied to remove refusal behavior
  3. The merged model was converted to GGUF format using llama.cpp
  4. Multiple quantization levels were generated
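Steps 3 and 4 can be reproduced with llama.cpp's stock tools. This is a hedged sketch, not the exact pipeline commands: the local paths and output filenames are assumptions, and `convert_hf_to_gguf.py` / `llama-quantize` must be available from a llama.cpp checkout.

```shell
# Convert the merged HF model directory to a bf16 GGUF file
# (paths are placeholders for illustration).
python convert_hf_to_gguf.py ./phi-4-heretic --outtype bf16 \
    --outfile phi-4-heretic-bf16.gguf

# Generate the quantization levels listed above.
for q in Q8_0 Q6_K Q4_K_M; do
    ./llama-quantize phi-4-heretic-bf16.gguf "phi-4-heretic-${q}.gguf" "${q}"
done
```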

The abliteration process uses directional ablation to remove the model's refusal directions while minimizing KL divergence from the original model's behavior on harmless prompts.
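The core projection behind directional ablation can be sketched in a few lines. Given a unit "refusal direction" r, the component of each hidden state along r is removed: h' = h - (h·r)r. This is for intuition only; Heretic's actual procedure (how the direction is found, which layers are edited, and the per-layer weighting it optimizes) is more involved.

```python
import numpy as np

def ablate_direction(hidden, direction):
    """Remove the component of each row of `hidden` along `direction`."""
    r = direction / np.linalg.norm(direction)  # normalize to a unit vector
    return hidden - np.outer(hidden @ r, r)    # h' = h - (h . r) r

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))  # 4 token hidden states, model dim 8
r = rng.standard_normal(8)       # stand-in refusal direction

H_ablated = ablate_direction(H, r)
# After ablation, the component along r is (numerically) zero.
print(np.allclose(H_ablated @ (r / np.linalg.norm(r)), 0.0))  # True
```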