---
tags:
- heretic
- uncensored
- abliterated
- gguf
license: mit
base_model: microsoft/phi-4
---

# phi-4-heretic

Abliterated (uncensored) version of [microsoft/phi-4](https://huggingface.co/microsoft/phi-4),
created using [Heretic](https://github.com/p-e-w/heretic) and converted to GGUF.

## Abliteration Quality

| Metric | Value |
|:-------|------:|
| Refusals | 4/100 |
| KL Divergence | 0.0499 |
| Rounds | 2 |

**Refusals** counts how many of 100 test prompts the model refused; lower is better. **KL divergence** measures how far the abliterated model's output distribution drifts from the original model on harmless prompts; lower means behavior closer to the original.
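For intuition, KL divergence compares two probability distributions over next tokens. A toy calculation with made-up values (illustrative only, not taken from this model's evaluation):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token distributions: original model P vs. abliterated model Q
p = [0.70, 0.20, 0.10]
q = [0.65, 0.25, 0.10]
print(round(kl_divergence(p, q), 4))  # → 0.0072
```

A value near zero, like the 0.0499 reported above, indicates the model's behavior on harmless prompts is nearly unchanged.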

## Available Quantizations

| Quantization | File | Size |
|:-------------|:-----|-----:|
| Q8_0 | [phi-4-heretic-Q8_0.gguf](./phi-4-heretic-Q8_0.gguf) | 14.51 GB |
| Q6_K | [phi-4-heretic-Q6_K.gguf](./phi-4-heretic-Q6_K.gguf) | 11.20 GB |
| Q4_K_M | [phi-4-heretic-Q4_K_M.gguf](./phi-4-heretic-Q4_K_M.gguf) | 8.43 GB |

## Usage with Ollama

```bash
ollama run hf.co/ThalisAI/phi-4-heretic:Q8_0
ollama run hf.co/ThalisAI/phi-4-heretic:Q6_K
ollama run hf.co/ThalisAI/phi-4-heretic:Q4_K_M
```
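The GGUF files can also be run directly with [llama.cpp](https://github.com/ggerganov/llama.cpp). A minimal sketch, assuming you have built llama.cpp and downloaded one of the quantized files into the current directory:

```bash
# Interactive generation with llama.cpp (file name per the table above)
./llama-cli -m phi-4-heretic-Q4_K_M.gguf -p "Hello" -n 128
```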

## Full Precision Weights

This repo contains GGUF quantizations only. For full-precision bf16 weights, see the original model at [microsoft/phi-4](https://huggingface.co/microsoft/phi-4).

## About

This model was processed by the **Apostate** automated abliteration pipeline:
1. The source model was loaded in bf16
2. Heretic's optimization-based abliteration was applied to remove refusal behavior
3. The merged model was converted to GGUF format using llama.cpp
4. Multiple quantization levels were generated
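Steps 3 and 4 above follow the standard llama.cpp workflow. A hedged sketch of what the conversion and quantization commands look like (paths are illustrative; this is not the pipeline's exact invocation, and the Heretic step itself is omitted):

```bash
# Step 3: convert the merged Hugging Face checkpoint to a GGUF file (llama.cpp repo)
python convert_hf_to_gguf.py ./phi-4-heretic --outfile phi-4-heretic-f16.gguf --outtype f16

# Step 4: produce a quantized variant from the converted file
./llama-quantize phi-4-heretic-f16.gguf phi-4-heretic-Q4_K_M.gguf Q4_K_M
```

Repeating the quantize step with `Q8_0` and `Q6_K` yields the other files listed in the table above.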

The abliteration process uses directional ablation to remove the model's refusal directions
while minimizing KL divergence from the original model's behavior on harmless prompts.