---
license: apache-2.0
base_model: zai-org/GLM-5
tags:
  - abliterated
  - uncensored
  - glm
  - moe
library_name: transformers
---

# GLM-5 Abliterated (BF16)

""""""wont recommend using this, please let me know if u do"""""   . This is an abliterated  version of [zai-org/GLM-5](https://huggingface.co/zai-org/GLM-5) (744B MoE, 40B active parameters).

## What is abliteration?

Abliteration removes the "refusal direction" from the model weights using weight orthogonalization. This allows the model to respond to a wider range of prompts without safety refusals, while preserving general capability.
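The core operation can be sketched in a few lines of NumPy. This is a minimal illustration of weight orthogonalization on a toy matrix, not the actual script used to produce this model; the shapes and data are synthetic.

```python
import numpy as np

def orthogonalize(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Project the refusal direction r out of W's output space.

    For y = W @ x, the modified matrix W' = W - r r^T W guarantees
    that every output y' = W' @ x has zero component along r.
    """
    r = r / np.linalg.norm(r)      # unit refusal direction
    return W - np.outer(r, r) @ W  # W' = W - r r^T W

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))    # toy weight matrix
r = rng.standard_normal(8)         # toy refusal direction
W_ortho = orthogonalize(W, r)

# Any output of the modified matrix is orthogonal to r (up to float error).
x = rng.standard_normal(8)
residual = np.dot(r / np.linalg.norm(r), W_ortho @ x)
print(abs(residual))  # ~0.0
```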

## Method

1. Computed refusal directions for all 78 layers using contrastive activation pairs (harmful vs harmless prompts)
2. Applied weight orthogonalization to layers 15-54:
   - `self_attn.o_proj.weight` (attention output projection)
   - `mlp.shared_experts.down_proj.weight` (shared expert down projection)
3. Alpha = 1.0; 80 weight matrices modified in total (2 per layer x 40 layers)
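The steps above can be sketched as follows. This is a hedged toy illustration of the standard contrastive approach — mean-difference refusal direction, then alpha-scaled orthogonalization — using random arrays in place of real residual-stream activations (a real run would collect them with forward hooks on the model).

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 16  # toy hidden size; GLM-5's is far larger

# Stand-ins for per-layer activations collected on contrastive prompt
# pairs (harmful vs. harmless); shape: (num_prompts, hidden).
acts_harmful = rng.standard_normal((32, hidden)) + 0.5
acts_harmless = rng.standard_normal((32, hidden))

# Step 1: refusal direction = normalized difference of mean activations.
r = acts_harmful.mean(axis=0) - acts_harmless.mean(axis=0)
r /= np.linalg.norm(r)

# Steps 2-3: alpha-scaled orthogonalization of a target weight matrix
# (e.g. self_attn.o_proj or mlp.shared_experts.down_proj), alpha = 1.0.
alpha = 1.0
W = rng.standard_normal((hidden, hidden))
W_mod = W - alpha * np.outer(r, r) @ W
```

With `alpha = 1.0` the refusal component is removed entirely; smaller values would only attenuate it.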

## Details

- **Base model**: zai-org/GLM-5 (744B MoE, BF16)
- **Modified layers**: 15-54 (40 of 78 total layers)
- **Weights modified**: 80 (o_proj + shared_experts.down_proj per layer)
- **Precision**: BF16 (full precision, no quantization artifacts)

## Disclaimer

This model is provided for research purposes. Users are responsible for ensuring appropriate use.