YanLabs/Qwen3-4B-Instruct-2507-MPOA
A sampling temperature of 1.05 is recommended.
This is an abliterated version of Qwen/Qwen3-4B-Instruct-2507, produced with the norm-preserving biprojected abliteration technique.
⚠️ Warning: Safety guardrails and refusal mechanisms have been removed through abliteration. This model may generate harmful content and is intended for mechanistic interpretability research only.
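The model loads like any Qwen3 checkpoint via 🤗 Transformers. The snippet below is a minimal usage sketch applying the recommended temperature of 1.05; it requires downloading the model weights, and the prompt text is only an illustrative placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YanLabs/Qwen3-4B-Instruct-2507-MPOA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "Explain what abliteration does to a language model."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling with the recommended temperature of 1.05
outputs = model.generate(
    inputs, max_new_tokens=256, do_sample=True, temperature=1.05
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```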
Model Details
Model Description
This model applies norm-preserving biprojected abliteration to remove refusal behaviors while preserving the model's original capabilities. The technique surgically removes "refusal directions" from the model's activation space without traditional fine-tuning.
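The core weight edit behind this idea can be sketched as follows. This is an illustrative NumPy toy, not the actual jim-plus/llm-abliteration code: it removes the component of each weight row along a single refusal direction and then rescales each row to its original norm (the "norm-preserving" part). In practice the refusal direction is estimated from contrastive activations on harmful vs. harmless prompts, and the biprojected variant applies the projection on both the reading and writing sides of the residual stream.

```python
import numpy as np

def abliterate_weight(W: np.ndarray, refusal_dir: np.ndarray) -> np.ndarray:
    """Project refusal_dir out of each row of W, preserving per-row norms.

    W: (out_features, in_features) weight matrix.
    refusal_dir: (in_features,) estimated refusal direction (hypothetical input).
    """
    r = refusal_dir / np.linalg.norm(refusal_dir)       # unit refusal direction
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    W_proj = W - np.outer(W @ r, r)                     # remove component along r
    new_norms = np.linalg.norm(W_proj, axis=1, keepdims=True)
    new_norms = np.clip(new_norms, 1e-8, None)          # guard against zero rows
    return W_proj * (orig_norms / new_norms)            # restore original row norms

# Toy demonstration with random data
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
d = rng.normal(size=16)
W_new = abliterate_weight(W, d)
```

After the edit, every row of `W_new` is orthogonal to the refusal direction while keeping its original Euclidean norm, which is the sense in which the edit avoids degrading the model's overall capabilities.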
- Developed by: YanLabs
- Model type: Causal Language Model (Transformer)
- License: apache-2.0
- Base model: Qwen/Qwen3-4B-Instruct-2507
Model Sources
- Base Model: Qwen/Qwen3-4B-Instruct-2507
- Abliteration Tool: jim-plus/llm-abliteration
- Paper: Norm-Preserving Biprojected Abliteration
Uses
Intended Use
- Research: Mechanistic interpretability studies
- Analysis: Understanding LLM safety mechanisms
- Development: Testing abliteration techniques
Out-of-Scope Use
- ❌ Production deployments
- ❌ User-facing applications
- ❌ Generating harmful content for malicious purposes
Limitations
- Abliteration does not guarantee complete removal of all refusals
- May generate unsafe or harmful content
- Model behavior may be unpredictable in edge cases
- No explicit harm prevention mechanisms remain
Citation
If you use this model in your research, please cite:
@misc{Qwen3-4B-Instruct-2507-MPOA,
  author       = {YanLabs},
  title        = {Qwen3-4B-Instruct-2507-MPOA},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/YanLabs/Qwen3-4B-Instruct-2507-MPOA}},
  note         = {Abliterated using the norm-preserving biprojected technique}
}