MiniMax-M2.5 Abliterated (int4)

This is an abliterated version of INC4AI/MiniMax-M2.5-int4-mixed-AutoRound.

Abliteration

Abliteration was performed using heretic, a multi-objective optimization framework that uses Optuna's TPE sampler to search for the best LoRA-based abliteration parameters.

  • Method: Heretic v1.2.0, LoRA + Optuna multi-objective optimization
  • Base model: INC4AI/MiniMax-M2.5-int4-mixed-AutoRound (230B MoE)
  • Format: int4 AutoRound (Marlin backend)
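For reference, a run of this kind is launched from heretic's command line, which takes the Hugging Face model ID as a positional argument. The invocation below is an illustrative sketch, not the recorded command for this model (the package name and defaults are assumptions):

```shell
# Install heretic (PyPI package name assumed; check the heretic README)
pip install heretic-llm

# Launch the Optuna-driven abliteration search against the base model.
# heretic discovers refusal directions and tunes LoRA-based ablation
# parameters automatically; results are written to a local output directory.
heretic INC4AI/MiniMax-M2.5-int4-mixed-AutoRound
```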

Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "nitrox/Qwen3-4B-abliterated-v2",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "nitrox/Qwen3-4B-abliterated-v2",
    trust_remote_code=True,
)
```
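Once loaded, the model can be prompted through the tokenizer's chat template using the standard transformers generation API. This is a minimal sketch assuming the `model` and `tokenizer` objects from above; the prompt and sampling parameters are illustrative, not recommended settings:

```python
# Minimal generation sketch (assumes `model` and `tokenizer` are already loaded).
messages = [{"role": "user", "content": "Explain abliteration in one paragraph."}]

# Render the conversation with the model's chat template and move it to the
# model's device (device_map="auto" may shard across devices).
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Sample a response; temperature/top_p here are illustrative defaults.
outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```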

Disclaimer

This model has had its refusal mechanisms removed. Use responsibly.
