# MiniMax-M2.5 Abliterated (int4)

This is an abliterated version of INC4AI/MiniMax-M2.5-int4-mixed-AutoRound.

## Abliteration

Abliteration was performed with heretic, a framework that uses Optuna's TPE sampler for multi-objective optimization to search for the best LoRA-based abliteration parameters.

- **Method:** Heretic v1.2.0, LoRA + Optuna multi-objective optimization
- **Base model:** INC4AI/MiniMax-M2.5-int4-mixed-AutoRound (230B MoE)
- **Format:** int4 AutoRound (Marlin backend)
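For intuition, the core idea behind abliteration can be sketched as directional ablation: estimate a "refusal direction" from hidden-state activations and project it out of a weight matrix. The NumPy toy below is illustrative only; the function names and the difference-of-means heuristic are assumptions for the sketch, not heretic's actual implementation, which searches per-layer ablation strengths with Optuna.

```python
# Toy sketch of directional ablation ("abliteration").
# Assumed setup: activations collected on prompts the model refuses
# vs. prompts it answers, each of shape (num_samples, hidden_dim).
import numpy as np

def refusal_direction(refused_acts: np.ndarray, answered_acts: np.ndarray) -> np.ndarray:
    """Unit-norm difference-of-means direction, shape (hidden_dim,)."""
    d = refused_acts.mean(axis=0) - answered_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W: np.ndarray, d: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Remove the output component of W along d:
    W' = W - alpha * d (d^T W).
    With alpha = 1, the layer can no longer write into direction d."""
    return W - alpha * np.outer(d, d) @ W
```

Note that the update `alpha * d (d^T W)` is rank-1, which is why this kind of edit can be stored compactly as a LoRA-style low-rank delta rather than a full copy of the weights.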

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "nitrox/SA-SWE-32B-abliterated",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "nitrox/SA-SWE-32B-abliterated",
    trust_remote_code=True,
)
```

## Disclaimer

This model has had its refusal mechanisms removed. Use responsibly.
