---
license: apache-2.0
tags:
  - abliteration
  - uncensored
  - minimax
  - moe
  - int4
base_model: INC4AI/MiniMax-M2.5-int4-mixed-AutoRound
---

# MiniMax-M2.5 Abliterated (int4)

This is an abliterated version of [INC4AI/MiniMax-M2.5-int4-mixed-AutoRound](https://huggingface.co/INC4AI/MiniMax-M2.5-int4-mixed-AutoRound).

## Abliteration

Abliteration was performed with [heretic](https://github.com/p-e-w/heretic), a framework that uses Optuna's TPE sampler for multi-objective optimization, searching for the LoRA-based abliteration parameters that best suppress refusals while preserving the original model's behavior.

- **Method:** Heretic v1.2.0, LoRA + Optuna multi-objective optimization
- **Base model:** INC4AI/MiniMax-M2.5-int4-mixed-AutoRound (230B MoE)
- **Format:** int4 AutoRound (Marlin backend)
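The optimizer searches over *how much* and *where* to ablate; the core operation being tuned is directional ablation, i.e. projecting a "refusal direction" out of the model's weights. A toy numpy sketch of that idea (synthetic data, not heretic's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for hidden-state activations collected on
# "harmful" vs. "harmless" prompts (real ones come from the model)
harmful = rng.normal(size=(100, 16)) + 2.0  # shifted cluster
harmless = rng.normal(size=(100, 16))

# Refusal direction: difference of means, normalized to unit length
r = harmful.mean(axis=0) - harmless.mean(axis=0)
r /= np.linalg.norm(r)

# Ablate: remove the refusal direction from a weight matrix's
# output space, W' = (I - r r^T) W
W = rng.normal(size=(16, 16))
W_abl = W - np.outer(r, r) @ W

# After ablation, no input can produce output along r
x = rng.normal(size=16)
print(abs(r @ (W_abl @ x)))  # ~0 (up to float error)
```

Heretic's contribution is automating the search over per-layer ablation strengths rather than applying one projection uniformly.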

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nitrox/MiniMax-M2.5-abliterated-int4"  # use this repository's ID

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```

## Disclaimer

This model has had its refusal mechanisms removed and will respond to prompts the base model would decline. Use responsibly.