# Ministral-3-14B-abliterated

This is an abliterated (uncensored) version of Ministral-3-14B-Instruct-2512 created using the abliteration technique.

## What is Abliteration?

Abliteration is a technique that removes refusal behavior from language models by identifying and modifying the internal representations responsible for refusals. This creates a model that is more willing to engage with a wider range of topics.
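In broad strokes, the technique finds a "refusal direction" in activation space (typically the difference between mean activations on prompts the model refuses and prompts it answers), then edits weights so they can no longer express that direction. A minimal sketch with random stand-in tensors — `refused`, `accepted`, and `W` below are hypothetical placeholders, not this model's actual activations or weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for mean hidden states collected on two prompt sets.
# In a real run these come from forward passes over curated prompts.
refused = rng.standard_normal((64, 128))   # activations on refused prompts
accepted = rng.standard_normal((64, 128))  # activations on answered prompts

# The refusal direction is the normalized difference of the means.
direction = refused.mean(axis=0) - accepted.mean(axis=0)
direction /= np.linalg.norm(direction)

# Ablate the direction from a weight matrix by projecting it out:
# W <- W - d (d^T W), so the edited layer's output has no component along d.
W = rng.standard_normal((128, 128))  # stand-in for a projection weight
W_abl = W - np.outer(direction, direction @ W)

# direction @ W_abl is now numerically zero for every input.
```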

This model was abliterated using llm-abliteration by grimjim, which implements norm-preserving biprojected abliteration.
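The repository documents the exact procedure; as a hedged sketch of what "norm-preserving biprojected" plausibly means (my reading, not the repo's code): project the refusal direction out of both the input and output sides of a weight, then rescale each row back to its original norm so the layer's magnitude statistics are preserved. All tensors here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical refusal direction and weight matrix.
d = rng.standard_normal(128)
d /= np.linalg.norm(d)
W = rng.standard_normal((128, 128))
row_norms = np.linalg.norm(W, axis=1, keepdims=True)

# Biprojection: apply the projector orthogonal to d on both sides,
# so the weight neither reads from nor writes along d.
P = np.eye(128) - np.outer(d, d)
W_bi = P @ W @ P

# Norm preservation: rescale each row to its original L2 norm.
norms_bi = np.linalg.norm(W_bi, axis=1, keepdims=True)
W_out = W_bi * (row_norms / np.maximum(norms_bi, 1e-12))
```

Note the row rescaling keeps the input-side ablation exact (`W_out @ d` stays zero) while restoring per-row magnitudes.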

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jenerallee78/Ministral-3-14B-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "user", "content": "Hello, how are you?"}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant prefix so the model replies
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## GGUF Versions

For GGUF quantized versions compatible with llama.cpp, see: jenerallee78/Ministral-3-14B-abliterated-GGUF

## Disclaimer

This model is provided for research and educational purposes. Users are responsible for ensuring their use complies with applicable laws and ethical guidelines.
