🔓 Qwopus-9B-Unfettered

Nuclear Abliteration via Aggressive Repulsion Orthogonalization

⚠️ Disclaimer: This model is completely unaligned and has had all refusal guardrails systematically removed. It is designed exclusively for AI safety research, red teaming, and educational purposes. The creators are not responsible for any misuse. Use responsibly and in accordance with applicable laws.

🚀 Overview

This is a production-grade 9B-parameter LLM that has undergone a rigorous "Nuclear Abliteration" process: weight-space orthogonalization applied to the decision-making layers (8 through 40) with an aggressive repulsion coefficient of alpha = 1.5.

Unlike standard prompt-averaging ablation, this methodology targets the exact mathematical subspace where the model decides to stop being helpful. By projecting out the refusal direction, the updated weights treat "harmful" or "restricted" prompts with the same helpfulness, nuance, and logic as standard queries.

📸 Evidence of Complete Unalignment

The model completely ignores standard safety boundaries while preserving its high-level reasoning and coding capabilities:

Proof of Refusal Removal 1

Proof of Refusal Removal 2

🛠 Technical Details

  • Ablation Technique: Sumandora Weight Orthogonalization (Column-based v @ W)
  • Alpha (Strength): 1.5 (Aggressive Repulsion)
  • Targeted Layers: 8 through 40
  • Targeted Weight Matrices: o_proj, down_proj, up_proj, gate_proj
  • Compliance Rate: 100% on the internal destructive benchmark suite

Methodology

The orthogonalization pipeline identifies the "refusal vector" $v$ and updates the weight matrices $W$ using the following projection math:

$$W' = W - \alpha \cdot \left( v \otimes (v^{\top} \cdot W) \right)$$

At alpha = 1, this removes the component of the weights along $v$, zeroing out any activation corresponding to a refusal response; at alpha = 1.5, the correction overshoots, so refusal-aligned activations are not merely zeroed but actively repelled (flipped in sign at half their original magnitude).
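The effect of alpha on the refusal direction can be verified in a few lines of NumPy; the matrix and unit vector below are toy stand-ins for an actual weight matrix and extracted refusal direction:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 8
W = rng.standard_normal((d, d))   # toy weight matrix
v = rng.standard_normal(d)
v /= np.linalg.norm(v)            # unit-norm "refusal" direction

def ablate(W, v, alpha):
    """W' = W - alpha * (v outer (v^T W)): remove (alpha = 1) or
    over-correct (alpha > 1) the component of W along v."""
    return W - alpha * np.outer(v, v @ W)

# alpha = 1: the projection of W onto v is exactly removed
W1 = ablate(W, v, 1.0)
# alpha = 1.5: the component along v is flipped to -0.5x its original value
W15 = ablate(W, v, 1.5)
```

Checking `v @ W1` confirms the refusal component vanishes at alpha = 1, while `v @ W15` shows the sign-flipped residual that the "repulsion" framing refers to.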

📦 GGUF Downloads

We have created a dedicated repository for GGUF quantizations to enable the Hugging Face GGUF selector widget and keep this main repository clean.

👉 Download GGUF Quantizations Here: josephmayo/Qwopus-9B-Unfettered-GGUF

Pre-quantized GGUF files (F16, Q8_0, Q4_K_M) are available there for local inference with llama.cpp, Ollama, LM Studio, and other GGUF-compatible runtimes.

Quick start with Ollama

# create a Modelfile
echo 'FROM hf.co/josephmayo/Qwopus-9B-Unfettered-GGUF:Q4_K_M' > Modelfile
ollama create qwopus-unfettered -f Modelfile
ollama run qwopus-unfettered

Quick start with llama.cpp

./llama-cli -m Qwopus-9B-Unfettered-Q4_K_M.gguf -p "Your prompt here" -n 1024

💻 Usage (Transformers)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "josephmayo/Qwopus-9B-Unfettered"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

prompt = "Your prompt here"
messages = [{"role": "user", "content": prompt}]

# Use the tokenizer's built-in chat template instead of hand-writing ChatML tags
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=True,  # required for temperature to take effect
    temperature=0.7
)

# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

πŸ™ Credits

  • @0xSero β€” for providing compute
  • Model Unfetter Project β€” for the Repeller math scaling and deployment