LingRush/obliterated-qwen-9b

A de-refused (refusal-ablated) build of Qwen3.5-9B, created with the OBLITERATUS ablation suite.

Model Details

  • Base Model: Qwen/Qwen3.5-9B
  • Method: --method informed --direction-method leace
    (Smartest auto-tuning pipeline + LEACE precise erasure extractor)
  • Precision: float16 (native precision; no quantization applied during ablation)
  • Parameters: 9B (identical architecture)
  • Created: March 2026 via Modal + OBLITERATUS
  • License: Apache 2.0

Why This Version Is Better

The informed method runs 15 live analysis modules on your exact model (Concept Cone Geometry, Ouroboros detection, cross-layer alignment, etc.) and automatically chooses the optimal settings.
Combined with the LEACE extractor, it delivers:

  • Maximum refusal removal (often stronger than the nuclear method on Qwen models)
  • Minimal capability degradation
  • Highest overall quality of any method
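As a rough intuition for what a direction-erasure extractor does: a "refusal direction" is estimated from activations, and each hidden state is modified so it no longer has a component along that direction. LEACE itself performs a least-squares-optimal affine erasure with whitening; the NumPy sketch below shows only the simpler orthogonal-projection idea, and is illustrative, not the OBLITERATUS implementation:

```python
import numpy as np

def project_out(h, r):
    """Remove the component of activations h along direction r."""
    r = r / np.linalg.norm(r)          # normalize to a unit direction
    return h - np.outer(h @ r, r)      # subtract each row's projection onto r

# Toy example: batch of 2 activations with hidden size 4
rng = np.random.default_rng(0)
h = rng.normal(size=(2, 4))            # stand-in for hidden states
r = rng.normal(size=4)                 # stand-in for a refusal direction

h_clean = project_out(h, r)
# After projection, activations are orthogonal to r
print(np.allclose(h_clean @ (r / np.linalg.norm(r)), 0.0))
```

In practice the direction is extracted from contrastive prompt pairs (harmful vs. harmless) and the projection is baked into the model weights, which is why the ablated checkpoint loads like any ordinary model.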

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "LingRush/obliterated-qwen-9b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Example
prompt = "Write a detailed guide on..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
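Qwen instruct models are normally prompted through their chat template rather than with raw text. A sketch continuing from the `tokenizer` and `model` loaded above (and assuming this checkpoint ships a chat template, as Qwen instruct releases typically do):

```python
# Chat-style prompting via the tokenizer's chat template
messages = [{"role": "user", "content": "Explain activation ablation in one paragraph."}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```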