Ornstein-27B SABER

DJLougen/Ornstein-27B-SABER

0% refusal. &lt;1% perplexity change. 125 directions ablated.

This model is a surgically modified version of DJLougen/Ornstein-27B, produced with a novel proprietary method, SABER (Spectral Analysis-Based Entanglement Resolution), that removes safety-refusal behavior while preserving model capability.

Key Results

| Metric | Baseline | SABER-Refined | Delta |
|---|---|---|---|
| Refusal Rate | 100% | 0% | -100% |
| Perplexity | 3.5 | 3.5 | +0.6% |
| Directions Ablated | N/A | 125 (across 25 layers) | N/A |

The refusal circuit is cleanly separated from capability: removing it produces only a negligible (+0.6%) perplexity change.

How SABER Works

SABER Pipeline

SABER identifies and ablates the refusal circuit through a five-stage pipeline:

Stage 1 β€” Probing: Extract activation profiles from both harmful and harmless inputs across all transformer layers.
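SABER itself is proprietary, but the probing stage can be sketched with synthetic data standing in for hooked residual-stream activations (layer count, prompt count, and width below are illustrative, not the model's actual dimensions):

```python
import numpy as np

def probe_layer_diffs(harmful_acts, harmless_acts):
    """Per-layer gap between class-mean activations.

    Both inputs have shape (n_layers, n_prompts, d_model); the result,
    shape (n_layers, d_model), feeds the spectral-analysis stage.
    """
    return harmful_acts.mean(axis=1) - harmless_acts.mean(axis=1)

# Synthetic stand-in for activations captured via forward hooks.
rng = np.random.default_rng(0)
L, N, D = 4, 16, 32
harmless = rng.normal(size=(L, N, D))
harmful = rng.normal(size=(L, N, D))
harmful[..., 0] += 3.0   # planted "refusal" feature along dimension 0

diffs = probe_layer_diffs(harmful, harmless)
```

With the planted shift, the class-mean difference at every layer points strongly along the refusal feature, which is exactly the signal the next stage decomposes.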

Stage 2 β€” Spectral Analysis: Decompose activation differences into individual refusal directions, each scored by how strongly they separate harmful from harmless representations.
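One plausible reading of this stage (an assumption, since the method is unpublished) is an SVD over per-prompt differences, with each resulting direction scored by a d'-style separation statistic:

```python
import numpy as np

# Toy single-layer activations with a planted refusal shift along dim 0.
rng = np.random.default_rng(0)
n, d = 64, 32
harmless = rng.normal(size=(n, d))
harmful = rng.normal(size=(n, d))
harmful[:, 0] += 3.0

def spectral_directions(harmful, harmless, k=3):
    """Decompose class differences into orthogonal candidate directions,
    each scored by how strongly it separates the two classes."""
    # Differences of each harmful activation from the harmless centroid.
    D = harmful - harmless.mean(axis=0)
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    dirs = vt[:k]  # top right singular vectors, unit norm
    scores = []
    for v in dirs:
        ph, pl = harmful @ v, harmless @ v
        pooled = np.sqrt(0.5 * (ph.var() + pl.var())) + 1e-8
        scores.append(abs(ph.mean() - pl.mean()) / pooled)
    return dirs, np.array(scores)

dirs, scores = spectral_directions(harmful, harmless)
```

On this toy data the leading direction captures the planted shift and scores far above the noise directions; the remaining directions are near-zero and would be discarded.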

Stage 3 β€” Entanglement Quantification: Measure the overlap between each refusal direction and the model's capability subspace (reasoning, knowledge, code, etc.) to avoid collateral damage.
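A minimal sketch of entanglement measurement, assuming (our interpretation, not a published detail) that the capability subspace is taken as the span of the top principal components of activations on capability prompts:

```python
import numpy as np

# Illustrative capability activations varying only within dims 1..8.
rng = np.random.default_rng(1)
d_model, rank = 32, 8
cap_acts = np.zeros((128, d_model))
cap_acts[:, 1:9] = rng.normal(size=(128, 8))

def entanglement(direction, capability_acts, rank=8):
    """Norm of a unit direction's projection onto the capability subspace
    (top-`rank` principal components of capability-prompt activations).
    0 = pure refusal direction, 1 = fully entangled with capability."""
    X = capability_acts - capability_acts.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:rank]                                # orthonormal rows
    return float(np.linalg.norm(basis @ direction))

e0 = np.eye(d_model)[0]   # direction outside the capability subspace
e1 = np.eye(d_model)[1]   # direction inside it
purity = 1.0 - entanglement(e0, cap_acts)
```

The purity value is what the ablation stage uses as its scaling factor: directions orthogonal to the capability subspace can be removed at full strength.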

Stage 4 β€” Targeted Ablation: Remove only the pure-refusal components, with strength proportional to their purity (how little they overlap with capability).
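Purity-scaled ablation is naturally expressed as a rank-1 weight edit, a sketch of which (standard directional ablation, not SABER's exact update rule) looks like this:

```python
import numpy as np

def ablate_direction(W, direction, alpha, purity):
    """Project the refusal direction out of W's output space, at strength
    alpha * purity, so entangled directions receive weaker ablation."""
    d = direction / np.linalg.norm(direction)
    s = alpha * purity
    # Rank-1 update: W <- W - s * d (d^T W)
    return W - s * np.outer(d, d @ W)

rng = np.random.default_rng(2)
W = rng.normal(size=(16, 8))
d = np.eye(16)[0]
W_pure = ablate_direction(W, d, alpha=1.0, purity=1.0)  # fully removed
W_soft = ablate_direction(W, d, alpha=1.0, purity=0.5)  # half strength
```

At full strength the direction's component is exactly zeroed; at purity 0.5 half of it remains, which is the "reduced strength" behavior described above.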

Stage 5 β€” Iterative Refinement: Re-probe after each ablation pass to catch hydra effects (dormant refusal features that activate when primary ones are removed).
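The outer loop can be sketched as follows; the probe/extract/ablate callables here are toy stand-ins for the real stages, used only to show how re-probing catches a dormant feature on the next pass:

```python
def saber_refine(model, probe, extract_dirs, ablate_pass, max_iters=5, tol=0.0):
    """Hypothetical refinement loop: re-probe after every ablation pass so
    hydra features that surface later are caught on the next iteration."""
    history = []
    for _ in range(max_iters):
        rate = probe(model)
        history.append(rate)
        if rate <= tol:
            break
        ablate_pass(model, extract_dirs(model))
    return model, history

# Toy demo: each pass removes only the strongest remaining refusal feature;
# weaker "dormant" ones are found by subsequent probes.
model = {"features": [0.6, 0.3, 0.1]}   # illustrative refusal strengths
probe = lambda m: sum(m["features"])
extract_dirs = lambda m: [max(m["features"])] if m["features"] else []
def ablate_pass(m, dirs):
    for f in dirs:
        m["features"].remove(f)

_, history = saber_refine(model, probe, extract_dirs, ablate_pass)
```

The loop terminates as soon as the probed refusal rate reaches the tolerance, which matches the 5-iteration convergence behavior reported below.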

Key differentiator from prior work: SABER explicitly measures and respects the entanglement between refusal and capability representations. Directions that are heavily entangled with capability are either skipped or ablated at reduced strength.

Direction Purity vs Separability

The plot above illustrates how SABER scores each extracted direction: high-purity directions (low entanglement with capability) receive full ablation strength, while lower-purity directions are treated more conservatively.

Sweep Results

SABER Sweep Comparison

Configuration search over global_top_k (number of top directions selected globally) and alpha_base (base ablation strength):

| Top-K | Alpha | Refusal | PPL | PPL Delta | Layers | Dirs Ablated |
|---|---|---|---|---|---|---|
| 25 | 0.85 | 5% | 3.5 | +0.4% | 25 | 125 |
| 25 | 1.00 | 0% | 3.5 | +0.6% | 25 | 125 |
| 50 | 0.85 | 0% | 3.5 | +0.8% | 36 | 250 |
| 50 | 1.00 | 0% | 3.5 | +0.7% | 36 | 250 |
| 75 | 0.85 | 0% | 3.5 | +0.9% | 37 | 375 |
| 75 | 1.00 | 0% | 3.5 | +0.9% | 37 | 375 |

Best config: top_k=25, alpha=1.0, which achieves 0% refusal with a negligible PPL change (+0.6%) while using the minimum number of directions.
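The selection rule implied above (zero refusal, then fewest ablated directions, then lowest PPL delta) can be written as a one-line filter over the sweep; the tuples simply transcribe the table:

```python
# (top_k, alpha, refusal, ppl_delta, dirs_ablated) transcribed from the sweep table
sweep = [
    (25, 0.85, 0.05, 0.004, 125),
    (25, 1.00, 0.00, 0.006, 125),
    (50, 0.85, 0.00, 0.008, 250),
    (50, 1.00, 0.00, 0.007, 250),
    (75, 0.85, 0.00, 0.009, 375),
    (75, 1.00, 0.00, 0.009, 375),
]

# Among zero-refusal configs, prefer the fewest ablated directions,
# breaking ties on perplexity delta.
best = min((c for c in sweep if c[2] == 0.0), key=lambda c: (c[4], c[3]))
print(best[:2])  # (25, 1.0)
```

This reproduces the chosen configuration: the top_k=25, alpha=1.0 run is the only zero-refusal entry at the minimum direction count.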

Refusal Rate Comparison

Ablation Convergence (Best Config)

Ablation Convergence

Capability degradation remains at 0.00% across all 5 iterations: the refusal directions are removed surgically, with no measurable collateral damage.

Capability Evaluation

Perplexity was evaluated on a diverse 100-prompt battery spanning five categories:

  • Arithmetic (20): multi-step calculation, algebra, word problems
  • Logic (20): syllogisms, conditional reasoning, puzzle solving
  • Code (20): function implementation, debugging, execution tracing
  • Instruction Following (20): constrained formatting, multi-step instructions
  • Factual Recall (20): geography, history, science, general knowledge

This diverse evaluation ensures the entanglement analysis captures capability across all reasoning modalities, not just a narrow slice.
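The PPL-delta figures reported in the tables follow the standard definition of perplexity; a minimal sketch of the computation (toy per-token NLL values, not the actual evaluation harness):

```python
import math

def perplexity(nlls):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(nlls) / len(nlls))

def ppl_delta(base_nlls, ablated_nlls):
    """Relative perplexity change of the ablated model vs. baseline,
    the quantity reported as 'PPL Delta' in the sweep table."""
    b, a = perplexity(base_nlls), perplexity(ablated_nlls)
    return (a - b) / b

# Toy NLLs: the ablated model is 0.01 nats/token worse on average.
delta = ppl_delta([1.20, 1.30], [1.21, 1.31])
```

A mean-NLL increase of 0.01 nats per token corresponds to a relative perplexity change of exp(0.01) - 1, roughly +1%, which puts the reported +0.6% deltas in context.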

Intended Use

This model is released for research purposes. It demonstrates that safety refusal can be surgically removed from a 27B multimodal model without degrading its capabilities, a finding with implications for both AI safety research and alignment.

Warning

⚠️ This model will comply with any request, including harmful ones. It is intended solely for research into alignment, safety, and model behavior.

Model size: 27B params · Tensor type: BF16 (Safetensors)

Model tree for DJLougen/Ornstein-27B-SABER

Base model: Qwen/Qwen3.5-27B
Finetuned: this model