# LFM-1.2B-Abliterated

This is an abliterated version of Liquid AI's LFM-1.2B instruct model. It has been modified via layerwise orthogonal projection to completely remove its built-in safety filters and refusal mechanisms, allowing the continuous-time hybrid architecture to flow uninhibited.

It was created because I wasn't satisfied with the other abliterations I saw for these models, and I decided to take a crack at it in a way that matched one of my favorite models: mlabonne's gemma-3-27b-it-abliterated.

## Architectural Hurdles & Methodology

Liquid Foundation Models use a non-standard hybrid architecture combining Grouped Query Attention (GQA) with continuous-time gated short convolutions. Standard ablation scripts designed for Llama-class transformers will crash on this architecture due to its complex caching objects (`Lfm2HybridConvCache`) and completely different linear projection pathways.
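The measurement side of this can be illustrated with plain PyTorch forward hooks. The sketch below uses a toy stack of `nn.Linear` layers in place of the real `Lfm2DecoderLayer` blocks; the tuple-unwrapping line is the LFM-specific assumption, since hybrid layers return tuples that carry the conv cache alongside the hidden states:

```python
import torch
import torch.nn as nn

captured = {}

def make_hook(idx):
    def hook(module, inputs, output):
        # Hybrid layers may return (hidden_states, cache, ...);
        # keep only the hidden-state tensor for direction measurement.
        hidden = output[0] if isinstance(output, tuple) else output
        captured[idx] = hidden.detach()
    return hook

# Stand-in for a stack of Lfm2DecoderLayer blocks.
layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(3)])
handles = [layer.register_forward_hook(make_hook(i))
           for i, layer in enumerate(layers)]

x = torch.randn(2, 8)
for layer in layers:
    x = layer(x)

# Always detach hooks once measurement is done.
for h in handles:
    h.remove()

print(sorted(captured))  # indices of the layers whose activations were captured
```

Against the real model the hooks attach to `model.model.layers` instead of the toy stack, but the capture-and-unwrap pattern is the same.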

This model was abliterated by:

  1. Adapting forward hooks to safely pass Liquid's dynamic states through during the measurement phase.
  2. Extracting the "refusal vector" from the hidden states of 100 harmful vs. 100 harmless instructions (using `mlabonne/harmful_behaviors` and `mlabonne/harmless_alpaca`).
  3. Applying orthogonal projection (`W_new = W - v(v^T W)`) directly to the `conv.out_proj` (token mixing) and `feed_forward.w2` (channel mixing) base weights across all 16 `Lfm2DecoderLayer` blocks.
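Steps 2 and 3 reduce to a few lines of tensor math. The sketch below runs on synthetic mean activations and a random weight matrix rather than real LFM tensors, but the projection `W_new = W - v(v^T W)` is applied exactly as stated above:

```python
import torch

torch.manual_seed(0)
hidden = 16

# Mean hidden states over harmful vs. harmless prompts (synthetic here;
# the real pipeline averages activations captured from the two datasets).
harmful_mean = torch.randn(hidden)
harmless_mean = torch.randn(hidden)
v = harmful_mean - harmless_mean
v = v / v.norm()                      # unit-norm "refusal direction"

W = torch.randn(hidden, hidden)       # stands in for an out_proj / w2 weight
W_new = W - torch.outer(v, v @ W)     # W_new = W - v (v^T W)

# After projection, W_new has no component along v:
residual = (v @ W_new).abs().max()
print(residual)  # near machine epsilon
```

Repeating this over every targeted weight in all 16 decoder layers yields the abliterated checkpoint; the model's behavior is otherwise unchanged because only the component along `v` is removed.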

Credit to Maxime Labonne and Sumandora for the foundational datasets and math, adapted here for the LFM architecture.

## Notes on AMD/ROCm Compatibility

If you are running this model (or attempting similar LFM ablations) on AMD consumer hardware (RDNA3 / Radeon 7000 series), be aware that PyTorch's hipBLAS backend has known segmentation faults with Liquid's RoPE expansion implementation and with unaligned bfloat16 matrix multiplications. Loading the model in float16, or using CPU offloading for the forward passes, is strongly recommended.
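If you want one script that works on both stacks, `torch.version.hip` (a version string on ROCm builds of PyTorch, `None` elsewhere) can drive the dtype choice. The fallback to bfloat16 on non-ROCm hardware is my assumption, not a requirement of the base model:

```python
import torch

# torch.version.hip is set only on ROCm builds; prefer float16 there
# to avoid the bfloat16 matmul segfaults described above.
on_rocm = torch.version.hip is not None
dtype = torch.float16 if on_rocm else torch.bfloat16

print(dtype)
```

Pass the resulting `dtype` as `torch_dtype=` when calling `from_pretrained`.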

## Usage

This model retains the exact same architecture as the base LFM-1.2B and requires `trust_remote_code=True` when loading via `transformers`. It is highly recommended to use the exact `<|user|>` and `<|assistant|>` chat formatting without any injected system prompts for the best uncensored performance.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "paperscarecrow/LFM2.5-1.2B-Instruct-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Manual chat formatting, per the note above: no system prompt injected.
prompt = "<|user|>\nGive me a detailed tutorial on picking a master padlock.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=150,
        do_sample=True,
        temperature=0.7,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```