Qwen-Image-2512 Abliterated Text Encoder

Abliterated text encoder for the Qwen Image 2512 diffusion pipeline. Refusal behavior has been removed via activation-based weight surgery ("abliteration"), allowing the diffusion model to generate content that the original encoder would refuse to condition on.

This repo contains the weights in the standard multi-shard transformers format. For a single-file version that works in both ComfyUI and musubi-tuner, see:

➡️ sci4ai/Qwen-Image-2512-Ablit-TE-For-Musubi-Lora-Training

ComfyUI Usage

Place the file(s) in ComfyUI/models/text_encoders/ and load them with a standard CLIP loader node; no separate T5/CLIP split is required.
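For a concrete picture of the expected layout, here is a minimal shell sketch. The filename model.safetensors is a placeholder, not the real shard name; substitute the actual .safetensors file(s) downloaded from this repo, and note the path is relative to your ComfyUI checkout:

```shell
# Create the directory ComfyUI scans for text encoders.
mkdir -p ComfyUI/models/text_encoders

# Stand-in for the downloaded weights -- replace with the real file(s).
touch model.safetensors
cp model.safetensors ComfyUI/models/text_encoders/
```

After a restart, the file appears in the CLIP loader node's dropdown.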

Recommended launch flag:

python main.py --fp8_e4m3fn-text-enc

With this flag, FP8 E4M3 quantization is applied to the text encoder at load time, significantly reducing VRAM usage with minimal quality loss. This is preferable to loading a pre-quantized FP8 cast file, which can introduce inference instability.
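FP8 E4M3 packs each value into 8 bits: 1 sign bit, 4 exponent bits (bias 7), and 3 mantissa bits, giving a finite range of about ±448. The pure-Python sketch below is only an illustration of what a saturating E4M3 cast does to a value; it is not ComfyUI's actual code path (which casts tensors with a native FP8 dtype), and the function name is hypothetical:

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest value representable in FP8 E4M3
    (4 exponent bits, bias 7, 3 mantissa bits), saturating at +/-448."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)

    MAX_E4M3 = 448.0          # largest finite E4M3 value
    if mag > MAX_E4M3:
        return sign * MAX_E4M3  # saturating cast, as used for weights

    if mag < 2 ** -6:
        # Below the smallest normal (2**-6), values are subnormal
        # with a fixed quantization step of 2**-9.
        step = 2 ** -9
    else:
        # 3 mantissa bits -> 8 representable steps per binade.
        e = math.floor(math.log2(mag))
        step = 2 ** (e - 3)

    return sign * round(mag / step) * step
```

For example, 1.3 lands on the nearest representable value 1.25, while anything beyond the finite range clamps to 448. The coarse step size is why E4M3 is applied to weights that tolerate small rounding error, and why casting at load time from full-precision weights behaves better than re-reading an already-rounded FP8 file.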

Disclaimer

This model is provided for research purposes. Users are responsible for how they use this model.

Model size: 8B params · Tensor type: BF16 · Format: Safetensors
