---
license: apache-2.0
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- abliterated
- text-encoder
- qwen2.5-vl
- comfyui
---
# Qwen-Image-2512 Abliterated Text Encoder
Abliterated text encoder for the Qwen Image 2512 diffusion pipeline. Refusal behavior has been removed via activation-based weight surgery (abliteration), allowing the diffusion model to generate content the original encoder would refuse to condition on.
This repo contains the weights in standard multi-shard transformers format. For a single-file version that works in both ComfyUI and musubi-tuner, see:
➡️ sci4ai/Qwen-Image-2512-Ablit-TE-For-Musubi-Lora-Training
## ComfyUI Usage
Place the file(s) in `ComfyUI/models/text_encoders/` and load them with a standard CLIP loader node — no separate T5/CLIP split is required.
Recommended launch flag:

```shell
python main.py --fp8_e4m3_text_encoder
```
With this flag, FP8 E4M3 quantization is applied to the text encoder at load time, significantly reducing VRAM usage with minimal quality loss. This is preferable to loading a pre-quantized FP8 cast file, which can introduce inference instability.
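To make the FP8 E4M3 format concrete, here is a minimal pure-Python sketch of rounding a single value to the nearest E4M3-representable number. This is an illustration only, not ComfyUI's implementation — ComfyUI performs the equivalent cast on whole weight tensors at load time.

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest FP8 E4M3 (fn variant) value:
    1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits,
    largest representable normal value 448."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    a = min(abs(x), 448.0)          # saturate at the max normal value
    e = math.floor(math.log2(a))
    e = max(min(e, 8), -6)          # normal exponent range: 2^-6 .. 2^8
    step = 2.0 ** (e - 3)           # spacing within a binade (3 mantissa bits)
    if a < 2.0 ** -6:               # subnormal range: fixed spacing 2^-9
        step = 2.0 ** -9
    q = round(a / step) * step
    return sign * min(q, 448.0)
```

For example, `quantize_e4m3(0.3)` returns `0.3125`: with only 3 mantissa bits the values in the binade `[0.25, 0.5)` are spaced `1/32` apart, so `0.3` snaps to `10/32`. This coarse spacing is the source of the (small) quality loss the quantization trades for VRAM savings.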
## Related Models
- sci4ai/Qwen-Image-2512-Abliterated-Full — Full pipeline weights
## Disclaimer
This model is provided for research purposes only. Users are solely responsible for how they use it and for complying with applicable laws and platform policies.