---
license: apache-2.0
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- abliterated
- text-encoder
- qwen2.5-vl
- comfyui
---
# Qwen-Image-2512 Abliterated Text Encoder
Abliterated text encoder for the Qwen Image 2512 diffusion pipeline. Refusal behavior has been removed via activation-based weight surgery ("abliteration"), allowing the diffusion model to generate content the original encoder would refuse to condition on.
This repo contains the weights in the standard multi-shard `transformers` format. For a single-file version that works in both ComfyUI and musubi-tuner, see:
➡️ [sci4ai/Qwen-Image-2512-Ablit-TE-For-Musubi-Lora-Training](https://huggingface.co/sci4ai/Qwen-Image-2512-Ablit-TE-For-Musubi-Lora-Training)
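A multi-shard `transformers` checkpoint ships a `model.safetensors.index.json` that maps each tensor name to the shard file holding it. A minimal sketch of how such an index is read (the index contents below are illustrative, not this repo's actual tensor names or sizes):

```python
import json

# Example of the index structure transformers writes for sharded checkpoints.
# In a real checkout you would do: index = json.load(open("model.safetensors.index.json"))
index = {
    "metadata": {"total_size": 16_000_000_000},
    "weight_map": {
        "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
        "lm_head.weight": "model-00004-of-00004.safetensors",
    },
}

# Collect the distinct shard files a loader would need to open.
shards = sorted(set(index["weight_map"].values()))
```

`transformers` resolves and loads these shards automatically via `from_pretrained`; the index is only worth reading directly if you are inspecting or repacking the weights yourself.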
## ComfyUI Usage
Place the file(s) in `ComfyUI/models/text_encoders/` and load with a standard CLIP loader node — no T5/CLIP split required.
**Recommended launch flag:**
```bash
python main.py --fp8_e4m3_text_encoder
```
FP8 E4M3 quantization is applied to the text encoder at load time, significantly reducing VRAM usage with minimal quality loss. This is recommended over using a pre-quantized FP8 cast file, which can introduce inference instability.
## Related Models
- [sci4ai/Qwen-Image-2512-Abliterated-Full](https://huggingface.co/sci4ai/Qwen-Image-2512-Abliterated-Full) — Full pipeline weights
## Disclaimer
This model is provided for research purposes. Users are responsible for how they use this model.