---
license: apache-2.0
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- abliterated
- text-encoder
- qwen2.5-vl
- comfyui
---

# Qwen-Image-2512 Abliterated Text Encoder

Abliterated text encoder for the Qwen Image 2512 diffusion pipeline. Refusal behavior has been removed via activation-based weight surgery, allowing the diffusion model to generate content the original encoder would refuse to condition on.

This repo contains the weights in the standard multi-shard `transformers` format. For a single-file version that works in both ComfyUI and musubi-tuner, see:

➡️ [sci4ai/Qwen-Image-2512-Ablit-TE-For-Musubi-Lora-Training](https://huggingface.co/sci4ai/Qwen-Image-2512-Ablit-TE-For-Musubi-Lora-Training)
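
A multi-shard `transformers` checkpoint ships a `model.safetensors.index.json` that maps each tensor name to the shard file holding it; loaders read this map to find which files contain which weights. A minimal sketch of working with that map (the index below is synthetic for illustration; the shard names and tensor names in this repo will differ):

```python
import json

# Synthetic stand-in for a repo's model.safetensors.index.json
# (a real index lists every tensor in every shard).
index_json = """
{
  "metadata": {"total_size": 16000000000},
  "weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors"
  }
}
"""

def shards_for(index: dict, tensor_names: list[str]) -> set[str]:
    """Return the set of shard files needed to load the named tensors."""
    return {index["weight_map"][name] for name in tensor_names}

index = json.loads(index_json)
print(shards_for(index, ["model.embed_tokens.weight",
                         "model.layers.27.mlp.down_proj.weight"]))
```

Here both shards are needed; a query touching only early layers would resolve to a single file. This is why tools that consume single-file checkpoints (like the musubi-tuner link above) need a merged version instead.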

## ComfyUI Usage

Place the file(s) in `ComfyUI/models/text_encoders/` and load them with a standard CLIP loader node; no T5/CLIP split is required.

**Recommended launch flag:**

```bash
python main.py --fp8_e4m3fn-text-enc
```

FP8 E4M3 quantization is applied to the text encoder at load time, significantly reducing VRAM usage with minimal quality loss. This is recommended over using a pre-quantized FP8 cast file, which can introduce inference instability.
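
To make the precision trade-off concrete, here is a pure-Python sketch of round-to-nearest E4M3 quantization (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits, largest finite value 448). This illustrates the number format itself, not ComfyUI's internal implementation:

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest FP8 E4M3 value (1 sign, 4 exponent, 3 mantissa bits).

    E4M3 has no infinities: out-of-range values saturate at +/-448,
    so casting large activations to FP8 clips rather than overflows.
    """
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    a = abs(x)
    MAX_FINITE = 448.0       # largest finite E4M3 value
    MIN_NORMAL = 2.0 ** -6   # smallest normal number (exponent bias is 7)
    if a >= MAX_FINITE:
        return sign * MAX_FINITE  # saturate
    if a < MIN_NORMAL:
        # subnormal range: uniform steps of 2^-9
        return sign * round(a / 2.0 ** -9) * 2.0 ** -9
    exp = math.floor(math.log2(a))
    step = 2.0 ** (exp - 3)  # 3 mantissa bits -> 8 steps per power of two
    return sign * min(round(a / step) * step, MAX_FINITE)
```

For example, `quantize_e4m3(0.3)` returns `0.3125`, a relative error of roughly 4%; that coarse mantissa is where the small quality loss comes from, while the 1-byte storage is where the VRAM savings come from.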

## Related Models

- [sci4ai/Qwen-Image-2512-Abliterated-Full](https://huggingface.co/sci4ai/Qwen-Image-2512-Abliterated-Full) — Full pipeline weights

## Disclaimer

This model is provided for research purposes. Users are responsible for how they use this model.