Update model card with metadata, paper link, and sample usage
This PR improves the model card by:
- Adding the `text-to-image` pipeline tag.
- Adding the `diffusers` library name based on the provided code examples.
- Linking the model to the research paper: [Rethinking Global Text Conditioning in Diffusion Transformers](https://huggingface.co/papers/2602.09268).
- Adding a link to the official GitHub repository.
- Providing a sample usage code snippet for the FLUX.1-schnell implementation.
**README.md** CHANGED (`@@ -1,3 +1,76 @@`)

Removed:

```yaml
---
license: apache-2.0
---
```

Added:
```yaml
---
license: apache-2.0
pipeline_tag: text-to-image
library_name: diffusers
---
```

# Rethinking Global Text Conditioning in Diffusion Transformers

This repository contains the research artifacts and implementation for the paper [Rethinking Global Text Conditioning in Diffusion Transformers](https://huggingface.co/papers/2602.09268).

## Summary

Diffusion transformers typically incorporate textual information via attention layers and via a modulation mechanism driven by a pooled text embedding. This work asks whether modulation-based text conditioning is necessary and finds that, while the conventional use of the pooled embedding contributes little to performance, it can provide significant gains when repurposed as **guidance**.

This approach, called **Modulation Guidance**, enables controllable shifts toward desirable properties such as increased complexity, better aesthetics, and improved hand generation (see the conceptual sketch after this list). It is:
- **Training-free**: No additional training or fine-tuning is required.
- **Efficient**: It incurs negligible runtime overhead.
- **Versatile**: It applies to a range of models, including FLUX, SD3.5, HunyuanVideo, and COSMOS.
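
Conceptually, the method replaces the single pooled embedding that feeds the transformer's modulation layers with a guided combination of two pooled embeddings. The snippet below is a minimal sketch of that idea, not the repository's implementation: it assumes a classifier-free-guidance-style extrapolation between a "negative" and a "positive" pooled embedding with strength `w`, applied from a chosen layer onward (cf. `start_layer` in the sample usage below).

```python
import torch

def guided_pooled_embedding(
    pooled_pos: torch.Tensor,  # pooled CLIP embedding of the "positive" prompt
    pooled_neg: torch.Tensor,  # pooled CLIP embedding of the "negative" prompt
    w: float,                  # guidance strength; w = 1 recovers the positive prompt
) -> torch.Tensor:
    """CFG-style extrapolation between two pooled text embeddings (assumed form)."""
    # Move from the negative embedding toward, and for w > 1 past, the
    # positive one; the result stands in for the usual pooled embedding
    # in the modulation layers of the later transformer blocks.
    return pooled_neg + w * (pooled_pos - pooled_neg)
```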

For more information, please refer to the [official GitHub repository](https://github.com/quickjkee/modulation-guidance).

## Sample Usage (FLUX.1-schnell)

The following example demonstrates how to apply modulation guidance to enhance image complexity using the `diffusers` library. It requires the helper functions provided in the [official repository](https://github.com/quickjkee/modulation-guidance).

```python
import types
from functools import partial

import torch
from diffusers import FluxPipeline

# Assumes the 'models' directory from the official repo is on your Python path.
from models.flux_schnell import encode_prompt, forward_modulation_guidance

# 1. Load the model
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

# 2. Define hyperparameters for complexity guidance
prompt = "A wolf on a plain background"
prompt_positive = "Extremely complex, the highest quality"
prompt_negative = "Very simple, no details at all"
w = 3            # Guidance strength
start_layer = 5  # Transformer layer from which guidance is applied

# 3. Get pooled CLIP embeddings for both guidance prompts
clip_positive = encode_prompt(pipe=pipe, prompt=prompt_positive)
clip_negative = encode_prompt(pipe=pipe, prompt=prompt_negative)
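
# (Optional) A pooled CLIP embedding can also be obtained directly from the
# pipeline instead of the repo helper. Sketch only, assuming your diffusers
# version exposes FluxPipeline.encode_prompt with the usual return triple
# (prompt_embeds, pooled_prompt_embeds, text_ids):
#   _, clip_positive, _ = pipe.encode_prompt(
#       prompt=prompt_positive, prompt_2=prompt_positive, max_sequence_length=256
#   )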

# 4. Override the transformer's forward pass with modulation guidance
forward_fn = partial(
    forward_modulation_guidance,
    pooled_projections_1=clip_positive,
    pooled_projections_0=clip_negative,
    w=w,
    start_layer=start_layer,
)
pipe.transformer.forward = types.MethodType(forward_fn, pipe.transformer)

# 5. Run generation
seed = 0
image = pipe(
    [prompt],
    guidance_scale=0.0,
    num_inference_steps=4,
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(seed),
    output_type="pil",
).images[0]

image.save("output.png")
```
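
FLUX.1-schnell is a timestep-distilled model, which is why the example keeps `guidance_scale=0.0` and only four inference steps; the strength of modulation guidance is controlled separately through `w` and `start_layer` in the patched forward pass.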

## Citation

```bibtex
@article{starodubcev2025rethinking,
  title={Rethinking Global Text Conditioning in Diffusion Transformers},
  author={Starodubcev, Nikita and Pakhomov, Daniil and Wu, Zongze and Drobyshevskiy, Ilya and Liu, Yuchen and Wang, Zhonghao and Zhou, Yuqian and Lin, Zhe and Baranchuk, Dmitry},
  journal={arXiv preprint arXiv:2602.09268},
  year={2025}
}
```