<h1 align="center">🎭SRA  <br> Self-Representation Alignment for Diffusion Transformers<br>
</h1>

<p align="center">
  <a href="https://arxiv.org/abs/2505.02831">
    <img src="https://img.shields.io/badge/arXiv%20paper-2505.02831-b31b1b.svg" alt="arXiv Paper">
  </a>
  <a href="https://www.xiaohongshu.com/user/profile/60195f8f0000000001009cc6">
    <img src="https://img.shields.io/badge/Contact via Xiaohongshu(RedNote)-Dy Jiang-red" alt="RedNote Profile">
  </a>
</p>


<h3 align="center">[<a href="https://vvvvvjdy.github.io/sra/">project page</a>]&emsp;[<a href="https://arxiv.org/pdf/2505.02831">paper</a>]</h3>
<br>

![SiT+SRA samples](selected_samples.png)

### 💥1.News
- **[2025.05.06]** We have released the paper and code of SRA! 


### 🌟2.Highlights

- **The diffusion transformer itself provides representation guidance:** We hypothesize that the unique discriminative process of the diffusion transformer makes it possible to provide this guidance without introducing any extraneous representation component.

- **Self-Representation Alignment (SRA):** SRA aligns the latent representation output by an earlier layer of the diffusion transformer under higher noise with that output by a later layer under lower noise, achieving self-representation enhancement.

- **Improved performance:** SRA accelerates training and improves generation performance for both DiTs and SiTs.
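
The alignment mechanism above can be sketched in a few lines of PyTorch. This is an illustrative toy sketch, not the repository's implementation: the tiny model, the block indices, the flow-matching-style interpolation, and the stop-gradient teacher are all simplifying assumptions (the actual training also optimizes the standard diffusion objective alongside this alignment term).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDiT(nn.Module):
    """Toy stand-in for a diffusion transformer that exposes per-block latents."""
    def __init__(self, dim=16, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_blocks))

    def forward(self, x, t, out_block):
        h = x + t.view(-1, 1)          # crude time conditioning, for illustration
        for i, blk in enumerate(self.blocks):
            h = torch.tanh(blk(h))
            if i == out_block:         # read out the latent at the requested block
                return h
        return h

def sra_loss(model, x0, noise, t, t_max=0.2, block_s=1, block_t=3):
    # Student: latent from an earlier block at the (noisier) time t.
    x_t = (1 - t).view(-1, 1) * x0 + t.view(-1, 1) * noise
    z_student = model(x_t, t, out_block=block_s)
    # Dynamic interval: teacher time is drawn from [t - t_max, t), clamped at 0,
    # so the teacher always sees a less-noisy input.
    t_teacher = (t - torch.rand_like(t) * t_max).clamp(min=0.0)
    x_teacher = (1 - t_teacher).view(-1, 1) * x0 + t_teacher.view(-1, 1) * noise
    with torch.no_grad():              # teacher path only provides targets
        z_teacher = model(x_teacher, t_teacher, out_block=block_t)
    return F.mse_loss(z_student, z_teacher)

torch.manual_seed(0)
model = ToyDiT()
x0, noise = torch.randn(8, 16), torch.randn(8, 16)
t = torch.rand(8)
loss = sra_loss(model, x0, noise, t)
```

Here `block_s`/`block_t` play the role of the `--block-out-s`/`--block-out-t` training flags, and `t_max` the role of `--t-max`.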

### 🏡3.Environment Setup

```bash
conda create -n sra python=3.12 -y
conda activate sra
pip install -r requirements.txt
```

### 📜4.Dataset Preparation


Currently, we provide experiments for [ImageNet](https://www.kaggle.com/competitions/imagenet-object-localization-challenge/data). Place the data wherever you like and point to it via the `--data-dir` argument in the training scripts. \
Note that we preprocess the data for faster training. Please refer to the [preprocessing guide](https://github.com/vvvvvjdy/SRA/tree/main/preprocessing) for detailed guidance.

### 🔥5.Training
Here we provide the training code for SiTs and DiTs.

##### 5.1.Training with SiT + SRA
```bash
cd SiT-SRA
accelerate launch --config_file configs/default.yaml train.py \
  --mixed-precision="fp16" \
  --seed=0 \
  --path-type="linear" \
  --prediction="v" \
  --resolution=256 \
  --batch-size=32 \
  --weighting="uniform" \
  --model="SiT-XL/2" \
  --block-out-s=8 \
  --block-out-t=20 \
  --t-max=0.2 \
  --output-dir="exps" \
  --exp-name="sitxl-ab820-t0.2-res256" \
  --data-dir=[YOUR_DATA_PATH]
```

This script will automatically create a folder under `exps` to save logs, samples, and checkpoints. You can adjust the following options:

- `--model`: Choose from [SiT-B/2, SiT-L/2, SiT-XL/2]
- `--block-out-s`: Student's output block layer for alignment
- `--block-out-t`: Teacher's output block layer for alignment
- `--t-max`: Maximum time interval for alignment (we only use dynamic interval here)
- `--output-dir`: Any directory that you want to save checkpoints, samples, and logs
- `--exp-name`: Any string name (the folder will be created under `output-dir`)
- `--batch-size`: The per-GPU batch size (by default we use 1 node with 8 GPUs); adjust this value according to your GPU count so that the total batch size is 256
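
As a quick sanity check on the batch-size rule above (assuming the default total batch size of 256):

```python
# Per-GPU --batch-size value needed to keep the total batch size at 256.
total_batch = 256
per_gpu = {n_gpus: total_batch // n_gpus for n_gpus in (8, 4, 2, 1)}
print(per_gpu)  # {8: 32, 4: 64, 2: 128, 1: 256}
```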


##### 5.2.Training with DiT + SRA
```bash
cd DiT-SRA
accelerate launch --config_file configs/default.yaml train.py \
  --mixed-precision="fp16" \
  --seed=0 \
  --resolution=256 \
  --batch-size=32 \
  --model="DiT-XL/2" \
  --block-out-s=8 \
  --block-out-t=16 \
  --t-max=0.2 \
  --output-dir="exps" \
  --exp-name="ditxl-ab816-t0.2-res256" \
  --data-dir=[YOUR_DATA_PATH]
```

This script will automatically create a folder under `exps` to save logs and checkpoints. You can adjust the following options (the others are the same as for SiTs above):

- `--model`: Choose from [DiT-B/2, DiT-L/2, DiT-XL/2]



### 🌠6.Evaluation
Here we provide the sampling code for SiTs and DiTs to generate samples for evaluation; the resulting `.npz` file can be used with the [ADM evaluation](https://github.com/openai/guided-diffusion/tree/main/evaluations) suite.
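
To the best of our knowledge, the ADM evaluation suite reads samples from the `.npz` file's first array (key `arr_0`), holding `uint8` images of shape `[N, H, W, 3]`. Below is a toy sketch of that layout; the sizes are illustrative, and the real file produced by `gen.sh` holds the full 50K samples.

```python
import os
import tempfile

import numpy as np

# Toy samples in the layout the ADM evaluator expects: uint8, [N, H, W, 3].
samples = np.random.randint(0, 256, size=(4, 256, 256, 3), dtype=np.uint8)
path = os.path.join(tempfile.mkdtemp(), "samples.npz")
np.savez(path, samples)            # a positional array is stored under "arr_0"
loaded = np.load(path)["arr_0"]    # this is the array the evaluator reads
```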

##### 6.1.Sampling with SiT + SRA

You can download our pretrained model here:

| Model                   | Image Resolution | Epochs  | FID-50K | Inception Score |
|-------------------------|------------------| --------|---------|-----------------|
| [SiT-XL/2 + SRA](https://huggingface.co/DyJiang/SRA/resolve/main/sitxl-sra-res256-ep800.pt) | 256x256          |  800    | 1.58    |   311.4        |
```bash
cd SiT-SRA
bash gen.sh
```
Note that there are several options in the `gen.sh` file that you need to fill in:
- `SAMPLE_DIR`: Base directory to save the generated images and the `.npz` file
- `CKPT`: Checkpoint path (this can also be the local path of the checkpoint we provide above)

##### 6.2.Sampling with DiT + SRA
```bash
cd DiT-SRA
bash gen.sh
```
### 📣7.Note

This code may not exactly replicate the results reported in the paper, due to potential human error during the preparation and cleaning of the code for release, as well as differences in hardware. If you encounter any difficulties in reproducing our findings, please don't hesitate to let us know.

### 🤝🏻8.Acknowledgement

This code is mainly built upon the [REPA](https://github.com/sihyun-yu/REPA), [DiT](https://github.com/facebookresearch/DiT), and [SiT](https://github.com/willisma/SiT) repositories.
Thanks for their solid work!


### 🌺9.Citation
If you find SRA useful, please kindly cite our paper:
```bibtex
@article{jiang2025sra,
  title={No Other Representation Component Is Needed: Diffusion Transformers Can Provide Representation Guidance by Themselves},
  author={Jiang, Dengyang and Wang, Mengmeng and Li, Liuzhuozheng and Zhang, Lei and Wang, Haoyu and Wei, Wei and Zhang, Yanning and Dai, Guang and Wang, Jingdong},
  journal={arXiv preprint arXiv:2505.02831},
  year={2025}
}
```