YHLLEO committed 6bfeacc (verified) · 1 parent: 73db77c

Create README.md

Files changed (1): README.md (+77 lines, new file)
---
license: apache-2.0
datasets:
- ILSVRC/imagenet-1k
---

# Efficient Training of Diffusion Mixture-of-Experts Models: A Practical Recipe

<p align="center">
&nbsp;&nbsp;🤗 <a href="https://huggingface.co/collections/YHLLEO/efficientmoe">HuggingFace</a>&nbsp;&nbsp;|&nbsp;&nbsp;📑 <a href="https://arxiv.org/abs/2512.01252">Tech Report</a>
</p>

## 📖 Introduction

We release MoE Transformers that can be applied to both latent- and pixel-space diffusion frameworks, employing DeepSeek-style expert modules, alternative intermediate widths, varying expert counts, and enhanced attention positional encodings. The models have already been released on Hugging Face.
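
Below is a minimal, self-contained PyTorch sketch of what a DeepSeek-style MoE feed-forward block looks like in this setting: a shared expert applied to every token plus fine-grained routed experts selected by a top-k router. It is an illustration only; the class name `DSMoEFFN` and the default widths, expert count, and `top_k` are placeholder assumptions, not the released configurations (see the checkpoints and tech report for the exact settings).

```python
import torch
import torch.nn as nn

class DSMoEFFN(nn.Module):
    """Illustrative DeepSeek-style MoE FFN: shared expert + top-k routed experts.
    All hyperparameters below are placeholders, not the released configs."""

    def __init__(self, dim=768, expert_dim=192, num_experts=16, top_k=2):
        super().__init__()
        self.top_k = top_k

        def ffn():
            return nn.Sequential(
                nn.Linear(dim, expert_dim), nn.GELU(), nn.Linear(expert_dim, dim)
            )

        self.shared_expert = ffn()                      # always active for every token
        self.experts = nn.ModuleList(ffn() for _ in range(num_experts))
        self.router = nn.Linear(dim, num_experts, bias=False)

    def forward(self, x):                               # x: (batch, tokens, dim)
        b, t, d = x.shape
        tokens = x.reshape(-1, d)                       # (N, dim), N = batch * tokens

        gates = self.router(tokens).softmax(dim=-1)     # routing probabilities
        weights, expert_idx = gates.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)

        routed = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            # Token positions (and routing slots) assigned to expert e.
            token_ids, slot = (expert_idx == e).nonzero(as_tuple=True)
            if token_ids.numel() > 0:
                w = weights[token_ids, slot].unsqueeze(-1)
                routed.index_add_(0, token_ids, w * expert(tokens[token_ids]))

        out = self.shared_expert(tokens) + routed       # dense shared path + sparse routed path
        return out.reshape(b, t, d)

# Quick shape check.
block = DSMoEFFN()
y = block(torch.randn(2, 16, 768))
print(y.shape)  # torch.Size([2, 16, 768])
```

The E16/E48 suffixes in the result tables below appear to denote the routed-expert count; with top-k routing, the activated-parameter count stays nearly constant even as the total number of experts grows.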

## Main results

### Latent diffusion framework

- Our DSMoE vs. [DiffMoE](https://arxiv.org/pdf/2503.14487) at 700K training steps with CFG = 1.0 (* denotes the results reported in the official paper):

| Model Name | # Act. Params | FID-50K↓ | Inception Score↑ |
|---|---|---|---|
| DiffMoE-S-E16 | 32M | 41.02 | 37.53 |
| DSMoE-S-E16 | 33M | 39.84 | 38.63 |
| DSMoE-S-E48 | 30M | 40.20 | 38.09 |
| DiffMoE-B-E16 | 130M | 20.83 | 70.26 |
| DSMoE-B-E16 | 132M | 20.33 | 71.42 |
| DSMoE-B-E48 | 118M | 19.46 | 72.69 |
| DiffMoE-L-E16 | 458M | 11.16 (14.41*) | 107.74 (88.19*) |
| DSMoE-L-E16 | 465M | 9.80 | 115.45 |
| DSMoE-L-E48 | 436M | 9.19 | 118.52 |
| DSMoE-3B-E16 | 965M | 7.52 | 135.29 |

- Our DSMoE vs. DiffMoE at 700K training steps with CFG = 1.5:

| Model Name | # Act. Params | FID-50K↓ | Inception Score↑ |
|---|---|---|---|
| DiffMoE-S-E16 | 32M | 15.47 | 94.04 |
| DSMoE-S-E16 | 33M | 14.53 | 97.55 |
| DSMoE-S-E48 | 30M | 14.81 | 96.51 |
| DiffMoE-B-E16 | 130M | 4.87 | 183.43 |
| DSMoE-B-E16 | 132M | 4.50 | 186.79 |
| DSMoE-B-E48 | 118M | 4.27 | 191.03 |
| DiffMoE-L-E16 | 458M | 2.84 | 256.57 |
| DSMoE-L-E16 | 465M | 2.59 | 272.55 |
| DSMoE-L-E48 | 436M | 2.55 | 278.35 |
| DSMoE-3B-E16 | 965M | 2.38 | 304.93 |
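
For reading the CFG columns: we take the scale to be the standard classifier-free guidance weight $w$ in the usual DiT-style combination (an interpretation based on common practice, not restated in this card),

$$
\hat{\epsilon}_\theta(x_t, c) = \epsilon_\theta(x_t, \varnothing) + w \left( \epsilon_\theta(x_t, c) - \epsilon_\theta(x_t, \varnothing) \right),
$$

so CFG = 1.0 reduces to the purely conditional model (no guidance) and CFG = 1.5 applies moderate guidance, which is why the FID and Inception Score values differ so strongly between the two tables above.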

### Pixel-space diffusion framework

- Our JiTMoE vs. [JiT](https://arxiv.org/pdf/2511.13720) after 200 training epochs with a CFG interval (* denotes the results reported in the official paper; "-" means not reported there):

| Model Name | # Act. Params | FID-50K↓ | Inception Score↑ |
|---|---|---|---|
| JiT-B/16 | 131M | 4.81 (4.37*) | 222.32 (-) |
| JiTMoE-B/16-E16 | 133M | 4.23 | 245.53 |
| JiT-L/16 | 459M | 3.19 (2.79*) | 309.72 (-) |
| JiTMoE-L/16-E16 | 465M | 3.10 | 311.34 |

## 🌟 Citation

```bibtex
@article{liu2025efficient,
  title={Efficient Training of Diffusion Mixture-of-Experts Models: A Practical Recipe},
  author={Liu, Yahui and Yue, Yang and Zhang, Jingyuan and Sun, Chenxi and Zhou, Yang and Zeng, Wencong and Tang, Ruiming and Zhou, Guorui},
  journal={arXiv preprint arXiv:2512.01252},
  year={2025}
}
```