Improve model card: Add pipeline tag and detailed content for Neon

#6
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +160 -3
README.md CHANGED
@@ -1,3 +1,160 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ pipeline_tag: unconditional-image-generation
+ ---
+
+ # Neon: Negative Extrapolation From Self-Training Improves Image Generation
+
+ This repository contains the official checkpoints and code for the paper "[Neon: Negative Extrapolation From Self-Training Improves Image Generation](https://huggingface.co/papers/2510.03597)".
+
+ Scaling generative AI models is bottlenecked by the scarcity of high-quality training data. Neon (for Negative Extrapolation frOm self-traiNing) introduces a new learning method that turns the degradation from self-training into a powerful signal for self-improvement. Given a base model, Neon first fine-tunes it on its own self-synthesized data but then, counterintuitively, reverses its gradient updates to extrapolate away from the degraded weights. This process corrects the predictable anti-alignment between synthetic and real data population gradients, thereby preventing model autophagy disorder (MAD, a.k.a. model collapse) and better aligning the model with the true data distribution.
+
+ Neon is remarkably easy to implement, requires no new real data, works effectively with as few as 1k synthetic samples, and typically uses less than 1% additional training compute. It demonstrates universality across a range of architectures (diffusion, flow matching, autoregressive, and inductive moment matching models) and datasets (ImageNet, CIFAR-10, and FFHQ).
+
+ For the full code and additional details, please refer to the [official GitHub repository](https://github.com/SinaAlemohammad/Neon).
+
+ ## Method
+
+ ![Algorithm 1: Neon — Negative Extrapolation from Self‑Training](https://github.com/SinaAlemohammad/Neon/raw/main/assets/algorithm.png)
+
+ **In one line:** sample with your usual inference procedure to form a synthetic set $S$; briefly fine-tune the reference model $\theta_r$ on $S$ to get $\theta_s$; then **reverse** that update with the merge $\theta_{\text{neon}}=(1+w)\,\theta_r - w\,\theta_s$ for a small $w>0$, which cancels the mode-seeking drift of self-training and improves FID.
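+
+ As a concrete illustration, here is a minimal PyTorch sketch of that merge step (not the repository's implementation; the checkpoint paths and the value of `w` are placeholders, and both checkpoints are assumed to be plain float state dicts with identical keys):
+
+ ```python
+ # Parameter-wise Neon merge: theta_neon = (1 + w) * theta_r - w * theta_s
+ import torch
+
+ w = 0.1  # small positive extrapolation weight (illustrative value)
+
+ theta_r = torch.load("reference.pth", map_location="cpu")     # reference model weights
+ theta_s = torch.load("self_trained.pth", map_location="cpu")  # weights after fine-tuning on synthetic data
+
+ # Extrapolate away from the self-trained weights, past the reference model
+ theta_neon = {k: (1 + w) * theta_r[k] - w * theta_s[k] for k in theta_r}
+ torch.save(theta_neon, "neon.pth")
+ ```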
+
+ ## Benchmark Performance
+
+ Neon demonstrates state-of-the-art results. For instance, on ImageNet 256×256, Neon elevates the xAR-L model to a new state-of-the-art FID of 1.02 with only 0.36% additional training compute.
+
+ | Model type    | Dataset          | Base model FID | Neon FID (paper) |
+ | ------------- | ---------------- | -------------: | ---------------: |
+ | xAR-L         | ImageNet-256     |           1.28 |         **1.02** |
+ | xAR-B         | ImageNet-256     |           1.72 |         **1.31** |
+ | VAR d16       | ImageNet-256     |           3.30 |         **2.01** |
+ | VAR d36       | ImageNet-512     |           2.63 |         **1.70** |
+ | EDM (cond.)   | CIFAR-10 (32×32) |           1.78 |         **1.38** |
+ | EDM (uncond.) | CIFAR-10 (32×32) |           1.98 |         **1.38** |
+ | EDM           | FFHQ-64×64       |           2.39 |         **1.12** |
+ | IMM           | ImageNet-256     |           1.99 |         **1.46** |
+
+ ## Quickstart
+
+ ### 1) Environment
+
+ ```bash
+ # from repo root
+ conda env create -f environment.yml
+ conda activate neon
+ ```
+
+ ### 2) Download pretrained models & FID stats
+
+ ```bash
+ bash download_models.sh
+ ```
+
+ This populates `checkpoints/` and `fid_stats/`.
+ **Pretrained Neon models can also be downloaded from Hugging Face:** [https://huggingface.co/sinaalemohammad/Neon](https://huggingface.co/sinaalemohammad/Neon)
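+
+ If you prefer to fetch the checkpoints programmatically, here is a minimal `huggingface_hub` sketch (the target directory is illustrative; `download_models.sh` above remains the canonical route):
+
+ ```python
+ # Mirror the Neon checkpoint repository locally via huggingface_hub
+ from huggingface_hub import snapshot_download
+
+ snapshot_download(repo_id="sinaalemohammad/Neon", local_dir="checkpoints")
+ ```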
+
+ ### 3) Evaluate (FID/IS)
+
+ > All examples assume 8 GPUs; adjust `--nproc_per_node` / batch sizes as needed.
+
+ **xAR @ ImageNet‑256**
+
+ ```bash
+ # 1) VAE for xAR (credit: MAR)
+ hf download xwen99/mar-vae-kl16 --include kl16.ckpt --local-dir xAR/pretrained
+ # 2) Use it via:
+ #    --vae_path xAR/pretrained/kl16.ckpt
+
+ # xAR‑L
+ PYTHONPATH=xAR torchrun --standalone --nproc_per_node=8 xAR/calculate_fid.py \
+   --model xar_large \
+   --model_ckpt checkpoints/Neon_xARL_imagenet256.pth \
+   --cfg 2.3 --vae_path xAR/pretrained/kl16.ckpt \
+   --num_images 50000 --batch_size 64 --flow_steps 40 --img_size 256 \
+   --fid_stats fid_stats/adm_in256_stats.npz
+
+ # xAR‑B
+ PYTHONPATH=xAR torchrun --standalone --nproc_per_node=8 xAR/calculate_fid.py \
+   --model xar_base \
+   --model_ckpt checkpoints/Neon_xARB_imagenet256.pth \
+   --cfg 2.7 --vae_path xAR/pretrained/kl16.ckpt \
+   --num_images 50000 --batch_size 32 --flow_steps 50 --img_size 256 \
+   --fid_stats fid_stats/adm_in256_stats.npz
+ ```
+
+ **VAR @ ImageNet‑256 / 512**
+
+ ```bash
+ # d16 @ 256
+ PYTHONPATH=VAR/VAR_imagenet_256 torchrun --standalone --nproc_per_node=8 \
+   VAR/VAR_imagenet_256/calculate_fid.py \
+   --var_ckpt checkpoints/Neon_VARd16_imagenet256.pth \
+   --num_images 50000 --batch_size 64 --img_size 256 \
+   --fid_stats fid_stats/adm_in256_stats.npz
+
+ # d36 @ 512
+ PYTHONPATH=VAR/VAR_imagenet_512 torchrun --standalone --nproc_per_node=8 \
+   VAR/VAR_imagenet_512/calculate_fid.py \
+   --var_ckpt checkpoints/Neon_VARd36_imagenet512.pth \
+   --num_images 50000 --batch_size 32 --img_size 512 \
+   --fid_stats fid_stats/adm_in512_stats.npz
+ ```
+
+ **EDM (Karras et al.) @ CIFAR‑10 / FFHQ**
+
+ ```bash
+ # CIFAR‑10 (conditional)
+ PYTHONPATH=edm torchrun --standalone --nproc_per_node=8 edm/calculate_fid.py \
+   --network_pkl checkpoints/Neon_EDM_conditional_CIFAR10.pkl \
+   --ref https://nvlabs-fi-cdn.nvidia.com/edm/fid-refs/cifar10-32x32.npz \
+   --seeds 0-49999 --max_batch_size 256 --num_steps 18
+
+ # CIFAR‑10 (unconditional)
+ PYTHONPATH=edm torchrun --standalone --nproc_per_node=8 edm/calculate_fid.py \
+   --network_pkl checkpoints/Neon_EDM_unconditional_CIFAR10.pkl \
+   --ref https://nvlabs-fi-cdn.nvidia.com/edm/fid-refs/cifar10-32x32.npz \
+   --seeds 0-49999 --max_batch_size 256 --num_steps 18
+
+ # FFHQ‑64 (unconditional)
+ PYTHONPATH=edm torchrun --standalone --nproc_per_node=8 edm/calculate_fid.py \
+   --network_pkl checkpoints/Neon_EDM_FFHQ.pkl \
+   --ref https://nvlabs-fi-cdn.nvidia.com/edm/fid-refs/ffhq-64x64.npz \
+   --seeds 0-49999 --max_batch_size 256 --num_steps 40
+ ```
+
+ **IMM @ ImageNet‑256**
+
+ ```bash
+ # IMM @ T = 8
+ PYTHONPATH=imm torchrun --standalone --nproc_per_node=8 imm/calculate_fid.py \
+   --model_ckpt checkpoints/Neon_IMM_imagenet256.pth \
+   --num_images 50000 --batch_size 64 --img_size 256 \
+   --fid_stats fid_stats/adm_in256_stats.npz
+ ```
+
+ ## Citation
+
+ If you find Neon useful, please consider citing the paper:
+
+ ```bibtex
+ @article{neon2025,
+   title={Neon: Negative Extrapolation From Self-Training Improves Image Generation},
+   author={Alemohammad, Sina and collaborators},
+   journal={arXiv preprint arXiv:2510.03597},
+   year={2025}
+ }
+ ```
+
+ ## Acknowledgments
+
+ This repository builds upon the following projects, which we gratefully acknowledge:
+
+ * [VAR — Visual AutoRegressive Modeling](https://github.com/FoundationVision/VAR)
+ * [xAR — Beyond Next‑Token: Next‑X Prediction](https://github.com/OliverRensu/xAR)
+ * [IMM — Inductive Moment Matching](https://github.com/lumalabs/imm)
+ * [EDM — Elucidating the Design Space of Diffusion Models](https://github.com/NVlabs/edm)
+ * [MAR VAE (KL‑16) tokenizer](https://huggingface.co/xwen99/mar-vae-kl16)
+
+ ## Contact
+
+ Questions? Reach out to **Sina Alemohammad** — [sinaalemohammad@gmail.com](mailto:sinaalemohammad@gmail.com).