---
license: mit
pipeline_tag: unconditional-image-generation
---

# Neon: Negative Extrapolation From Self-Training Improves Image Generation

This repository contains models improved by Neon, a novel method for enhancing generative AI models, as presented in the paper Neon: Negative Extrapolation From Self-Training Improves Image Generation.

Official code: https://github.com/SinaAlemohammad/Neon

## Introduction

Scaling generative AI models is bottlenecked by the scarcity of high-quality training data. Because sampling from a generative model is cheap, it is tempting to augment a limited corpus of real data with (unverified) synthetic data for fine-tuning, in the hope of improving performance. Unfortunately, the resulting positive feedback loop leads to model autophagy disorder (MAD, also known as model collapse): a rapid degradation in sample quality and/or diversity.

Neon (for Negative Extrapolation frOm self-traiNing) is a new learning method that turns this degradation from self-training into a powerful signal for self-improvement. Given a base model, Neon first fine-tunes it on its own self-synthesized data but then, counterintuitively, reverses its gradient updates to extrapolate away from the degraded weights. We prove that Neon works because typical inference samplers that favor high-probability regions create a predictable anti-alignment between the synthetic and real data population gradients; negative extrapolation corrects this, better aligning the model with the true data distribution.

Neon is remarkably easy to implement via a simple post-hoc merge that requires no new real data, works effectively with as few as 1k synthetic samples, and typically uses less than 1% additional training compute. We demonstrate Neon's universality across a range of architectures (diffusion, flow matching, autoregressive, and inductive moment matching models) and datasets (ImageNet, CIFAR-10, and FFHQ). In particular, on ImageNet 256×256, Neon elevates the xAR-L model to a new state-of-the-art FID of 1.02 with only 0.36% additional training compute.

## Method

**Algorithm 1: Neon — Negative Extrapolation from Self-Training**

In one line: sample from the reference model $\theta_r$ with your usual inference procedure to form a synthetic set $S$; briefly fine-tune $\theta_r$ on $S$ to obtain $\theta_s$; then reverse that update with the merge $\theta_{\text{neon}} = (1+w)\,\theta_r - w\,\theta_s$ for a small $w > 0$, which cancels the mode-seeking drift and improves FID.
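The post-hoc merge step can be sketched in a few lines. This is a minimal illustration, not the official implementation: the function name, the merge weight value, and the dict-of-parameters representation are assumptions; real models would hold tensors rather than scalars, but the per-parameter arithmetic is identical.

```python
# Sketch of Neon's post-hoc negative-extrapolation merge (illustrative only).
# theta_r: reference (base) model weights.
# theta_s: weights after briefly fine-tuning on the model's own samples.
# w: small positive merge weight.

def neon_merge(theta_r, theta_s, w=0.5):
    """Return theta_neon = (1 + w) * theta_r - w * theta_s, per parameter."""
    assert theta_r.keys() == theta_s.keys(), "parameter sets must match"
    return {k: (1 + w) * theta_r[k] - w * theta_s[k] for k in theta_r}

# Toy example with scalar "weights":
theta_r = {"layer.weight": 1.0}
theta_s = {"layer.weight": 1.2}   # drifted toward the degraded self-trained model
theta_neon = neon_merge(theta_r, theta_s, w=0.5)
print(theta_neon)  # close to {'layer.weight': 0.9}, up to floating point
```

Note that the merged weights move in the direction *opposite* to the self-training update: since $\theta_s$ drifted from 1.0 to 1.2, the merge lands below 1.0.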

## Benchmark Performance

| Model type | Dataset | Base model FID | Neon FID (paper) | Download model |
|---|---|---|---|---|
| xAR-L | ImageNet-256 | 1.28 | 1.02 | Download |
| xAR-B | ImageNet-256 | 1.72 | 1.31 | Download |
| VAR d16 | ImageNet-256 | 3.30 | 2.01 | Download |
| VAR d36 | ImageNet-512 | 2.63 | 1.70 | Download |
| EDM (cond.) | CIFAR-10 (32×32) | 1.78 | 1.38 | Download |
| EDM (uncond.) | CIFAR-10 (32×32) | 1.98 | 1.38 | Download |
| EDM | FFHQ-64×64 | 2.39 | 1.12 | Download |
| IMM | ImageNet-256 | 1.99 | 1.46 | Download |

## Quickstart

Pretrained Neon-improved models can be downloaded directly from this Hugging Face repository via the links in the table above. For detailed instructions on environment setup, downloading models, and evaluating performance, please refer to the official GitHub repository.

## Citation

If you find Neon useful, please consider citing the paper:

```bibtex
@article{neon2025,
  title={Neon: Negative Extrapolation from Self-Training for Generative Models},
  author={Alemohammad, Sina and collaborators},
  journal={arXiv preprint},
  year={2025}
}
```

## Acknowledgments

This repository builds upon the following projects, whose authors we thank:

## Contact

Questions? Reach out to Sina Alemohammad: sinaalemohammad@gmail.com.