---
license: mit
library_name: transformers
pipeline_tag: image-to-image
---

# Scaling Text-to-Image Diffusion Transformers with Representation Autoencoders (Scale-RAE)

This repository contains artifacts related to the paper *Scaling Text-to-Image Diffusion Transformers with Representation Autoencoders*.

## Introduction

Representation Autoencoders (RAEs) offer a simpler and more powerful alternative to VAEs for large-scale text-to-image generation. Scale-RAE shows that training diffusion models in high-dimensional semantic latent spaces (produced by encoders such as SigLIP-2) yields faster convergence, higher generation quality, and improved training stability compared with state-of-the-art VAE-based baselines.
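
To make the latent-space idea concrete, below is a minimal sketch of the encoding step using the standard `transformers` API. The checkpoint name `google/siglip2-base-patch16-224` is an assumption, and the decoder interface in this repository is not shown; see the official repository for the actual end-to-end pipeline.

```python
# Minimal sketch (not the official Scale-RAE pipeline): embed an image
# into a SigLIP-2 semantic latent space of the kind the paper trains
# diffusion models in. The checkpoint name below is an assumption; the
# decoder that maps latents back to pixels is this repository's artifact
# and is omitted here.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

ckpt = "google/siglip2-base-patch16-224"  # assumed public SigLIP-2 checkpoint
processor = AutoProcessor.from_pretrained(ckpt)
encoder = AutoModel.from_pretrained(ckpt).vision_model  # vision tower only

image = Image.open("cat.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    # One token per image patch; this high-dimensional sequence is the
    # "semantic latent" that replaces a VAE's compressed pixel latent.
    latents = encoder(**inputs).last_hidden_state

print(latents.shape)  # (1, 196, 768) for a 224x224 input with 16x16 patches
```

In Scale-RAE, the diffusion transformer operates on sequences like `latents`, and a decoder (the artifact in this repository) maps generated latents back to pixels.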

## Usage

For detailed instructions on installation, training, and inference, please visit the official GitHub repository.

The implementation supports GPU inference and TPU training. To generate images with pre-trained models:

```bash
python cli.py t2i --prompt "Can you generate a photo of a cat on a windowsill?"
```

## Citation

```bibtex
@article{scale-rae-2026,
  title={Scaling Text-to-Image Diffusion Transformers with Representation Autoencoders},
  author={Shengbang Tong and Boyang Zheng and Ziteng Wang and Bingda Tang and Nanye Ma and Ellis Brown and Jihan Yang and Rob Fergus and Yann LeCun and Saining Xie},
  journal={arXiv preprint arXiv:2601.16208},
  year={2026}
}
```