---
license: mit
pipeline_tag: text-to-image
---
# TokenBridge: Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation
This repository hosts the checkpoints for **TokenBridge**, a novel approach to autoregressive visual generation presented in the paper [Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation](https://arxiv.org/abs/2503.16430).
**[📚 Paper](https://arxiv.org/abs/2503.16430)** | **[🏡 Project Page](https://yuqingwang1029.github.io/TokenBridge/)** | **[💻 Code](https://github.com/YuqingWang1029/TokenBridge)**
## Abstract
Autoregressive visual generation models typically rely on tokenizers to compress images into tokens that can be predicted sequentially. A fundamental dilemma exists in token representation: discrete tokens enable straightforward modeling with standard cross-entropy loss, but suffer from information loss and tokenizer training instability; continuous tokens better preserve visual details, but require complex distribution modeling, complicating the generation pipeline. In this paper, we propose TokenBridge, which bridges this gap by maintaining the strong representation capacity of continuous tokens while preserving the modeling simplicity of discrete tokens. To achieve this, we decouple discretization from the tokenizer training process through post-training quantization that directly obtains discrete tokens from continuous representations. Specifically, we introduce a dimension-wise quantization strategy that independently discretizes each feature dimension, paired with a lightweight autoregressive prediction mechanism that efficiently models the resulting large token space. Extensive experiments show that our approach achieves reconstruction and generation quality on par with continuous methods while using standard categorical prediction. This work demonstrates that bridging discrete and continuous paradigms can effectively harness the strengths of both approaches, providing a promising direction for high-quality visual generation with simple autoregressive modeling.
## Highlights
* 🔮 Bridges continuous and discrete tokens: continuous-level reconstruction and generation quality with the modeling simplicity of discrete tokens
* 🪐 Post-training quantization approach that decouples discretization from tokenizer training
* 💥 Directly obtains discrete tokens from pretrained continuous representations, enabling seamless conversion between token types
* 🛸 Lightweight autoregressive mechanism that efficiently handles exponentially large token spaces
## Usage
For detailed instructions on installation, reconstruction evaluation, and image generation, please refer to the official [GitHub repository](https://github.com/YuqingWang1029/TokenBridge).
### Installation
Download the code:
```bash
git clone -b main --single-branch https://github.com/YuqingWang1029/TokenBridge.git
cd TokenBridge
```
A suitable [conda](https://conda.io/) environment named `tokenbridge` can be created and activated with:
```bash
conda env create -f environment.yaml
conda activate tokenbridge
```
Download the pre-trained TokenBridge models from [Hugging Face](https://huggingface.co/Epiphqny/TokenBridge) and place them in a folder named `pretrained_models`.
### Reconstruction
To evaluate the reconstruction quality of our post-training quantization approach:
```bash
python reconstruction.py --bits 6 --range 5.0 --image_dir ${IMAGENET_PATH}
```
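The `--bits 6 --range 5.0` flags suggest that each continuous feature dimension is independently mapped to one of 2⁶ = 64 uniform levels over a clipped range of [-5, 5]. The snippet below is a minimal sketch of what such dimension-wise post-training quantization could look like; the function names and details are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def quantize(z, bits=6, value_range=5.0):
    """Illustrative sketch: independently map each continuous feature to
    one of 2**bits uniform levels over [-value_range, value_range]."""
    levels = 2 ** bits
    step = 2 * value_range / (levels - 1)
    z = np.clip(z, -value_range, value_range)
    # Integer token index per dimension, in [0, levels - 1]
    return np.round((z + value_range) / step).astype(np.int64)

def dequantize(idx, bits=6, value_range=5.0):
    """Map token indices back to continuous values at the level centers."""
    levels = 2 ** bits
    step = 2 * value_range / (levels - 1)
    return idx * step - value_range

# Round-tripping a latent loses at most half a quantization step per dimension
z = np.random.randn(4, 16).astype(np.float32)
z_hat = dequantize(quantize(z))
```

Because quantization is applied per dimension after tokenizer training, the continuous tokenizer itself is untouched; only this lightweight mapping converts between continuous and discrete representations.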
### Generation
Example for evaluating TokenBridge-L with classifier-free guidance:
```bash
torchrun --nproc_per_node=8 --nnodes=1 --node_rank=0 \
main_tokenbridge.py \
--model tokenbridge_large \
--eval_bsz 256 --num_images 50000 \
--num_iter 256 --cfg 3.1 --quant_bits 6 --cfg_schedule linear --temperature 0.96 \
--output_dir test_tokenbridge_large \
--resume pretrained_models/tokenbridge/tokenbridge_large \
--data_path ${IMAGENET_PATH} --evaluate
```
Generation speed can be significantly increased by reducing the number of autoregressive iterations (e.g., `--num_iter 64`).
## Citation
If you find our work useful, please consider citing:
```bibtex
@article{wang2025bridging,
title={Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation},
author={Wang, Yuqing and Lin, Zhijie and Teng, Yao and Zhu, Yuanzhi and Ren, Shuhuai and Feng, Jiashi and Liu, Xihui},
journal={arXiv preprint arXiv:2503.16430},
year={2025}
}
```