Improve model card: add pipeline tag, abstract, project page, and usage examples

#1
by nielsr HF Staff - opened
Files changed (1): README.md (+78 −2)
README.md CHANGED
@@ -1,6 +1,82 @@
  ---
  license: mit
  ---
- This repo is used for hosting TokenBridge's checkpoints. For more details see https://github.com/YuqingWang1029/TokenBridge.
-
- Paper: https://arxiv.org/abs/2503.16430
---
license: mit
pipeline_tag: text-to-image
---
 

# TokenBridge: Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation

This repository hosts the checkpoints for **TokenBridge**, a novel approach to autoregressive visual generation presented in the paper [Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation](https://arxiv.org/abs/2503.16430).

**[📚 Paper](https://arxiv.org/abs/2503.16430)** | **[🏡 Project Page](https://yuqingwang1029.github.io/TokenBridge/)** | **[💻 Code](https://github.com/YuqingWang1029/TokenBridge)**

<p align="center">
<img width="1350" alt="TokenBridge Demo" src="https://github.com/YuqingWang1029/TokenBridge/raw/main/demo.png" />
</p>

## Abstract

Autoregressive visual generation models typically rely on tokenizers to compress images into tokens that can be predicted sequentially. A fundamental dilemma exists in token representation: discrete tokens enable straightforward modeling with standard cross-entropy loss, but suffer from information loss and tokenizer training instability; continuous tokens better preserve visual details, but require complex distribution modeling, complicating the generation pipeline. In this paper, we propose TokenBridge, which bridges this gap by maintaining the strong representation capacity of continuous tokens while preserving the modeling simplicity of discrete tokens. To achieve this, we decouple discretization from the tokenizer training process through post-training quantization that directly obtains discrete tokens from continuous representations. Specifically, we introduce a dimension-wise quantization strategy that independently discretizes each feature dimension, paired with a lightweight autoregressive prediction mechanism that efficiently models the resulting large token space. Extensive experiments show that our approach achieves reconstruction and generation quality on par with continuous methods while using standard categorical prediction. This work demonstrates that bridging discrete and continuous paradigms can effectively harness the strengths of both approaches, providing a promising direction for high-quality visual generation with simple autoregressive modeling.

## Highlights

* 🔮 Bridges continuous and discrete tokens, achieving continuous-level reconstruction and generation quality with the modeling simplicity of discrete tokens
* 🪄 Post-training quantization approach that decouples discretization from tokenizer training
* 💥 Directly obtains discrete tokens from pretrained continuous representations, enabling seamless conversion between token types
* 🛸 Lightweight autoregressive mechanism that efficiently handles exponentially large token spaces
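To make the last point concrete: with B-bit dimension-wise quantization of a D-dimensional latent, the joint vocabulary has (2^B)^D entries, while dimension-wise prediction only ever needs D small softmaxes of 2^B classes each. A quick back-of-the-envelope check (B=6 matches the 6-bit setting used in the commands below; D=16 is an illustrative choice, not necessarily the tokenizer's actual channel count):

```python
bits, dims = 6, 16          # 6-bit quantization; 16 is an illustrative latent dimensionality
levels = 2 ** bits          # 64 discrete levels per feature dimension

joint_vocab = levels ** dims      # one softmax over all joint tokens: intractable
dimwise_outputs = dims * levels   # dimension-wise prediction: D softmaxes of 64 classes each

print(f"joint vocabulary size: {joint_vocab:.3e}")   # ~7.9e+28 classes
print(f"dimension-wise total:  {dimwise_outputs} outputs")
```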

## Usage

For detailed instructions on installation, reconstruction evaluation, and image generation, please refer to the official [GitHub repository](https://github.com/YuqingWang1029/TokenBridge).

### Installation

Download the code:

```bash
git clone -b main --single-branch https://github.com/YuqingWang1029/TokenBridge.git
cd TokenBridge
```

A suitable [conda](https://conda.io/) environment named `tokenbridge` can be created and activated with:

```bash
conda env create -f environment.yaml
conda activate tokenbridge
```

Download the pre-trained TokenBridge models from [Hugging Face](https://huggingface.co/Epiphqny/TokenBridge) and save the corresponding folder as `pretrained_models`.
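As an alternative to downloading manually, the checkpoints can be fetched programmatically with `huggingface_hub` (a small helper sketch; requires `pip install huggingface_hub`):

```python
def download_checkpoints(local_dir: str = "pretrained_models") -> str:
    """Fetch the TokenBridge checkpoints from the Hugging Face Hub.

    Returns the local path of the downloaded snapshot. The import is
    kept inside the function so the sketch loads without the package.
    """
    from huggingface_hub import snapshot_download

    return snapshot_download(repo_id="Epiphqny/TokenBridge", local_dir=local_dir)
```

Calling `download_checkpoints()` is equivalent to the manual download above and leaves the files in `pretrained_models`.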

### Reconstruction

To evaluate the reconstruction quality of our post-training quantization approach:

```bash
python reconstruction.py --bits 6 --range 5.0 --image_dir ${IMAGENET_PATH}
```
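To illustrate what the `--bits` and `--range` flags control, here is a minimal NumPy sketch of dimension-wise post-training quantization (illustrative only, not the repository's actual implementation): each continuous feature is clipped to `[-range, range]` and independently mapped to one of `2**bits` uniform levels.

```python
import numpy as np

def quantize(z, bits=6, rng=5.0):
    """Dimension-wise post-training quantization: map each continuous
    feature independently to an integer token in [0, 2**bits - 1]."""
    levels = 2 ** bits
    step = 2 * rng / (levels - 1)
    z = np.clip(z, -rng, rng)
    return np.round((z + rng) / step).astype(np.int64)

def dequantize(idx, bits=6, rng=5.0):
    """Map token indices back to continuous values (the level centers)."""
    step = 2 * rng / (2 ** bits - 1)
    return idx * step - rng

# A toy 16-dim continuous latent.
z = np.random.default_rng(0).normal(size=16).astype(np.float32)
z_hat = dequantize(quantize(z))
# For unclipped values, reconstruction error is at most half a quantization step.
assert np.max(np.abs(z - z_hat)) <= (10.0 / 63) / 2 + 1e-6
```

With 6 bits over a range of 5.0, the step size is 10/63 ≈ 0.159, which is why reconstruction quality stays close to the continuous tokenizer's.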

### Generation

Example for evaluating TokenBridge-L with classifier-free guidance:

```bash
torchrun --nproc_per_node=8 --nnodes=1 --node_rank=0 \
main_tokenbridge.py \
--model tokenbridge_large \
--eval_bsz 256 --num_images 50000 \
--num_iter 256 --cfg 3.1 --quant_bits 6 --cfg_schedule linear --temperature 0.96 \
--output_dir test_tokenbridge_large \
--resume pretrained_models/tokenbridge/tokenbridge_large \
--data_path ${IMAGENET_PATH} --evaluate
```

Generation speed can be significantly increased by reducing the number of autoregressive iterations (e.g., `--num_iter 64`).
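The `--cfg 3.1` and `--cfg_schedule linear` flags set the classifier-free guidance strength and how it ramps across the `--num_iter` decoding steps. A minimal sketch of the idea (illustrative only; the repository's exact schedule and step indexing may differ):

```python
import numpy as np

def cfg_scale(step, num_iter, cfg=3.1):
    """Linear guidance schedule: ramp from 1.0 (no guidance) at the
    first autoregressive step up to `cfg` at the last step."""
    return 1.0 + (cfg - 1.0) * step / (num_iter - 1)

def guided_logits(cond, uncond, scale):
    """Standard classifier-free guidance: move predictions away from
    the unconditional output, toward (and past) the conditional one."""
    return uncond + scale * (cond - uncond)

num_iter = 256
for step in (0, 128, 255):
    print(f"step {step:3d}: guidance scale {cfg_scale(step, num_iter):.3f}")

cond = np.array([2.0, 0.5])
uncond = np.array([1.0, 1.0])
print(guided_logits(cond, uncond, cfg_scale(255, num_iter)))
```

Ramping guidance up over the decoding process applies weak guidance to the coarse early predictions and strong guidance to the fine late ones, which is a common trick for balancing fidelity and diversity.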

## Citation

If you find our work useful, please consider citing:

```bibtex
@article{wang2025bridging,
  title={Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation},
  author={Wang, Yuqing and Lin, Zhijie and Teng, Yao and Zhu, Yuanzhi and Ren, Shuhuai and Feng, Jiashi and Liu, Xihui},
  journal={arXiv preprint arXiv:2503.16430},
  year={2025}
}
```