<div align="center">

<h2>⚡ Selftok: Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning</h2>
<p><strong>Selftok Team, Media Technology Institute, Huawei</strong></p>

<p>
<a href="LICENSE">
<img src="https://img.shields.io/badge/license-MIT-blue" alt="license">
</a>
<a href="https://selftok-team.github.io/report/">
<img src="https://img.shields.io/badge/Project-Page-blue?logo=githubpages" alt="project page">
</a>
<a href="https://arxiv.org/abs/2505.07538">
<img src="https://img.shields.io/badge/arXiv-2505.07538-b31b1b?logo=arxiv" alt="arXiv">
</a>
</p>

</div>

<div align="center">
<img src="https://raw.githubusercontent.com/selftok-team/SelftokTokenizer/main/assets/recon.PNG" alt="Visualization" width="100%">
</div>

## ✨ Highlights

- We propose the Self-Consistency Tokenizer (**Selftok**), a **SOTA tokenizer** that achieves both high-quality reconstruction and a high compression rate.

- Selftok offers an elegant and minimalist approach to unifying diffusion and AR for vision-language models (VLMs).

- Our VLM achieves SOTA performance in both visual comprehension and generation.

## 📰 News

- **[2025.05.18]** The Selftok tokenizer weights are available on [HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/tree/main).

- **[2025.05.15]** We have released the code of the Selftok tokenizer! The weights will be released soon.

- **[2025.05.12]** We have released the Selftok paper ([arXiv](https://arxiv.org/abs/2505.07538))!

- **[2025.04.04]** Our preliminary work **DDT-LLaMA** ([project page](https://ddt-llama.github.io/)) has been accepted as an **Oral Presentation** at CVPR 2025!

## 📄 Introduction

We completely discard the conventional spatial prior in image representation and introduce a novel discrete visual tokenizer: the **Self-Consistency Tokenizer (Selftok)**. At its design core, we compose an autoregressive (AR) prior—mirroring the causal structure of language—into visual tokens by using the reverse diffusion process of image generation. The AR property makes Selftok fundamentally distinct from traditional spatial tokens in the following two key ways:

- *Selftok offers an elegant and minimalist approach to unifying diffusion and AR for vision-language models*: by representing images with Selftok tokens, we can train vision-language models (VLMs) with a purely discrete autoregressive architecture—like that of LLMs—without requiring additional modules or training objectives.
- *We theoretically show that the AR prior satisfies the Bellman equation*, whereas the spatial prior does not. Therefore, Selftok supports reinforcement learning (RL) for visual generation with effectiveness comparable to that achieved in LLMs (see the equation sketch below).
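
For intuition, here is the generic Bellman optimality equation the second bullet refers to, with a mapping of our own (the notation is ours, not the paper's): take the state to be the token prefix $s_t = (x_1, \ldots, x_{t-1})$ and the action to be the next token $x_t$, so each generated token leads to a well-defined successor state:

$$
V^{*}(s_t) = \max_{x_t}\left[\, r(s_t, x_t) + \gamma\, V^{*}(s_{t+1}) \,\right],
\qquad s_{t+1} = (s_t, x_t).
$$

The paper's claim is that AR tokens admit this per-step decomposition, while spatial tokens do not.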

Beyond the AR property, *Selftok is also a SOTA tokenizer that achieves both high-quality reconstruction and a high compression rate*. After representing the training images as Selftok tokens, our VLM, as a pure AR model, achieves SOTA performance in both visual comprehension and generation. Impressively, without using any text-image training pairs, a simple policy-gradient RL method operating on the visual tokens significantly boosts performance on visual generation benchmarks, surpassing all existing models by a large margin.

We therefore believe that Selftok effectively addresses the long-standing challenge that visual tokens cannot support effective RL. Combined with the well-established strengths of RL in LLMs, this brings us one step closer to a truly multimodal LLM.

## 📝 Results

- **SOTA** reconstruction performance on ImageNet 256×256

<div align="center">
<img src="https://raw.githubusercontent.com/selftok-team/SelftokTokenizer/main/assets/results_table.PNG" alt="results" width="80%">
</div>

## 🎯 How to Use

---

### 🛠️ Installation

```bash
conda create -n selftok python=3.10 # or your preferred version
conda activate selftok

# For Ascend environments
pip install -r requirements.txt

# For GPU environments
pip install -r requirements_gpu.txt
```

---

### 🧠 Tokenizer Inference with Pre-trained Models

* **Download Pretrained Weights**

| Tokenizer | Image Resolution | # Tokens | PSNR (dB) |
|:---:|:---:|:---:|:---:|
| Selftok w/o Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/tokenizer_512_ckpt.pth)) | 256×256 | 512 | 21.86 |
| Selftok w/ Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/renderer_512_ckpt.pth)) | 256×256 | 512 | 24.14 |
| Selftok w/o Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/tokenizer_1024_ckpt.pth)) | 256×256 | 1024 | 23.06 |
| Selftok w/ Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/renderer_1024_ckpt.pth)) | 256×256 | 1024 | 26.30 |
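
You can download the checkpoints from the links above, or fetch them programmatically. The following sketch uses the `huggingface_hub` library (repo id and filenames taken from the table; any other download method works just as well):

```python
# Optional: fetch the checkpoints with huggingface_hub.
from huggingface_hub import hf_hub_download

tokenizer_ckpt = hf_hub_download(
    repo_id="selftok-team/SelftokTokenizer",
    filename="tokenizer_512_ckpt.pth",  # or tokenizer_1024_ckpt.pth
)
renderer_ckpt = hf_hub_download(
    repo_id="selftok-team/SelftokTokenizer",
    filename="renderer_512_ckpt.pth",   # or renderer_1024_ckpt.pth
)
print(tokenizer_ckpt)  # local cache path to pass as --pretrained
```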

* **Pipeline Overview**

The inference pipeline includes three key stages:

1. **Tokenization** – Convert images into discrete token sequences.
2. **Diffusion Decoding** – Reconstruct images using a 50-step diffusion model.
3. **One-step Decoding** – Quickly reconstruct images using a fast renderer.

First, clone the repository:

```bash
git clone https://github.com/selftok-team/SelftokTokenizer.git
cd ./SelftokTokenizer
```

#### 1. Tokenization

This script demonstrates how to convert images into token sequences using a pretrained Selftok model.

```python
import argparse

import numpy as np
import torch
from PIL import Image
from torchvision import transforms

from mimogpt.engine.utils import parse_args_from_yaml
from mimogpt.infer.SelftokPipeline import SelftokPipeline, NormalizeToTensor

parser = argparse.ArgumentParser()
parser.add_argument("--yml-path", type=str, default="path/to/your/config.yml")
parser.add_argument("--pretrained", type=str, default="path/to/your/ckpt.pth")
parser.add_argument("--data_size", type=int, default=256)
args = parser.parse_args()  # this line was missing; `args` is used below

# Build the pipeline from the YAML config and the pretrained checkpoint.
cfg = parse_args_from_yaml(args.yml_path)
model = SelftokPipeline(cfg=cfg, ckpt_path=args.pretrained, datasize=args.data_size, device='cuda')

# Resize, center-crop, and normalize inputs to the model's resolution.
img_transform = transforms.Compose([
    transforms.Resize(args.data_size),
    transforms.CenterCrop(args.data_size),
    NormalizeToTensor(),
])

image_paths = ['path/to/image1.png', 'path/to/image2.png']
images = [img_transform(Image.open(p).convert('RGB')) for p in image_paths]  # force 3 channels
images = torch.stack(images).to('cuda')

# Encode the batch into discrete token sequences and save them for decoding.
tokens = model.encoding(images, device='cuda')
np.save('token.npy', tokens.detach().cpu().numpy())
```
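
Before decoding, it can be worth a quick sanity check on the saved tokens (a small sketch of ours; the expected sequence length comes from the `# Tokens` column of the table above):

```python
import numpy as np

tokens = np.load('token.npy')
# Expect one row of discrete token ids per input image; the sequence
# length should match the checkpoint you used (512 or 1024 tokens).
print(tokens.shape, tokens.dtype)
```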

---

#### 2. Diffusion Decoding

Reconstruct images from token sequences using the full diffusion model (50 steps):

```python
import argparse

import numpy as np
from torchvision.utils import save_image

from mimogpt.engine.utils import parse_args_from_yaml
from mimogpt.infer.SelftokPipeline import SelftokPipeline

parser = argparse.ArgumentParser()
parser.add_argument("--yml-path", type=str, default="path/to/your/config.yml")
parser.add_argument("--pretrained", type=str, default="path/to/your/ckpt.pth")
parser.add_argument("--data_size", type=int, default=256)
args = parser.parse_args()  # this line was missing; `args` is used below

cfg = parse_args_from_yaml(args.yml_path)
model = SelftokPipeline(cfg=cfg, ckpt_path=args.pretrained, datasize=args.data_size, device='cuda')

# Load the token sequences saved during tokenization and decode them
# with the 50-step diffusion model.
tokens = np.load('token.npy')
images = model.decoding(tokens, device='cuda')

for b, img in enumerate(images):
    save_image(img, f"re_{b}.png")
```
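
To roughly reproduce the PSNR numbers from the table above, you can compare a reconstruction against its source image. This is a sketch of ours, assuming the original is preprocessed exactly as in the tokenization script (resize plus center crop):

```python
import numpy as np
from PIL import Image
from torchvision import transforms

# Match the preprocessing used at encoding time.
prep = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(256)])

orig = np.asarray(prep(Image.open('path/to/image1.png').convert('RGB')), dtype=np.float64)
recon = np.asarray(Image.open('re_0.png').convert('RGB'), dtype=np.float64)

# PSNR in dB for 8-bit images.
mse = np.mean((orig - recon) ** 2)
print(f"PSNR: {10 * np.log10(255.0 ** 2 / mse):.2f} dB")
```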

---

#### 3. One-step Renderer Decoding

Reconstruct images using a fast, one-step renderer:

```python
import argparse

import numpy as np
from torchvision.utils import save_image

from mimogpt.engine.utils import parse_args_from_yaml
from mimogpt.infer.SelftokPipeline import SelftokPipeline

parser = argparse.ArgumentParser()
parser.add_argument("--yml-path", type=str, default="path/to/your/config.yml")
parser.add_argument("--pretrained", type=str, default="path/to/your/ckpt.pth")
parser.add_argument("--data_size", type=int, default=256)
args = parser.parse_args()  # this line was missing; `args` is used below

cfg = parse_args_from_yaml(args.yml_path)
model = SelftokPipeline(cfg=cfg, ckpt_path=args.pretrained, datasize=args.data_size, device='cuda')

# Decode the saved tokens in a single renderer pass instead of
# running 50 diffusion steps.
tokens = np.load('token.npy')
images = model.decoding_with_renderer(tokens, device='cuda')

for b, img in enumerate(images):
    save_image(img, f"render_{b}.png")
```
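
If you save the three snippets as standalone scripts, a full roundtrip looks like the following. The script names are placeholders, and passing the renderer checkpoint to the renderer step is our assumption about how the "w/ Renderer" weights from the table are meant to be used:

```bash
# Hypothetical script names for the three snippets above.
python tokenize_images.py  --yml-path path/to/config.yml --pretrained tokenizer_512_ckpt.pth
python decode_diffusion.py --yml-path path/to/config.yml --pretrained tokenizer_512_ckpt.pth
python decode_renderer.py  --yml-path path/to/config.yml --pretrained renderer_512_ckpt.pth
```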

---

## Notes

* Replace all `path/to/...` placeholders with actual paths on your system or object storage.
* The scripts assume CUDA is available; change `device='cuda'` to `'cpu'` to run on CPU.
* The scripts support both Ascend and GPU. For GPU inference, replace `mimogpt.infer.SelftokPipeline` with `mimogpt.infer.SelftokPipeline_GPU`.
* If you use the Selftok tokenizer for AR training, note that the image token sequence is decoded **in reverse order** (see the sketch below).
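
As an illustration of the last note, here is a hedged sketch of our own (not the official training code): the token order used for AR training is the reverse of the order the decoder consumes, so flip along the sequence axis when moving between the two stages (check the repository for the exact convention):

```python
import numpy as np

tokens = np.load('token.npy')  # (batch, seq_len), as saved by the tokenizer

# Hypothetical illustration of the reversal between the two stages.
ar_sequences = tokens[:, ::-1].copy()          # tokenizer order -> AR training order
decoder_input = ar_sequences[:, ::-1].copy()   # AR order -> decoder order
```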

## 🎮 Train Your Own Models

The training code is currently under preparation and will be released shortly. Please stay tuned for updates.

## 📝 Citation

If you find our work useful, please cite our related papers:

```bibtex
% arXiv
@article{wang2025discretevisualtokensautoregression,
  title={Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning},
  author={Bohan Wang and Zhongqi Yue and Fengda Zhang and Shuo Chen and Li'an Bi and Junzhe Zhang and Xue Song and Kennard Yanting Chan and Jiachun Pan and Weijia Wu and Mingze Zhou and Wang Lin and Kaihang Pan and Saining Zhang and Liyu Jia and Wentao Hu and Wei Zhao and Hanwang Zhang},
  year={2025},
  eprint={2505.07538},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.07538},
}

% CVPR 2025
@inproceedings{pan2025ddt,
  title={DDT-LLaMA: Generative Multimodal Pretraining with Discrete Diffusion Timestep Tokens},
  author={Kaihang Pan and Wang Lin and Zhongqi Yue and Tenglong Ao and Liyu Jia and Wei Zhao and Juncheng Li and Siliang Tang and Hanwang Zhang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```

## Disclaimer

This open-source project is **not an official Huawei product**. Huawei is **not responsible for providing support** or maintenance for this project.