<div align="center">
<h2>⚡ Selftok: Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning</h2>
<p><strong>Selftok Team, Media Technology Institute, Huawei</strong></p>
<p>
<a href="LICENSE">
<img src="https://img.shields.io/badge/license-MIT-blue" alt="license">
</a>
<a href="https://selftok-team.github.io/report/">
<img src="https://img.shields.io/badge/Project-Page-blue?logo=githubpages" alt="project page">
</a>
<a href="https://arxiv.org/abs/2505.07538">
<img src="https://img.shields.io/badge/arXiv-2505.07538-b31b1b?logo=arxiv" alt="arXiv">
</a>
</p>
</div>
<div align="center">
<img src="https://raw.githubusercontent.com/selftok-team/SelftokTokenizer/main/assets/recon.PNG" alt="Visualization" width="100%">
</div>
## ✨ Highlights
- We propose the Self-Consistency Tokenizer (Selftok), a **SOTA tokenizer** that achieves both high-quality reconstruction and a high compression rate.
- Selftok offers an elegant and minimalist approach to unifying diffusion and AR for vision-language models (VLMs).
- Our VLM achieves SOTA performance on both visual comprehension and generation.
## 📰 News
- **[2025.05.18]** The Selftok tokenizer weights are available on [HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/tree/main).
- **[2025.05.15]** We have released the Selftok tokenizer code! The weights will be released soon.
- **[2025.05.12]** We have released the Selftok paper ([arXiv](https://arxiv.org/abs/2505.07538))!
- **[2025.04.04]** Our preliminary work **DDT-LLaMA** ([project page](https://ddt-llama.github.io/)) has been accepted as an **Oral Presentation** at CVPR 2025!
## 📄 Introduction
We completely discard the conventional spatial prior in image representation and introduce a novel discrete visual tokenizer: **Self-Consistency Tokenizer (Selftok)**. At its design core, we compose an autoregressive (AR) prior—mirroring the causal structure of language—into visual tokens by using the reverse diffusion process of image generation. The AR property makes Selftok fundamentally distinct from traditional spatial tokens in the following two key ways:
- *Selftok offers an elegant and minimalist approach to unify diffusion and AR for vision-language models*: By representing images with Selftok tokens, we can train vision-language models (VLMs) using a purely discrete autoregressive architecture—like that in LLMs—without requiring additional modules or training objectives.
- *We theoretically show that the AR prior satisfies the Bellman equation*, whereas the spatial prior does not. Therefore, Selftok supports reinforcement learning (RL) for visual generation with effectiveness comparable to that achieved in LLMs.
Besides the AR property, *Selftok is also a SOTA tokenizer that achieves both high-quality reconstruction and a high compression rate*. After representing the training images as Selftok tokens, our VLM, as a pure AR model, achieves SOTA performance on both visual comprehension and generation. Impressively, without using any text-image training pairs, a simple policy-gradient RL method operating on the visual tokens significantly boosts visual generation benchmark scores, surpassing all existing models by a large margin.
Therefore, we believe that Selftok effectively addresses the long-standing challenge that visual tokens cannot support effective RL. When combined with the well-established strengths of RL in LLMs, this brings us one step closer to realizing a truly multimodal LLM.
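To make the "purely discrete autoregressive architecture" claim concrete, here is a minimal sketch of what unified training looks like once images are Selftok token sequences. Everything below (`TinyARLM`, the vocabulary sizes, the sequence layout) is a hypothetical illustration, not this repository's code; the point is only that a single next-token cross-entropy objective covers both modalities.

```python
# Minimal sketch (hypothetical, not this repo's code): with images as discrete
# Selftok tokens, VLM training reduces to plain next-token prediction.
import torch
import torch.nn as nn

TEXT_VOCAB, IMAGE_VOCAB = 32000, 32768   # assumed vocabulary sizes
VOCAB = TEXT_VOCAB + IMAGE_VOCAB         # one shared token space

class TinyARLM(nn.Module):
    def __init__(self, d=512, n_layers=4):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, d)
        layer = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d, VOCAB)

    def forward(self, ids):
        T = ids.size(1)
        # Causal mask: each position attends only to earlier positions.
        causal = torch.triu(torch.full((T, T), float('-inf')), diagonal=1)
        return self.head(self.blocks(self.emb(ids), mask=causal))

# Text tokens followed by image tokens: one sequence, one loss.
text_ids = torch.randint(0, TEXT_VOCAB, (2, 16))
image_ids = torch.randint(0, IMAGE_VOCAB, (2, 512)) + TEXT_VOCAB  # offset into shared vocab
seq = torch.cat([text_ids, image_ids], dim=1)

model = TinyARLM()
logits = model(seq[:, :-1])  # predict token t+1 from tokens <= t
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), seq[:, 1:].reshape(-1))
loss.backward()
```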
## 📝 Results
- **SOTA** reconstruction performance on ImageNet 256×256
<div align="center">
<img src="https://raw.githubusercontent.com/selftok-team/SelftokTokenizer/main/assets/results_table.PNG" alt="results" width="80%">
</div>
## 🎯 How to Use
---
### 🛠️ Installation
```bash
conda create -n selftok python=3.10 # or your preferred version
conda activate selftok
# For Ascend environment
pip install -r requirements.txt
# For GPU environment
pip install -r requirements_gpu.txt
```
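Before running inference, a quick sanity check (a suggestion, not part of the repo) confirms that PyTorch can see your accelerator:

```python
# Optional environment check (not part of the repo): verify PyTorch sees a device.
import torch

print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # on Ascend, check the torch_npu package instead
```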
---
### 🧠 Tokenizer Inference with Pre-trained Models
* **Download Pretrained Weights**
| Tokenizer | Image Resolution | # Tokens | PSNR (dB) |
|:-------------------------------:|:----------:|:--------:|:-----:|
| Selftok w/o Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/tokenizer_512_ckpt.pth)) | 256×256 | 512 | 21.86 |
| Selftok w/ Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/renderer_512_ckpt.pth)) | 256×256 | 512 | 24.14 |
| Selftok w/o Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/tokenizer_1024_ckpt.pth)) | 256×256 | 1024 | 23.06 |
| Selftok w/ Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/renderer_1024_ckpt.pth)) | 256×256 | 1024 | 26.30 |
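The repo does not prescribe a download method; one option is `huggingface_hub`, where the `repo_id` and filenames below match the links in the table:

```python
# One way to fetch a checkpoint; any download method works.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="selftok-team/SelftokTokenizer",
    filename="tokenizer_512_ckpt.pth",  # or renderer_512_ckpt.pth, tokenizer_1024_ckpt.pth, ...
)
print(ckpt_path)  # pass this path as --pretrained
```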
* **Pipeline Overview**
The inference pipeline includes three key stages:
1. **Tokenization** – Convert images into discrete token sequences.
2. **Diffusion Decoding** – Reconstruct images using a 50-step diffusion model.
3. **One-step Decoding** – Quickly reconstruct images using a fast renderer.
```bash
git clone https://github.com/selftok-team/SelftokTokenizer.git
cd ./SelftokTokenizer
```
#### 1. Tokenization
This script demonstrates how to convert images into token sequences using a pretrained Selftok model.
```python
import argparse
from mimogpt.engine.utils import parse_args_from_yaml
from torchvision import transforms
from PIL import Image
import torch
import numpy as np
from mimogpt.infer.SelftokPipeline import SelftokPipeline
from mimogpt.infer.SelftokPipeline import NormalizeToTensor

parser = argparse.ArgumentParser()
parser.add_argument("--yml-path", type=str, default="path/to/your/config.yml")
parser.add_argument("--pretrained", type=str, default="path/to/your/ckpt.pth")
parser.add_argument("--data_size", type=int, default=256)
args = parser.parse_args()  # parse CLI arguments before using them

# Build the tokenizer pipeline from the config and checkpoint
cfg = parse_args_from_yaml(args.yml_path)
model = SelftokPipeline(cfg=cfg, ckpt_path=args.pretrained, datasize=args.data_size, device='cuda')
img_transform = transforms.Compose([
transforms.Resize(args.data_size),
transforms.CenterCrop(args.data_size),
NormalizeToTensor(),
])
image_paths = ['path/to/image1.png', 'path/to/image2.png']
images = [img_transform(Image.open(p).convert('RGB')) for p in image_paths]  # force 3-channel input
images = torch.stack(images).to('cuda')
tokens = model.encoding(images, device='cuda')
np.save('token.npy', tokens.detach().cpu().numpy())
```
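`model.encoding` returns the discrete token sequences. A quick way to confirm what was saved (per the table above, expect 512 or 1024 tokens per image; the exact shape depends on the checkpoint and config):

```python
# Inspect the saved tokens (shape depends on the checkpoint/config).
import numpy as np

tokens = np.load('token.npy')
print(tokens.shape, tokens.dtype)
```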
---
#### 2. Diffusion Decoding
Reconstruct images from token sequences using the full diffusion model (50 steps):
```python
import argparse
from mimogpt.engine.utils import parse_args_from_yaml
import numpy as np
from mimogpt.infer.SelftokPipeline import SelftokPipeline
from torchvision.utils import save_image

parser = argparse.ArgumentParser()
parser.add_argument("--yml-path", type=str, default="path/to/your/config.yml")
parser.add_argument("--pretrained", type=str, default="path/to/your/ckpt.pth")
parser.add_argument("--data_size", type=int, default=256)
args = parser.parse_args()  # parse CLI arguments before using them

cfg = parse_args_from_yaml(args.yml_path)
model = SelftokPipeline(cfg=cfg, ckpt_path=args.pretrained, datasize=args.data_size, device='cuda')

# Load the token sequences saved during tokenization and run 50-step diffusion decoding
tokens = np.load('token.npy')
images = model.decoding(tokens, device='cuda')
for b, img in enumerate(images):
save_image(img, f"re_{b}.png")
```
---
#### 3. One-step Renderer Decoding
Reconstruct images using a fast, one-step renderer:
```python
import argparse
from mimogpt.engine.utils import parse_args_from_yaml
import numpy as np
from mimogpt.infer.SelftokPipeline import SelftokPipeline
from torchvision.utils import save_image

parser = argparse.ArgumentParser()
parser.add_argument("--yml-path", type=str, default="path/to/your/config.yml")
parser.add_argument("--pretrained", type=str, default="path/to/your/ckpt.pth")
parser.add_argument("--data_size", type=int, default=256)
args = parser.parse_args()  # parse CLI arguments before using them

cfg = parse_args_from_yaml(args.yml_path)
model = SelftokPipeline(cfg=cfg, ckpt_path=args.pretrained, datasize=args.data_size, device='cuda')

# Decode with the one-step renderer instead of the 50-step diffusion model
tokens = np.load('token.npy')
images = model.decoding_with_renderer(tokens, device='cuda')
for b, img in enumerate(images):
save_image(img, f"render_{b}.png")
```
---
## Notes
* Replace all `path/to/...` with actual paths on your system or object storage.
* The scripts assume CUDA is available; modify `device='cuda'` to `'cpu'` if running on CPU.
* The scripts support both Ascend and GPU environments. For GPU inference, replace `mimogpt.infer.SelftokPipeline` with `mimogpt.infer.SelftokPipeline_GPU`.
* If you use the Selftok tokenizer for AR training, note that the image token sequence is decoded in **reverse order** (see the sketch below)!
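
A minimal sketch of that reversal, assuming the `(batch, num_tokens)` layout produced by the tokenization script above; the exact convention is defined by the pipeline, so treat this as illustrative:

```python
# Illustrative sketch: reverse each image's token sequence along the token
# axis for AR training, and reverse back before handing tokens to the decoder.
import numpy as np

tokens = np.load('token.npy')                        # (batch, num_tokens)
tokens_for_ar = tokens[:, ::-1].copy()               # order used for AR training
tokens_for_decoder = tokens_for_ar[:, ::-1].copy()   # restore decoder order
```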
## 🎮 Train Your Own Models
The training code is currently under preparation and will be released shortly. Please stay tuned for updates.
## 📝 Citation
If you find our work useful, please cite our related papers:
```bibtex
% arXiv
@article{wang2025discretevisualtokensautoregression,
title={Selftok: Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning},
author={Bohan Wang and Zhongqi Yue and Fengda Zhang and Shuo Chen and Li'an Bi and Junzhe Zhang and Xue Song and Kennard Yanting Chan and Jiachun Pan and Weijia Wu and Mingze Zhou and Wang Lin and Kaihang Pan and Saining Zhang and Liyu Jia and Wentao Hu and Wei Zhao and Hanwang Zhang},
year={2025},
eprint={2505.07538},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2505.07538},
}
% CVPR 2025
@article{pan2025generative,
title={Generative Multimodal Pretraining with Discrete Diffusion Timestep Tokens},
author={Pan, Kaihang and Lin, Wang and Yue, Zhongqi and Ao, Tenglong and Jia, Liyu and Zhao, Wei and Li, Juncheng and Tang, Siliang and Zhang, Hanwang},
journal={arXiv preprint arXiv:2504.14666},
year={2025}
}
```
## Disclaimer
This open-source project is **not an official Huawei product**. Huawei is **not responsible for providing support** or maintenance for this project.