<div align="center">
<h1>WeTok: Powerful Discrete Tokenization for High-Fidelity Visual Reconstruction</h1>
[arXiv](https://arxiv.org/abs/2508.05599)
[GitHub](https://github.com/zhuangshaobin/WeTok)
[Hugging Face](https://huggingface.co/GrayShine/WeTok)
</div>
This project introduces **WeTok**, a powerful discrete visual tokenizer designed to resolve the long-standing conflict between compression efficiency and reconstruction fidelity. WeTok achieves state-of-the-art reconstruction quality, surpassing previous leading discrete and continuous tokenizers. <br><br>
> <a href="https://github.com/zhuangshaobin/WeTok">WeTok: Powerful Discrete Tokenization for High-Fidelity Visual Reconstruction</a><br>
> [Shaobin Zhuang](https://scholar.google.com/citations?user=PGaDirMAAAAJ&hl=zh-CN&oi=ao), [Yiwei Guo](https://scholar.google.com/citations?user=HCAyeJIAAAAJ&hl=zh-CN&oi=ao), [Canmiao Fu](), [Zhipeng Huang](), [Zeyue Tian](https://scholar.google.com/citations?user=dghq4MQAAAAJ&hl=zh-CN&oi=ao), [Ying Zhang](https://scholar.google.com/citations?user=R_psgxkAAAAJ&hl=zh-CN&oi=ao), [Chen Li](https://scholar.google.com/citations?hl=zh-CN&user=WDJL3gYAAAAJ), [Yali Wang](https://scholar.google.com/citations?hl=zh-CN&user=hD948dkAAAAJ)<br>
> Shanghai Jiao Tong University, WeChat Vision (Tencent Inc.), Shenzhen Institutes of Advanced Technology (Chinese Academy of Sciences), Hong Kong University of Science and Technology, Shanghai AI Laboratory<br>
> <a href="./docs/WeTok.md">WeTok.md</a>
> ```
> @article{zhuang2025wetok,
> title={WeTok: Powerful Discrete Tokenization for High-Fidelity Visual Reconstruction},
> author={Zhuang, Shaobin and Guo, Yiwei and Fu, Canmiao and Huang, Zhipeng and Tian, Zeyue and Zhang, Ying and Li, Chen and Wang, Yali},
> journal={arXiv preprint arXiv:2508.05599},
> year={2025}
> }
> ```
<p align="center">
<img src="./assets/teaser.png" width="90%">
<br>
<em>WeTok achieves a new state-of-the-art in reconstruction fidelity, surpassing both discrete and continuous tokenizers, while offering high compression ratios.</em>
</p>
## News
<!-- * **[2025.08.05]**:fire::fire::fire: We release a series of WeTok models, achieving a record-low zero-shot rFID of **0.12** on ImageNet, surpassing top continuous tokenizers like FLUX-VAE and SD-VAE 3.5. -->
* **[2025.08.08]** We are excited to release **WeTok**, a powerful discrete tokenizer featuring our novel **Group-Wise Lookup-Free Quantization (GQ)** and a **Generative Decoder (GD)**. Code and pretrained models are now available!
## Implementations
### Installation
- **Dependencies**:
```
bash env.sh
```
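The exact dependency list lives in `env.sh`. If you prefer to set the environment up by hand, a rough equivalent is sketched below; package names and versions are assumptions, and `env.sh` remains the source of truth.
```
# Illustrative manual setup, assuming a conda-based Python environment.
# Package names and versions are guesses; env.sh is the authoritative list.
conda create -n wetok python=3.10 -y
conda activate wetok
pip install torch torchvision   # choose the wheel matching your CUDA version
# ...then install any remaining dependencies listed in env.sh
```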
### Evaluation
- **Evaluation on ImageNet 50K Validation Set**
The dataset should be organized as follows (a quick layout sanity check is sketched after the MS-COCO instructions below):
```
imagenet
└── val/
    └── ...
```
Run the 256×256 resolution evaluation script:
```
bash scripts/evaluation/imagenet_evaluation_256_dist.sh
```
Run the original resolution evaluation script:
```
bash scripts/evaluation/imagenet_evaluation_original_dist.sh
```
- **Evaluation on MS-COCO Val2017**
The dataset should be organized as follows:
```
MSCOCO2017
└── val2017/
    └── ...
```
Run the 256×256 resolution evaluation script:
```
bash scripts/evaluation/mscocoval_evaluation_256_dist.sh
```
Run the original resolution evaluation script:
```
bash scripts/evaluation/mscoco_evaluation_original_dist.sh
```
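If an evaluation script cannot find the images, the cause is usually the directory layout described above. The short check below uses only standard shell commands; the dataset roots `./imagenet` and `./MSCOCO2017` are assumptions, so adjust them to wherever your data actually lives.
```
# Assumed dataset roots -- adjust to your setup.
IMAGENET_ROOT=./imagenet
COCO_ROOT=./MSCOCO2017

# Verify the expected validation folders exist and are non-empty
# before launching the distributed evaluation scripts.
for d in "$IMAGENET_ROOT/val" "$COCO_ROOT/val2017"; do
  if [ -d "$d" ] && [ -n "$(ls -A "$d")" ]; then
    echo "OK: $d ($(ls "$d" | wc -l) entries)"
  else
    echo "Missing or empty: $d" >&2
  fi
done
```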
### Inference
Run the following script to quickly test each model's reconstruction quality:
```
bash scripts/inference/reconstruct_image.sh
```
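The reconstruction script needs pretrained weights. One way to fetch the released checkpoints is from the Hugging Face repo linked above; the local directory below is only an assumption, so point the inference script or its config at wherever you actually store the weights.
```
# Download the released WeTok checkpoints from Hugging Face.
# ./checkpoints/WeTok is an assumed location, not one the scripts require.
pip install -U "huggingface_hub[cli]"
huggingface-cli download GrayShine/WeTok --local-dir ./checkpoints/WeTok
```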
<p align="center">
<img src="./assets/compare.png" width="90%">
<br>
<em>Qualitative comparison of 512 × 512 image reconstruction on TokBench.</em>
</p>
<p align="center">
<img src="./assets/gen.png" width="90%">
<br>
<em>WeTok-AR-XL generated samples at 256 × 256 resolution.</em>
</p>
## Acknowledgement
Our work builds upon the foundations laid by many excellent projects in the field. We would like to thank the authors of [Open-MAGVIT2](https://arxiv.org/abs/2409.04410). We also drew inspiration from the methodologies presented in [LFQ](https://arxiv.org/abs/2310.05737) and [BSQ](https://arxiv.org/abs/2406.07548). We are grateful for their contributions to the community.