asifnawazmiani committed · verified
Commit a0d2022 · 1 Parent(s): ff5d8a1

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -5,6 +5,8 @@ language:
 tags:
 - Autoregressive
 - Tokenizer
+base_model:
+- TencentARC/Open-MAGVIT2-Tokenizer-256-resolution
 ---

 ## Open-MAGVIT2: Democratizing Autoregressive Visual Generation
@@ -19,4 +21,4 @@ Until now, VQGAN, the initial tokenizer, still plays an indispensable role in

 Therefore, [MAGVIT2](https://arxiv.org/abs/2310.05737) proposes a powerful tokenizer for visual generation tasks, which introduces a novel lookup-free quantization technique and extends the codebook size to $2^{18}$, exhibiting promising performance in both image and video generation. It also plays an important role in the recent state-of-the-art AR video generation model [VideoPoet](https://arxiv.org/abs/2312.14125). However, we have no access to this strong tokenizer so far. ☹️

-In this codebase, we follow the key insights of the tokenizer design in MAGVIT-2 and re-implement it in PyTorch, achieving the closest results to the original so far. We hope that our effort can foster innovation and creativity within the field of autoregressive visual generation. 😄
+In this codebase, we follow the key insights of the tokenizer design in MAGVIT-2 and re-implement it in PyTorch, achieving the closest results to the original so far. We hope that our effort can foster innovation and creativity within the field of autoregressive visual generation. 😄
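The lookup-free quantization the README describes can be illustrated with a minimal sketch: each latent channel is binarized to ±1, so an 18-channel latent implicitly indexes a $2^{18}$ codebook with no embedding-table lookup. This is our own toy illustration, not the MAGVIT-2 or Open-MAGVIT2 implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def lfq_quantize(z):
    # Binarize each latent channel to -1/+1; the sign pattern of the
    # 18 channels itself serves as the token index, so the codebook of
    # size 2^18 is implicit and no nearest-neighbor lookup is needed.
    codes = np.where(z > 0, 1.0, -1.0)
    bits = (codes > 0).astype(np.int64)          # map +1 -> 1, -1 -> 0
    weights = 2 ** np.arange(z.shape[-1], dtype=np.int64)
    indices = (bits * weights).sum(axis=-1)      # integer token ids
    return codes, indices

# Toy batch of 4 latent vectors with 18 channels each
z = np.random.randn(4, 18)
codes, idx = lfq_quantize(z)
print(codes.shape, idx.shape)  # (4, 18) (4,)
```

Every index falls in `[0, 2**18)`, which is what lets the codebook grow so large without a quadratic-cost codebook lookup.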