Add library name and pipeline tag
#1 by nielsr (HF Staff) - opened

README.md CHANGED

---
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: image-text-to-text
---

# QLIP

[\[📂 GitHub\]](https://github.com/NVlabs/QLIP)
[\[📃 QLIP Tech Report\]](http://arxiv.org/abs/2502.05178)
[\[🔗 Project Page\]](http://nvlabs.github.io/QLIP/)
[\[🤗 HF Model\]](https://huggingface.co/NVIDIA/QLIP-B-8-256)

## Introduction
We introduce Quantized Language-Image Pretraining (**QLIP**), a visual tokenization method that combines state-of-the-art reconstruction quality with state-of-the-art zero-shot image understanding.
QLIP trains a binary-spherical-quantization-based autoencoder with reconstruction and language-image alignment objectives.
We are the first to show that the two objectives do not need to be at odds.
We balance the two loss terms dynamically during training and show that a two-stage training pipeline effectively reconciles the large-batch requirement of image-language pre-training with the memory bottleneck imposed by the reconstruction objective.
We validate the effectiveness of QLIP for multimodal understanding and text-conditioned image generation with a single model.
Specifically, QLIP serves as a drop-in replacement for the visual encoder in LLaVA and for the image tokenizer in LlamaGen, with comparable or even better performance.
Finally, we demonstrate that QLIP enables a unified mixed-modality auto-regressive model for understanding and generation.
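
The dynamic balancing mentioned above can be pictured with a small sketch. This is illustrative only, assuming a simple inverse-magnitude weighting rule (the paper's actual schedule may differ); `recon_loss` and `align_loss` stand in for the autoencoder and contrastive objectives:

```python
import torch

# Illustrative sketch of dynamically balancing two loss terms: weight each
# objective by the inverse of its recent magnitude (tracked with an EMA) so
# that neither the reconstruction nor the alignment term dominates training.
def balanced_loss(recon_loss: torch.Tensor, align_loss: torch.Tensor,
                  ema: dict, beta: float = 0.99) -> torch.Tensor:
    # EMAs of the detached loss magnitudes; no gradient flows through them.
    ema["recon"] = beta * ema.get("recon", recon_loss.item()) + (1 - beta) * recon_loss.item()
    ema["align"] = beta * ema.get("align", align_loss.item()) + (1 - beta) * align_loss.item()
    # Normalizing by the EMA keeps the two gradients on a comparable scale.
    return recon_loss / ema["recon"] + align_loss / ema["align"]

# Toy usage with stand-in scalar losses:
ema = {}
total = balanced_loss(torch.tensor(0.8), torch.tensor(4.0), ema)
```
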
## Model Zoo
We provide the following models:

| model name | #bits | CR<sub>↑</sub> | 0-shot<sub>↑</sub> | rFID<sub>↓</sub> | HF Link |
| ------------- | ----- | ----- | ------ | ---- | ------- |
| QLIP-B-16-256 | 28 | 219.4 | 74.3 | 3.21 | [🤗 link](https://huggingface.co/NVIDIA/QLIP-B-16-256) |
| QLIP-B-8-256 | 28 | 54.8 | 75.6 | 0.70 | [🤗 link](https://huggingface.co/NVIDIA/QLIP-B-8-256) |
| QLIP-L-14-392 | 28 | 168.0 | 79.1 | 1.46 | [🤗 link](https://huggingface.co/NVIDIA/QLIP-L-14-392) |

Note:
- **CR**: compression ratio, computed as 24/(#bits) × patch_size² (24 bits per uncompressed RGB pixel; see the check after this list);
- **0-shot**: zero-shot classification accuracy on IN-1k-val;
- **rFID**: reconstruction FID on IN-1k-val.
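Given the `library_name: transformers` metadata added in this PR, loading should follow the usual Hugging Face pattern. Below is a minimal sketch, not an official snippet; that this checkpoint exposes an `AutoModel` mapping via custom modeling code is an assumption (hence `trust_remote_code=True`):

```python
# Minimal loading sketch; the exact classes and outputs this repo exposes
# are an assumption, not confirmed by this model card.
from transformers import AutoModel

model = AutoModel.from_pretrained("NVIDIA/QLIP-B-8-256", trust_remote_code=True)
print(model.config)
```
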
## Citing QLIP

```bibtex
@article{zhao2025qlip,
  title={QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation},
  author={Zhao, Yue and Xue, Fuzhao and Reed, Scott and Fan, Linxi and Zhu, Yuke and Kautz, Jan and Yu, Zhiding and Krähenbühl, Philipp and Huang, De-An},
  journal={arXiv preprint arXiv:2502.05178},
  year={2025}
}
```

## Acknowledgement
The project builds upon the following open-source efforts:
- [EVA-CLIP](https://github.com/baaivision/EVA/tree/master/EVA-CLIP/rei): We use EVA-CLIP as the initialization, which significantly speeds up training convergence.

- [LLaVA](https://github.com/haotian-liu/LLaVA): We use LLaVA to evaluate multimodal understanding performance.

- [LlamaGen](https://github.com/FoundationVision/LlamaGen): We build the text-to-image generation evaluation on top of LlamaGen.

- [Lingua](https://github.com/facebookresearch/lingua): We build the unified multimodal model on top of Lingua.