Title: VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation

URL Source: https://arxiv.org/html/2409.04429

Markdown Content:

Yecheng Wu 1,2, Zhuoyang Zhang 2, Junyu Chen 1,2, Haotian Tang 2, Dacheng Li 4, Yunhao Fang 5, Ligeng Zhu 3, Enze Xie 3, Hongxu Yin 3, Li Yi 1, Song Han 2,3, Yao Lu 3

Tsinghua University 1, MIT 2, NVIDIA 3, UC Berkeley 4, UC San Diego 5

[https://hanlab.mit.edu/projects/vila-u](https://hanlab.mit.edu/projects/vila-u)

###### Abstract

VILA-U is a **U**nified foundation model that integrates **V**ideo, **I**mage, and **La**nguage understanding and generation. Traditional visual language models (VLMs) use separate modules for understanding and generating visual content, which can lead to misalignment and increased complexity. In contrast, VILA-U employs a single autoregressive next-token prediction framework for both tasks, eliminating the need for additional components such as diffusion models. This approach not only simplifies the model but also achieves near state-of-the-art performance in visual language understanding and generation. The success of VILA-U is attributed to two main factors: a unified vision tower that aligns discrete visual tokens with textual inputs during pretraining, which enhances visual perception, and the finding that autoregressive image generation can achieve quality similar to diffusion models when trained on a high-quality dataset. This allows VILA-U to perform comparably to more complex models while using a fully token-based autoregressive framework. Our code is open sourced at [https://github.com/mit-han-lab/vila-u](https://github.com/mit-han-lab/vila-u).
1 Introduction
--------------

In recent years, large language models (LLMs) have demonstrated superior capabilities in various language tasks. Their appealing properties, such as instruction following, zero-shot generalization, and few-shot in-context learning, motivate researchers to combine them with vision models to build visual language models (VLMs) for multi-modal tasks. Many efforts (Dai et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib8); Liu et al., [2024b](https://arxiv.org/html/2409.04429v3#bib.bib35); Lin et al., [2023](https://arxiv.org/html/2409.04429v3#bib.bib32)) in this field have achieved remarkable performance on visual language understanding. In these works, visual inputs are projected onto the LLM’s semantic space through a vision model like CLIP (Radford et al., [2021](https://arxiv.org/html/2409.04429v3#bib.bib42)), bridging the two modalities via text-image alignment objectives.

In addition to visual understanding, another essential research direction in combining visual and language modalities is visual generation. There are two popular approaches for text-guided image generation. One approach employs diffusion models (Rombach et al., [2022a](https://arxiv.org/html/2409.04429v3#bib.bib43)), a powerful tool for various generation tasks. The other line of work converts visual content into discrete tokens through vector quantization (VQ) and then leverages autoregressive transformers for high-quality and diverse generation (Esser et al., [2021](https://arxiv.org/html/2409.04429v3#bib.bib11); Yu et al., [2021](https://arxiv.org/html/2409.04429v3#bib.bib59); Lee et al., [2022](https://arxiv.org/html/2409.04429v3#bib.bib24); Tian et al., [2024b](https://arxiv.org/html/2409.04429v3#bib.bib51); Sun et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib46)).

Witnessing the rapid advancements in both visual understanding and generation, an emerging trend is to unify these techniques into a single multi-modal framework. Prior to VILA-U, there were two main approaches to achieving such unification: (1) One approach (Liu et al., [2024a](https://arxiv.org/html/2409.04429v3#bib.bib34); Yu et al., [2023a](https://arxiv.org/html/2409.04429v3#bib.bib61); Xie et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib56)) utilizes a VQGAN-based (Esser et al., [2021](https://arxiv.org/html/2409.04429v3#bib.bib11)) tokenizer to convert visual inputs into discrete tokens and leverages an autoregressive model for both understanding and generation. However, (Xie et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib56)) has shown that visual tokens from a VQGAN-based encoder lack semantic information and usually result in a severe performance drop on downstream visual understanding tasks. (2) Another approach (Zhan et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib64); Ge et al., [2023b](https://arxiv.org/html/2409.04429v3#bib.bib14); Jin et al., [2023](https://arxiv.org/html/2409.04429v3#bib.bib22)) utilizes a codebook to quantize features produced by a pre-trained vision model like CLIP. Since CLIP features encode rich semantic information, these approaches generally achieve significantly better performance on understanding tasks. However, these tokenizers lack decoding capability, requiring an external visual generation model, such as a diffusion model, to use the generated visual tokens as conditions for producing visual outputs. This adds complexity to infrastructure design. Available large-scale foundation model training pipelines and deployment systems have already been highly optimized for language modeling with next-token prediction; designing and maintaining an additional stack to support diffusion models would incur significant engineering costs.

In this work, we present VILA-U, an end-to-end autoregressive framework with a unified next-token prediction objective for both visual and text inputs that achieves competitive performance on both visual language understanding and generation tasks, without the help of external components like diffusion models. We identify two critical principles to unify the vision and language modalities: (1) Existing unified end-to-end autoregressive VLMs cannot achieve competitive visual understanding performance because the discrete VQGAN tokens are trained solely on an image reconstruction loss and are not aligned with textual inputs. Therefore, it is crucial to introduce text alignment during VQ vision tower pretraining to enhance perception capabilities. (2) Autoregressive image generation can attain quality similar to diffusion models if trained on high-quality data of sufficient size. Guided by these insights, VILA-U features a unified foundation vision tower that converts visual inputs into discrete tokens through vector quantization and aligns these tokens with textual inputs using contrastive learning. The multi-modal training of VILA-U takes advantage of a unified next-token prediction objective for both visual and textual tokens on a small, high-quality image-text corpus.

We evaluate VILA-U on common visual language tasks, including image-language understanding, video-language understanding, image generation, and video generation. VILA-U significantly narrows the gap in visual understanding performance between end-to-end autoregressive models and continuous-token VLMs, while introducing competitive native visual generation capabilities.
2 Related Work
--------------

Large Language Models (LLMs). LLMs based on pre-trained large-scale transformers (Vaswani et al., [2017](https://arxiv.org/html/2409.04429v3#bib.bib54)) have drastically revolutionized the field of natural language processing. Featuring gigantic model sizes and pre-training corpora, LLMs have achieved remarkable performance on various linguistic tasks. The development of open-source LLMs such as LLaMA (Touvron et al., [2023a](https://arxiv.org/html/2409.04429v3#bib.bib52)), Mixtral (Jiang et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib21)) and Vicuna (Chiang et al., [2023](https://arxiv.org/html/2409.04429v3#bib.bib6)) has further nourished research on how to adopt LLMs for complex language tasks. Besides excellent zero-shot generalizability to diverse domains, LLMs are commonly finetuned on custom datasets for better performance on specific tasks. Instruction tuning (OpenAI, [2023](https://arxiv.org/html/2409.04429v3#bib.bib39); Chung et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib7); Ouyang et al., [2022](https://arxiv.org/html/2409.04429v3#bib.bib40)) also stands as a key step toward better outputs when applying LLMs. In this work, we adopt the LLaMA-2-7B (Touvron et al., [2023a](https://arxiv.org/html/2409.04429v3#bib.bib52)) model as our base LLM.

Visual Language Models (VLMs). Combining computer vision and natural language processing gives rise to VLMs in the LLM era. In VLMs, researchers leverage vision foundation models such as CLIP (Radford et al., [2021](https://arxiv.org/html/2409.04429v3#bib.bib42)), BLIP (Li et al., [2022](https://arxiv.org/html/2409.04429v3#bib.bib27)) and CoCa (Yu et al., [2022](https://arxiv.org/html/2409.04429v3#bib.bib60)) to extract visual features, align them with text, and feed them into an LLM to achieve cross-modality understanding between text and visual content. Building upon such progress, many VLMs (Alayrac et al., [2022](https://arxiv.org/html/2409.04429v3#bib.bib1); Li et al., [2023b](https://arxiv.org/html/2409.04429v3#bib.bib28); Liu et al., [2024b](https://arxiv.org/html/2409.04429v3#bib.bib35); Lin et al., [2023](https://arxiv.org/html/2409.04429v3#bib.bib32); Luo et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib37); Tian et al., [2024a](https://arxiv.org/html/2409.04429v3#bib.bib50)) have been designed and trained on extensive vision-language data to achieve remarkable performance on visual understanding and reasoning tasks. In this work, we aim to develop a VLM with visual understanding capacities comparable to prior works, while also possessing the additional capability of visual generation.

Unified Visual Language Models. Numerous efforts have been made to develop unified visual language models capable of generating both text and visual content, including images and videos. There are two mainstream methods to generate visual content in VLMs. Many works (Sun et al., [2023b](https://arxiv.org/html/2409.04429v3#bib.bib48); [a](https://arxiv.org/html/2409.04429v3#bib.bib47); Jin et al., [2023](https://arxiv.org/html/2409.04429v3#bib.bib22); Ge et al., [2023b](https://arxiv.org/html/2409.04429v3#bib.bib14); Li et al., [2023c](https://arxiv.org/html/2409.04429v3#bib.bib29); Ge et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib15); Jin et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib23); Ge et al., [2023a](https://arxiv.org/html/2409.04429v3#bib.bib13)) combine VLMs with diffusion models like Stable Diffusion (Rombach et al., [2022a](https://arxiv.org/html/2409.04429v3#bib.bib43)) for high-quality image generation. Other works (Liu et al., [2024a](https://arxiv.org/html/2409.04429v3#bib.bib34); Yu et al., [2023a](https://arxiv.org/html/2409.04429v3#bib.bib61); Lu et al., [2023](https://arxiv.org/html/2409.04429v3#bib.bib36); Team, [2024](https://arxiv.org/html/2409.04429v3#bib.bib49); Xie et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib56)) adopt VQGAN-based vision encoders to convert visual inputs into discrete tokens and make LLMs learn to predict them. For more details on the distinction between our method and other unified visual language models, please refer to Appendix [A](https://arxiv.org/html/2409.04429v3#A1 "Appendix A Difference with related works ‣ 6 Conclusion and Limitation ‣ 5.3 Impact of Classifier-free Guidance ‣ 5 Ablation Study ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation").
3 Methods
---------

![Image 1: Refer to caption](https://arxiv.org/html/2409.04429v3/x1.png)

Figure 1: An overview of our framework’s multi-modal training and inference process. Visual inputs are tokenized into discrete tokens and concatenated with textual tokens to form a multi-modal token sequence. All tokens are involved in our next-token prediction process, enabling a unified training objective. During inference, the output tokens are decoded by our text detokenizer or vision tower decoder to yield multi-modal content.

This work proposes a multi-modal framework that aims to unify the visual and language modalities effectively. The key components enabling such unification are a unified foundation vision tower that converts visual inputs into discrete tokens aligned with text, and a unified multi-modal generative training procedure. An overview of the main multi-modal training and inference process within our framework is depicted in Figure [1](https://arxiv.org/html/2409.04429v3#S3.F1 "Figure 1 ‣ 3 Methods ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation").
### 3.1 Unified Foundation Vision Tower

To support diverse visual understanding and generation tasks, we first build a unified foundation vision tower to provide appropriate visual features. We propose to include a text-image contrastive loss and a VQ-based image reconstruction loss in our vision tower training, equipping the vision tower with text alignment and discrete tokenization abilities. As depicted in Figure [2](https://arxiv.org/html/2409.04429v3#S3.F2 "Figure 2 ‣ Unified Training Recipe. ‣ 3.1 Unified Foundation Vision Tower ‣ 3 Methods ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation"), the features extracted from images are first discretized through residual quantization. Then, in one route, the discrete visual features are fed into a decoder to reconstruct the image and compute the reconstruction loss; in the other route, we compute the image-text contrastive loss between the discrete visual features and the textual features provided by a text encoder. With this training procedure, the vision tower learns to extract discrete features suitable for both understanding and generation in our VLM.

#### Unified Training Recipe.

Training the unified vision tower with both objectives from scratch is difficult, because the alignment and reconstruction tasks require high-level semantic and low-level appearance features, respectively, so optimizing both from scratch can induce conflicting goals. In practice, we observe that training the vector-quantized vision tower from scratch with both image reconstruction and contrastive loss results in a mere 5% Top-1 accuracy for zero-shot image classification on ImageNet (Deng et al., [2009a](https://arxiv.org/html/2409.04429v3#bib.bib9)) after several epochs of training.

To address this issue, we experiment with different training recipes (failed recipes are listed in Appendix [C](https://arxiv.org/html/2409.04429v3#A3 "Appendix C Failed Training Recipes. ‣ 6 Conclusion and Limitation ‣ 5.3 Impact of Classifier-free Guidance ‣ 5 Ablation Study ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation")) and find the following solution to be most effective. Instead of learning both objectives simultaneously, we suggest first equipping the model with text-image alignment ability and then learning reconstruction while maintaining alignment. We initialize the vision encoder and text encoder with pretrained weights from the CLIP model to ensure good text-image alignment. Next, we freeze the text encoder and keep all vision components trainable, using both the contrastive and reconstruction losses. The contrastive loss maintains alignment ability, while the reconstruction loss develops reconstruction ability. This approach converges quickly and yields strong performance. The pre-trained CLIP weights contain learned high-level priors, which are difficult and computationally expensive to learn from scratch; initializing with these weights lets the vision encoder bind low-level and high-level features much faster and more tractably. With this recipe, we can train a vision tower that exhibits both good text alignment and image reconstruction abilities. We use a weighted sum to combine the text-image contrastive loss and the VQ-based image reconstruction loss:
$$\mathcal{L}_{total}=w_{contra}\mathcal{L}_{contra}+w_{recon}\mathcal{L}_{recon}\qquad(1)$$

In our experiments, we pick $w_{contra}=1$ and $w_{recon}=1$.
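As a concrete illustration of Eq. 1, the sketch below combines a reconstruction loss with a contrastive term computed as a standard symmetric InfoNCE over matched image/text feature pairs. This is a generic formulation under our own assumptions (feature shapes, temperature value), not VILA-U's exact implementation:

```python
import numpy as np

def unified_loss(img_feats, txt_feats, recon_loss, w_contra=1.0, w_recon=1.0,
                 temperature=0.07):
    """Weighted total loss of Eq. 1 (both weights are 1 in the paper).
    The contrastive term is a symmetric InfoNCE: matched image/text pairs
    sit on the diagonal of the similarity matrix."""
    # cosine-similarity logits between every image and every text
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    n = logits.shape[0]
    # cross-entropy against the diagonal (matched pairs), in both directions
    log_p_i2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t2i = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    contra = -(np.trace(log_p_i2t) + np.trace(log_p_t2i)) / (2 * n)
    return w_contra * contra + w_recon * recon_loss
```

The reconstruction term is passed in precomputed, since it comes from a separate decoder pass over the quantized features.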
![Image 2: Refer to caption](https://arxiv.org/html/2409.04429v3/x2.png)

Figure 2: Overview of our unified foundation vision tower. Given input images, the features extracted by the vision encoder are discretized using residual quantization. The discrete vision features are then both fed into the vision decoder to reconstruct images and used to perform text-image alignment. During this process, the reconstruction loss and contrastive loss are computed to update the vision tower, enabling it to produce discrete visual features aligned with text.

#### Residual Vector Quantization.

Our visual features are discretely quantized, so their representation ability heavily depends on the code size used in our quantizer. Since we hope they contain both high-level and low-level features, we need more capacity in their vector feature space, making a larger code size necessary for good performance on downstream tasks. However, too many codes per image result in too many tokens for the LLM to produce during visual generation, incurring significant latency. To increase the vector feature capacity while maintaining a reasonable number of tokens for the LLM, we adopt a residual vector quantization method following RQ-VAE (Lee et al., [2022](https://arxiv.org/html/2409.04429v3#bib.bib24)) to discretize a vector $\mathbf{z}$ as $D$ discrete codes:
$$\mathcal{RQ}(\mathbf{z};\mathcal{C},D)=\left(k_{1},\cdots,k_{D}\right)\in[K]^{D},\qquad(2)$$

where $\mathcal{C}$ is the codebook, $K=|\mathcal{C}|$, and $k_{d}$ is the code of $\mathbf{z}$ at depth $d$. Starting with $\mathbf{r}_{0}=\mathbf{z}$, we recursively perform vector quantization by

$$k_{d}=\mathcal{Q}\left(\mathbf{r}_{d-1},\mathcal{C}\right),\qquad\mathbf{r}_{d}=\mathbf{r}_{d-1}-\mathbf{e}\left(k_{d}\right),\qquad(3)$$

for each depth $d=1,2,\cdots,D$, where $\mathbf{e}$ is the codebook embedding table and $\mathcal{Q}$ is the standard vector quantization:

$$\mathcal{Q}(\mathbf{z};\mathcal{C})=\underset{k\in[K]}{\arg\min}\,\|\mathbf{z}-\mathbf{e}(k)\|_{2}^{2}.\qquad(4)$$

The quantized vector for $\mathbf{z}$ is the sum over the depth dimension: $\widehat{\mathbf{z}}=\sum_{i=1}^{D}\mathbf{e}\left(k_{i}\right)$. Intuitively, at each depth we choose a code to reduce the quantization error. So compared to standard vector quantization methods, we have $D$ codes to quantize one vector, allowing for finer approximation and a larger feature space. During multi-modal training and inference, the LLM only needs to predict the code embedding, with codes at different depths sequentially produced by a depth transformer taking the code embedding as the initial input, as we will introduce in Section [3.2](https://arxiv.org/html/2409.04429v3#S3.SS2 "3.2 Unified Multi-modal Generative Pre-training ‣ 3 Methods ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation"). With this residual quantization, we can enhance the representation capability of our vision tower while incurring little latency.
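The recursion in Eqs. 2–4 can be sketched in a few lines of NumPy. This is a minimal illustration with a fixed toy codebook; in the actual model the codebook embeddings are learned as part of the RQ-VAE-style quantizer:

```python
import numpy as np

def residual_quantize(z, codebook, depth):
    """Residual vector quantization: at each depth, pick the nearest codebook
    entry to the current residual (standard VQ, Eq. 4), subtract its embedding
    (Eq. 3), and repeat. Returns the D code indices and the quantized vector,
    i.e. the sum of the chosen embeddings over the depth dimension."""
    codes, residual = [], z.astype(float).copy()
    for _ in range(depth):
        dists = ((codebook - residual) ** 2).sum(axis=1)  # squared L2 to each code
        k = int(np.argmin(dists))
        codes.append(k)
        residual = residual - codebook[k]
    z_hat = codebook[codes].sum(axis=0)
    return codes, z_hat
```

Each successive depth refines the approximation of the input vector, which is why D codes give a much larger effective feature space than a single code.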
### 3.2 Unified Multi-modal Generative Pre-training

Figure [1](https://arxiv.org/html/2409.04429v3#S3.F1 "Figure 1 ‣ 3 Methods ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation") presents an overview of our unified multi-modal pre-training process. Our vision tower encoder processes visual inputs sequentially, generating a 1D token sequence. This sequence is then concatenated with text tokens to form a multi-modal sequence. To distinguish between modalities and enable visual content generation, we insert special tokens: <image_start> and <image_end> at the start and end of image tokens, and <video_start> and <video_end> at the start and end of video tokens. Video tokens are the direct concatenation of multi-frame image tokens.
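The sequence assembly described above can be sketched as follows. The special-token names match those introduced in the text; the function itself is a hypothetical illustration, not VILA-U's actual preprocessing code:

```python
def build_multimodal_sequence(text_tokens, image_tokens=None, video_frames=None):
    """Concatenate text tokens with visual tokens, wrapping image tokens in
    <image_start>/<image_end> and video tokens in <video_start>/<video_end>.
    Video tokens are the direct concatenation of per-frame image tokens."""
    seq = list(text_tokens)
    if image_tokens is not None:
        seq += ["<image_start>", *image_tokens, "<image_end>"]
    if video_frames is not None:
        seq.append("<video_start>")
        for frame_tokens in video_frames:
            seq += list(frame_tokens)
        seq.append("<video_end>")
    return seq
```

The resulting flat sequence is what the unified next-token prediction objective operates on, with no architectural distinction between modalities.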
Pre-training data form. In terms of unified pre-training data, we leverage different concatenation forms between text and visual tokens to facilitate both understanding and generation. We use [image, text], [text, image], and [text, video] forms, with the supervision loss added only on the latter modality in each pair to avoid unconditional content generation and promote modality alignment. We also employ an interleaved text and image concatenation form for enhanced understanding, with the supervision loss applied solely to the text. Notably, we exclude the [video, text] form during pre-training for efficiency reasons, as we find incorporating it during supervised fine-tuning effectively yields excellent video understanding ability.

Training Objective. Since both visual tokens and text tokens are discrete, we can train our LLM with the general language modeling next-token prediction objective. However, due to the use of residual quantization for visual tokens, the training objectives for text and visual tokens differ slightly. For text tokens, the negative log-likelihood loss is calculated as
$$\mathcal{L}_{\text{text}}=-\sum_{i=1}^{T}\log P_{\theta}\left(y_{i}\,|\,y_{<i}\right),\qquad(5)$$

where $T$ is the length of the multi-modal sequence and $i$ only counts positions where a text token appears. For visual tokens, residual quantization introduces a depth-stacked structure of codes at each visual position $j$. To address this, we leverage the depth transformer introduced in RQ-VAE (Lee et al., [2022](https://arxiv.org/html/2409.04429v3#bib.bib24)). Specifically, given the code embedding $h_{j}$ generated by the LLM for the visual token at position $j$, the depth transformer autoregressively predicts $D$ residual tokens $(k_{j1},\dots,k_{jD})$. During training, the input of the depth transformer $v_{jd}$ at depth $d$ is defined as the sum of the code embeddings up to depth $d-1$ for $d>1$, such that

$$v_{jd}=\sum_{d^{\prime}=1}^{d-1}\mathbf{e}(k_{jd^{\prime}}),\qquad(6)$$

and $v_{j1}=h_{j}$. Thus, the depth transformer predicts the next code for a finer estimation of the feature $\hat{\mathbf{z}}_{j}$ based on the previous estimations up to depth $d-1$. The negative log-likelihood loss for visual tokens is then

$$\mathcal{L}_{\text{visual}}=-\sum_{j=1}^{T}\sum_{d=1}^{D}\log P_{\delta}\left(k_{jd}\,|\,k_{j,<d}\right),\qquad(7)$$

where $T$ is the length of the multi-modal sequence and $j$ only counts positions where a visual token appears. During multi-modal pre-training, the weights of the depth transformer are randomly initialized and updated together with the LLM.
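The depth-transformer inputs of Eq. 6 can be sketched as follows. The interface is hypothetical: `code_embeddings[d]` stands for the embedding e(k_{jd}) of the code already chosen at depth d+1:

```python
import numpy as np

def depth_transformer_inputs(h_j, code_embeddings):
    """Build the depth-transformer inputs v_{j1..jD} for one visual position j:
    v_{j1} is the LLM's code embedding h_j, and for d > 1, v_{jd} is the sum of
    the embeddings of the codes chosen at depths 1..d-1 (Eq. 6)."""
    inputs = [np.asarray(h_j, dtype=float)]
    cumulative = np.zeros_like(inputs[0])
    for e_kd in code_embeddings[:-1]:  # last code's embedding is never an input
        cumulative = cumulative + np.asarray(e_kd, dtype=float)
        inputs.append(cumulative.copy())
    return inputs
```

Because each input is the partial reconstruction so far, predicting the next code amounts to predicting the next refinement of the quantized feature.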
4 Experiments
-------------

In this section, we introduce comprehensive experiments to evaluate our method on various visual understanding and generation tasks. First, we outline our experimental setup, including the model architecture, training datasets, and evaluation benchmarks. We then evaluate the performance of our unified foundation vision tower and compare our method with other popular VLMs on various visual understanding and generation benchmarks. Finally, we present qualitative results.

### 4.1 Experimental Setup

In our experiments, we employ LLaMA-2-7B (Touvron et al., [2023b](https://arxiv.org/html/2409.04429v3#bib.bib53)) as our base language model. For the vision tower, we choose SigLIP-Large-patch16-256 / SigLIP-SO400M-patch14-384 (Zhai et al., [2023](https://arxiv.org/html/2409.04429v3#bib.bib63)) as our vision encoder architecture, and adopt the residual quantizer, depth transformer, and decoder architecture from RQ-VAE (Lee et al., [2022](https://arxiv.org/html/2409.04429v3#bib.bib24)). The quantizer codebook size is 16384. All images and videos are resized to a resolution of 256×256 / 384×384, with each image or video frame converted into a 16×16×4 / 27×27×16 code map with residual depth $D=4$ / $D=16$. We train our vision tower on COYO-700M (Byeon et al., [2022](https://arxiv.org/html/2409.04429v3#bib.bib2)) and evaluate it for zero-shot classification and reconstruction performance on ImageNet (Deng et al., [2009b](https://arxiv.org/html/2409.04429v3#bib.bib10)). For visual understanding, we leverage 1M [image, text] data from ShareGPT4V (Chen et al., [2023](https://arxiv.org/html/2409.04429v3#bib.bib5)) and 6M interleaved text and image data from MMC4 (Zhu et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib66)). For visual generation, we incorporate 15M high-quality [text, image] data curated from our internal dataset and 1M [text, video] data from the OpenVid (Nan et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib38)) dataset. Classifier-free guidance (Ho & Salimans, [2022](https://arxiv.org/html/2409.04429v3#bib.bib17)) is employed for visual generation with a CFG value of 3.
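Classifier-free guidance on next-token logits is commonly implemented as the extrapolation below, shown here with the CFG value of 3 reported above. This is the generic formulation, not necessarily VILA-U's exact implementation:

```python
import numpy as np

def cfg_logits(cond_logits, uncond_logits, cfg_scale=3.0):
    """Classifier-free guidance: extrapolate from the unconditional logits
    toward the conditional ones by the guidance scale. cfg_scale=1 recovers
    the plain conditional prediction; larger scales strengthen conditioning."""
    cond = np.asarray(cond_logits, dtype=float)
    uncond = np.asarray(uncond_logits, dtype=float)
    return uncond + cfg_scale * (cond - uncond)
```

At sampling time, the model is run twice per step (with and without the text condition) and the combined logits are used to sample the next visual token.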
114
+
115
+ For examining visual understanding ability, we evaluate our model on the widely adopted zero-shot image-based visual-language benchmarks including VQAv2 (Goyal et al., [2017](https://arxiv.org/html/2409.04429v3#bib.bib16)), GQA (Hudson & Manning, [2019](https://arxiv.org/html/2409.04429v3#bib.bib20)), TextVQA (Singh et al., [2019](https://arxiv.org/html/2409.04429v3#bib.bib45)), POPE (Li et al., [2023d](https://arxiv.org/html/2409.04429v3#bib.bib30)), MME (Fu et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib12)), SEED (Li et al., [2023a](https://arxiv.org/html/2409.04429v3#bib.bib25)), MM-Vet (Yu et al., [2023b](https://arxiv.org/html/2409.04429v3#bib.bib62)) and video-based visual-language benchmarks including ActivityNet (Caba Heilbron et al., [2015](https://arxiv.org/html/2409.04429v3#bib.bib3)), MSVD (Chen & Dolan, [2011](https://arxiv.org/html/2409.04429v3#bib.bib4)), MSRVTT (Xu et al., [2017](https://arxiv.org/html/2409.04429v3#bib.bib57)), TGIF (Li et al., [2016](https://arxiv.org/html/2409.04429v3#bib.bib31)).
+
+ To evaluate visual generation capability, we use MJHQ-30K (Li et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib26)) and GenAI-Bench (Lin et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib33)) for image generation and VBench (Huang et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib19)) for video generation. MJHQ-30K uses the FID between generated images and 30K high-quality reference images to reflect overall image generation capability. GenAI-Bench is a challenging text-to-image benchmark, scored with image-to-text models, that reflects the comprehensive generative abilities of image generation models. VBench is a comprehensive benchmark suite for video generative models that decomposes generation quality into multiple well-defined dimensions to facilitate fine-grained, objective evaluation.
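The FID used by MJHQ-30K is the Fréchet distance between Gaussians fitted to deep features of generated and reference images. Below is a minimal numpy sketch of that distance, with the Inception-style feature extraction omitted and toy statistics standing in for real ones:

```python
import numpy as np

def sqrtm_psd(a):
    # Symmetric PSD matrix square root via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(mu1, cov1, mu2, cov2):
    # Frechet distance between two Gaussians fitted to features:
    # ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2}).
    s1 = sqrtm_psd(cov1)
    covmean = sqrtm_psd(s1 @ cov2 @ s1)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

d = 4
mu, cov = np.zeros(d), np.eye(d)
fid_same = frechet_distance(mu, cov, mu, cov)         # identical stats
fid_shift = frechet_distance(mu, cov, mu + 2.0, cov)  # mean shift of 2 per dim
```

With equal covariances the distance reduces to the squared mean gap, which is why the toy shift of 2 per dimension over 4 dimensions gives 16.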
+
+ ### 4.2 Unified Foundation Vision Tower
+
+ We report two commonly used metrics, reconstruction FID (rFID) and Top-1 accuracy for zero-shot image classification on ImageNet, to measure the reconstruction and text alignment capabilities of the unified foundation vision tower in Table [1](https://arxiv.org/html/2409.04429v3#S4.T1 "Table 1 ‣ 4.2 Unified Foundation Vision Tower ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation"). Please refer to Appendix [B.1](https://arxiv.org/html/2409.04429v3#A2.SS1 "B.1 Reconstruction ‣ Appendix B Qualitative Results ‣ 6 Conclusion and Limitation ‣ 5.3 Impact of Classifier-free Guidance ‣ 5 Ablation Study ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation") for qualitative reconstruction results. Our model achieves significantly better reconstruction results than VQ-GAN. Our rFID is slightly inferior to that of RQ-VAE with the same code shape; this is expected, as the contrastive loss introduced during training to enhance image understanding reduces reconstruction quality. For text alignment, our unified vision tower achieves a Top-1 accuracy of 73.3 / 78.0 at 256 / 384 resolution, demonstrating its exceptional text alignment capability. It is worth noting, however, that the rFID and Top-1 accuracy of the vision tower serve only as intermediate indicators. Since the unified vision tower is an integral component of the entire autoregressive model, its performance on downstream tasks, such as visual understanding and generation, holds greater significance.
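The zero-shot Top-1 accuracy above is measured CLIP-style: each class name is embedded via a text prompt, and an image is assigned to the class whose prompt embedding has the highest cosine similarity with the image embedding. A minimal sketch with toy, hand-built embeddings in place of real SigLIP outputs:

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    # Normalize both sides, then pick the class prompt embedding with
    # the highest cosine similarity to the image embedding.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return int(np.argmax(txt @ img))

# Toy embeddings: one prompt embedding per class; the image embedding
# is most aligned with class 2.
text_embs = np.eye(4)
image_emb = np.array([0.1, 0.0, 0.9, 0.2])
pred = zero_shot_classify(image_emb, text_embs)
```

Because classification reduces to nearest-prompt retrieval in the shared embedding space, the Top-1 accuracy directly probes how well the discrete visual tokens stay aligned with text.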
+
+ Table 1: The reconstruction FID (rFID) and Top-1 accuracy for zero-shot image classification of our unified vision tower on ImageNet.
+
+ ### 4.3 Quantitative Evaluation
+
+ Table 2: Comparison with leading methods on image-based visual language benchmarks. Our performance is close to leading VLMs, surpassing many methods by a large margin under the same LLM size, even with a discrete visual token type. * indicates that images in the training split of these datasets are observed during VLM training.
+
+ Table 3: Comparison with leading methods on video-based visual language benchmarks. The performance of our method is close to state-of-the-art VLMs, surpassing many methods under the same LLM size, even with a discrete visual token type.
+
+ Visual Understanding Tasks. Table [2](https://arxiv.org/html/2409.04429v3#S4.T2 "Table 2 ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation") and Table [3](https://arxiv.org/html/2409.04429v3#S4.T3 "Table 3 ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation") summarize the comparison between our method and other leading VLMs on image-language and video-language benchmarks, respectively. Compared with the mainstream choice of continuous visual tokens produced by foundation models like CLIP, VQGAN-based discrete visual tokens align less well with text, which harms VLM performance on visual understanding tasks. With our unified foundation vision tower, our model achieves performance close to leading VLMs even with discrete visual tokens.
+
+ Table 4: Comparison with other visual generation methods on MJHQ-30K evaluation benchmark.
+
+ Table 5: Comparison with other visual generation methods on GenAI-Bench (Lin et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib33)). The results show that our method outperforms previous autoregressive visual generation methods. On advanced prompts, which demand stronger text-following ability, our method exhibits a relatively small performance gap with diffusion-based methods, even with much less training data.
+
+ | Method | Type | #Training Images | Attribute↑ | Scene↑ | Relation (Spatial)↑ | Relation (Action)↑ | Relation (Part)↑ | Overall↑ |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | SD v2.1 | Diffusion | 2000M | 0.80 | 0.79 | 0.76 | 0.77 | 0.80 | 0.78 |
+ | SD-XL | Diffusion | 2000M | 0.84 | 0.84 | 0.82 | 0.83 | 0.89 | 0.83 |
+ | Midjourney v6 | Diffusion | – | 0.88 | 0.87 | 0.87 | 0.87 | 0.91 | 0.87 |
+ | DALL-E 3 | Diffusion | – | 0.91 | 0.90 | 0.92 | 0.89 | 0.91 | 0.90 |
+ | LWM | Autoregressive | – | 0.63 | 0.62 | 0.65 | 0.63 | 0.70 | 0.63 |
+ | Show-o | Autoregressive | 36M | 0.72 | 0.72 | 0.70 | 0.70 | 0.75 | 0.70 |
+ | Ours (256) | Autoregressive | 15M | 0.78 | 0.78 | 0.77 | 0.78 | 0.79 | 0.76 |
+ | Ours (384) | Autoregressive | 15M | 0.75 | 0.76 | 0.75 | 0.73 | 0.75 | 0.73 |
+
+ (a) VQAScores on basic prompts of GenAI-Bench
+
+ | Method | Type | #Training Images | Count↑ | Differ↑ | Compare↑ | Logical (Negate)↑ | Logical (Universal)↑ | Overall↑ |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | SD v2.1 | Diffusion | 2000M | 0.68 | 0.70 | 0.68 | 0.54 | 0.64 | 0.62 |
+ | SD-XL | Diffusion | 2000M | 0.71 | 0.73 | 0.69 | 0.50 | 0.66 | 0.63 |
+ | Midjourney v6 | Diffusion | – | 0.78 | 0.78 | 0.79 | 0.50 | 0.76 | 0.69 |
+ | DALL-E 3 | Diffusion | – | 0.82 | 0.78 | 0.82 | 0.48 | 0.80 | 0.70 |
+ | LWM | Autoregressive | – | 0.59 | 0.58 | 0.54 | 0.49 | 0.52 | 0.53 |
+ | Show-o | Autoregressive | 36M | 0.70 | 0.62 | 0.71 | 0.51 | 0.65 | 0.60 |
+ | Ours (256) | Autoregressive | 15M | 0.70 | 0.71 | 0.74 | 0.53 | 0.66 | 0.64 |
+ | Ours (384) | Autoregressive | 15M | 0.68 | 0.67 | 0.71 | 0.51 | 0.64 | 0.61 |
+
+ (b) VQAScores on advanced prompts of GenAI-Bench
+
+ Visual Generation Tasks. As shown in Table [4](https://arxiv.org/html/2409.04429v3#S4.T4 "Table 4 ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation"), VILA-U achieves a better FID than other autoregressive methods and performs comparably to some diffusion-based methods. This result demonstrates the feasibility of our method for visual generation. Table [4.3](https://arxiv.org/html/2409.04429v3#S4.SS3 "4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation") summarizes the quantitative results of our method and other visual generation methods on GenAI-Bench. Although our method trails diffusion-based visual generation methods trained on billions of image-text pairs, it performs comparably to SD v2.1 (Rombach et al., [2022b](https://arxiv.org/html/2409.04429v3#bib.bib44)) and SD-XL (Podell et al., [2023](https://arxiv.org/html/2409.04429v3#bib.bib41)) on advanced prompts despite being trained on orders of magnitude less data. This further shows that VILA-U learns the correlation between visual and textual modalities effectively with our unified training framework. For video generation, we evaluate our method on VBench (Huang et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib19)) and compare it against Open-Sora ([Zheng et al.,](https://arxiv.org/html/2409.04429v3#bib.bib65)), CogVideo (Hong et al., [2022](https://arxiv.org/html/2409.04429v3#bib.bib18)), and CogVideoX (Yang et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib58)). The results, presented in Table [6](https://arxiv.org/html/2409.04429v3#S4.T6 "Table 6 ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation"), demonstrate that our method outperforms CogVideo and is comparable to Open-Sora, highlighting the effectiveness of our approach.
+
+ Table 6: Comparison with other visual generation methods on VBench (Huang et al., [2024](https://arxiv.org/html/2409.04429v3#bib.bib19)).
+
+ ### 4.4 Qualitative Evaluation
+
+ ![Image 3: Refer to caption](https://arxiv.org/html/2409.04429v3/x3.png)
+
+ Figure 3: VILA-U can correctly caption videos and cover all the details, thanks to the text alignment of our vision encoder.
+
+ Visual Understanding. To validate the effectiveness of VILA-U on comprehensive visual understanding tasks, we apply it to several understanding and reasoning tasks, with examples shown in Figure [3](https://arxiv.org/html/2409.04429v3#S4.F3 "Figure 3 ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation") and Figure [5](https://arxiv.org/html/2409.04429v3#S4.F5 "Figure 5 ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation"). The results demonstrate the versatility of VILA-U across tasks including visual captioning and visual question answering. Moreover, our model inherits important capabilities from VILA (Lin et al., [2023](https://arxiv.org/html/2409.04429v3#bib.bib32)), including multi-image understanding and in-context learning, as shown in Figure [5](https://arxiv.org/html/2409.04429v3#S4.F5 "Figure 5 ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation") and Figure [6](https://arxiv.org/html/2409.04429v3#S4.F6 "Figure 6 ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation"). More visualizations can be found in Appendix [B.2](https://arxiv.org/html/2409.04429v3#A2.SS2 "B.2 Visual Understanding ‣ Appendix B Qualitative Results ‣ 6 Conclusion and Limitation ‣ 5.3 Impact of Classifier-free Guidance ‣ 5 Ablation Study ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation") and [B.3](https://arxiv.org/html/2409.04429v3#A2.SS3 "B.3 In-context Learning Examples ‣ Appendix B Qualitative Results ‣ 6 Conclusion and Limitation ‣ 5.3 Impact of Classifier-free Guidance ‣ 5 Ablation Study ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation").
+
+ ![Image 4: Refer to caption](https://arxiv.org/html/2409.04429v3/x4.png)
+
+ Figure 4: VILA-U has strong visual question answering capability. The images and questions are from the test split of the VQAv2 dataset.
+
+ ![Image 5: Refer to caption](https://arxiv.org/html/2409.04429v3/x5.png)
+
+ Figure 5: VILA-U has good in-context learning capability. We feed two image-text pairs and a third image as the context to prompt the VLM.
+
+ ![Image 6: Refer to caption](https://arxiv.org/html/2409.04429v3/x6.png)
+
+ Figure 6: VILA-U can correctly reason over multiple images.
+
+ Visual Generation. We present examples of visual generation results in Figure [7](https://arxiv.org/html/2409.04429v3#S4.F7 "Figure 7 ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation"). Our model can be employed for both image generation and video generation, even when trained on a relatively small data corpus. In the given examples, our method generates visually appealing images and temporally coherent videos that adhere to the user's input. More visualizations can be found in Appendix [B.4](https://arxiv.org/html/2409.04429v3#A2.SS4 "B.4 Visual Generation ‣ Appendix B Qualitative Results ‣ 6 Conclusion and Limitation ‣ 5.3 Impact of Classifier-free Guidance ‣ 5 Ablation Study ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation").
+
+ ![Image 7: Refer to caption](https://arxiv.org/html/2409.04429v3/x7.png)
+
+ Figure 7: VILA-U can generate high-quality images and videos given text input.
+
+ 5 Ablation Study
+ ----------------
+
+ ### 5.1 Impact of Contrastive Loss on Visual Understanding
+
+ We include a contrastive loss in vision tower training, which endows the tower with text alignment capability. During our multi-modal training, this text alignment is crucial for enhancing modality fusion and performance on downstream visual language tasks. We validate its importance by training the vision tower with and without the contrastive loss and evaluating the impact on visual language understanding performance. For this ablation study, we randomly sample 25M examples from COYO-700M to train the vision tower. For multi-modal training, we use ShareGPT4V and MMC4, without the text-image and text-video generation data. The results in the first two rows of Table [7](https://arxiv.org/html/2409.04429v3#S5.T7 "Table 7 ‣ 5.1 Impact of Contrastive Loss to Visual Understanding ‣ 5 Ablation Study ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation") demonstrate the crucial role of text alignment in achieving strong visual language understanding performance. Scaling the dataset from 25M to 700M further enhances performance, highlighting the importance of learning text alignment on a large-scale dataset.
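As a concrete sketch of the kind of objective this ablation toggles, assuming a SigLIP-style pairwise sigmoid contrastive loss (we build on SigLIP encoders, but the temperature, bias, and toy one-hot features here are illustrative, not trained values):

```python
import numpy as np

def siglip_loss(img, txt, t=10.0, b=-10.0):
    # Pairwise sigmoid loss: every image-text pair in the batch is an
    # independent binary classification, positive on the diagonal
    # (matched pairs), negative everywhere else.
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = t * img @ txt.T + b
    labels = 2.0 * np.eye(len(img)) - 1.0  # +1 matched pair, -1 otherwise
    return float(np.mean(np.log1p(np.exp(-labels * logits))))

feats = np.eye(4)  # 4 toy, perfectly aligned image-text pairs
loss_aligned = siglip_loss(feats, feats)
loss_shuffled = siglip_loss(feats, np.roll(feats, 1, axis=0))
```

Matched image-text pairs sit on the diagonal of the similarity matrix, so a batch of aligned features scores a much lower loss than the same features with the texts shuffled; minimizing this term is what pulls the discrete visual tokens toward text.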
+
+ Table 7: Impact of contrastive loss on visual understanding.
+
+ ### 5.2 Impact of Contrastive Loss on Visual Generation
+
+ We conduct two experiments to measure the influence of the contrastive loss on generation performance. For efficiency, we perform only text-to-image pretraining and use Sheared-LLaMA-1.3B (Xia et al., [2023](https://arxiv.org/html/2409.04429v3#bib.bib55)) instead of LLaMA-2-7B as the LLM. In the first experiment, we use the RQ-VAE as the vision tower, which has an rFID of 1.30. In the second, we employ our unified vision tower. Results are shown in Table [5.2](https://arxiv.org/html/2409.04429v3#S5.SS2 "5.2 Impact of Contrastive Loss to Visual Generation ‣ 5 Ablation Study ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation"). Our unified vision tower yields a slightly worse FID than the RQ-VAE on MJHQ-30K, possibly due to its inferior rFID resulting from the contrastive loss.
+
+ Table 8: Impact of contrastive loss on visual generation.
+
+ Table 9: Impact of CFG.
+
+ ### 5.3 Impact of Classifier-free Guidance
+
+ We adopt classifier-free guidance during visual content generation and investigate the impact of the CFG value on our 256-resolution model. Results presented in Table [5.2](https://arxiv.org/html/2409.04429v3#S5.SS2 "5.2 Impact of Contrastive Loss to Visual Generation ‣ 5 Ablation Study ‣ 4.4 Qualitative Evaluation ‣ 4.3 Quantitative Evaluation ‣ 4 Experiments ‣ VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation") indicate that a CFG value of 3.0 yields the best FID score.
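A common way to apply classifier-free guidance in a token-based autoregressive decoder is to compute next-token logits twice, with and without the text condition, and extrapolate between them. A minimal sketch of that formulation (the logit values are illustrative, and scale=3.0 mirrors the CFG value used here):

```python
import numpy as np

def cfg_logits(cond, uncond, scale=3.0):
    # Extrapolate conditional next-token logits away from the
    # unconditional ones; scale=1.0 recovers plain conditional sampling.
    return uncond + scale * (cond - uncond)

cond = np.array([2.0, 1.0, 0.0])    # logits given the text prompt
uncond = np.array([1.0, 1.0, 1.0])  # logits with the prompt dropped
guided = cfg_logits(cond, uncond, scale=3.0)
```

Higher scales push probability mass toward tokens the condition favors, which tends to improve prompt adherence at some cost in diversity, consistent with an intermediate value like 3.0 giving the best FID.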
+
+ 6 Conclusion and Limitation
+ ---------------------------
+
+ We present VILA-U, a novel and unified visual language model that integrates video, image and language understanding and generation tasks into one autoregressive next-token prediction framework. Our method is not only more concise than most VLMs that leverage additional components like diffusion models for unifying visual generation and understanding, but also demonstrates that autoregressive methods can achieve comparable performance to state-of-the-art VLMs. We believe VILA-U can serve as a general-purpose framework for diverse visual language tasks.
+
+ As demonstrated in Section 5.2, the introduction of contrastive loss impacts the reconstruction ability of the vision tower. Balancing these two capabilities within the unified vision tower presents an interesting and complex challenge that requires further exploration. Additionally, we currently do not observe significant synergy or mutual enhancement between understanding and generation tasks. In the future, we aim to investigate and explore more effective methods to enable these tasks to complement and reinforce each other, thereby fully realizing the untapped potential of a unified visual language model.
+
+ References
+ ----------
+
+ * Alayrac et al. (2022) Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. _Advances in Neural Information Processing Systems_, 35:23716–23736, 2022.
+ * Byeon et al. (2022) Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset. [https://github.com/kakaobrain/coyo-dataset](https://github.com/kakaobrain/coyo-dataset), 2022.
+ * Caba Heilbron et al. (2015) Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In _Proceedings of the ieee conference on computer vision and pattern recognition_, pp. 961–970, 2015.
+ * Chen & Dolan (2011) David Chen and William B Dolan. Collecting highly parallel data for paraphrase evaluation. In _Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies_, pp. 190–200, 2011.
+ * Chen et al. (2023) Lin Chen, Jisong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. Sharegpt4v: Improving large multi-modal models with better captions. _arXiv preprint arXiv:2311.12793_, 2023.
+ * Chiang et al. (2023) Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. _See https://vicuna. lmsys. org (accessed 14 April 2023)_, 2(3):6, 2023.
+ * Chung et al. (2024) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. _Journal of Machine Learning Research_, 25(70):1–53, 2024.
+ * Dai et al. (2024) Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale N Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. _Advances in Neural Information Processing Systems_, 36, 2024.
+ * Deng et al. (2009a) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_, pp. 248–255. Ieee, 2009a.
+ * Deng et al. (2009b) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_, pp. 248–255. Ieee, 2009b.
+ * Esser et al. (2021) Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pp. 12873–12883, 2021.
+ * Fu et al. (2024) Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024.
+ * Ge et al. (2023a) Yuying Ge, Yixiao Ge, Ziyun Zeng, Xintao Wang, and Ying Shan. Planting a seed of vision in large language model. _arXiv preprint arXiv:2307.08041_, 2023a.
+ * Ge et al. (2023b) Yuying Ge, Sijie Zhao, Ziyun Zeng, Yixiao Ge, Chen Li, Xintao Wang, and Ying Shan. Making llama see and draw with seed tokenizer. In _The Twelfth International Conference on Learning Representations_, 2023b.
+ * Ge et al. (2024) Yuying Ge, Sijie Zhao, Jinguo Zhu, Yixiao Ge, Kun Yi, Lin Song, Chen Li, Xiaohan Ding, and Ying Shan. Seed-x: Multimodal models with unified multi-granularity comprehension and generation. _arXiv preprint arXiv:2404.14396_, 2024.
+ * Goyal et al. (2017) Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2017.
+ * Ho & Salimans (2022) Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. _arXiv preprint arXiv:2207.12598_, 2022.
+ * Hong et al. (2022) Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. _arXiv preprint arXiv:2205.15868_, 2022.
+ * Huang et al. (2024) Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, et al. Vbench: Comprehensive benchmark suite for video generative models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 21807–21818, 2024.
+ * Hudson & Manning (2019) Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pp. 6700–6709, 2019.
+ * Jiang et al. (2024) Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. _arXiv:2401.04088_, 2024.
+ * Jin et al. (2023) Yang Jin, Kun Xu, Liwei Chen, Chao Liao, Jianchao Tan, Quzhe Huang, CHEN Bin, Chengru Song, Di ZHANG, Wenwu Ou, et al. Unified language-vision pretraining in llm with dynamic discrete visual tokenization. In _The Twelfth International Conference on Learning Representations_, 2023.
+ * Jin et al. (2024) Yang Jin, Zhicheng Sun, Kun Xu, Liwei Chen, Hao Jiang, Quzhe Huang, Chengru Song, Yuliang Liu, Di Zhang, Yang Song, et al. Video-lavit: Unified video-language pre-training with decoupled visual-motional tokenization. _arXiv preprint arXiv:2402.03161_, 2024.
+ * Lee et al. (2022) Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. Autoregressive image generation using residual quantization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 11523–11532, 2022.
+ * Li et al. (2023a) Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking multimodal llms with generative comprehension. _arXiv preprint arXiv:2307.16125_, 2023a.
+ * Li et al. (2024) Daiqing Li, Aleks Kamko, Ehsan Akhgari, Ali Sabet, Linmiao Xu, and Suhail Doshi. Playground v2. 5: Three insights towards enhancing aesthetic quality in text-to-image generation. _arXiv preprint arXiv:2402.17245_, 2024.
+ * Li et al. (2022) Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In _ICML_, 2022.
+ * Li et al. (2023b) Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In _International conference on machine learning_, pp. 19730–19742. PMLR, 2023b.
+ * Li et al. (2023c) Yanwei Li, Yuechen Zhang, Chengyao Wang, Zhisheng Zhong, Yixin Chen, Ruihang Chu, Shaoteng Liu, and Jiaya Jia. Mini-gemini: Mining the potential of multi-modality vision language models. _arXiv:2403.18814_, 2023c.
+ * Li et al. (2023d) Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. _arXiv preprint arXiv:2305.10355_, 2023d.
+ * Li et al. (2016) Yuncheng Li, Yale Song, Liangliang Cao, Joel Tetreault, Larry Goldberg, Alejandro Jaimes, and Jiebo Luo. Tgif: A new dataset and benchmark on animated gif description. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pp. 4641–4650, 2016.
+ * Lin et al. (2023) Ji Lin, Hongxu Yin, Wei Ping, Yao Lu, Pavlo Molchanov, Andrew Tao, Huizi Mao, Jan Kautz, Mohammad Shoeybi, and Song Han. Vila: On pre-training for visual language models, 2023.
+ * Lin et al. (2024) Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, and Deva Ramanan. Evaluating text-to-visual generation with image-to-text generation. _arXiv preprint arXiv:2404.01291_, 2024.
+ * Liu et al. (2024a) Hao Liu, Wilson Yan, Matei Zaharia, and Pieter Abbeel. World model on million-length video and language with ringattention. _arXiv preprint arXiv:2402.08268_, 2024a.
+ * Liu et al. (2024b) Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. _Advances in neural information processing systems_, 36, 2024b.
+ * Lu et al. (2023) Jiasen Lu, Christopher Clark, Sangho Lee, Zichen Zhang, Savya Khosla, Ryan Marten, Derek Hoiem, and Aniruddha Kembhavi. Unified-io 2: Scaling autoregressive multimodal models with vision, language, audio, and action. _arXiv preprint arXiv:2312.17172_, 2023.
+ * Luo et al. (2024) Run Luo, Yunshui Li, Longze Chen, Wanwei He, Ting-En Lin, Ziqiang Liu, Lei Zhang, Zikai Song, Xiaobo Xia, Tongliang Liu, et al. Deem: Diffusion models serve as the eyes of large language models for image perception. _arXiv preprint arXiv:2405.15232_, 2024.
+ * Nan et al. (2024) Kepan Nan, Rui Xie, Penghao Zhou, Tiehan Fan, Zhenheng Yang, Zhijie Chen, Xiang Li, Jian Yang, and Ying Tai. Openvid-1m: A large-scale high-quality dataset for text-to-video generation. _arXiv preprint arXiv:2407.02371_, 2024.
+ * OpenAI (2023) OpenAI. Chatgpt. [https://openai.com/blog/chatgpt/](https://openai.com/blog/chatgpt/), 2023.
+ * Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. _Advances in neural information processing systems_, 35:27730–27744, 2022.
+ * Podell et al. (2023) Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. _arXiv preprint arXiv:2307.01952_, 2023.
+ * Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_, pp. 8748–8763. PMLR, 2021.
+ * Rombach et al. (2022a) Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pp. 10684–10695, 2022a.
+ * Rombach et al. (2022b) Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pp. 10684–10695, 2022b.
+ * Singh et al. (2019) Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pp. 8317–8326, 2019.
+ * Sun et al. (2024) Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, and Zehuan Yuan. Autoregressive model beats diffusion: Llama for scalable image generation, 2024. URL [https://arxiv.org/abs/2406.06525](https://arxiv.org/abs/2406.06525).
+ * Sun et al. (2023a) Quan Sun, Yufeng Cui, Xiaosong Zhang, Fan Zhang, Qiying Yu, Zhengxiong Luo, Yueze Wang, Yongming Rao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative multimodal models are in-context learners. _arXiv preprint arXiv:2312.13286_, 2023a.
+ * Sun et al. (2023b) Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. _arXiv preprint arXiv:2307.05222_, 2023b.
+ * Team (2024) Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models, 2024.
+ * Tian et al. (2024a) Changyao Tian, Xizhou Zhu, Yuwen Xiong, Weiyun Wang, Zhe Chen, Wenhai Wang, Yuntao Chen, Lewei Lu, Tong Lu, Jie Zhou, et al. Mm-interleaved: Interleaved image-text generative modeling via multi-modal feature synchronizer. _arXiv preprint arXiv:2401.10208_, 2024a.
+ * Tian et al. (2024b) Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, and Liwei Wang. Visual autoregressive modeling: Scalable image generation via next-scale prediction, 2024b. URL [https://arxiv.org/abs/2404.02905](https://arxiv.org/abs/2404.02905).
+ * Touvron et al. (2023a) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. _arXiv:2302.13971_, 2023a.
+ * Touvron et al. (2023b) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_, 2023b.
+ * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I.Guyon, U.Von Luxburg, S.Bengio, H.Wallach, R.Fergus, S.Vishwanathan, and R.Garnett (eds.), _Advances in Neural Information Processing Systems_, volume 30. Curran Associates, Inc., 2017. URL [https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf).
+ * Xia et al. (2023) Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, and Danqi Chen. Sheared llama: Accelerating language model pre-training via structured pruning. _arXiv preprint arXiv:2310.06694_, 2023.
+ * Xie et al. (2024) Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou. Show-o: One single transformer to unify multimodal understanding and generation. _arXiv preprint arXiv:2408.12528_, 2024.
299
+ * Xu et al. (2017) Dejing Xu, Zhou Zhao, Jun Xiao, Fei Wu, Hanwang Zhang, Xiangnan He, and Yueting Zhuang. Video question answering via gradually refined attention over appearance and motion. In _Proceedings of the 25th ACM international conference on Multimedia_, pp. 1645–1653, 2017.
300
+ * Yang et al. (2024) Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. _arXiv preprint arXiv:2408.06072_, 2024.
301
+ * Yu et al. (2021) Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu. Vector-quantized image modeling with improved vqgan. _arXiv preprint arXiv:2110.04627_, 2021.
302
+ * Yu et al. (2022) Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive captioners are image-text foundation models, 2022.
303
+ * Yu et al. (2023a) Lili Yu, Bowen Shi, Ramakanth Pasunuru, Benjamin Muller, Olga Golovneva, Tianlu Wang, Arun Babu, Binh Tang, Brian Karrer, Shelly Sheynin, et al. Scaling autoregressive multi-modal models: Pretraining and instruction tuning. _arXiv preprint arXiv:2309.02591_, 2(3), 2023a.
304
+ * Yu et al. (2023b) Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities. _arXiv preprint arXiv:2308.02490_, 2023b.
305
+ * Zhai et al. (2023) Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pp. 11975–11986, 2023.
306
+ * Zhan et al. (2024) Jun Zhan, Junqi Dai, Jiasheng Ye, Yunhua Zhou, Dong Zhang, Zhigeng Liu, Xin Zhang, Ruibin Yuan, Ge Zhang, Linyang Li, et al. Anygpt: Unified multimodal llm with discrete sequence modeling. _arXiv preprint arXiv:2402.12226_, 2024.
307
+ * (65) Zangwei Zheng, Xiangyu Peng, Tianji Yang, Chenhui Shen, Shenggui Li, Hongxin Liu, Yukun Zhou, Tianyi Li, and Yang You. Open-sora: Democratizing efficient video production for all, march 2024. _URL https://github. com/hpcaitech/Open-Sora_, 1(3):4.
308
+ * Zhu et al. (2024) Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, and Yejin Choi. Multimodal c4: An open, billion-scale corpus of images interleaved with text. _Advances in Neural Information Processing Systems_, 36, 2024.
309
+
Appendix
--------

Appendix A Differences from Related Works
-----------------------------------------

Prior to VILA-U, unified visual language models were dominated by two mainstream approaches:

(1) Approaches represented by LWM, CM3Leon, and Show-o, which use a VQGAN-based tokenizer to convert visual inputs into discrete tokens. Because these tokenizers are trained solely with a reconstruction objective, the resulting tokens lack rich semantic information, which leads to poor performance on multimodal understanding tasks. On the other hand, such tokenizers easily support autoregressive visual generation, since the generated visual tokens can be seamlessly decoded into visual outputs by the lightweight VQGAN decoder.

(2) Approaches represented by AnyGPT, SEED-LLaMa, and LaViT, which use a codebook to quantize features produced by a pre-trained ViT model such as CLIP. Since CLIP features encode rich semantic information, these approaches generally achieve significantly better performance on understanding tasks than VQGAN-based tokenizers. However, these tokenizers lack decoding capability, so an external visual generation model, such as a diffusion model, is required to turn the generated visual tokens into visual outputs.

Compared to these two mainstream approaches, VILA-U introduces a solution that addresses the limitations of both. We design a unified vision tower that extracts features with rich semantic information, similar to CLIP, while also supporting image reconstruction akin to VQGAN. This is achieved by incorporating both a reconstruction loss and a contrastive loss into the autoencoder training process, along with residual quantization to enhance the representation capability of the visual features. Building on this foundation, we develop a single end-to-end autoregressive framework that eliminates the external visual generation models required by approach (2) and significantly outperforms the understanding results of methods in approach (1).

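To illustrate why residual quantization enhances the representation capability of discrete tokens, the toy sketch below quantizes a feature vector at increasing depths, where each depth quantizes the residual left by the previous ones. The codebook, vector, and depths are made-up illustrative values, not the paper's actual configuration:

```python
# Toy sketch of residual quantization (RQ): a feature vector is approximated
# by a SUM of codebook entries, one per quantization depth. The codebook and
# depths below are hypothetical values for illustration only.

def nearest(codebook, vec):
    """Return the codebook entry closest to vec under squared L2 distance."""
    return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(c, vec)))

def residual_quantize(vec, codebook, depth):
    """Quantize vec with `depth` codes; return the codes and the leftover error."""
    residual = list(vec)
    codes = []
    for _ in range(depth):
        code = nearest(codebook, residual)   # quantize what is left so far
        codes.append(code)
        residual = [r - c for r, c in zip(residual, code)]
    error = sum(r * r for r in residual)     # squared norm of the final residual
    return codes, error

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.5, 0.5), (-0.5, 0.25)]
vec = (1.3, 0.8)
_, err1 = residual_quantize(vec, codebook, depth=1)
_, err4 = residual_quantize(vec, codebook, depth=4)
# Each added depth can only shrink (never grow) the residual, so the
# approximation error is non-increasing in depth: err4 <= err1.
```

Because deeper quantization strictly refines the approximation while keeping each code discrete, the same token positions can carry more visual detail, which is consistent with the paper's motivation for adopting residual quantization in the unified vision tower.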
Appendix B Qualitative Results
------------------------------

### B.1 Reconstruction

![Image 8: Refer to caption](https://arxiv.org/html/2409.04429v3/x8.png)

Figure 8: Visualization of the reconstruction results from text-aligned discrete visual tokens.

We present qualitative reconstruction results in Figure [8](https://arxiv.org/html/2409.04429v3#A2.F8) for our 256- and 384-resolution vision towers. These vision towers effectively reconstruct images in detail using text-aligned discrete visual tokens.

### B.2 Visual Understanding

![Image 9: Refer to caption](https://arxiv.org/html/2409.04429v3/x9.png)

Figure 9: Image understanding results. Examples are taken from the test split of the VQAv2 dataset.

![Image 10: Refer to caption](https://arxiv.org/html/2409.04429v3/x10.png)

Figure 10: Video understanding results. Examples are taken from the test split of the TGIF dataset.

We provide more image understanding and video understanding examples in Figure [9](https://arxiv.org/html/2409.04429v3#A2.F9) and Figure [10](https://arxiv.org/html/2409.04429v3#A2.F10). VILA-U answers the questions accurately.

### B.3 In-context Learning Examples

![Image 11: Refer to caption](https://arxiv.org/html/2409.04429v3/x11.png)

Figure 11: In-context learning examples. We try all in-context learning examples in Lin et al. ([2023](https://arxiv.org/html/2409.04429v3#bib.bib32)). The results demonstrate that VILA-U has inherited good in-context learning capabilities.

We provide more qualitative results demonstrating the in-context learning capabilities of VILA-U in Figure [11](https://arxiv.org/html/2409.04429v3#A2.F11).

### B.4 Visual Generation

![Image 12: Refer to caption](https://arxiv.org/html/2409.04429v3/x12.png)

Figure 12: Image generation results. VILA-U can generate high-quality images given text input.

![Image 13: Refer to caption](https://arxiv.org/html/2409.04429v3/x13.png)

Figure 13: Video generation results. VILA-U can generate high-quality videos given text input.

We provide more image and video generation examples in Figure [12](https://arxiv.org/html/2409.04429v3#A2.F12) and Figure [13](https://arxiv.org/html/2409.04429v3#A2.F13). VILA-U generates high-quality images and videos from text input.

Appendix C Failed Training Recipes
----------------------------------

We experiment with numerous training recipes and find none to be as effective as our final approach. We list four alternative recipes and discuss their shortcomings compared to our final recipe: 1) load pre-trained CLIP weights into the text encoder only; 2) load pre-trained RQ-VAE weights for the vision encoder and decoder while training the other parts from scratch; 3) freeze the vision encoder; 4) make the text encoder trainable.

Recipes 1) and 2) fail due to the lack of pre-trained CLIP weights for the vision encoder. Training a CLIP model from scratch typically requires numerous GPU days with a large global batch size (e.g., 32k). However, VQ-based reconstruction training necessitates a relatively small global batch size (e.g., 512) for steady improvement. With such a small batch size, training a text-aligned vision tower from scratch would be prohibitively time-consuming and resource-intensive.

Recipe 3) fails because freezing the vision encoder prevents it from learning the low-level features essential for reconstruction. In this case, the burden of reconstruction falls entirely on the vision decoder, but it is impossible to reconstruct images well using only semantic features.

Recipe 4) fails because the quantized features are chaotic during the initial training steps, and the contrastive loss disrupts the text encoder weights, slowing down the entire training process.

In contrast, our final training recipe leverages pre-trained CLIP weights for the vision encoder, enabling it to retain learned semantic features rather than learning them from scratch. This allows us to train with a small batch size while keeping the vision encoder trainable, facilitating the learning of low-level features for reconstruction during training.
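The joint objective behind this recipe, a CLIP-style image-text contrastive term combined with a VQGAN-style reconstruction term, can be sketched minimally as below. The helper names (`infonce`, `mse`), the toy features, and the loss weights are all illustrative assumptions, not the paper's implementation or hyperparameters:

```python
import math

# Hedged sketch of the unified vision tower's objective: a reconstruction
# term (VQGAN-style, stood in for by MSE) plus an image-text contrastive
# term (CLIP-style InfoNCE). All values below are toy placeholders.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mse(a, b):
    """Mean squared error, standing in for the pixel reconstruction loss."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def infonce(anchors, candidates, temperature=0.07):
    """Average cross-entropy of each anchor against its same-index candidate."""
    total = 0.0
    for i, a in enumerate(anchors):
        logits = [dot(a, c) / temperature for c in candidates]
        m = max(logits)                                   # log-sum-exp trick
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[i]                        # -log softmax at the match
    return total / len(anchors)

# Toy batch: two image features paired with two text features, plus a tiny
# "image" and its reconstruction from the quantized tokens.
img_feats = [(1.0, 0.0), (0.0, 1.0)]
txt_feats = [(0.9, 0.1), (0.1, 0.9)]
pixels = [0.2, 0.7, 0.5]
recon = [0.25, 0.65, 0.55]

w_contra, w_recon = 1.0, 1.0  # hypothetical loss weights
contra = 0.5 * (infonce(img_feats, txt_feats) + infonce(txt_feats, img_feats))
total_loss = w_contra * contra + w_recon * mse(pixels, recon)
```

Training both terms together is what lets a single tokenizer serve understanding (semantically aligned tokens, driven by the contrastive term) and generation (decodable tokens, driven by the reconstruction term).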