diff --git "a/intro_28K/test_introduction_long_2405.05949v1.json" "b/intro_28K/test_introduction_long_2405.05949v1.json" new file mode 100644--- /dev/null +++ "b/intro_28K/test_introduction_long_2405.05949v1.json" @@ -0,0 +1,103 @@ +{ + "url": "http://arxiv.org/abs/2405.05949v1", + "title": "CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts", + "abstract": "Recent advancements in Multimodal Large Language Models (LLMs) have focused\nprimarily on scaling by increasing text-image pair data and enhancing LLMs to\nimprove performance on multimodal tasks. However, these scaling approaches are\ncomputationally expensive and overlook the significance of improving model\ncapabilities from the vision side. Inspired by the successful applications of\nMixture-of-Experts (MoE) in LLMs, which improves model scalability during\ntraining while keeping inference costs similar to those of smaller models, we\npropose CuMo. CuMo incorporates Co-upcycled Top-K sparsely-gated\nMixture-of-experts blocks into both the vision encoder and the MLP connector,\nthereby enhancing the multimodal LLMs with minimal additional activated\nparameters during inference. CuMo first pre-trains the MLP blocks and then\ninitializes each expert in the MoE block from the pre-trained MLP block during\nthe visual instruction tuning stage. Auxiliary losses are used to ensure a\nbalanced loading of experts. CuMo outperforms state-of-the-art multimodal LLMs\nacross various VQA and visual-instruction-following benchmarks using models\nwithin each model size group, all while training exclusively on open-sourced\ndatasets. The code and model weights for CuMo are open-sourced at\nhttps://github.com/SHI-Labs/CuMo.", + "authors": "Jiachen Li, Xinyao Wang, Sijie Zhu, Chia-Wen Kuo, Lu Xu, Fan Chen, Jitesh Jain, Humphrey Shi, Longyin Wen", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Mixture AND of AND Experts", + "gt": "The advent of GPT-4V [56] has sparked excitement within open-source communities to transform large language mod- els (LLM) into multimodal LLMs. Recent multimodal LLMs [3, 13, 47] typically integrate pre-trained vision en- coders and LLMs with visual instruction tuning data to fine-tune the pre-trained LLMs, enhancing their visual un- derstanding capabilities. To further scale up multimodal * Work done during an internship at ByteDance Inc., San Jose, CA. Correspondence to X. Wang (xinyao.wang@bytedance.com) and H. Shi. MM-Vet LLaVA-Wild SEED-IMG MMMU MMBench SQA-IMG 52 49 46 43 85 82 79 76 73 71 69 67 40 38 36 34 80 70 60 50 74 73 72 71 MM1 7B (Private) LLaVA-NeXT Vicuna-7B CuMo Mistral-7B (Ours) LLaVA-NeXT Mistral-7B GQA MME 65 64 63 62 1550 1525 1500 1475 Mini-Gemini Vicuna-7B Figure 1. Comparisons of CuMo Mistral-7B with state-of-the-art 7B multimodal LLMs. CuMo outperforms strong open-sourced models such as Mini-Gemini and LLaVA-NeXT, as well as the private MM1 model. LLMs, previous efforts [8, 42, 44, 46, 48, 54] primarily focus on training the model with a more extensive col- lection of text-image paired data and employing stronger LLMs, significantly increasing training efforts. On the vi- sion side, recent work concentrates on leveraging multiple vision encoders [20, 45] to enrich visual content, employ- ing larger vision encoders [10], and using advanced vision- language connectors [6] to improve performance on mul- timodal tasks. 
However, these techniques result in an increased number of additional parameters and generate additional visual tokens for LLMs to process, making it inefficient to scale. In terms of efficiently scaling up models, Mixture-of-Experts (MoE) has become the de-facto framework in modern large-scale neural networks, particularly in natural language processing (NLP). Most large language models (LLM) are built upon the transformer [68] architecture, wherein sparse MoE is used to replace the dense MLP block with the Top-K sparsely-gated MoE block [60]. Recent state-of-the-art open-sourced [30, 65] and private [58] LLMs have predominantly adopted the sparse MoE architecture. These models are scaled up with the MoE design during training while maintaining relatively low inference costs, as only the selected MLP experts are activated during the feed-forward process. Nevertheless, the development and optimization of MoE-based models have been largely tailored to LLMs, and scaling multimodal LLMs with MoE, especially on the vision side, remains largely unexplored. Motivated by these observations, we introduce CuMo, which integrates Top-K sparsely-gated MoE blocks into the vision encoder and the MLP connector of multimodal LLMs, as depicted in Figure 2. Figure 2. Architecture of CuMo. CuMo incorporates sparse Top-K MoE blocks into the CLIP vision encoder and vision-language MLP connector, thereby improving the multimodal LLM capabilities from the vision side. Skip connections are omitted for simplicity. Further implementation details are provided in Section 3.2. We also explore the associated training recipe and methodology for CuMo. Firstly, we pre-train the MLP connector and perform pre-finetuning to warm up the whole model without introducing the MoE architecture, which stabilizes the subsequent visual instruction tuning stage with newly incorporated sparse MoE blocks. Then, we replace each MLP block in the MLP connector and the vision encoder with a sparse MoE block through co-upcycling. Each expert within the sparse MoE block is initialized from the corresponding MLP block after the pre-training and pre-finetuning stages. Additionally, each MoE block contains a Top-K router trained from scratch to select experts during the visual instruction tuning stage, with auxiliary losses on the router to maintain a balanced loading of experts. We further compare co-upcycled LLMs with pre-trained MoE-based LLMs. The results show that pre-trained MoE-based LLMs significantly outperform co-upcycled LLMs; as a result, upcycling of the LLM is not included in CuMo. Our models are trained fully on open-sourced datasets that are converted to visual-instruction-following formats. Experimental results demonstrate that CuMo outperforms other state-of-the-art multimodal LLMs on various VQA and multimodal instruction-following benchmarks within the same model size group, as illustrated in Figure 1. Our contributions can be summarized as follows: \u2022 We introduce CuMo, which integrates co-upcycled sparsely-gated MoE layers into both the MLP connector and the vision encoder, enhancing the multimodal LLM with only a slight increase in activated parameters. 
\u2022 We outline the training methodology for CuMo, including a three-stage training process with auxiliary losses to sta- bilize training and ensure a balanced loading of experts. \u2022 We train CuMo exclusively on open-sourced datasets and pre-trained models. It outperforms state-of-the-art open- sourced and private multimodal LLMs across multiple competitive benchmarks within each model size group.", + "main_content": "2.1. Multimodal LLM While the ultimate goal for mulitmodal models may be generative across various modalities [4, 63, 70], mod2 ern multimodal LLMs primarily focus on integrating additional modalities, such as vision, into LLMs. InstructBLIP [13] adopts Q-Former [38] to sample from visual tokens for LLM to feed-forward and follow the instructions. Flamingo [1] and IDEFICS [25, 34] use shared decoder for visual-language understanding. Qwen-VL [3] uses three-stage training to convert QwenLM to Qwen-VL. LLaVA series [46\u201348] adopt visual instruction tuning that uses instruction-following data to convert LLM into multimodal LLM. ShareGPT4V [8] collects detailed image caption data from GPT4V to augment the LLaVA models. HoneyBee [6] investigates different designs of the MLP connector for better alignment. VILA [44] unfreezes the LLM during pre-training with interleaved image-text data. MoE-LLaVA [43] adopts the MoE design in small LLMs and reaches comparable performance to LLaVA with large LLMs. VCoder [28] adopts various vision adapters to enhance visual perception abilities. SPHINX [20, 45] adopts multiple visual encoders to enrich the visual features with scaled data and models. InternLM-Xcomposer [14, 73] is trained with interleaved text-image composition data and achieves state-of-the-art performance. InternVL [10] scales up the vision encoder to a 6B ViT model. MM1 [54] summarizes the essential steps towards building a strong multimodal LLM from a pre-trained LLM. Mini-Gemini [42] further collects guided generation into the pipeline. 2.2. Mixture-of-Experts Mixture-of-Experts [26] is proposed to utilize a set of expert networks to address specific tasks by employing a gating network to determine the selection of these experts. Recently, it has gained popularity in the design of large language models [17]. The mainstream practice [60] is to replace the dense MLP layers with Top-K sparsely-gated mixture-of-experts (MoE) layers in the transformer [68]. MoE in Language Subsequent works [18, 35] have further scaled up MoE-based large language models with improved stability and load balancing of experts. The design of gating networks often involves selecting the top-k experts for each token [35, 60]. Various routing strategies have been explored, such as choosing top-k tokens by experts [75], oneto-one matching between experts and tokens [36]. Besides routing strategies, maintaining the load balance of experts is crucial for training MoE models. ST-MoE [77] adopts loading balancing loss and router-z loss to ensure a balanced distribution of the experts. Upcycling [33] proposes training sparse experts from dense checkpoints to stabilize training and lower the cost. Recent large language models like Gemini-Pro [58] and DBRX [65] are also based on the MoE design. MoE in Vision The success of MoE extends to the vision community, particularly following the popularity of vision transformers [5, 15, 22, 23, 27, 39, 76]. V-MoE [59] reaches Top-K Router Weighted Sum Layer Norm Co-Upcycled MoE block MLP MLP 1 MLP 2 MLP 3 MLP 4 N copies Layer Norm Figure 3. 
Initialization of MoE blocks via Co-Upcycling. Each MLP expert within the MoE block during the visual instruction tuning stage is initialized from the corresponding pre-trained MLP. comparable performance to dense ViT while only requiring half of the compute. LIMoE [55] replaces dense MLP layers with MoE layers in CLIP and observes improvements in zero-shot image classification. Residual MoE [69] incorporates a residual design into the MoE transformer and saves over 30% of the training cost. AdaMV-MoE [9] proposes an adaptive MoE framework for multi-task learning. 3. Method In this section, we first review the sparse MoE block structure and the upcycling strategy utilized in previous studies. Subsequently, we describe how these sparsely-gated MoE blocks are integrated into each module of multimodal LLMs using co-upcycling strategies. Then, we introduce the three-stage training process and auxiliary loss functions employed to stabilize training and balance the loads of experts. 3.1. Revisit Sparse MoE Sparse MoE Structure The previous mainstream practice [60] is to replace the dense MLP blocks with sparsely-gated mixture-of-experts blocks. Given input X \u2208 R^{N\u00d7C_in} and an MLP block, X_out = MLP(X) \u2208 R^{N\u00d7C_out} (1) To scale up the model with multiple MLP blocks in parallel, a sparse MoE block includes a router network to select the Top-K experts out of S total experts. This router network has a linear layer that computes the normalized weight matrix based on the inputs X for voting, resulting in W = Softmax(Linear(X)) \u2208 R^{N\u00d7S} (2) The Top-K experts are selected for each token based on W, and the re-normalized weights W_K \u2208 R^{N\u00d7K} are computed using W_K = Softmax(TopK(W)) \u2208 R^{N\u00d7K} (3) Each selected expert is represented by an MLP block, and the final output is obtained through a re-weighted sum X_out = \sum_{i=1}^{K} W_K^i \u25e6 MLP_i(X) \u2208 R^{N\u00d7C_out} (4) The output X_out maintains the same dimension as the output of a single dense MLP block. Sparse Upcycling Training MoE-based designs from scratch can be unstable and costly. Sparse Upcycling [33] addresses this challenge by initializing the experts in each MoE block from the corresponding MLP block in pre-trained dense checkpoints. This initialization provides a better starting point for training MoE-based models and reduces training costs compared to training from scratch. Figure 4. Training Stages of CuMo. The first stage involves pre-training the MLP for better alignment. Subsequently, the pre-finetuning stage trains all parameters as a warm-up before the next stage. Finally, the MLP experts within each MoE block are initialized from the weights of the corresponding MLP block, followed by training all parameters in the visual instruction tuning stage. 3.2. CuMo Architecture Sparse MoE in MLP Connector The MLP connector converts visual tokens into the word embedding space, aligning dimensions between visual and text tokens. An effective architecture for the vision-language connector is an MLP block [46] that contains two linear layers. We start from a single MLP block and replace it with a Top-K sparse MoE block, incorporating a Top-K router and a set of experts for projecting visual tokens into the word embedding space. Sparse MoE in Vision Encoder Vision encoders extract image features as sequences of visual tokens for reasoning in LLMs. 
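To make the preceding formulation concrete, the following is a minimal PyTorch sketch of a Top-K sparsely-gated MoE block in the spirit of Eqs. (1)-(4), together with the co-upcycled expert initialization and the load-balancing and router z-loss terms used later as auxiliary objectives (Section 3.3). The module, its arguments, and the dense-compute dispatch are illustrative assumptions, not the released CuMo implementation.

```python
# Illustrative sketch (not the released CuMo code): a Top-K sparsely-gated MoE
# block whose experts are co-upcycled (copied) from a pre-trained dense MLP.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKSparseMoE(nn.Module):
    def __init__(self, dense_mlp: nn.Module, in_features: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Co-upcycling: every expert starts as a copy of the pre-trained MLP block (Figure 3).
        self.experts = nn.ModuleList(copy.deepcopy(dense_mlp) for _ in range(num_experts))
        # The Top-K router is trained from scratch during visual instruction tuning.
        self.router = nn.Linear(in_features, num_experts)

    def forward(self, x):                                    # x: (tokens, C_in)
        logits = self.router(x)                              # (tokens, S)
        weights = F.softmax(logits, dim=-1)                  # Eq. (2)
        topk_w, topk_idx = weights.topk(self.top_k, dim=-1)
        topk_w = F.softmax(topk_w, dim=-1)                   # Eq. (3): re-normalize the selected weights

        out = None
        for e, expert in enumerate(self.experts):
            # Gate equals the re-normalized weight if expert e is among a token's Top-K, else 0.
            gate = (topk_w * (topk_idx == e)).sum(dim=-1, keepdim=True)
            y = gate * expert(x)                             # Eq. (4); computed densely here for clarity,
            out = y if out is None else out + y              # a real implementation dispatches routed tokens only.

        # Auxiliary terms used to keep expert loading balanced (Section 3.3):
        # a Switch-style load-balancing loss and the ST-MoE router z-loss.
        density = weights.mean(dim=0)                                           # average routing probability per expert
        usage = F.one_hot(topk_idx, weights.size(-1)).float().mean(dim=(0, 1))  # empirical selection frequency
        balance_loss = weights.size(-1) * torch.sum(density * usage)
        z_loss = torch.logsumexp(logits, dim=-1).pow(2).mean()
        return out, balance_loss, z_loss
```

In this sketch, dense_mlp would be the two-layer projection pre-trained in the first stage when wrapping the MLP connector, and the same wrapper can stand in for each MLP inside the CLIP transformer blocks; the two auxiliary terms would be weighted by the coefficients reported in Section 3.3.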
CLIP [57] is one the most popular pre-trained vision encoders for multimodal LLM since it is pre-trained on large-scale image-text pairs, which makes it suitable for processing images for multimodal usage. The visual encoding part of CLIP is a ViT [15] model, which has consecutive MLP blocks in the transformer encoder. We substitute each MLP block with a Top-K sparse MoE block, retaining skip connections alongside MoE block outputs. Sparse MoE in LLM In terms of using MoE in LLM, we compare the co-upcycled LLM with pre-trained MoEbased LLM. We start from Mistral-7B and the upcycled Mistral-7B-MoE slightly outperforms Mistral-7B on certain benchmarks. However, considering the constrained knowledge base of upcycled experts from Mistral-7B, we compare it with the pre-trained Mixtral 8x7B with pre-trained experts of a diverse knowledge base. Experimental results reveal that pre-trained Mixtral 8x7B significantly outperforms Mistral-7B-MoE. As a result, LLM is not co-upcycled with CLIP and MLP connectors since it brings marginal improvements with great additional parameters. 3.3. Training Recipe Co-Upcycling MoE blocks We start with training the added MoE blocks from scratch while the model is struggling to converge. Attempts to address this issue with lower learning rates perform worse compared to the baseline. As a result, we adopt a co-upcycling approach, initializing each module that integrates sparsely-gated MoE blocks with pretrained MLPs to replace corresponding MLP blocks, as shown in Figure 3. This strategy consistently improves training stability and model performance. Three-Stage Training To further enhance training stability, we adopt a three-stage training strategy for CuMo models, as illustrated in Figure 4. In the first stage, we only pretrain the MLP connector, given that the vision encoder and LLM have already undergone pre-training on large-scale data. During the second pre-finetuning stage, we train all parameters using high-quality caption data to warm up the entire model before introducing MoE blocks in the subsequent stage. The third stage involves visual instruction finetuning, where the multimodal LLM is scaled up with upcycled MoE blocks and trained on visual instruction tuning 4 SQA Text MMB MM VQA LLaVA SEED MMMU Math Method LLM Act. 
IMG VQA GQA POPE MME EN CN Vet v2 Wild IMG val Vista 7B to 13B Models InstructBLIP [13] Vicuna-7B 7.9B 60.5 50.1 49.2 36.0 23.7 26.2 60.9 60.5 Qwen-VL-Chat [3] Qwen-7B 68.2 61.5 57.5 1487.5 60.6 56.7 78.2 58.2 35.9 LLaVA-v1.5 [46] Vicuna-7B 7.1B 66.8 58.2 62.0 85.9 1510.7 64.3 58.3 30.5 78.5 63.4 66.1 LLaMA-VID [41] Vicuna-7B 68.3 64.3 86.0 1521.4 65.1 79.3 59.9 VILA [44] Vicuna-7B 7.1B 68.2 64.4 62.3 85.5 1533.0 68.9 61.7 34.9 79.9 69.7 61.1 SPHINX-Intern2 [20] InternLM2-7B 70.4 58.1 56.2 86.9 1260.4 57.9 36.5 75.5 57.6 68.8 35.5 LLaVA-NeXT [48] Mistral-7B 7.6B 72.8 65.7 64.8 86.7 1498 68.7 61.2 47.3 82.2 83.2 72.2 35.3 37.7 LLaVA-NeXT [48] Vicuna-7B 7.1B 70.1 64.9 64.2 86.5 1519 67.4 60.6 43.9 81.8 81.6 70.2 35.8 34.6 LLaVA-LLaMA3 [12] LLaMA3-8B-IT 8.4B 72.9 59.0 62.6 86.4 1469 72.3 66.4 70.1 36.8 Mini-Gemini [42] Vicuna-7B 7.3B 65.2 1523 69.3 40.8 36.1 31.4 MM1 [54] MM1-7B 72.6 72.8 86.6 1529.3 79.0 42.1 82.8 81.5 69.9 37.0 35.9 InstructBLIP [13] Vicuna-13B 14.2B 63.1 50.7 49.5 78.9 1212.8 25.6 58.2 63.1 LLaVA-v1.5 [46] Vicuna-13B 13.4B 71.6 61.3 63.3 85.9 1531.3 67.7 63.6 35.4 80.0 70.7 68.2 36.4 27.6 VILA [44] Vicuna-13B 13.4B 73.7 66.6 63.3 84.2 1570.1 70.3 64.3 38.8 80.8 73.0 62.8 LLaMA-VID [41] Vicuna-13B 70.0 65.0 86.0 1542.3 66.6 80.0 62.3 SPHINX-Plus [20] LLaMA2-13B 74.2 65.7 89.1 1457.7 71.0 47.9 71.7 74.8 36.8 Mini-Gemini[42] Vicuna-13B 13.6B 65.9 1565 68.5 46.0 38.1 37.0 InternVL-Chat [10] Vicuna-13B 19B 61.5 66.6 87.6 1586.4 81.2 LLaVA-NeXT [48] Vicuna-13B 13.4B 73.6 67.1 65.4 86.2 1575 70 64.4 48.4 82.8 87.3 71.9 36.2 35.3 CuMo Mistral-7B 7.8B 73.9 67.0 64.9 86.7 1548.6 73.0 66.6 51.0\u2020 82.2 85.7\u2020 72.1 39.1 35.1\u2020 7B MoE Models SPHINX-MoE [20] Mixtral-8\u00d77B 74.5 68.0 63.8 89.6 1485.3 71.3 40.9 81.1 70.2 73.0 31.1 42.7 MM1 [54] MM1-7B-MoE 75.3 72.8 87.6 1629.0 79.7 47.0 83.4 82.0 70.4 40.9 40.9 Mini-Gemini [42] Mixtral-8\u00d77B 13.5B 69.2 1639 75.6 45.8 41.8 41.8 CuMo Mixtral-8\u00d77B 13.5B 77.9 66.0 63.8 85.7 1639.5 75.3 68.0 48.7\u2020 81.8 84.7\u2020 73.2 45.0 38.2\u2020 Private Models GPT4V [56] 78.0 77.0 74.4 60.2 56.8 49.9 Gemini 1.5 Pro [58] 73.5 73.6 74.3 64.3 73.2 58.5 52.1 Claude 3 Opus [2] 63.3 59.2 58.1 59.4 50.5 Qwen-VL-Max [64] 79.5 1790.1 77.6 75.1 66.6 51.4 51.0 Table 1. Comparisons between CuMo and other state-of-the-art multimodal LLMs on competitive benchmarks. These models are grouped by the size of the base LLM. The benchmarks are double-rowed due to limited space: SQA-IMG [50]; TextVQA [62]; GQA [24]; POPE [40]; MME [19]; MMBench [49]; MMVet [71]; VQAv2 [21]; LLaVA-Wild [47]; SEED-IMG [37]; MMMU [72]; MathVista [51]. Act.: Activated Parameters. Numbers\u2020 are averaged by three inference runs of querying GPT API. data. Loss Function To maintain a load balance between experts in each MoE block, we adopt auxiliary losses based on the language modeling cross-entropy loss. The auxiliary losses comprise loading balance loss and router z-loss [77]. Hence, the total loss is L = Lce + \u03b1bLb + \u03b1zLz (5) Here, Lce represents the language modeling loss, which computes the cross-entropy of next-token predictions. \u03b1b and \u03b1z denote coefficients for loading balance loss Lb and router z-loss Lz, set to 0.1 and 0.01, respectively, across all experiments. These auxiliary losses, abbreviated as bzloss in Section 4, are individually applied to the MLP connector, vision encoder, and LLM for simplicity. 4. 
Experiments We train the CuMo models on a mixture of open-sourced datasets, which are converted into the visual instruction tuning format. Then, we conduct comprehensive evaluations of the performance of CuMo models across various competitive VQA-based and instruction-following-based benchmarks. Additionally, we perform ablation studies on each module with upcycled MoE blocks with qualitative analysis of the results. 4.1. Implementation Details Training Datasets During pre-training, we only utilize LLaVA-558K [47] to train the MLP connector for better alignment. In the subsequent pre-finetuning stage, detailed image caption data from ALLaVA [7] is employed to warm up all parameters of the multimodal LLM. For the final visual instruction tuning stage, a mixture of datasets including LLaVA-665K [46], ShareGPT4V [8], LAION-GPT-V [16], DocVQA [66], ChartQA [52], AI2D [31], InfoVQA [53], SynDog-EN [32], ALLaVA [7], and LIMA [74] is utilized to train the CuMo models with upcycled MoE blocks. The total data size for visual instruction tuning is approximately 1.65 million, and all training data are publicly accessible. 5 SQA Text MMBench MM VQA LLaVA SEED Method LLM PT IT IMG VQA GQA POPE MME EN CN Vet v2 Wild IMG InstructBLIP [13] Vicuna-7B 129M 1.2M 60.5 50.1 49.2 36.0 23.7 26.2 60.9 60.5 InstructBLIP [13] Vicuna-13B 129M 1.2M 63.1 50.7 49.5 78.9 1212.8 25.6 58.2 63.1 IDEFICS-9B [25] LLaMA-7B 353M 1M 25.9 38.4 48.2 25.2 50.9 IDEFICS-80B [25] LLaMA-65B 353M 1M 30.9 45.2 54.5 38.1 60.0 Qwen-VL [3] Qwen-7B 1.4B 50M 67.1 63.8 59.3 38.2 7.4 78.8 56.3 Qwen-VL-Chat [3] Qwen-7B 1.4B 50M 68.2 61.5 57.5 1487.5 60.6 56.7 78.2 58.2 LLaVA-v1.5 [46] Vicuna-7B 558K 665K 66.8 58.2 62.0 85.9 1510.7 64.3 58.3 30.5 78.5 63.4 66.1 LLaVA-v1.5 [46] Vicuna-13B 558K 665K 71.6 61.3 63.3 85.9 1531.3 67.7 63.6 35.4 80.0 70.7 68.2 CuMo Mistral-7B 558K 665K 71.7 59.3 63.2 87.1 1428.6 69.6 62.6 34.3 80.6 68.8 69.6 Table 2. Comparisons between CuMo Mistral-7B and other multimodal LMM models with limited training data. Method SQA VQAT MMVet SEED Baseline on Mistral-7B 72.8 57.6 32.1 66.4 + Top 2-in-4 & Scratch 68.1 55.6 29.3 65.1 \u21ccTop 2-in-4 & Upcycle 73.7 57.2 32.3 67.1 + bzloss 73.5 57.4 33.1 67.4 \u21ccTop 2-in-8 & Upcycle 73.4 57.6 32.4 67.2 Table 3. Ablation study on the MLP-MoE module. Each row represents a different configuration, with changes or additions marked using \u21ccand + symbols, respectively. Settings highlighted with a light blue background are those adapted for the MLP-MoE module in Table 1. Method SQA VQAT MMVet SEED MLP-MoE 73.5 57.4 33.1 67.4 + Unfreeze CLIP 72.0 58.9 34.7 69.0 + Top 2-in-4 & bzloss 72.8 59.7 35.4 69.8 \u21ccTop 2-in-8 & bzloss 71.0 59.0 33.6 69.2 Table 4. Ablation study on the CLIP-MoE module. All MoE blocks in CLIP are initialized with upcycling. Method SQA VQAT MMVet SEED MLP-MoE & CLIP-MoE 71.7 59.3 34.3 69.6 + Mistral 4\u00d77B & Upcycle 72.8 57.0 35.2 69.9 \u21ccMistral 8\u00d77B & Upcycle 73.2 56.4 35.7 70.5 \u21ccMixtral 8\u00d77B 74.2 60.6 40.0 72.6 Table 5. Ablation study on the LLM-MoE module. Mixtral 8\u00d77B outperforms upcycled Mistral MoE models significantly. The detailed breakdown of the training dataset is listed in Appendix A. Evaluation Benchmarks Evaluation of CuMo models primarily focuses on academic VQA-based datasets such as VQAv2 [21], GQA [24], Science-QA [50], and TextVQA [62], as well as instruction-following-based LMM benchmarks including POPE [40], MME [19], MMBench [49], SEED-Bench [37], LLaVA-Wild [47], and MM-Vet [71]. 
Additionally, the challenging MMMU [72] and MathVista [51] datasets are evaluated to assess the visual reasoning abilities of the multimodal LLMs. Training Settings We employ the pre-trained CLIP ViTL [57] as the vision encoder, a two-layer MLP as the visionlanguage connector, and Mistral-7B [29] as the LLM to establish the baseline model following LLaVA v1.5 [46]. We only use LLaVA-558K [46] as pre-training data and LLaVA-665K [46] as visual instruction tuning data to train the baseline model and make ablation studies for comparisons. The learning rate is set to 1e-3 for pre-training the MLP connector and reduced to 2e-5 for visual instruction tuning of both the MLP connector and CLIP. To further stabilize the visual instruction tuning process after scaling up with additional data, the learning rate is lowered to 2e-6 for all parameters of the CuMo models in the final results. More hyperparameters of the training process is listed in Appendix B. Evaluation Settings During evaluation, we adhere to the settings outlined in the LLaVA series [46], employing a greedy decoding strategy for all benchmarks. The data and questions are converted into visual instructions to prompt the multimodal LLMs. For benchmarks that utilize GPT API for evaluation, we adopt gpt-4-0613 for LLaVAWild [47] and gpt-3.5-turbo for MathVista [51]. 4.2. Main Results Comparison with SoTA Multimodal LLMs In Table 1, we present a comparison of CuMo models with other stateof-the-art instruction-following-based multimodal LLMs. We categorize the models based on the size of the base LLMs, including 7B models, 13B models, and 7B MoE models. CuMo Mistral-7B outperforms other 7B-based state-of-the-art multimodal LLMs across multiple benchmarks. Moreover, the performance of the CuMo Mistral7B model is comparable to many 13B-based multimodal LLMs. In the case of Mixtral-8\u00d77B models, CuMo achieves results on par with SPHINX-MoE, MM1, and Mini-Gemini. LLaMA-based LLMs [11, 67] are not utilized in our experiments due to license constraints. Comparison under limited training data To further evaluate the effectiveness of the co-upcycled MoE blocks, we 6 1\u00d7 2\u00d7 3\u00d7 SQA VQAT MMVet SEED \u2713 71.7 59.3 34.3 69.6 \u2713 \u2713 71.7 60.6 35.0 69.7 \u2713 \u2713 72.9 61.0 37.0 69.7 \u2713 \u2713 \u2713 72.2 60.5 36.9 70.1 Table 6. Ablation study on multi-resolution image features. The combination of 3\u00d7 and 1\u00d7 is adopted for the final models in Table 1. Method SQA VQAT MMVet SEED No PFT 71.7 59.3 34.3 69.6 + ShareGPT4V 72.4 61.7 36.5 70.0 \u21ccALLaVA 73.0 62.8 37.2 70.9 Table 7. Ablation study on the pre-finetuning stage. ALLaVA is chosen for pre-finetuning due to its provision of high-quality image caption data. train the vanilla CuMo mistral-7B under limited training data in Table 2. It shows that CuMo outperforms other 7B models and reaches comparable performance to LLaVAv1.5 Vicuna-13B under the same training data. 4.3. Ablation Study Upcycle MLP connector to MLP-MoE We initiate the ablation study by replacing the MLP connector with upcycled MLP-MoE, as depicted in Table 3. We start with a Top 2-in4 router and train the MoE blocks from scratch, which leads to a clear performance drop on all benchmarks. Then, we adopt the upcycling strategy to initialize the MLP experts. We observe marginal improvements over the baseline, considering each expert comprises only two linear layers. 
Subsequently, the incorporation of bzloss to ensure a balanced loading of experts in the MLP-MoE yields noticeable enhancements on MMVet. However, employing a Top 2-in-8 router with upcycling and bzloss results in a slight performance decline, possibly due to the limited visual instruction tuning data to train robust and well-balanced eight experts. Empower CLIP with CLIP-MoE In Table 4, initially unfreezing CLIP based on MLP-MoE leads to noticeable improvements on TextVQA and MMVet benchmarks. However, training the added Top2-in-4 MoE blocks in CLIP from scratch proves unsuccessful, as the model fails to converge even with reduced learning rates. Consequently, adopting upcycled MoE blocks during the visual instruction tuning stage yields further enhancements on TextVQA, MMVet, and SEED benchmarks. Upcycle LLM vs Pre-trained LLM-MoE Upon replacing all MLP blocks with sparsely-gated MoE blocks in the visual part, we further investigate the utilization of the MoE architecture in the LLM. Starting from the MistralLayer 1 Layer 4 Layer 7 Layer 10 Layer 13 Layer 16 Layer 19 Layer 22 0% 25% 50% 75% 100% Expert 1 Expert 2 Expert 3 Expert 4 Figure 5. Expert distributions of MoE blocks in CLIP. We select layers from CLIP and summarize the activated experts during the feed-forward process on the MME test set. 7B model, we first lower the learning rate to 2e-6 to set the baseline and the following experiments since a learning rate of 2e-5 induces training instabilities. Then, we upcycle each MLP block with a sparsely-gated MoE block, initializing the weight of each expert from the pre-trained MLP block. As demonstrated in Table 5, the upcycled Mistral4\u00d77B and 8\u00d77B outperform the Mistral-7B model slightly except for TextVQA. However, considering that the upcycled experts significantly increase parameters without introducing new knowledge, we replace the upcycled Mistral 8\u00d77B with Mixtral 8\u00d77B [30]. In Mixtral 8\u00d77B, all expert layers are pre-trained on large-scale language data, providing superior initialization compared to upcycling. The results indicate that CuMo Mixtral-8x7B outperforms its upcycled counterparts significantly and is employed in the final models with bzloss to maintain a balanced loading of experts. Multi-Resolution Visual Features Incorporating multiresolution inputs is crucial for enhancing the understanding of image content in multimodal LLMs. Following the approach outlined in S2[61], we introduce multi-resolution inputs to CLIP and concatenate the feature maps channelwise to maintain the total number of visual tokens consistent with low-resolution inputs. As illustrated in Table 6, an empirical combination of 3\u00d7 and 1\u00d7 yields optimal performance and we adopt this configuration for the final CuMo models. Pre-FineTuning Stage Previous ablation studies were conducted directly after the pre-training of the MLP connector, leading to observed training instabilities during visual instruction tuning. To address this, we introduce a pre-finetuning stage using high-quality image caption data, wherein all parameters are unfrozen. In Table 7, we leverage caption data from ALLaVA for this stage. Results indicate that ALLaVA data proves to be a superior option, 7 can you describe what groceries are presented in this fridge CuMo-Mistral-7B The groceries presented in the fridge include a variety of fresh produce such as bell peppers, lettuce, broccoli, and cucumbers. There are also eggs, milk, and cheese, which are dairy products. 
Additionally, there are cherry tomatoes and a jar of red sauce, which could be used for cooking or as a condiment. The presence of these items suggests that the fridge is well-stocked with a balance of fresh vegetables, dairy, and condiments, indicating a household that values a healthy and varied diet. LLaVA-Yi-34B The refrigerator in the image is well-stocked with a variety of groceries. Here's a list of what I can see: on the top shelf, there are several items including what appears to be a head of lettuce, a bunch of green onions, a container of strawberries, and a few blue plastic containers that could be used for food storage or serving; the middle shelf has a few bottles, which might contain milk or juice, and a carton of eggs.; the bottom shelf is filled with fresh produce. There are bell peppers in different colors (red, yellow, and green), a bunch of spring onions, and a head of lettuce; the drawer at the bottom contains more fresh produce, including what looks like a bunch of green onions and possibly some other leafy greens. The refrigerator is well-organized and appears to be stocked with fresh and healthy food options. MiniGemini-Yi-34B The refrigerator is filled with a variety of groceries, including fruits, vegetables, dairy products, and beverages. There are several bottles, likely containing milk or other dairy products, and a carton of eggs. The fridge also contains a selection of fresh produce, such as broccoli, peppers, and lettuce. Additionally, there are some fruits, including apples and oranges. The presence of these items suggests that the person who owns the fridge prefers a balanced diet with a mix of healthy foods and beverages. how many people are in this image CuMo-Mistral-7B In the image, there are a total of 14 characters. This includes 12 characters seated around the table and 2 characlters standing on the table. The characters are a mix of human and skeletal figures, with the skeleton figures being part of the Day of the Dead celebration. LLaVA-Yi-34B The image you've provided is from the animated movie \"Coco.\" It features a variety of characters, both human and fantastical, which are not real people but animated characters. The central figure is a young boy named Miguel, who is holding a guitar. Surrounding him are other characters, including his family members and other characters from the film. The characters are depicted in a stylized, animated manner, and the image is a still from the movie. MiniGemini-Yi-34B In the image, there are a total of 12 people. This includes the boy in the foreground, the two skeletons, and the nine characters seated around the table. It's important to note that the skeletons are representations of people and are included in the count. can you introduce this movie based on this poster CuMo-Mistral-7B The poster is for the movie \"The Avengers,\" which is part of the Marvel Cinematic Universe. It features a team of superheroes, each with unique abilities and costumes, coming together to face a significant threat. The central figure, with the iconic red and gold suit, is likely the leader of the group. The presence of the Hulk and Thor suggests that the film includes a mix of science fiction and fantasy elements. The cityscape and the explosion in the background imply that the movie's plot involves a large-scale conflict that impacts the urban environment. The release date and the mention of \"3D and IMAX 3D\" indicate that the movie was available in various formats, catering to different viewing preferences. 
The poster is designed to convey the epic scale and the high stakes of the film, as well as the diverse team of heroes that viewers can expect to see on screen. LLaVA-Yi-34B This poster is for the movie \"Avengers: Age of Ultron,\" which is a 2015 American superhero film based on the Marvel Comics superhero team the Avengers. The film is the sequel to \"The Avengers\" (2012) and the 11th film in the Marvel Cinematic Universe (MCU). It was directed by Joss Whedon and produced by Marvel Studios. The poster features several of the main characters from the film, including Iron Man, Thor, Captain America, the Hulk, Black Widow, and Hawkeye, all in their superhero costumes. The setting appears to be a cityscape with a sense of destruction, suggesting a battle or conflict. The title \"Avengers: Age of Ultron\" is prominently displayed at the top of the poster, and the Marvel logo is visible at the bottom. The film's plot revolves around the Avengers attempting to stop the villain Ultron from enacting his plans for human extinction. The movie is known for its action-packed sequences, special effects, and ensemble cast of superheroes. It was a critical and commercial success, contributing to the ongoing popularity of the Marvel Cinematic Universe. MiniGemini-Yi-34B The movie poster is for \"The Avengers,\" a 2012 American superhero film produced by Marvel Studios and distributed by Walt Disney Pictures. It is the sixth film in the Marvel Cinematic Universe (MCU). The film features an ensemble cast including Robert Downey Jr., Chris Evans, Mark Ruffalo, Chris Hemsworth, Scarlett Johansson, Jeremy Renner, Tom Hiddleston, and Samuel L. Jackson. The plot revolves around Nick Fury, director of the peacekeeping organization S.H.I.E.L.D., recruiting a team of superheroes to help save the world from Loki, the adoptive brother of Thor, who is leading an alien invasion. The poster is designed to showcase the main characters and the action-packed nature of the film, with the characters in their iconic costumes and the cityscape in the background suggesting the scale of the conflict. The release date is prominently displayed, indicating the film's availability in theaters. The poster also highlights the film's availability in various formats, including 3D, which was a significant selling point at the time of its release. Figure 6. Dialogues between the user and multimodal LLMs on challenging images. We highlight the correct answers and hallucinations from the responses of the multimodal LLMs. providing fewer but higher-quality captions for training, ultimately leading to improved performance. 4.4. Qualitative Analysis Expert Distribution As shown in Figure 5, we visualize the expert distributions in the MoE block from selected layers at CLIP-MoE. The dataset analyzed is the test set of the MME 8 benchmark. The distribution indicates that the selected experts during inference are evenly spread across layers, providing further evidence of the effectiveness of the auxiliary losses in maintaining load balance. Dialogue Comparisons Presented in Figure 6, we contrast the responses from CuMo-Mistral-7B, LLaVA-Yi-34B, and MiniGemini-Yi-34B. It demonstrates that CuMo-Mistral7B can effectively follow instructions and predominantly provide correct answers to challenging questions derived from complex scenes. However, CuMo also exhibits instances of hallucinations, such as responding with \u201c2 characters standing on the table,\u201d highlighting the need for further investigation to mitigate hallucinations in CuMo. 5. 
Conclusion In this study, we introduce the sparse mixture-of-experts design into multimodal LLMs. Specifically, we replace each MLP block with a Top-K sparse MoE block in the MLP connector and the vision encoder. To enhance training stability, we employ a three-stage training approach, incorporating upcycled MoE blocks during the visual instruction tuning stage, along with auxiliary bzloss to maintain a balanced loading of experts. All CuMo models are trained and evaluated on fully open-sourced datasets and benchmarks. Through extensive experiments and ablation studies, we validate the effectiveness of the upcycled MoE blocks in each module. CuMo outperforms state-of-the-art models across multiple competitive benchmarks within the same group of model sizes. Acknowledgments We extend our gratitude to Chunyuan Li, Lei Chen, and Haibin Lin for their insightful and valuable discussions throughout this project.", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2307.05399v1", + "title": "Domain-Agnostic Neural Architecture for Class Incremental Continual Learning in Document Processing Platform", + "abstract": "Production deployments in complex systems require ML architectures to be\nhighly efficient and usable against multiple tasks. Particularly demanding are\nclassification problems in which data arrives in a streaming fashion and each\nclass is presented separately. Recent methods with stochastic gradient learning\nhave been shown to struggle in such setups or have limitations like memory\nbuffers, and being restricted to specific domains that disable its usage in\nreal-world scenarios. For this reason, we present a fully differentiable\narchitecture based on the Mixture of Experts model, that enables the training\nof high-performance classifiers when examples from each class are presented\nseparately. We conducted exhaustive experiments that proved its applicability\nin various domains and ability to learn online in production environments. The\nproposed technique achieves SOTA results without a memory buffer and clearly\noutperforms the reference methods.", + "authors": "Mateusz W\u00f3jcik, Witold Ko\u015bciukiewicz, Mateusz Baran, Tomasz Kajdanowicz, Adam Gonczarek", + "published": "2023-07-11", + "updated": "2023-07-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Mixture AND of AND Experts", + "gt": "Solutions based on deep neural networks have al- ready found their applications in almost every do- main that can be automated. An essential part of them is NLP, the development of which has gained particular momentum with the beginning of the era of transformers (Vaswani et al., 2017). Complex and powerful models made it possible to solve problems such as text classification with a previously unattainable accuracy. However, ex- ploiting the capabilities of such architectures in real-world systems requires online learning after deployment. This is especially difficult in dynami- cally changing environments that require the mod- els to be frequently retrained due to domain or class setup shifts. An example of such environment is Alphamoon Workspace1 where the presented archi- tecture will be deployed as a model for document 1https://alphamoon.ai/ Figure 1: Continual learning in document processing platform. Classification models need to learn incremen- tally and handle domain shifts after deployment. classification since we noticed the emerging need for online learning. 
We observed that the users\u2019 data in the document classification process changes frequently, and such shifts often decrease model accuracy. As a result, we have to retrain the models manually, which is a time-consuming process. Our goal was to design an effective approach to incremental learning that will be used in a continual learning module of our system (Figure 1). Recently, neural architectures have become effective and widely used in classification problems (Devlin et al., 2018; Rawat and Wang, 2017). The parameter optimization process based on gradient descent works well when the data set is sufficiently large and fully available during the training process. Otherwise, catastrophic forgetting (French, 1999) may occur, which prevents neural networks from being trained incrementally. Continual learning aims to develop methods that enable accumulating new knowledge without forgetting previously learnt knowledge. In this paper, we present a domain-agnostic architecture for online class incremental continual learning called DE&E (Deep Encoders and Ensembles). Inspired by the E&E method (Shanahan et al., 2021), we propose a method that increases its accuracy, provides full differentiability, and, most importantly, can effectively solve real-world classification problems in production environments. Our contributions are as follows: 1) we introduce a differentiable KNN layer (Xie et al., 2020) into the model architecture, 2) we propose a novel approach to aggregating classifier predictions in the ensemble, 3) we perform exhaustive experiments showing the ability to learn incrementally and real-world usability, 4) we demonstrate the effectiveness of the proposed architecture by achieving SOTA results on various data sets without a memory buffer.", + "main_content": "2.1 Continual Learning 2.1.1 Methods Currently, methods with a memory buffer such as GEM (Lopez-Paz and Ranzato, 2017), A-GEM (Chaudhry et al., 2019a) or DER (Buzzega et al., 2020) usually achieve the highest performance in all continual learning scenarios (Mai et al., 2022). Such methods store part of the data in memory, and this data is successively replayed during training on new, unseen examples. However, the requirement to store data in memory disqualifies these methods in many practical applications due to privacy policies or data size (Salem et al., 2018). This forces attention toward other approaches, such as parameter regularization. The most popular methods in this group include EWC (Kirkpatrick et al., 2016) and LwF (Li and Hoiem, 2017). When receiving new knowledge, these methods constrain the model parameter updates to be minimally invasive. As research shows (Van de Ven and Tolias, 2019), regularization-based methods fail in class incremental scenarios, making them ineffective in many real-world cases. 2.1.2 Approaches for NLP Almost all prior works focus on the development of continual learning methods in the computer vision domain (Delange et al., 2021). Research on continual learning for NLP is limited and, as Biesialska et al. (2020) observed, the majority of current NLP methods are task-specific. Moreover, these methods often use a memory buffer (de Masson D\u2019Autume et al., 2019) or relate to the language model itself (Ke et al., 2021). To address this niche, domain-agnostic approaches have to become much more prevalent in the near future. 2.2 Ensemble methods Ensemble methods are widespread in the world of machine learning (Zhang and Ma, 2012). By using the predictions of multiple weak learners, it is possible to obtain a model that performs surprisingly well overall. The broad adoption of such methods (Cao et al., 2020; Li and Pan, 2022; Yang et al., 2021) demonstrates the effectiveness of ensemble techniques in a wide variety of tasks. Ensembles have also been used successfully in the field of continual learning, as evidenced by BatchEnsemble (Wen et al., 2020) or CN-DPM (Lee et al., 2020). Other contributions in the literature (Doan et al., 2022) tend to focus strongly on improving model performance rather than increasing model efficiency. Furthermore, ensemble approaches can also be used indirectly through dropout (Srivastava et al., 2014) or weight aggregation (Wortsman et al., 2022). 2.3 Mixture of Experts Mixture of Experts (ME) (Jacobs et al., 1991) is a technique based on the divide-and-conquer paradigm. It divides the problem space among several specialized models (experts). Experts are supervised by a gating network that selects them according to a defined strategy. The difference from ensembles is that ME methods focus on selecting a few experts rather than combining the predictions of all available models. ME techniques have found many applications in various domains (Masoudnia and Ebrahimpour, 2014), including continual learning (Shanahan et al., 2021), and such approaches remain widely used in NLP (Gao et al., 2022; Ravaut et al., 2022). 2.4 Real-world NLP systems Over the last few years, the number of real-world NLP applications has grown rapidly (Sarker, 2022). Despite major successes in the real-world application of language technologies such as Google Translate, Amazon Alexa, and ChatGPT, production deployment and maintenance of such models still remain a challenge. Researchers have shown (Nowakowski et al., 2022; Karakanta et al., 2021) that there are several issues related to maintaining NLP models, including technical limitations, latency, and performance evaluation. However, the crucial problem is the shift of the data domain, which forces models to be retrained and deployed again over time (Hu et al., 2020). It is a major limitation in dynamically changing environments where users expect models to quickly adapt to them. Currently, this problem has been tackled in several systems (Afzal et al., 2019; Hancock et al., 2019), but many of the solutions preclude maintaining model accuracy when training incrementally, making them insufficient. Figure 2: Architecture of the proposed model. An input is processed by the feature extractor. Obtained embeddings are used to find the most relevant classifiers according to assigned keys. The soft KNN layer approximates the soft KNN scores. Predictions are weighted in the voting layer by both cosine similarity and soft KNN scores. The final output is the class with the highest voting score. 3 Our approach 3.1 Problem formulation Class incremental continual learning involves training a classification model f(\u00b7) : X \u2192 Y on a sequence of T tasks. The model is trained on each task separately (one task at a time). Each task D_t contains data points D_t = {(x_t^{(1)}, y_t^{(1)}), . . . , (x_t^{(N_t)}, y_t^{(N_t)})}, where N_t is the length of D_t, x_t^{(i)} \u2208 R^D, and y_t^{(i)} \u2208 Y_t. Y_t is the label set for task t, and Y_t \u2229 Y_{t'} = \u2205 for t \u2260 t'. 
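As a concrete illustration of this task stream (a sketch with hypothetical names, not code from the paper), a labeled dataset can be split into one-class-per-task episodes as follows; the model is then trained on the tasks sequentially and evaluated on all classes seen so far.

```python
# Illustrative sketch: build a class-incremental stream in which each task D_t
# holds the examples of exactly one class, so the label sets are disjoint.
from collections import defaultdict

def make_class_incremental_tasks(examples):
    """examples: iterable of (x, y) pairs; returns the ordered list of tasks D_1..D_T."""
    by_class = defaultdict(list)
    for x, y in examples:
        by_class[y].append((x, y))
    return [by_class[label] for label in sorted(by_class)]
```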
We want the model to keep performing well on all previous tasks after each update, and we assume to be working in the most challenging setup (Van de Ven and Tolias, 2019), where one task consists of data from one class. 3.2 Method We present a flexible and effective domain-agnostic architecture that can be used to solve various classification problems. The architecture is presented in Figure 2. Feature extractor. The first component of the proposed architecture is a multi-layer feature extractor that transforms input data into the embedding space. It can be described by the following mapping z = F(x), where x \u2208RD is an input example and z \u2208RM is a M-dimensional embedding. The approach we follow assumes the use of a pre-trained model with frozen parameters. Such a procedure makes it possible to completely prevent the extractor from forgetting knowledge by isolating feature space learning from the classification process. Keys and classifiers. We use an ensemble of N classifiers fn(\u00b7), where each of them maps the embedding into a K-dimensional output vector \u02c6 yn = fn(z). With each classifier, there is an associated key vector kn \u2208RM with the same dimensionality as the embedding. The keys help to select the most suitable models for specialization with respect to the currently processed input example. They are initialized randomly from normal distribution. We use simple single-layer neural networks as classifiers, with fan-in variance scaling as the weight initialization strategy. The network output is activated by a hyperbolic tangent function (tanh). Soft \u03ba-nearest neighbors layer. The standard KNN algorithm is often implemented using ordinary sorting operations that make it impossible to determine the partial derivatives with respect to the input. It removes the ability to use KNN as part of end-to-end neural models. However, it is possible to obtain a differentiable approximation of the KNN model by solving the Optimal Transport Problem (Peyr\u00e9 et al., 2019). Based on this concept, we add a differentiable layer to the model architecture. We call this layer soft \u03ba-nearest neighbors (soft KNN). In order to determine the KNN approximation, we first compute a cosine distance vector c \u2208RN between the embedding and the keys: cn = 1 \u2212cos(z, kn), (1) where cos(\u00b7, \u00b7) denotes the cosine similarity. Next, we follow the idea of a soft top-\u03ba operator presented in (Xie et al., 2020), where \u03ba denotes the number of nearest neighbors. Let E \u2208RN\u00d72 be the Euclidean distance matrix with the following elements: en,0 = (cn)2, en,1 = (cn \u22121)2. (2) And let G \u2208RN\u00d72 denote the similarity matrix obtained by applying the Gaussian kernel to E: G = exp(\u2212E/\u03c3), (3) where \u03c3 denotes the kernel width. The exp operators are applied elementwise to the matrix E. We then use the Bregman method, an algorithm designed to solve convex constraint optimization problems, to compute L iterations of Bregman projections in order to approximate their stationary points: p(l+1) = \u00b5 Gq(l) , q(l+1) = \u03bd G\u22a4p(l+1) , (4) where l = 0, . . . , L \u22121, \u00b5 = 1N/N, \u03bd = [\u03ba/N, (N \u2212\u03ba)/N]\u22a4, q(0) = 12/2, and 1i denotes the i-element all-ones vector. 
Finally, let \u0393 denotes the optimal transport plan matrix and is given by: \u0393 = diag(p(L)) \u00b7 G \u00b7 diag(q(L)) (5) As the final result \u03b3 \u2208RN of the soft \u03ba-nearest neighbor operator, we take the second column of \u0393 multiplied by N i.e. \u03b3 = N\u0393:,2. \u03b3 is a soft approximation of a zero-one vector that indicates which \u03ba out of N instances are the nearest neighbors. Introducing the soft KNN enables to train parts of the model that were frozen until now. Voting layer. We use both cn and \u03b3 to weight the predictions by giving the higher impact for classifiers with keys similar to extracted features. The obtained approximation \u03b3 has two main functionalities. It eliminates the predictions from classifiers Table 1: Data sets setup for experiments. Domain Data set Classes Train Test Avg. words Text BBC News 5 1,668 557 380 Newsgroups 10 11314 7532 315 Complaints 10 16,000 4,000 228 Audio Speech Commands 10 18,538 2,567 \u2014 Image MNIST 10 60,000 10,000 \u2014 CIFAR-10 10 50,000 10,000 \u2014 outside \u03ba nearest neighbors and weights the result. Since the Bregman method does not always completely converge, the vector \u03ba contains continuous values that are close to 1 for the most relevant classifiers. We make use of this property during the ensemble voting procedure. The higher the \u03ba value for a single classifier, the higher its contribution toward the final ensemble decision. The final prediction is obtained as follows: \u02c6 y = PN n=1 \u03b3ncn\u02c6 yn PN n=1 cn (6) Training To effectively optimize the model parameters, we follow the training procedure presented in (Shanahan et al., 2021). It assumes the use of a specific loss function that is the inner product between the ensemble prediction and the onehot coded label: L(y, \u02c6 y) = \u2212y\u22a4\u02c6 y (7) Optimizing this criterion yields an advantage of using a tanh activation function, significantly reducing catastrophic forgetting (Shanahan et al., 2021). Following the reference method, we also use an optimizer that discards the value of the gradient and uses only its sign to determine the update direction. As a result, the parameters are being changed by a fixed step during the training. 4 Experiments 4.1 Setup In order to ensure experiment\u2019s reproductivity, we evaluated our method on the popular and publicly available data sets. Data sets We use three common text classification data sets with different characteristics Newsgroups (Lang, 2008), BBC News (Greene and Cunningham, 2006), and Consumer Finance Complaints2. The goal of the experiments was to evaluate our method on tasks with with different dif2Source: https://huggingface.co/datasets/ consumer-finance-complaints Table 2: Accuracy (%) and standard deviation for methods evaluated on various data sets. Speech Commands data set was evaluated with 64 classifiers in ME, the remaining models have 128 classifiers. Regularization-based methods completely failed on the difficult data sets due to the recency bias phenomenon (Mai et al., 2022). Text Image Audio Model Mem. NG BBC Compl. MNIST CIFAR-10 Sp. Comm. 
Naive \u00d7 5.25\u00b10.03 21.65\u00b12.56 9.56\u00b10.33 11.29\u00b13.05 10.00\u00b10.01 21.54\u00b13.78 LwF \u00d7 5.20\u00b10.05 18.60\u00b12.03 10.04\u00b10.20 11.47\u00b12.75 10.00\u00b10.01 20.61\u00b13.88 EWC \u00d7 5.13\u00b10.13 21.97\u00b12.14 10.16\u00b10.31 11.19\u00b12.70 10.00\u00b10.01 32.93\u00b14.92 SI \u00d7 5.27\u00b10.01 19.43\u00b12.96 10.00\u00b10.62 14.90\u00b16.52 10.00\u00b10.01 9.99\u00b10.27 CWR* \u00d7 4.63\u00b10.60 22.98\u00b11.20 10.13\u00b10.33 10.40\u00b10.54 10.00\u00b10.01 10.32\u00b10.26 GEM \u2713 35.89\u00b13.80 70.99\u00b17.68 33.74\u00b12.50 52.27\u00b15.20 23.40\u00b12.71 21.01\u00b12.06 A-GEM \u2713 9.44\u00b17.14 59.10\u00b117.52 9.20\u00b10.01 65.37\u00b14.53 26.43\u00b15.27 17.45\u00b16.90 Replay \u2713 22.45\u00b13.09 59.61\u00b13.17 16.46\u00b14.62 69.02\u00b14.90 32.93\u00b14.56 12.23\u00b11.28 E&E \u00d7 46.07\u00b12.91 75.87\u00b13.88 44.80\u00b11.62 87.10\u00b10.21 53.97\u00b11.31 79.15\u00b10.60 Ours \u00d7 47.27\u00b13.63 78.49\u00b13.92 44.97\u00b10.86 87.62\u00b10.14 56.27\u00b11.21 80.11\u00b11.30 ficulty levels. We also conducted experiments for audio classification using Speech Commands (Warden, 2018) data set. For the evaluation purposes, we selected the 10 most representative classes from the Newsgroups, Complaints and Speech Commands. Finally, we also conducted experiments on the popular MNIST and CIFAR-10 data sets as image domain representatives. The data set summary is presented in Table 1. In all experiments we used a train set to train model incrementally, and afterward we performed a standard evaluation using a test set. Feature extractors For all text data sets, we used a Distilbert (Sanh et al., 2019), a light but still very effective alternative for large language models. Next, for Speech Commands, we utilized Pyannote (Bredin et al., 2020), a pretrained model for producing meaningful audio features. For image data sets, we used different extractors. MNIST features were produced by the pretrained VAE and CIFAR-10 has a dedicated BYOL model (see A.4 for more details). 4.2 Results The results of the evaluation are presented in Table 2. For all setups evaluated, our model performed best improving results of the main reference method (E&E) by up to 3 percent points (pp.). The improvement scale varies across the data sets. We also observed a significant difference in achieved accuracy between the DE&E and the standard continual learning methods. Simple regularizationbased methods completely fail in the class incremental scenario. It shows how demanding training Figure 3: Number of parameters in DE&E architecture (64, 128, 1024 classifiers) and achieved accuracy (%). We calculated the number of parameters as the sum of the parameters for all classifiers in the ME. Each mark is the test accuracy averaged across 5 runs. the model incrementally is when a set of classes is not fixed, which often takes place in real-world scenarios. Furthermore, our method achieved these results without replaying training examples seen in the past, making it more practical relative to the SOTA memory-based methods (GEM, A-GEM, Replay) that store samples from every class. For the ensemble of 128 classifiers and Speech Commands data set, our architecture achieved an accuracy of more than 59 pp. higher than the best method with a memory buffer. One of the most important hyperparameters of the model is the number of classifiers (experts). 
To investigate how it affects accuracy, we evaluated our architecture in three variants: small 64, normal 128, and large 1024 classifiers. The evaluation results are presented in Figure 3. We observed that increasing the ensemble size transTable 3: Accuracy (%) and standard deviation of DE&E evaluated on Class Incremental and Domain Incremental scenarios. We used the same setup as shown in Table 2. Data set Class Incremental Domain incremental BBC News 78.49\u00b13.92 79.71\u00b13.14 Newsgroups 47.27\u00b13.63 44.55\u00b11.40 Complaints 44.97\u00b10.86 39.23\u00b13.03 Speech Commands 81.46\u00b10.85 79.31\u00b10.49 MNIST 87.62\u00b10.14 85.04\u00b10.39 CIFAR-10 56.27\u00b11.21 55.66\u00b11.32 lates to higher accuracy, and gain depends on the setup and data characteristics. The most significant improvement was observed on BBC and CIFAR10 where the large model achieved an accuracy of about 20pp. better than the small one. For the remaining data sets and the analogous setup, the gain was up to 5pp. We explain this phenomenon as the effect of insufficient specialization level achieved by smaller ensembles. If experts are forced to solve tasks that are too complicated they make mistakes often. Increasing the number of experts allows for dividing feature space into simpler sub-tasks. However, such a procedure has natural limitations related to the feature extractor. If features have low quality, increasing the number of experts will be ineffective. To select the optimal ensemble size we suggest using the elbow rule which prevents the model from being overparameterized and ensures reasonable accuracy. However, in general, we recommend choosing larger ensembles that are better suited for handling real-world cases. Since real-world environments require deployed models to quickly adapt to domain shifts, we tested our method in a domain incremental scenario. In such setup, each data batch can provide examples from multiple classes that can be either known or new (Van de Ven and Tolias, 2019). This way, the model needs to learn incrementally, being prone to frequent domain shifts. As shown in Table 3, the proposed method handles both scenarios with comparable accuracy. We observed improved accuracy for BBC News, but reduced for the remaining data sets. Such property can be beneficial when there is limited prior knowledge about the data or the stream is imbalanced (Aguiar et al., 2022). We have also investigated the importance of the presented expert selection method. We trained the DE&E method and for each training example, we allowed it to choose random experts (rather than the most relevant ones) with fixed probability p. As shown in Figure 4, the selection method has a strong influence on the model performance. Accuracy decreases proportionally to the p over all data sets studied. The proper expert selection technique is crucial for the presented method. It is worth noting that relatively easier data sets suffer less from loss of accuracy than hard ones because even randomly selected experts can still classify the data by learning simple general patterns. In more difficult cases like Newsgroups and Complaints data sets, model performance is comparable to random guessing when p > 0.5. Figure 4: Influence of random classifier selection on DE&E accuracy (%). All models consist of 128 classifiers. Each mark is the accuracy for an independent run. 
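The random-selection ablation described above is easy to express in code. Below is a small, illustrative sketch (the names knn_router, num_experts, and kappa are our own placeholders, not the authors' implementation) of the protocol behind Figure 4: with probability p the \u03ba experts are chosen uniformly at random instead of by soft-KNN routing.

```python
import random

def select_experts(x, knn_router, num_experts, kappa, p=0.0):
    """Pick kappa expert indices for input x.

    With probability p the choice is uniformly random; p=0 recovers the
    normal DE&E routing and p=1 is the fully random baseline.
    """
    if random.random() < p:
        return random.sample(range(num_experts), kappa)
    return knn_router(x)  # assumed to return the kappa nearest-key indices
```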
5 Conclusions In this paper, we proposed a domain-agnostic architecture for continual learning with a training procedure specialized in challenging class incremental problems. The presented architecture is based on the Mixture of Experts technique and handles many practical issues related to the deployment of text classification models in non-trivial real-world systems. As our main contribution, we introduced a fully differentiable soft KNN layer and a novel prediction weighting strategy. By conducting exhaustive experiments, we showed improvement in accuracy for all the cases studied and achieved SOTA results without using a memory buffer. This enables an effective and secure training, especially when working with sensitive textual data. The presented architecture is highly flexible, can effectively solve classification problems in many domains, and can be applied to real-world machine learning systems requiring continuous improvement. Such work enables researchers to make further steps toward overrunning many of the current challenges related to language technology applications. Limitations The main limitations of the proposed architecture are related to the presence of the frozen feature extractor. The accuracy of the classification module is proportional to the quality of features. Since the ensemble weak learners are single-layer neural networks, the entire feature extraction process relies on a pre-trained model that strongly limits the upper bound of classification accuracy. Such approach reduces the method complexity, but also makes it prone to errors when embeddings have low quality. Achieving accuracy at a satisfactory level, which is crucial in real world systems, requires the use of high quality feature extractors. Currently, plenty of pretrained SOTA models are available for free in domains such as text or image classification, but if such extractor is not available, does not produce reasonable features or is too expensive to use, our architecture may not be the best choice. Another issue is relatively long training time comparing to the reference methods (see A.3). The introduction of a differentiable soft KNN layer resulted in additional computational effort that clearly impacted the model complexity. This limits the use in low latency systems with machine learning models trained online. Ethics Statement The authors foresee no ethical concerns with the work presented in this paper, in particular concerning any kind of harm and discrimination. Since the presented architecture can have a wide range of usages, the authors are not responsible for any unethical applications of this work. Acknowledgements The research was conducted under the Implementation Doctorate programme of Polish Ministry of Science and Higher Education and also partially funded by Department of Artificial Intelligence, Wroclaw Tech and by the European Union under the Horizon Europe grant OMINO (grant number 101086321). It was also partially co-funded by the European Regional Development Fund within the Priority Axis 1 \u201cEnterprises and innovation\u201d, Measure 1.2. \u201cInnovative enterprises, sub-measure 1.2.1. \u201cInnovative enterprises \u2013 horizontal competition\u201d as part of ROP WD 2014-2020, support contract no. RPDS.01.02.01-02-0063/20-00." 
+ }, + { + "url": "http://arxiv.org/abs/2309.16178v2", + "title": "LAE-ST-MoE: Boosted Language-Aware Encoder Using Speech Translation Auxiliary Task for E2E Code-switching ASR", + "abstract": "Recently, to mitigate the confusion between different languages in\ncode-switching (CS) automatic speech recognition (ASR), the conditionally\nfactorized models, such as the language-aware encoder (LAE), explicitly\ndisregard the contextual information between different languages. However, this\ninformation may be helpful for ASR modeling. To alleviate this issue, we\npropose the LAE-ST-MoE framework. It incorporates speech translation (ST) tasks\ninto LAE and utilizes ST to learn the contextual information between different\nlanguages. It introduces a task-based mixture of expert modules, employing\nseparate feed-forward networks for the ASR and ST tasks. Experimental results\non the ASRU 2019 Mandarin-English CS challenge dataset demonstrate that,\ncompared to the LAE-based CTC, the LAE-ST-MoE model achieves a 9.26% mix error\nreduction on the CS test with the same decoding parameter. Moreover, the\nwell-trained LAE-ST-MoE model can perform ST tasks from CS speech to Mandarin\nor English text.", + "authors": "Guodong Ma, Wenxuan Wang, Yuke Li, Yuting Yang, Binbin Du, Haoran Fu", + "published": "2023-09-28", + "updated": "2023-10-07", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "eess.AS" + ], + "label": "Original Paper", + "paper_cat": "Mixture AND of AND Experts", + "gt": "With the rise of end-to-end (E2E) automatic speech recog- nition (ASR), researchers [1\u201325] explore different E2E ASR scenarios. An utterance that includes two or more languages is known as a code-switching (CS) scenario, which is gen- erally divided into occurring at an utterance level (extra- sentential CS) or within an utterance (intra-sentential CS). It is still a challenging ASR scenario. Several challenges are conventionally encountered in modeling CS speech: firstly, the real paired CS audio is data- scarce, and secondly, the conventional models are not good at modeling CS speech due to the confusion between different languages. To alleviate the first issue, researchers propose technical methods to study the rules of CS occurrence and synthesize CS paired data [11\u201315] or explore the affection of monolingual data [26\u201328]. As for the second issue, the \u2217Corresponding author structures like Connectionist Temporal Classification (CTC)-, attention-, and transducer-based E2E models have been inves- tigated for CS ASR [13\u201322]. Recently, to mitigate the second issue, the conditionally factorized frameworks [29\u201332] are proposed to decompose the CS task (e.g., Mandarin-English CS) into two modeling steps: 1) recognizing Mandarin and English part, respectively, and 2) composing processed mono- lingual segments into a CS sequence. However, in modeling step 1) for these methods, the model only utilizes the informa- tion of the monolingual part. We know that, when modeling the non-streaming E2E ASR task, the prediction of each unit generally relies on overall audio contextual information. To solve the issues of the conditionally factorized mod- els [29\u201332] (e.g., LAE [32]), we propose the LAE-ST-MoE framework. It incorporates speech translation (ST) tasks into LAE [32] and utilizes ST to facilitate the learning of contex- tual information between Mandarin and English, thereby im- pacting the model\u2019s encoder through joint learning. 
In addition, inspired by [22\u201324], the LAE-ST-MoE introduces a task-based mixture-of-experts (MoE) approach, employing separate feed-forward networks (FFNs) for the ASR and ST tasks. Our experiments are conducted on the classic CS benchmark, i.e., the ASRU 2019 Mandarin-English CS challenge dataset [33]. Since the data does not have ST labels, we use the large machine translation (MT) model from ModelScope, which is based on the CSANMT algorithm [34], to label the data. In the experiments, compared to the LAE-based system, the LAE-ST-MoE model achieves a relative performance improvement of about 6%-9% in ASR tasks on all test sets. Moreover, our model does not introduce extra decoding computational complexity. In addition, the trained LAE-ST-MoE model can perform ST tasks from CS speech to Mandarin or English text and achieves good BLEU scores. It is then easy to extend our model to one-to-many ST tasks. Our main contributions are as follows: (1) To the best of our knowledge, we are the first to propose using the ST task to introduce richer cross-lingual contextual information to boost the monolingual modeling stage of LAE; (2) We introduce an MoE between the ASR and ST tasks to make each task more focused, thereby improving the overall recognition performance of the model without extra decoding computational complexity; (3) The well-trained LAE-ST-MoE model can perform ST tasks from CS speech to Mandarin or English text, and the structure is easy to extend to one-to-many ST tasks.", + "main_content": "In the Mandarin-English CS ASR system [29\u201332], the basis is to model the label-to-frame alignments. For each T-length speech feature sequence X = {x_t | t = 1, ..., T} and L-length CS label sequence Y = {y_\ell \in (V^{Man} \cup V^{En}) | \ell = 1, ..., L}, there are several possible T-length label-to-frame sequences Z = {z_t \in (V^{Man} \cup V^{En} \cup \{\varnothing\}) | t = 1, ..., T}, where \varnothing denotes the blank symbol in a CTC [1] based system, and V^{Man} and V^{En} denote the Mandarin and English parts of the vocabulary, respectively. However, for each CS Z there are always two corresponding monolingual label-to-frame sequences Z^{Man} = {z^{Man}_t \in V^{Man} \cup \{\varnothing\} | t = 1, ..., T} and Z^{En} = {z^{En}_t \in V^{En} \cup \{\varnothing\} | t = 1, ..., T}. Therefore, the label-to-frame posterior P(Y|X) can be represented in terms of the CS posterior P(Z|X) and the monolingual posteriors P(Z^{Man}|X) and P(Z^{En}|X):

P(Y|X) = \sum_{Z \in \mathcal{Z}} \sum_{Z^{Man} \in \mathcal{Z}^{Man}} \sum_{Z^{En} \in \mathcal{Z}^{En}} P(Z, Z^{Man}, Z^{En} | X) \quad (1)

where \mathcal{Z} and \mathcal{Z}^{Man/En} denote the sets of all possible CS and monolingual label-to-frame alignments for a given Y. By applying Bayes' formula, P(Z, Z^{Man}, Z^{En}|X) in Eq. (1) can be rewritten as

P(Z, Z^{Man}, Z^{En}|X) = P(Z | Z^{Man}, Z^{En}, X) \times P(Z^{Man}, Z^{En} | X) \quad (2)

and

P(Z^{Man}, Z^{En}|X) = P(Z^{Man} | Z^{En}, X) \times P(Z^{En} | X). \quad (3)

Two assumptions are made. The first is that once Z^{Man} and Z^{En} are given, no additional information from the observation X is needed to determine Z. The second is that Z^{Man} and Z^{En} are independent given X. Therefore, combining Eqs. (1)-(3), Eq. (1) becomes

P(Y|X) \approx \sum_{Z \in \mathcal{Z}} P(Z | Z^{Man}, Z^{En}) \times \sum_{Z^{Man} \in \mathcal{Z}^{Man}} P(Z^{Man} | X) \times \sum_{Z^{En} \in \mathcal{Z}^{En}} P(Z^{En} | X). \quad (4)

To achieve the transformation from Eq. (1) to Eq. (4), the conditionally factorized structures [29\u201332] introduce monolingual-specific encoders to optimize the representation of each language separately. For example, suppose the token sequence of a CS utterance is \u201c\u771f\u6b63\u505a\u5230happy every day\u201d. When forwarding the Mandarin-specific encoder, the reference text keeps only the Mandarin part \u201c\u771f\u6b63\u505a\u5230\u201d and the English part is ignored (replaced by a special placeholder; see [31] or [32] for details). The English-specific encoder, as shown in Figure 1, is handled in the same way as the Mandarin-specific encoder. Considering the modeling process further, e.g., for the Mandarin-specific encoder, the model will not learn the English contextual information in the CS audio, even though this information could potentially improve its performance on the Mandarin part. An ST model, however, is capable of converting contextual information from different languages into one language. Therefore, applying ST tasks to enrich the contextual information between the two languages in CS ASR is reasonable and feasible. Based on the LAE architecture [32] and a joint learning mechanism, we propose the LAE-ST-MoE architecture, which uses ST as an auxiliary task to bring more contextual information to ASR. The details of the proposed LAE-ST-MoE are presented in the next section.

3. PROPOSED FRAMEWORKS

3.1. LAE-ST-MoE architecture

The LAE structure [32] has a shared encoder module, two language-specific encoders for Mandarin and English, and a global ASR decoder. Each monolingual-specific encoder is supervised by a corresponding monolingual-specific CTC loss. To alleviate the issues of LAE discussed in Section 2, we propose the LAE-ST-MoE model architecture, shown in Figure 1, which adds two LAE-ST-MoE encoders and two ST decoders on top of LAE. Let N_{Share} denote the number of layers in the shared encoder and N_{Mono} the number of layers in each monolingual-specific encoder; then each LAE-ST-MoE encoder has N = N_{Encoder} - N_{Share} - N_{Mono} layers, where N_{Encoder} is the total number of encoder layers. Each ST decoder, which consists of 6 Transformer-based blocks, is supervised by a standard ST cross-entropy loss. In addition, the ST and ASR tasks are jointly trained using an FFN-based MoE. A detailed explanation of the proposed LAE-ST-MoE model follows. Given the input feature sequence X, the shared Transformer encoder transforms it into the representation H_{share}:

H_{share} = \mathrm{Encoder}_{share}(X). \quad (5)

H_{share} is then forwarded to the LAE-ST-MoE encoder, which replaces the FFN of the Transformer encoder with the FFN-MoE module. It produces the hybrid ASR-ST representations H_{Man\_ASR\_ST} and H_{En\_ASR\_ST} using multi-head self-attention (MHSA):

H_{Man\_ASR\_ST} = \mathrm{MHSA}(\mathrm{LNorm}(H_{share})) \quad (6)
H_{En\_ASR\_ST} = \mathrm{MHSA}(\mathrm{LNorm}(H_{share})) \quad (7)

where LNorm denotes LayerNorm [35]. Based on H_{Man\_ASR\_ST} and H_{En\_ASR\_ST}, the FFN-MoE is applied to obtain the ASR representations H^{0}_{Man\_ASR}, H^{0}_{En\_ASR} and the ST representations H_{En2Man\_ST}, H_{Man2En\_ST}, respectively:

H^{0}_{Man\_ASR} = \mathrm{LNorm}(\mathrm{FFN\_MoE}(H_{Man\_ASR\_ST})) \quad (8)
H_{En2Man\_ST} = \mathrm{LNorm}(\mathrm{FFN\_MoE}(H_{Man\_ASR\_ST})) \quad (9)
H^{0}_{En\_ASR} = \mathrm{LNorm}(\mathrm{FFN\_MoE}(H_{En\_ASR\_ST})) \quad (10)
H_{Man2En\_ST} = \mathrm{LNorm}(\mathrm{FFN\_MoE}(H_{En\_ASR\_ST})). \quad (11)
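To make the FFN-MoE block of Eqs. (6)-(11) concrete, here is a minimal, illustrative PyTorch-style sketch (module and variable names, dimensions, and the residual connections are our own assumptions based on the description above, not the authors' code): a shared self-attention output feeds two task-specific FFN experts, one for ASR and one for ST.

```python
import torch
import torch.nn as nn

class TaskMoEEncoderLayer(nn.Module):
    """One LAE-ST-MoE encoder layer: shared MHSA, task-routed FFN experts."""

    def __init__(self, d_model=256, n_heads=4, d_ffn=2048):
        super().__init__()
        self.norm_attn = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_out = nn.LayerNorm(d_model)
        # Two task experts instead of a single shared FFN.
        self.experts = nn.ModuleDict({
            "asr": nn.Sequential(nn.Linear(d_model, d_ffn), nn.ReLU(),
                                 nn.Linear(d_ffn, d_model)),
            "st":  nn.Sequential(nn.Linear(d_model, d_ffn), nn.ReLU(),
                                 nn.Linear(d_ffn, d_model)),
        })

    def forward(self, h_share):
        # Eqs. (6)/(7): shared attention over the shared-encoder output.
        x = self.norm_attn(h_share)
        h_asr_st, _ = self.attn(x, x, x)
        h_asr_st = h_asr_st + h_share  # residual (an assumption; omitted in the equations)
        # Eqs. (8)-(11): the same hidden states feed two task-specific FFNs.
        h_asr = self.norm_out(self.experts["asr"](h_asr_st) + h_asr_st)
        h_st = self.norm_out(self.experts["st"](h_asr_st) + h_asr_st)
        return h_asr, h_st

# Usage: one layer of the Mandarin branch on a (batch, frames, d_model) tensor.
layer = TaskMoEEncoderLayer()
h_share = torch.randn(2, 100, 256)
h0_man_asr, h_en2man_st = layer(h_share)
```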
Fig. 1. The framework of the proposed LAE-ST-MoE. (The figure shows the shared Transformer encoder, the N stacked LAE-ST-MoE encoder layers with shared self-attention and separate ASR/ST FFNs, the Mandarin- and English-specific Transformer encoders with their CTC losses, the En2Man and Man2En ST decoders, the combine function, and the global decoder, together with the losses L_{Man\_CTC}, L_{En\_CTC}, L_{Global\_Decoder}, L_{ST\_En2Man}, and L_{ST\_Man2En}.)

On the ST task side, H_{En2Man\_ST} and H_{Man2En\_ST} are forwarded to the En2Man and Man2En ST decoders, respectively. On the ASR side, as in LAE, the monolingual-specific encoders take H^{0}_{Man\_ASR} and H^{0}_{En\_ASR}, produce the monolingual-specific representations H_{Man\_ASR} and H_{En\_ASR}, and combine them to obtain the global ASR representation H_{Global\_ASR}:

H_{Man\_ASR} = \mathrm{Encoder}_{Man\_Spec}(H^{0}_{Man\_ASR}) \quad (12)
H_{En\_ASR} = \mathrm{Encoder}_{En\_Spec}(H^{0}_{En\_ASR}) \quad (13)
H_{Global\_ASR} = H_{Man\_ASR} + H_{En\_ASR}. \quad (14)

3.2. Training and Decoding

In the LAE-ST-MoE training stage, if the label text sequence for the speech features X is Y, we apply the MT model (Model_{MT}) from ModelScope to translate Y into Mandarin text Y^{Man} and English text Y^{En}:

Y^{Man} = \mathrm{Model}_{MT\_En2Man}(Y) \quad (15)
Y^{En} = \mathrm{Model}_{MT\_Man2En}(Y). \quad (16)

Like [31, 32], we replace Y with the monolingual-specific labels Y^{Man}_{Spec} and Y^{En}_{Spec} (keeping only the Mandarin or English part, respectively). Based on the monolingual-specific ASR representations H_{Man\_ASR} and H_{En\_ASR}, the monolingual-specific ASR objective L_{Spec} is:

L_{Man\_CTC} = \mathrm{CTC}_{Man\_Spec}(Y^{Man}_{Spec} | H_{Man\_ASR}) \quad (17)
L_{En\_CTC} = \mathrm{CTC}_{En\_Spec}(Y^{En}_{Spec} | H_{En\_ASR}) \quad (18)
L_{Spec} = (L_{Man\_CTC} + L_{En\_CTC}) / 2. \quad (19)

Moreover, given the global ASR representation H_{Global\_ASR}, the global ASR decoder objective L_{Global\_Decoder} is:

L_{Global\_Decoder} = \mathrm{Decoder}_{Global}(Y | H_{Global\_ASR}). \quad (20)

Following [31], we use \u03bb_{Spec} (set to 0.3 in the experiments) to combine L_{Spec} and L_{Global\_Decoder} into the overall ASR loss L_{ASR}:

L_{ASR} = \u03bb_{Spec} \times L_{Spec} + (1 - \u03bb_{Spec}) \times L_{Global\_Decoder}. \quad (21)

In the CTC-based ASR system, L_{Global\_Decoder} is just the CTC loss. In the hybrid CTC/attention-based ASR system [4], L_{Global\_Decoder} is the combination of the CTC loss L_{Global\_CTC} and the attention loss L_{Global\_Att}, weighted by \u03bb_{CTC}:

L_{Global\_Decoder} = \u03bb_{CTC} \times L_{Global\_CTC} + (1 - \u03bb_{CTC}) \times L_{Global\_Att}. \quad (22)

On the ST side, given the ST representations (H_{En2Man\_ST} and H_{Man2En\_ST}) and the ST label sequences (Y^{Man} and Y^{En}), the overall ST loss L_{ST} is:

L_{ST\_Man2En} = \mathrm{Decoder}_{Man2En}(Y^{En} | H_{Man2En\_ST}) \quad (23)
L_{ST\_En2Man} = \mathrm{Decoder}_{En2Man}(Y^{Man} | H_{En2Man\_ST}) \quad (24)
L_{ST} = (L_{ST\_Man2En} + L_{ST\_En2Man}) / 2 \quad (25)

where we use the cross-entropy loss for the ST tasks. Based on the overall ASR loss L_{ASR} and the ST loss L_{ST}, the final training objective L_{Final} is:

L_{Final} = L_{ASR} + \u03b2 \times L_{ST} \quad (26)

where \u03b2 balances and regulates the effect of the ST tasks. In the ASR decoding stage, as in the LAE structure [32], our model only takes the probabilities from the global ASR decoder; therefore, compared with [32], the LAE-ST-MoE model has the same decoding computational complexity. In ST decoding, our model forwards the corresponding ST branch in the usual auto-regressive manner to obtain the final ST results.
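The loss combination in Eqs. (17)-(26) can be summarized in a few lines. The sketch below is illustrative only (the function name and scalar inputs are our placeholders, not the authors' code); it assumes the per-branch losses have already been computed by the corresponding CTC heads and decoders, and simply shows how they are weighted, with \u03bb_{Spec} = 0.3 as stated above and \u03b2 = 0.6 as used in the main experiments.

```python
def lae_st_moe_loss(l_man_ctc, l_en_ctc, l_global_decoder,
                    l_st_man2en, l_st_en2man,
                    lambda_spec=0.3, beta=0.6):
    """Combine ASR and ST losses as in Eqs. (19), (21), (25), (26).

    All inputs are scalar losses already produced by the monolingual CTC
    heads, the global decoder, and the two ST decoders.
    """
    l_spec = 0.5 * (l_man_ctc + l_en_ctc)                                 # Eq. (19)
    l_asr = lambda_spec * l_spec + (1 - lambda_spec) * l_global_decoder  # Eq. (21)
    l_st = 0.5 * (l_st_man2en + l_st_en2man)                              # Eq. (25)
    return l_asr + beta * l_st                                            # Eq. (26)

# Example with dummy scalar losses.
total = lae_st_moe_loss(1.2, 1.5, 0.9, 2.0, 2.2)
```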
In addition, for monolingual Mandarin input, the En2Man ST decoder is comparable to the Mandarin ASR decoder. Therefore, we can easily fuse it into monolingual Mandarin decoding through rescoring. The same applies to monolingual English decoding. Table 1. The details of the used Datasets Lang Corpora Dur. (Hrs) Utterance(k) Train Eval Train Eval CN ASRU-Man [33] 482.6 14.3 545.2 16.6 EN Librispeech [36] 464.2 10.5 132.5 5.6 CN-EN ASRU-CS [33] 199.0 20.3 186.4 16.2 4. EXPERIMENTS AND RESULTS 4.1. Datasets We experiment on ASRU 2019 Mandarin-English codeswitching challenge dataset [33]. Like [22], we split the same Mandarin monolingual subset of the ASRU 2019 dataset as our CN test. Moreover, we use the test-clean and test-other datasets from Librispeech [36] to create our monolingual English test EN. Then, the CS test CN-EN is from the official challenge test set. The details are presented in Table 1. The 80-dimensional log filter-bank energy is our input acoustic features, which are extracted with a stride size 10ms and a window size 25ms. The cepstral mean and variance normalization (CMVN), and SpecAugment [37] is applied. The vocabulary consists of 7075 unique characters and 4989 BPE [38] tokens. In addition, as for the training and testing ST label, the EN2CN1 and CN2EN2 translation model, which is based on the CSANMT algorithm [34], both from ModelScope3, is used to get the pseudo labels. Then, we 1https://www.modelscope.cn/models/damo/nlp csanmt translation en2 zh/summary 2https://www.modelscope.cn/models/damo/nlp csanmt translation zh2 en/summary 3https://github.com/modelscope/modelscope use WeNet\u2019s [39] metrics calculation script4 for ASR scoring, which includes word (WER), character (CER), mix (MER) error rate, and the sacrebleu [40] tool for ST scoring, which includes BLEU and translation error rate (TER). For simpler expression, in Table 2, Table 3, Table 4, Table 5, and Table 6, we will use CN, EN, and ALL to represent the CER of monolingual Mandarin, the WER of monolingual English, and the total MER of the CS test set respectively. 4.2. Experimental setup The experiments are both conducted on the ESPnet toolkit [41]. We use the hybrid CTC/Attention [4] model with a NEncoder=12 encoder, NDecoder=6 decoder, and the CTConly model with a NEncoder=12, called the Vallina model. In the hybrid CTC/Attention model, \u03bbCTC set to 0.3. In our implementation, following [32], the LAE-based baseline model contains a shared encoder block NShare=9 and a languagespecific encoder block NMono=3 for each language. As mentioned in section 3.1, the layers of the LAE-ST-MoE encoder N are equal to ( NEncoder NShare NMono ), and the number of layers will be given in the result section. In our models, all encoders and decoders are stacked Transformerbased blocks [5, 42] with an attention dimension of 256, 4 attention heads, and a feed-forward dimension of 2048. We use the Adam optimizer with a Transformer-lr scale of 1 and warmup steps of 25k to train 100 epochs on 8 Tesla V100 GPUs. The dropout rate is 0.1 to prevent the model from over-fitting. In the training stage, we adopt a dynamic batch size strategy with a maximum batch size of 128. Moreover, we use Kenlm [43] to train a 4-gram language model with all training transcriptions and adopt the CTC prefix beam search for ST decoder rescore with a fixed beam size 10. 4.3. Experimental Results 4.3.1. Main results To show the effectiveness of our proposed LAE-ST-MoE framework, we compare it with LAE-based CTC and attentionbased (AED) ASR models. 
We set the NMono to 1 and \u03b2 to 0.6 in these experiments. The ablation on \u03b2 and NMono will be shown in section 4.3.4. The results are shown in Table 2. CTC System: Compared with the LAE-CTC ASR system (S2), our proposed LAE-ST-MoE CTC model (S3) achieve 9.26%, 8.57%, and 7.55% relative performance gain over the CS, mono EN, and CN tests, respectively, with the same decoding parameter. Especially in the English part of the CS test, our LAE-ST-MoE CTC (S3) shows a 10.09% WER reduction over the LAE CTC (S2) system. Moreover, it demonstrates a superior performance gain compared to Vanilla CTC (S1), which shows an about 20% error rate reduction in the CS test. Furthermore, the proposed LAE-ST-MoE CTC achieves a comparable performance with Conformer-based LAE [32] and an obvious gain compared to FLR-MoE CTC [22]. 4https://github.com/wenet-e2e/wenet/blob/main/tools/compute-wer.py Table 2. Results of proposed models and the baselines. The numbers in brackets indicates the relative error rate reduction comparing with the corresponding LAE-based model (S2 and S5). System Model Infer Params Code-Switch Mono ALL CN EN EN CN CTC-based ASR system Literature Conformer CTC [32] 11.6 + LAE [32] 9.5 FLR-MoE CTC [22] 25.8 M 10.5 7.7 33.1 10.1 5.1 Our results S1 Vallina CTC 19.8 M 12.2 9.0 38.9 12.4 7.1 S2 LAE CTC (baseline) 26.5 M 10.8 8.0 33.7 10.5 5.3 S3 LAE-ST-MoE CTC (proposed) 26.5 M 9.8 (9.26% \u2193) 7.3 30.3 9.6 (8.57% \u2193) 4.9 (7.55% \u2193) Attention-based ASR system Literature Hybrid CTC + Attention [21] 28.8 M 10.9 8.8 28.1 + Bi-En. (MoE-in-unsup) [21] 45.6 M 9.8 7.7 26.6 FLR-MoE AED [22] 40.7 M 9.7 7.4 28.4 9.6 4.7 Our results S4 Vallina AED 34.7 M 11.2 8.6 32.5 11.7 6.3 S5 LAE AED (baseline) 41.4 M 10.0 7.7 29.2 9.9 5.0 S6 LAE-ST-MoE AED (proposed) 41.4 M 9.3 (7% \u2193) 7.1 27.4 9.2 (7.07% \u2193) 4.7 (6% \u2193) AED System: The results also show that our LAE-ST-MoEbased system (S6) performs better than the Vallina (S4) and LAE-based (S5) AED ASR. Moreover, the LAE-STMoE-based AED system (S6) also shows an obvious MER reduction compared with the Bi-encoder [21] based and FLRMoE [22] based system on the CS test. CTC vs. AED system: We can find that the proposed LAEST-MoE-based CTC (S3) shows a little performance gain to the LAE AED system (S5) and comparable results with BiEncoder [21] based and FLR-MoE [22] based AED system. These results suggest that the ST auxiliary task can improve the ASR performance based on the LAE structure, which is consistent with our motivation. 4.3.2. Results of the w/ or w/o MoE in LAE-ST-MoE model Table 3. Performance of the w/ or w/o MoE. Model Code-Switch Mono CS \u2192EN CS \u2192CN ALL CN EN EN CN BLEU BLEU LAE-ST CTC 10.0 7.4 31.6 9.8 5.2 16.2 65.8 + MoE 9.8 7.3 30.3 9.6 4.9 17.7 66.6 Table 3\u2019s LAE-ST CTC model replaces the MoE layer in LAE-ST-MoE with a regular FFN. From the results, we can see that due to the introduction of the MoE module, the performance of ASR and ST is both improved obviously, which further confirms our motivation that introducing the MoE module will make ASR and ST tasks more focused. 4.3.3. Results of using ST decoder for ASR rescore Table 4. Performance of using ST decoder rescore. Model Code-Switch Mono ALL CN EN EN CN Vallina CTC 12.2 9.0 38.9 12.4 7.1 LAE CTC 10.8 8.0 33.7 10.5 5.3 LAE-ST-MoE CTC 9.8 7.3 30.3 9.6 4.9 + En2Man ST Dec. res. 9.7 7.1 31.2 10.2 4.8 + Man2En ST Dec. res. 10.4 8.1 29.1 9.3 5.6 The En2Man ST decoder is comparable to the Mandarin ASR decoder for monolingual Mandarin input. 
Therefore, we can easily fuse it into monolingual Mandarin decoding through rescoring. As shown in Table 4, the En2Man ST decoder improves the LAE-ST-MoE CTC system in the mono CN speech. It achieves comparable results to the LAE-STMoE AED system (Table 2\u2019s S6) in the monolingual Mandarin test. Especially on the Mandarin part of the CS test, the En2Man ST decoder rescoring performs better than the LAEbased AED system (Table 2\u2019s S5), which maybe benefit from the Mandarin-English context representation and the decoder LM-related information. In addition, the same phenomenon also can be observed when applying the Man2En ST decoder rescoring. These results show that the information learned by the ST decoder differs from that of the ASR decoder, improving the ASR performance. To a certain extent, the above results also prove the effectiveness of the LAE-ST-MoE. 4.3.4. Results of different \u03b2 and NMono values in LAE-STMoE As mentioned in section 3.2, \u03b2 is used to balance and regulate the ST effect. Therefore, in Table 5, we conduct experiments with \u03b2 values of 1.0, 0.8, 0.6, and 0.4, where we set NShare = 9 and NMono = 1. From the results, it can be seen that the performance of CS is basically not affected, and the model has the best overall performance at 0.6. Table 5. Results with different \u03b2 when NShare = 9 and NMono = 1. Model \u03b2 Code-Switch Mono ALL CN EN EN CN Vallina CTC 12.2 9.0 38.9 12.4 7.1 LAE-ST-MoE CTC 1.0 9.8 7.3 30.3 9.7 5.0 LAE-ST-MoE CTC 0.8 9.8 7.3 30.3 9.7 5.1 LAE-ST-MoE CTC 0.6 9.8 7.3 30.3 9.6 4.9 LAE-ST-MoE CTC 0.4 9.9 7.4 30.5 9.7 5.0 In addition, we set \u03b2 to 0.6 and NShare to 9. Then, the effectiveness of NMono is investigated in Table 6. When NMono is 0, the ASR and ST share all encoder layers except FFN-MoE. However, when NMono=2, the LAE-ST-MoE encoder layer will reduce to 1. From Table 6, we can see the model achieves the best in NMono=1, which suggests that the LAE-ST-MoE model needs more layers to perform ST, and it also needs to reserve some layers to learn the languagespecific ASR representation. Table 6. Results with different NMono when NShare = 9 and \u03b2 = 0.6. Model NMono Code-Switch Mono ALL CN EN EN CN Vallina CTC 12.2 9.0 38.9 12.4 7.1 LAE-ST-MoE CTC 0 10.1 7.5 31.6 9.9 5.1 LAE-ST-MoE CTC 1 9.8 7.3 30.3 9.6 4.9 LAE-ST-MoE CTC 2 9.9 7.4 30.8 9.7 5.0 4.3.5. The results of ST auxiliary task in LAE-ST-MoE models We use ModelScope\u2019s MT model to generate pseudo-labels for the test set. From Tables 7 and 8, which show the BLEU score and translation error rate (TER) of our models, we can see that ST is less affected by \u03b2 but more affected by NMono. Furthermore, by combining Tables 5, 6, 7, and 8, we can observe that when the ST BLEU change, the ASR remain basically unchanged. It may be because there is also some confusion between the information on ASR and ST. However, the helpful and confusing information needs to be balanced. Our experimental CS data is Mandarin-dominant, so we have more Mandarin-to-English ST training data than English-toMandarin, which results in better BLEU for Mandarin-toEnglish ST. Furthermore, we test the best ST model on monolingual data in Table 9, and we can see that our model also has Table 7. ST results on the CS test when NShare = 9 and NMono = 1. Model \u03b2 CS \u2192EN CS \u2192CN BLEU TER (\u2193) BLEU TER (\u2193) LAE-ST-MoE CTC 1.0 18.4 69.6 67.0 21.3 LAE-ST-MoE CTC 0.8 18.1 70.0 66.8 21.5 LAE-ST-MoE CTC 0.6 17.7 70.3 66.6 21.6 LAE-ST-MoE CTC 0.4 17.3 70.6 66.2 21.9 Table 8. 
ST results on the CS test when NShare = 9 and \u03b2 = 0.6. Model NMono CS \u2192EN CS \u2192CN BLEU TER (\u2193) BLEU TER (\u2193) LAE-ST-MoE CTC 0 18.6 69.0 67.0 21.3 LAE-ST-MoE CTC 1 17.7 70.3 66.6 21.6 LAE-ST-MoE CTC 2 16.3 72.6 65.5 22.4 Table 9. ST results on the monolingual test. Model CN \u2192EN EN \u2192CN BLEU TER (\u2193) BLEU TER (\u2193) LAE-ST-MoE CTC 33.9 44.8 31.5 59.1 CS \u2192 CN: CS \u2192 EN: Audio text: \u4ed6\u7684diary\u6807\u9898\u6211\u90fd\u5f88\u559c\u6b22 \u4ed6\u7684diary\u6807\u9898\u6211\u90fd\u5f88\u559c\u6b22 Modelscope (MT): \u4ed6\u7684\u65e5\u8bb0\u6807\u9898\u6211\u90fd\u5f88\u559c\u6b22 I liked his diary title Our model (ST): \u4ed6\u7684\u65e5\u8bb0\u6807\u9898\u6211\u90fd\u5f88\u559c\u6b22 His diary title I like it very much EN \u2192 CN: CN \u2192 EN: Audio text: I say I've been wondering about this business \u7ed9\u6211\u4ecb\u7ecd\u51e0\u9996\u597d\u542c\u7684\u6b4c Modelscope (MT): \u6211\u8bf4\u6211\u4e00\u76f4\u5728\u60f3\u8fd9\u9879\u4e1a\u52a1 Introduce me some nice songs Our model (ST): \u6211\u8bf4\u6211\u4e00\u76f4\u5728\u60f3\u8fd9\u9879\u4e1a\u52a1 Introduce me some good songs Fig. 2. The examples translated by ModelScope and our model respectively. good BLEU. For CS data with limited English, the BLEU of CS speech to Mandarin text shows better than CS to English. Figure 2 provides examples of the translation performed by ModelScope\u2019s MT model and our model\u2019s ST branch. Specifically, ModelScope\u2019s model translates text, whereas ours conducts the ST task. To a certain extent, these examples demonstrate that our model has learned good ST ability. 5. CONCLUSIONS In this paper, we propose an LAE-ST-MoE framework that incorporates ST tasks into LAE and utilizes ST to learn the contextual information between different languages. The experimental results on the ASRU 2019 Mandarin-English CS challenge dataset demonstrate that, compared to the LAE-based CTC and AED system, the proposed LAE-ST-MoE model achieves about 6%-9% relative error rate reduction. Extensive investigations into the w/ or w/o MoE module, comparison with the literature results, and ablation on different \u03b2 and NMono values have also been carried out and confirm the effectiveness of the LAE-ST-MoE. Moreover, the well-trained LAE-ST-MoE model can perform ST tasks from CS speech to Mandarin or English, and the structure is easy to extend to one-to-many ST tasks. In the future, we will further explore the LAE-ST-MoE to multilingual ASR and one-to-many ST. 6." + }, + { + "url": "http://arxiv.org/abs/2308.11971v2", + "title": "EVE: Efficient Vision-Language Pre-training with Masked Prediction and Modality-Aware MoE", + "abstract": "Building scalable vision-language models to learn from diverse, multimodal\ndata remains an open challenge. In this paper, we introduce an Efficient\nVision-languagE foundation model, namely EVE, which is one unified multimodal\nTransformer pre-trained solely by one unified pre-training task. Specifically,\nEVE encodes both vision and language within a shared Transformer network\nintegrated with modality-aware sparse Mixture-of-Experts (MoE) modules, which\ncapture modality-specific information by selectively switching to different\nexperts. To unify pre-training tasks of vision and language, EVE performs\nmasked signal modeling on image-text pairs to reconstruct masked signals, i.e.,\nimage pixels and text tokens, given visible signals. 
This simple yet effective\npre-training objective accelerates training by 3.5x compared to the model\npre-trained with Image-Text Contrastive and Image-Text Matching losses. Owing\nto the combination of the unified architecture and pre-training task, EVE is\neasy to scale up, enabling better downstream performance with fewer resources\nand faster training speed. Despite its simplicity, EVE achieves\nstate-of-the-art performance on various vision-language downstream tasks,\nincluding visual question answering, visual reasoning, and image-text\nretrieval.", + "authors": "Junyi Chen, Longteng Guo, Jia Sun, Shuai Shao, Zehuan Yuan, Liang Lin, Dongyu Zhang", + "published": "2023-08-23", + "updated": "2024-03-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "cs.LG", + "cs.MM" + ], + "label": "Original Paper", + "paper_cat": "Mixture AND of AND Experts", + "gt": "Vision-Language Pre-training aims to learn a general mul- timodal representation that can be transferred to various vision-language downstream tasks, such as vision-language understanding and image-text retrieval. A vision-language foundation model should have excellent performance while being easy to train and scale up, which can be achieved through the model architecture and the pre-training tasks. The model architectures of recent methods can be roughly divided into two categories: dual-encoder architecture and unified architecture. Dual-encoder methods (Radford et al. 2021; Zeng, Zhang, and Li 2022) employ modality-specific models (e.g. BERT (Devlin et al. 2019), ViT (Dosovitskiy et al. 2021)) to encode different modalities separately and a fusion module to integrate them. As for the fusion mod- ule, some methods (Radford et al. 2021) employ shallow *This work was done while Junyi was an intern at ByteDance. \u2020Corresponding author. Copyright \u00a9 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 0 200 500 850 1700 3800 GPU Training Hours 74 75 76 77 78 VQA Accuracy 4.3x faster EVE-Base(Ours) ALBEF MAMO VLMO VL-BEiT Figure 1: Performance of different models on VQA test-dev under different training hours. Training hours of all models are reproduced by us on A100 GPUs. fusion (e.g., dot product) for the interaction of vision and language. Some alternative methods (Zeng, Zhang, and Li 2022) use deep neural networks, such as Transformer En- coders, to perform deep fusion on modality interaction, but lead to difficulties in scaling up and low efficiency. Unified methods (Kim, Son, and Kim 2021; Wang et al. 2022b) use a modality-shared Transformer to encode different modali- ties jointly. This approach simplifies the framework and im- proves the speed, helping with model scaling up. However, they overlook the inherent gap between modalities, leading to lower overall performance. Image is continuous, redun- dant, and low-level on the raw signals, while text is discrete, refined, and high-level. Directly using a shared Transformer to encode different modalities with semantic gap poses prob- lems. Therefore, it is necessary to consider the differences between different modalities carefully. Previous methods also have explored numerous pre- training tasks for vision-language pre-training, including Image-Text Contrastive Learning (Radford et al. 2021), Image-Text Matching (Li et al. 2021), Word-Patch Align- ment (Chen et al. 2020), Masked Language Modeling (Su et al. 2020), Masked Image Modeling (Bao et al. 2022b), and so on. 
They have been widely used to improve vision-language pre-training. While incorporating more pre- arXiv:2308.11971v2 [cs.CV] 1 Mar 2024 training tasks can enhance performance, adding too many tasks can also lead to some problems. Foremost, it signifi- cantly prolongs the pre-training time and increases the com- putational resources required. Additionally, it necessitates manual weight adjustments for different objectives. Further- more, excessive pre-training objectives can result in a reduc- tion in the model\u2019s scalability, which is crucial in design- ing pre-training models, as the recent success has shown in large language models (Ouyang et al. 2022; Wei et al. 2022b; Zhao et al. 2023b). Therefore, it is necessary to use effective and scalable pre-training tasks. In this paper, we propose an Efficient Vision-languagE foundation model (EVE) with a unified modality-aware Transformer pre-trained with a single unified pretraining task, i.e., masked signal modeling. In terms of model architecture, we use a unified modality- aware Transformer and revisit the integration of Mixture- of-Experts in vision-language pre-training. We employ a shared Multi-Head Self-Attention module and a Modality- Aware MoE module for the modality-aware Transformer to encode and fuse various modalities jointly. Using a unified shared Transformer is more concise and flexible, which sim- plifies the extension to additional modalities and facilitates cross-modal alignment. By incorporating MoE, we can take into account the differences between modalities and cap- ture more modality-specific information. We also introduce a modality routing technique in MoE that enables the router select more appropriate experts for processing. In terms of pre-training tasks, we propose a unified masked signal modeling technique combining masked pixel and language modeling, which significantly improves train- ing speed and reduces scaling difficulty. Some meth- ods (Wang et al. 2023; Kwon et al. 2023; Zhao et al. 2023a) have applied generative pre-training paradigm to vision- language pre-training. While they either add the genera- tive objective with other complex objectives like ITC and ITM (Kwon et al. 2023) or employ more complicated tar- gets such as visual tokens (Wang et al. 2023) or momentum features (Zhao et al. 2023a), which require a nontrivial vi- sual tokenizer or momentum model. All of these increase the complexity of pre-training. In contrast to them, we just uti- lize the raw signals from the image-text pairs themselves to minimize the complexity of pre-training and achieve better scalability. Pre-training speed is 3.5x faster than incorporat- ing ITC and ITM. EVE can greatly enhance pre-training speed, as shown in Figure 1. It decreases the demand for extensive computa- tional resources while being easy to scale up. We demon- strate the effectiveness of EVE on various vision-language downstream tasks, including visual question answering, vi- sual reasoning, and image-text retrieval. EVE achieves state- of-the-art performance on Image-Text Retrieval and Vision- Language Understanding (VQA and NLVR2) tasks. Our contributions are summarized as follows: \u2022 We introduce EVE, an efficient vision-language foun- dation model that achieves state-of-the-art performance while improving training speed, with one unified multi- modal Transformer and one unified pre-training task. 
\u2022 We integrate Modality-Aware MoE with a shared multi- modal Transformer to achieve a more profound fusion of different modalities and capture more modality-specific information simultaneously, resulting in better perfor- mance and faster inference speed within a unified archi- tecture. \u2022 We propose a unified masked signal modeling technique, simplifying vision-language pre-training into a single unified objective, resulting in significantly improved pre- training speed and competitive performance.", + "main_content": "Model architecture and pre-training tasks are crucial factors in the representation learning of vision-language. Model Architecture Dual-encoder with a fusion module (Li et al. 2021; Liu et al. 2021; Dou et al. 2022b; Zhao et al. 2023a) performs well on vision-language tasks but with higher time and architecture complexity. Unified architecture methods (Kim, Son, and Kim 2021; Wang et al. 2022b; Bao et al. 2022a,b) can flexibly encode different modalities as a fusion encoder or process a single modality as a unimodal encoder, demonstrating faster inference speed and promising performance. Some of them (Kim, Son, and Kim 2021; Wang et al. 2022b) use a shared standard Transformer (Vaswani et al. 2017) to jointly encode different modalities, while they ignore the modality gap and lead to worse performance. Others (Bao et al. 2022a,b) use MoME Transformer instead and prove that shared attention is better for multimodal learning. However, MoME Transformer uses modality-shared FFN in the deep layers may neglect some modality-specific information. Considering the simplicity, effectiveness, and flexibility of the unified architecture, we adopt a unified architecture with Modality-Aware MoE to better capture modality specifics during fusion for multimodal representation learning. We achieve state-of-the-art performance with approximately the same inference cost. Masked Signal Modeling Recently, several methods (Bao et al. 2022b; Zhao et al. 2023a; He et al. 2022b; Diao et al. 2023; Geng et al. 2022) explore the \u201dmask then predict\u201d paradigm in the vision for vision-language pre-training. While VLBEiT (Bao et al. 2022b) introduces training on the visual modality through masked image modeling, their reconstruction target is the visual token, which may significantly influence performance depending on the visual tokenizer employed. DAVINCI (Diao et al. 2023) extends prefix language modeling further to vision, but it also uses the discrete visual token as the target. MAMO (Zhao et al. 2023a) enriches multimodal representation by using momentum features in masked representation modeling, which relies heavily on a momentum teacher model to avoid divergence. Some methods (Kwon et al. 2023; He et al. 2022b; Gui et al. 2023) use masked pixel modeling, but they all require additional costly pre-training tasks such as ITC (Radford et al. 2021) and ITM (Li et al. 2019). Among these methods, VLMAE (He et al. 2022b) only applies masked pixel modeling to the image encoder. M3AE (Geng et al. 2022) leverages a MIM Decoder MLM Decoder cat laying to Vision Mask Embedding Text Mask Embedding Random Mask a a little little [M] cat [M] laying next next [M] to a a [M] bowl bowl Multi-Head Self-Attention Modality-Aware MoE Figure 2: Overview of EVE and Masked Signal Modeling. We use a unified architecture with shared attention and ModalityAware MoE for EVE and a single unified masked signal modeling for pre-training. We employ random masking on both image and text. 
Masked image and complete text are used in masked image modeling, vice versa. unified Image-Language masking approach to mask and reconstruct both images and text simultaneously, but it is not used in multimodal downstream tasks. We unify masked pixel and language modeling into masked signal modeling, reconstructing masked raw signals from visible signals. This simplifies and accelerates training, achieving better performance and scalability. Mixture-of-Experts (MoE) Mixture-of-Experts has been extensively explored in computer vision (Riquelme et al. 2021) and natural language processing (Shazeer et al. 2017; Lepikhin et al. 2021). These methods generally aim to improve performance by learning a better routing using auxiliary losses (Lepikhin et al. 2021; Zoph et al. 2022), converting it into a linear assignment problem (Lewis et al. 2021), or making it differentiable (Hazimeh et al. 2021). MoE seems well-suited for multimodal learning, but the differences between modalities present some challenges. LIMoE (Mustafa et al. 2022) involves more auxiliary losses to balance different modalities, uni-perceiver-moe (Zhu et al. 2022) employs conditional MoE, VLMO (Bao et al. 2022a) and VLMoE (Shen et al. 2023) use shared expert in the deep layers. However, existing methods increase complexity or limit performance due to manual routing and ignoring modality information. Therefore, we propose Modality-Aware MoE as a simple way to apply MoE to multimodal learning. We simplify the auxiliary loss and capture more modality specifics by expert switching. Methods Backbone Network As shown in Figure 2, we adopt a unified multimodal Transformer with shared attention and Modality-Aware Mixtureof-Experts as the backbone network, which is capable of encoding different modalities. After pre-training, the model can be utilized as either a fusion encoder or a unimodal encoder for various downstream tasks through fine-tuning. For Image I, following VIT (Dosovitskiy et al. 2021), we first split the Image I into N patches with a patch size of P. The resulting N = HW/P 2 patches are projected into a shared embedding space using a linear projector. A special token Icls is added at the beginning of all visual tokens. We employ learnable visual position embeddings Ipos and visual type embeddings Itype on visual tokens. Image embedding can be summarized as follows. Iemb = [Icls, I1, . . . , IN] + Ipos + Itype (1) For Text T, following BERT (Devlin et al. 2019), we tokenize the text into discrete tokens with the maximum length of n and project them into the joint embedding space. We add a special token T cls at the beginning of all text tokens and use learnable text position embeddings T pos and text type embeddings T type for text encoding Text embedding can be summarized as follows. T emb = [T cls, T 1, . . . , T n] + T pos + T type (2) We concatenate Iemb and T emb as the input to the model: Pemb = [Iemb, T emb] (3) Modality-Aware Mixture-of-Experts Multimodal learning differs significantly from unimodal learning, as the differences between modalities cannot be ignored. Using the same Feed-Forward Network for all modalities can lead to inappropriate fusion of modalities, resulting in degraded performance. Conversely, using modalityspecific MoE in all layers may not benefit the alignment of different modalities. 
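This trade-off between a fully shared FFN and fully modality-specific experts motivates the Modality-Aware MoE defined formally in the next subsection (Eqs. (4)-(7)). As a concrete preview, here is a minimal, illustrative PyTorch-style sketch (class names, shapes, default sizes, and the top-k details are our assumptions based on the description, not the released implementation): the router adds a per-modality bias to each token before computing softmax routing weights, and a load-balancing term keeps expert usage uniform.

```python
import torch
import torch.nn as nn

class ModalityAwareRouter(nn.Module):
    """Linear-softmax router with an additive per-modality bias (cf. Eq. (6))."""

    def __init__(self, d_model=768, n_experts=32, top_k=2):
        super().__init__()
        self.w = nn.Linear(d_model, n_experts, bias=False)
        # One learnable bias vector per modality: 0 = image, 1 = text.
        self.modality_bias = nn.Parameter(torch.zeros(2, d_model))
        self.top_k = top_k

    def forward(self, x, modality_id):
        """x: (tokens, d_model); modality_id: 0 for image tokens, 1 for text."""
        probs = torch.softmax(self.w(x + self.modality_bias[modality_id]), dim=-1)
        weights, indices = probs.topk(self.top_k, dim=-1)
        return probs, weights, indices

def load_balancing_loss(probs, indices, n_experts, alpha=1e-3):
    """Auxiliary loss (cf. Eq. (7)): alpha * N * sum_i f_i * p_i."""
    dispatch = torch.zeros_like(probs).scatter_(-1, indices, 1.0)
    f = dispatch.mean(dim=0)   # fraction of tokens dispatched to each expert
    p = probs.mean(dim=0)      # average routing weight per expert
    return alpha * n_experts * (f * p).sum()

# Usage on a handful of dummy image tokens.
router = ModalityAwareRouter()
tokens = torch.randn(10, 768)
probs, weights, experts = router(tokens, modality_id=0)
aux = load_balancing_loss(probs, experts, n_experts=32)
```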
Therefore, we propose the Modality-Aware Mixture-of-Experts (MoE), as shown in Figure 3, which incorporates a modality routing technique on top of the general MoE to capture modality-specific information during fusion by selectively switching to different experts.

Figure 3: Architecture of Modality-Aware MoE. (The modality router, conditioned on text/vision modality embeddings, routes each token to experts Expert 1 ... Expert n and combines the selected experts' outputs by a weighted sum.)

In the general MoE, each MoE block typically consists of N experts, and each input token is processed by k experts selected from the N experts. A lightweight router g, which employs a simple linear-softmax predictor to compute the routing weights, is used to select the k experts for each token. This can be formulated as:

g(x) = \mathrm{softmax}(W \cdot x) \quad (4)

where W \in R^{D \times N} is a learnable projection for the input x \in R^{D}. The final output of the MoE block is the weighted average of the k selected experts:

\mathrm{MoE}(x) = \sum_{i=1}^{k} g(x)_i \cdot \mathrm{FFN}_i(x) \quad (5)

Modality Routing. The general MoE does not impose any restrictions on the router, which can easily lead to unbalanced routing. LIMoE (Mustafa et al. 2022) points out that this phenomenon can be exacerbated in multimodal learning due to the difference in token counts across modalities. To address this issue, we propose a modality-aware routing approach to enhance the router. We adopt a best-effort routing strategy that preserves all tokens while explicitly providing modality information to the router by adding modality-specific embeddings. The new routing function can be formulated as follows:

g(x) = \mathrm{softmax}(W \cdot (x + b_m)) \quad (6)

Here, we use modality-specific embeddings b_m \in R^{D} for the different modalities, i.e., b_I for images and b_T for text.

Auxiliary Loss. In addition to modality routing, we use a single, simple auxiliary loss to balance routing and avoid carefully tuning the weight. Following Shazeer et al. (2017), we add a load-balancing loss as the auxiliary loss to train the router:

L_{aux} = \u03b1 \cdot N \cdot \sum_{i=1}^{N} f_i \cdot p_i \quad (7)

This objective encourages uniform routing of tokens, where N denotes the number of experts, f_i denotes the fraction of tokens dispatched to the i-th expert, and p_i denotes the average routing weight for the i-th expert. The weight \u03b1 is a hyperparameter that we set to 0.001 by default to avoid overwhelming the other objectives.

Table 1: Maximum batch size per GPU and pre-training time per epoch of different pre-training tasks on 8 A100 GPUs with the same architecture as EVE-Base. We add vision mask tokens in the encoder during masked token modeling.
Pre-training Task | Batch size | Time
MLM | 224 | 2.14h
ITC | 152 | 3.09h
ITM | 132 | 3.26h
MIM (Token) | 80 | 6.88h
MIM (Pixel) | 64 | 7.73h

Considering efficiency, we use a soft router with top-k = 2 in the deep layers and a hard router in the shallow layers. An MoE module equipped with a hard router has the same number of experts as the number of modalities, and the hard router directly selects the corresponding expert based on the modality of each token.

Pre-training Task: Masked Signal Modeling

Previous multimodal models (Li et al. 2021; Radford et al. 2021; Bao et al. 2022a; Li et al. 2019; Zhao et al. 2023a) typically involve complex pre-training tasks like Image-Text Contrastive Learning (ITC) (Radford et al. 2021), Image-Text Matching (ITM) (Li et al. 2019), and Masked Representation Modeling (MRM) (Zhao et al. 2023a).
These methods have shown good performance, but pre-training still requires significant computational resources, and is challenging to scale up. Table 1 shows the efficiency comparison between different pre-training tasks, which indicates a significant difference in time consumption and batch size. Compared to pre-training without ITC and ITM, including them requires four times more computational resources to achieve a similar speed. Moreover, ITC and ITM tasks are similar to other contrastive learning-based methods that typically require a larger batch size to achieve better performance. Incorporating additional pre-training tasks can significantly decrease training speed, increase training difficulty, and have an impact on the scalability of the model. Thus, we pre-train our model with only one unified masked signal modeling objective on image-text pairs to reconstruct masked signals by visible signals as shown in Figure 2. Specifically, masked signal modeling combines masked image modeling and masked language modeling, and only utilizes the raw signals from image-text pairs themselves without relying on any additional techniques. We use masked image and complete text in masked image modeling, while complete image and masked text in masked language modeling. Despite its simplicity, our approach achieves competitive performance compared to previous methods and can be easily scaled up. In this section, we use h(\u00b7) and \u03b8(\u00b7) to denote the encoder and the decoder. \u02c6 I and \u02c6 T are represented for masked image and masked text. D indicates the dataset. Masked Language Modeling (MLM) Following BERT (Devlin et al. 2019), we randomly mask some of the text tokens and predict them based on the information provided by the image and corrupted text. The Masked Language Modeling (MLM) objective can be formulated as follows: Lmlm = E(I,T )\u223cD\u2113mlm \u0010 \u03b8t \u0010 h(I, \u02c6 T) \u0011 , T \u0011 (8) \u2113mlm computes the cross-entropy loss between the prediction probability Pmlm, obtained from the text decoder gt, and the ground truth on each masked token. We use a twolayer MLP with a softmax layer as the text decoder. Masked Image Modeling (MIM) Previous methods (Zhao et al. 2023a; Zhang et al. 2023; Wang et al. 2023) have typically employed semantically rich visual features obtained from the model itself or discrete visual tokens obtained from visual tokenizers as the targets for MIM. However, both approaches have their drawbacks. Training visual tokenizers (Ramesh et al. 2021; Peng et al. 2022) is a challenging task as different tokenizers can have varying impacts on performance and may lead to error propagation. Meanwhile, using visual features (Zhao et al. 2023a; Zhang et al. 2023) requires either applying momentum distillation techniques or employing other loss functions and techniques to prevent the model from diverging during training. These MIM targets make the overall framework more complex. In visual self-supervised learning, some works use other information as the MIM targets, such as RGB pixels (He et al. 2022a), scene depth (Bachmann et al. 2022), HOG (Wei et al. 2022a), etc. However, using targets such as scene depth and HOG requires additional techniques, which increases the complexity of the training process. In order to maintain simplicity and effectiveness, we choose to utilize the image pixels themselves as the reconstruction target. Following MAE (He et al. 
2022a), we adopt an asymmetric design for MIM, where only observed image patches and all text tokens are fed into the encoder. A lightweight decoder is used to reconstruct raw pixels on masked positions from partial image representation and masked tokens, as shown in Figure 2. We use multiple Transformer blocks with narrower hidden widths as the decoder. The MIM objective can be formulated as: Lmim = E(I,T )\u223cD\u2113mim \u0000\u03b8i \u0000h(\u02c6 I, T)\u0001 , I\u0001 (9) \u2113mim calculates the mean square error between the raw pixels and the reconstructed result generated by the image decoder. We compute the loss on masked image patches. The overall objective of masked signal modeling is: L = Lmlm + Lmim (10) Experiments Pre-training Datasets Following Previous methods, we pre-train EVE on four widely used public datasets: MSCOCO Captions (Lin et al. 2014), Visual Genome (Krishna et al. 2017), SBU Captions (Ordonez, Kulkarni, and Berg 2011) and Conceptual Captions (Sharma et al. 2018). There are about 4M images and 10M image-text pairs in all datasets. Since some downstream tasks are based on COCO, we exclude all images in the test sets of downstream tasks from the pre-training data. We also pre-train EVE-Large on a larger dataset with 21M image-text pairs by adding CC12M (Changpinyo et al. 2021). Implementation Details EVE-Base has 12 Transformer blocks and EVE-Large has 24 Transformer blocks. We employ a soft router with 32 experts in EVE-Base on top-2 blocks, EVE-Large on top3 blocks, and a hard router on the other blocks. We pretrain EVE-Base for 480k steps with a batch size of 2048 and EVE-Large with the same batch size for 280k steps. We use AdamW (Loshchilov and Hutter 2019) optimizer. The peak learning rate is 5e-4 for EVE-Base and 2e-4 for EVE-Large. During pre-training, the image resolution is 224 \u00d7 224. We use random resized cropping and horizontal flipping for data augmentation. We mask 75% of image in MIM and 50% of text in MLM. EVE is initialized with BEiTv2. More details are provided in Appendix. Vision-Language Downstream Tasks We evaluate our pre-trained model on three common VisionLanguage Tasks. More implementation details and comparison on inference speed are provided in Appendix. Visual Question Answering (VQA) VQA requires the model to predict an answer based on the given image and question. We use VQA2.0 dataset (Goyal et al. 2017) to evaluate our model. Following previous work (Bao et al. 2022a), we view the task as a classification task. Natural Language for Visual Reasoning (NLVR2) Given a sentence and two images, NLVR2 asks the model to judge whether the sentence accurately describes the relationship between the two images. We evaluate our model on NLVR2 dataset (Suhr et al. 2019). Following Chen et al. (2020), we convert the triplet input into two image-text pairs with the same text description and different images. Image-Text Retrieval Retrieval task contains two subtasks: Image-to-Text Retrieval (TR) and Text-to-Image Retrieval (IR). We evaluate the model on widely used Flickr30K (Plummer et al. 2015) and MSCOCO (Lin et al. 2014) benchmarks following Karpathy split (Karpathy and Fei-Fei 2015). Following Li et al. (2021), we apply ITC and ITM losses in the fine-tuning stage and we use rerank strategy during inference. Model #Images VQA NLVR2 COCO Flickr30K test-dev test-std dev test-P TR@1 IR@1 TR@1 IR@1 ALBEF (Li et al. 2021) 4M 74.54 74.70 80.24 80.50 73.1 56.8 94.3 82.8 Triple (Yang et al. 
2022) 4M 74.90 74.92 80.54 81.33 75.6 59.0 94.9 84.0 Codebook (Duan et al. 2022) 4M 74.86 74.97 80.50 80.84 75.3 58.7 95.1 83.3 METER (Dou et al. 2022a) 4M 77.68 77.64 82.33 83.05 76.2 57.1 94.3 82.2 MAMO (Zhao et al. 2023a) 4M 76.12 76.20 81.86 81.53 77.1 60.3 95.6 85.4 VLMO (Bao et al. 2022a) 4M 76.64 76.89 82.77 83.34 74.8 57.2 92.3 79.3 VL-BEiT (Bao et al. 2022b) 4M 77.53 77.75 81.93 82.66 79.5 61.5 95.8 83.9 VLMAE (He et al. 2022b) 4M 75.30 75.40 80.50 81.20 77.3 59.6 95.2 83.6 MaskVLM (Kwon et al. 2023) 4M 75.45 75.40 81.58 81.98 76.3 60.1 95.6 84.5 VLC-Base (Li et al. 2021) 5.6M 74.02 74.00 77.70 79.04 72.4 50.7 89.2 71.3 DAVINCI (Diao et al. 2023) 631.8M 76.32 76.44 80.03 80.25 SimVLM-Base (Wang et al. 2022b) 1.8B 77.87 78.14 81.72 81.77 BEiT3-Base (Wang et al. 2023) 3.1B 77.65 83.60 84.40 79.1 61.4 96.3 86.2 EVE-Base (Ours) 4M 78.00 78.02 83.34 83.93 79.6 62.0 95.6 84.1 Table 2: Comparison with state-of-the-art base-size models on VQA, NLVR2, MSCOCO and Flickr30K. Gray lines indicate the model pre-trained with much more data (more than 400M). MIM Target NLVR2 Flickr30K VQA dev test-P TR IR BEiTv2 Token 78.0 78.5 92.6 78.3 76.6 DALL-E Token \u00d7 \u00d7 92.4 77.4 75.8 Pixel (Ours) 79.7 80.1 93.9 80.7 77.3 Table 3: Ablation study on MIM target. \u00d7 denotes divergence during fine-tuning. Results on Downstream Tasks We present the results of VQA, NLVR2, COCO, and Flickr30K with state-of-the-art base models in Table 2 and large models in Table 4. We report the accuracy for VQA and NLVR2, top-1 recall for TR and IR. Results on Vision-Language Understanding EVE-Base outperforms all previous methods on Understanding tasks and even marginally outperforms BEiT3-Base (Wang et al. 2023) pre-trained with 3.1B data on VQA. EVE-Base outperforms VLMO (Bao et al. 2022a), which also employs a unified architecture with more pre-training objectives by 1.77% on VQA test-dev and 0.70% on NLVR2 test-P. EVELarge4M shows similar performance to SimVLM-Large (Wang et al. 2022b), whereas EVE-Large16M surpasses SimVLM-Huge which is larger and pre-trained on much more data. Results on Image-Text Retrieval EVE-Base achieves competitive results on Flickr and outperforms the previous state-of-the-art methods on COCO. Compared to VLMO, EVE-Base achieves improvements of 6.42% on COCO text retrieval R@1 and 8.39% on COCO image retrieval R@1. In addition, EVE-Large demonstrates better performance on both COCO and Flickr30K to other Large or even Huge models with very limited data. Notably, Image-Text Contrastive Learning and Image-Text Matching are not involved in the pre-training of EVE. 0 30 50 75 90 100 Vision Maksing Ratio 76.8 77.1 77.4 77.7 93.3 93.6 93.9 94.2 Flickr VQA 0 25 40 50 75 90 100 T ext Maksing Ratio 76.0 76.4 76.8 77.2 77.6 78.0 92.4 92.8 93.2 93.6 94.0 Flickr VQA Figure 4: Ablation study on masking ratio. Left and right yaxis denote VQA accuracy and Flickr mean recall. Ablation Studies For all ablation studies, we pre-train the model for 25 epochs with a similar architecture to EVE-Base and report accuracy on NLVR2, VQA dev set, and top-1 recall on Flickr30K. We use the soft router with top-k = 2 by default. We present some more ablation studies in Appendix. MIM Target We compare different MIM targets in Table 3, including image token and pixel. We use the tokenizer from BEiT v2 (Peng et al. 2022) and DALL-E (Ramesh et al. 2021). It is observed that reconstructing pixels is better than reconstructing image tokens in all tasks. 
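To make the pixel-target objective just compared concrete, the sketch below assembles the two masked-signal losses described earlier (Eqs. 8-10): cross-entropy on the masked text tokens given the full image, and mean-squared error on the raw pixels of masked patches given the full text. The encoder, text_decoder, and pixel_decoder modules and the masking conventions are illustrative placeholders under assumed interfaces, not the released EVE implementation.

```python
import torch.nn.functional as F

def masked_signal_modeling_loss(encoder, text_decoder, pixel_decoder,
                                image_patches, text_ids, patch_mask, text_mask):
    """One step of masked signal modeling (a sketch; not the released EVE code).

    image_patches: (B, N_img, D_patch) raw pixel patches (the MIM targets)
    text_ids:      (B, N_txt) token ids
    patch_mask:    (B, N_img) bool, True = patch is masked (75% in the paper)
    text_mask:     (B, N_txt) bool, True = token is masked (50% in the paper)
    """
    n_txt = text_ids.size(1)

    # MLM (Eq. 8): complete image + masked text -> predict the masked tokens.
    hidden = encoder(image_patches, text_ids, text_mask=text_mask)   # (B, N_img + N_txt, D)
    text_logits = text_decoder(hidden[:, -n_txt:])                   # (B, N_txt, vocab)
    l_mlm = F.cross_entropy(text_logits[text_mask], text_ids[text_mask])

    # MIM (Eq. 9): masked image + complete text; only visible patches reach the
    # encoder (asymmetric, MAE-style), and a lightweight decoder predicts raw pixels.
    visible = encoder(image_patches, text_ids, patch_mask=patch_mask)
    pixel_pred = pixel_decoder(visible, patch_mask)                  # (B, N_img, D_patch)
    l_mim = F.mse_loss(pixel_pred[patch_mask], image_patches[patch_mask])

    # Overall objective (Eq. 10): L = L_mlm + L_mim.
    return l_mlm + l_mim
```

In this reading, the MIM branch feeds only the visible patches (plus all text tokens) to the encoder, matching the asymmetric design described above, and the loss is computed only on masked positions.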
Using a more complex MIM target does not achieve the expected effect. Masking Ratio In Figure 4, we investigate the impact of different masking ratios on both vision and language. Results indicate that a higher vision masking ratio leads to improved performance. We hypothesize that the raw signals are highly redundant for image, and a higher masking ratio is needed to facilitate representation learning. The noteworthy difference from previous work (Zhao et al. 2023a) is that we achieve better performance at a higher text masking ratio. Our interpretation is that with a more profound integration of vision and language, the model can more easily predict masked text tokens with the aid of vision. Model #Images VQA NLVR2 COCO Flickr30K test-dev test-std dev test-P TR@1 IR@1 TR@1 IR@1 VinVL-Large (Zhang et al. 2021) 8.9M 76.52 76.60 82.67 83.98 75.4 58.8 BLIP-CapFiltL (Li et al. 2022) 129M 78.25 78.32 82.15 82.24 81.2 64.1 97.2 87.5 BLIP-Large (Li et al. 2022) 129M 82.4 65.1 97.4 87.6 Uni-PerceiverMoE-L (Zhu et al. 2022) 44.1M 74.7 58.3 94.1 83.7 FILIP-Large (Yao et al. 2022) 340M 78.9 61.2 96.6 87.1 Prismer-Large (Liu et al. 2023) 12.7M 78.4 78.5 GIT (Wang et al. 2022a) 800M 75.5 ALIGN-Large (Jia et al. 2021) 1.8B 77.0 59.9 95.3 84.9 SimVLM-Large (Wang et al. 2022b) 1.8B 79.32 79.56 84.13 84.84 SimVLM-Huge (Wang et al. 2022b) 1.8B 80.03 80.34 84.53 85.15 Florence-Huge (Yuan et al. 2021) 900M 80.16 80.36 81.8 63.2 97.2 87.9 EVE-Large (Ours) 4M 79.25 79.20 84.03 84.69 82.5 65.2 96.3 86.3 EVE-Large (Ours) 16M 80.17 80.18 85.63 86.22 83.5 66.7 98.0 87.9 Table 4: Comparison with state-of-the-art large-size models on VQA, NLVR2, MSCOCO and Flickr30K. Gray lines indicate the model pre-trained with much more data (more than 400M). 4 8 16 32 Number of Experts 76.0 76.2 76.4 76.6 76.8 77.0 77.2 VQA Accuracy T opK=2 T opK=1 4 8 16 32 Number of Experts 92.8 93.2 93.6 94.0 Flickr mean recall T opK=2 T opK=1 Figure 5: Ablation study on the number of experts and top-k design. We use soft router in [8, 10, 12] Transformer blocks. Tasks NLVR2 Flickr30K VQA MIM MLM dev test-P TR IR 57.2 57.4 30.4 22.9 60.9 78.8 79.3 92.2 79.2 77.0 \u2020 \u2020 75.4 75.7 88.6 74.2 74.6 79.7 80.1 93.9 80.7 77.3 Table 5: Ablation study on MIM and MLM. \u2020 denotes the model is pre-trained by MIM and MLM simultaneously with masked image and text inputs. Masking ratio is set to 50% for both image and text in \u2020, but 75% for image in others. Number of Experts and Top-K The number of experts and the selection of top-k are crucial aspects of MoE design, as they determine the model\u2019s parameters, computational complexity, and performance. Figure 5 clearly demonstrates that performance deteriorates as the number of selected experts decreases from 2 to 1. When k = 1, increasing the number of experts can actually lead to a decrease in performance, which is more evident in retrieval tasks. When k = 2, increasing the number of experts leads to corresponding improvements in the performance of both VQA and retrieval tasks, with a more significant improvement observed in the retrieval task. Pre-training Tasks Flickr30K VQA MIM MLM ITC ITM TR IR 94.0 80.0 76.8 94.0 80.7 77.0 94.2 80.8 77.1 94.4 81.2 77.4 Table 6: Ablation study on more pre-training tasks. All models are pre-trained with the same pre-training GPU hours. Pre-training Tasks We explore the use of different pretraining tasks for masked signal modeling in Table 5. 
Experiments reveal that MLM with a high masking ratio is sufficient for learning the interaction between vision and language. The addition of MIM further improves the results by reducing bias, as observed in (Kwon et al. 2023). Pretraining with MIM alone results in a minimal fusion between vision and language. We hypothesize that text descriptions are typically coarse-grained and may not offer significant assistance in fine-grained vision reconstruction. Simultaneously masking both modalities and performing MIM and MLM is not recommended. This task reduces the amount of vision and language information available, which in turn increases the difficulty of MLM and MIM, resulting in performance decline. We further explore more pre-training tasks under the same pre-training GPU hours in Table 6. Pre-training only on MIM and MLM achieves better results in both retrieval tasks and understanding tasks, thereby demonstrating the efficiency of Masked Signal Modeling. Performance on NLVR task is provided in Appendix. Deep FFN We compare different designs of FFN in the deep layers in Table 7. Modality-shared FFN performs better than modality-specific MoE in the deep layers, as deep features require more alignment between modalities. Using a soft router can align different modalities while obtaining more modality-specific information, thereby further enhancing performance compared to deeper architecture. When we set Laux = 0, there is a noticeable decline in the model\u2019s Deep FFN NLVR2 Flickr30K VQA dev test-P TR IR Shared FFN 79.6 80.1 93.5 80.1 77.0 Shared FFN\u2020 80.1 80.2 93.9 80.6 77.1 Hard Router 79.8 80.1 93.2 79.3 77.0 Soft Router 80.3 80.7 94.4 81.2 77.4 Soft Router\u2021 79.2 80.0 93.6 79.7 77.1 Table 7: Ablation study on deep (top-2 layers) FFN design. Shared FFN indicates different modalities use the same FFN. We additionally add one more Transformer block to investigate the impact of parameters per token for \u2020. We set Laux = 0 for \u2021. 0 5 10 15 20 25 30 0.00 0.01 0.02 0.03 0.04 0.05 Vision T oken T ext T oken Figure 6: Frequency distribution of different modal tokens routed to specific experts in the last layer of EVE-Base. performance across various tasks. Figure 6 illustrates the frequency distribution of tokens from various modalities routed to specific experts in the last layer of EVE-Base during inference in retrieval tasks. Most experts evenly process both modalities, yet some are specialized in only images or text (e.g., experts 1 and 24). This reveals a modality gap within the model\u2019s deeper layers. Shared FFN fails to address this issue, whereas the application of MoE can help mitigate it by routing to experts that specialize in processing vision or language tokens, thereby improving overall performance. Modality Routing We compare the performance of the model whether use modality routing in the soft router or not in Table 8, and the results show that our proposed modality routing can help the router to distinguish the inputs of different modalities and thus achieve better performance. Visualization We use Grad-CAM (Selvaraju et al. 2017) heatmap to visualize the self-attention maps of EVE in masked signal modeling and VQA Task. Results are provided in Appendix. Conclusion In this paper, we present a new multimodal foundation model EVE only pre-trained by Maksed Signal Modeling with Modality-Aware MoE which is flexible and capable of encoding different modalities in a unified manner. 
We accelerate pre-training speed 3.5x faster than pre-training with ITC and ITM. Additionally, it is easy to scale up with a larger model or more pre-training data. Extensive experiments demonstrate that EVE outperforms existing methods in various Vision Language downstream tasks. Modality Routing NLVR2 Flickr30K VQA dev test-P TR IR EVE-Base 80.3 80.7 94.4 81.2 77.4 w/o MR 79.7 80.0 93.7 80.8 77.3 Table 8: Ablation study on modality routing technique. Acknowledgements This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61876224." + }, + { + "url": "http://arxiv.org/abs/2308.11793v1", + "title": "Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts", + "abstract": "Cross-scene generalizable NeRF models, which can directly synthesize novel\nviews of unseen scenes, have become a new spotlight of the NeRF field. Several\nexisting attempts rely on increasingly end-to-end \"neuralized\" architectures,\ni.e., replacing scene representation and/or rendering modules with performant\nneural networks such as transformers, and turning novel view synthesis into a\nfeed-forward inference pipeline. While those feedforward \"neuralized\"\narchitectures still do not fit diverse scenes well out of the box, we propose\nto bridge them with the powerful Mixture-of-Experts (MoE) idea from large\nlanguage models (LLMs), which has demonstrated superior generalization ability\nby balancing between larger overall model capacity and flexible per-instance\nspecialization. Starting from a recent generalizable NeRF architecture called\nGNT, we first demonstrate that MoE can be neatly plugged in to enhance the\nmodel. We further customize a shared permanent expert and a geometry-aware\nconsistency loss to enforce cross-scene consistency and spatial smoothness\nrespectively, which are essential for generalizable view synthesis. Our\nproposed model, dubbed GNT with Mixture-of-View-Experts (GNT-MOVE), has\nexperimentally shown state-of-the-art results when transferring to unseen\nscenes, indicating remarkably better cross-scene generalization in both\nzero-shot and few-shot settings. Our codes are available at\nhttps://github.com/VITA-Group/GNT-MOVE.", + "authors": "Wenyan Cong, Hanxue Liang, Peihao Wang, Zhiwen Fan, Tianlong Chen, Mukund Varma, Yi Wang, Zhangyang Wang", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Mixture AND of AND Experts", + "gt": "Given several images from different viewpoints, Neu- ral Radiance Field (NeRF) has achieved remarkable suc- cess on synthesizing novel views. Most existing methods [34, 31, 41, 61, 14, 51, 3, 64, 65, 22] focus on overfit- ting one single scene by reconstructing its 3D radiance field *Equal contribution. in a \u201cbackward\u201d manner. Though capable of generating realistic and consistent novel views, the need for retrain- ing on each new scene limits their practical applications. Recently, generalizable NeRF has settled a new trend: in place of the costly per-scene fitting, several pioneer works [70, 58, 69, 53, 49] attempt to synthesize novel views of un- seen scenes in a \u201cfeedforward\u201d fashion on the fly. Those models are first pre-trained by learning how to represent scenes and render novel views from captured images across different scenes, achieving high-quality \u201czero-shot\u201d infer- ence results on new scenes. 
Among them, Generalizable NeRF Transformer (GNT) [53] stands out by replacing the explicit scene modeling and rendering function via unified, data-driven, and scalable transformers, and automatically inducing multi-view consistent geometries and renderings via large-scale novel view synthesis pre-training. However, those cross-scene NeRF models face the fun- damental dilemma between \u201cgenerality\u201d and \u201cspecializa- tion\u201d. On the one hand, they need to broadly cover both diverse scene representations and/or rendering mechanisms due to different scene properties (e.g., color, materials) \u2013 hence larger overall model size is needed to guarantee suffi- cient expressiveness. On the other hand, since a single scene usually consists of specialized self-similar appearance pat- terns, those models must also be capable of per-scene spe- cialization to model the scene closely. Existing generaliz- able models still do not achieve a satisfactory balance be- tween both \u201cgenerality\u201d and \u201cspecialization\u201d, as most of them [53, 49] do not fit diverse scenes well out of box, and some [58, 69] will need extra per-scene optimization step. To fill the aforementioned gap, we propose to introduce and customize the powerful Mixture-of-Experts (MoE) idea [47] into GNT framework, which is composed of a view transformer that aggregates multi-view image features and a ray transformer that decodes the point feature to synthe- size novel views. The inspiration is drawn from Large Lan- guage Models (LLMs) [27, 12], where MoE has become 1 arXiv:2308.11793v1 [cs.CV] 22 Aug 2023 the key knob to improve the generalization of these models, scaling up the total model size without exploding the per- inference cost, by encouraging different submodels (com- bination of activated experts) to be sparsely activated for different inputs and hence become \u201cspecialized\u201d. Specifically, to balance between cross-scene \u201cgeneral- ization\u201d and per-scene \u201cspecialization\u201d, we bake MoE into GNT\u2019s view transformer1, leading to a new GNT with Mixture-of-View-Experts (GNT-MOVE). However, as we observed from experiments, naively plugging MoE into NeRF fails to well balance between generality and special- ization, due to their intension with generalizable NeRF\u2019s cross-scene consistency and spatial smoothness priors: \u2022 Cross-scene consistency: similar appearance patterns or similar materials, from different scenes, should be treated consistently by choosing similar experts. \u2022 Spatial smoothness: nearby views in the same scene should change continuously & smoothly, hereby mak- ing similar or smoothly transiting expert selection. Those two \u201cpriors\u201d are owing to the natural image ren- dering and multi-view geometry constraints. Yet, enforc- ing them risks causing the notorious representation collapse of MoEs [76], i.e., differently activated submodels may naively learn the same or similar functions and be unable to capture diverse specialized features. Such representational collapse has been addressed a lot in the general MoE litera- ture [77, 28, 44]. But it remains elusive whether those solu- tions will be at odds with the \u201cconsistency/smoothness\u201d: a new challenge we must pay attention to. In order to mitigate such gaps, we investigate two cus- tomized improvements of MoE for NeRF. Firstly, we aug- ment the MoE layer with a shared permanent expert, that will be selected in all cases. 
This shared expert enforces the commodity across scenes as an architectural regular- ization, and boosts cross-scene consistency. Secondly, a spatial smoothness objective is introduced for geometric- aware continuity, by encouraging two spatially close points to choose similar experts, and using the geometric distance between sampled points to re-weight their expert selections. We empirically find the two consistency regularizations to work well with the typical expert diversity regularizer in MoEs, together ensuring effectively large model capacity as well as meeting the consistency/smoothness demands. We have conducted comprehensive experiments on complex scene benchmarks. Remarkably, when trained on multiple scenes, GNT-MOVE attains state-of-the-art performance in two aspects: (1) often notably better zero-shot generaliza- tion to unseen scenes; and (2) consistently stronger perfor- mance on few-shot generalization to unseen scenes. 1In this paper, we mainly focus on the view transformer based on the hypothesis that the modular design of MoE could be naturally beneficial to multi-view feature aggregation. Introducing MoE into the ray transformer may be also promising and we leave it as future work. Our main contributions can be summarized as follows: \u2022 We present an LLM-inspired NeRF framework, GNT- MOVE, which significantly pushes the frontier of gen- eralizable novel view synthesis on complex scenes by introducing Mixture-of-Experts (MoE) transformers. \u2022 To tailor MoE for generalizable NeRF, we introduce a shared permanent expert for cross-scene rendering consistency, and a geometry-aware spatial consistency objective for cross-view spatial smoothness. \u2022 Experiments on complex scene benchmarks validate the effectiveness of GNT-MOVE on cross-scene gen- eralization with both zero-shot and few-shot settings.", + "main_content": "NeRF and Its Generalization. Novel View Synthesis (NVS) aims to generate unseen views given a set of posed images. Recently, Neural Radiance Field (i.e., NeRF [34]) has achieved remarkable performance on novel view synthesis by volume rendering on a radiance field. Several followups extend NeRF by proposing new parameterizations of rays [3, 4] to improve rendering quality, using explicit data structures or distillation [15, 31, 52, 35, 56, 17, 19] to improve efficiency, or adopting spatial-temporal modeling [37, 40, 16, 29, 63] to extend it to dynamic scenarios. However, the original NeRF needs to retrain on each new scene, thus limiting its practical applications. To tackle the cross-scene generalization, one line of works [55, 21, 70] incorporate a convolutional encoder and use the same MLP conditioned on different image features to model different objects. More recently, another line of works [53, 50, 26, 38, 74, 58] adopt transformer-based network with epipolar constraints to synthesize novel views of unseen scenes in a \u201dfeedforward\u201d fashion on the fly. Our method is also based on the transformer to render novel scenes in a feedforward fashion. The difference is that we customized the powerful MoE ideas into our framework to balance between crossscene generalization and per-scene specialization, thus capable of modeling diverse complex scenes and rendering more realistic results, in fewor zero-shot. Mixture-of-Experts (MoE). MoEs [20, 23, 6, 71, 44, 11, 7] perform input-dependent computations with a combinaMixture-of-Experts (MoE). 
MoEs [20, 23, 6, 71, 44, 11, 7] perform input-dependent computations with a combination of sub-models (a.k.a. experts) according to certain learned or ad-hoc routing policies [9, 44]. Various successful cases of MoE have been shown in a wide range of applications. Recent advances [47, 27, 12, 48] in the natural language processing field propose sparse-gated MoEs to scale up LLM capacity without sacrificing per-inference cost and encourage different modules with distinct functionalities. This helps to unleash the massive potential for compositional unseen generalization [68, 32, 67] besides excellent accuracy-efficiency trade-offs. MoE also gains 2 Ray Transformer Position Embedding Target View 2 Source Views 1\u00a0 Source Views 2 Photometric Loss\uff1a Spatial Consistency Loss\uff1a Target View 1 Unet Norm Norm attention Norm attention MoE Norm MLP View Transformer View Transformer Cast\u00a0Ray 1 Cast Ray 2 router permanent expert router expert 1 expert 4 expert 2 expert 3 Figure 1: Overview of our GNT-MOVE. Left sub-figure: for each ray in the target view, sampled points will aggregate multi-view features from source views by passing through the view transformer. Right sub-figure: in view transformer, we embed the MoE layer in the transformer blocks. Point token will be processed by both router-selected experts and our proposed permanent expert to enforce cross-scene consistency. Note that we use 4 MoE embedded transformer blocks, and 4 experts per MoE layer, leading to \u00004 2 \u00014 = 1,296 total expert combinations to provide sufficiently large and diverse coverage. popularity in computer vision [1, 11, 39], although most works [10, 2, 18, 59, 66] only focus on classification tasks. A few works have explored the sparsely activated submodels idea implicitly in NeRF. Kilo-NeRF [41] introduces thousands of tiny MLPs to divide and conquer the entire scene modeling. Block-NeRF [54] enables NeRF to represent a street-scale scene by dividing large environments into individually trained NeRFs. NID [57] improves both data and training efficiency of INR by assembling a group of coordinate-based sub-networks. NeurMiPs [30] leverages a collection of local planar experts in 3D space to boost the reconstruction quality. Different from all previous arts trying to improve per-scene rendering or fitting, we make the first attempt to customize MoE for generalizable NeRF and improve its performance on rendering novel unseen scenes. 3. Preliminary GNT Generalizable NeRF Transformer (GNT) [53] is a pure, unified transformer-based architecture that efficiently reconstructs Neural Radiance Fields (NeRFs) on the fly from source views. It is composed of two transformerbased stages. In the first stage, the view transformer predicts coordinate-aligned features for each point by aggregating information from epipolar lines of its neighboring views. In the second stage, the ray transformer composes pointwise features along the ray to compute the ray color. More precisely, given N source images {Ii \u2208RH\u00d7W \u00d73}N i=1, for each sampled point x \u2208R3 on a ray emitted from the target view, the view transformer is formulated as: F(x, \u03b8) = V-Trans(F(\u03a01(x)), ..., F(\u03a0N(x))), (1) where \u03a0i(x) is to project 3D point x onto the i-th image plane Ii, and F is a small U-Net [45] based CNN that interpolates features at the projected image point. The view transformer is adopted to combine all the extracted features into a coordinate-aligned feature volume. 
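As a rough illustration of Eq. (1), the sketch below projects each 3D sample onto the N source views, bilinearly samples the U-Net feature maps at the projected locations, and attends over the per-view features to produce a coordinate-aligned token. The projection helper (project_fn), the learnable query, and the module layout are assumptions made for illustration, not GNT's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAggregation(nn.Module):
    """Sketch of the view transformer's multi-view aggregation in Eq. (1)."""

    def __init__(self, feat_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.query = nn.Parameter(torch.zeros(1, 1, feat_dim))   # learnable per-point query

    def forward(self, points, src_feats, project_fn):
        """
        points:     (P, 3) 3D samples along the target rays
        src_feats:  list of N U-Net feature maps F(I_i), each (C, H, W)
        project_fn: hypothetical helper mapping (points, view index) -> normalized
                    pixel coordinates (P, 2) in [-1, 1], i.e. Pi_i(x)
        returns:    (P, C) coordinate-aligned features F(x, theta)
        """
        per_view = []
        for i, fmap in enumerate(src_feats):
            uv = project_fn(points, i).view(1, -1, 1, 2)          # project onto view i
            sampled = F.grid_sample(fmap.unsqueeze(0), uv, align_corners=True)  # (1, C, P, 1)
            per_view.append(sampled[0, :, :, 0].t())              # (P, C)
        tokens = torch.stack(per_view, dim=1)                     # (P, N, C)
        query = self.query.expand(tokens.size(0), -1, -1)
        out, _ = self.attn(query, tokens, tokens)                 # attend over the N views
        return out[:, 0]
```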
These multi-view aggregated features are then fed into the ray transformer. The ray transformer then performs mean pooling over the predicted tokens and map them to RGB via an MLP to obtain the rendered ray color: C(r) = MLP \u25e6R-Trans(F(x1, \u03b8), ..., F(xM, \u03b8)). (2) {x1, ..., xM} are 3D points sampled along the same ray r. In this work, we choose GNT as the backbone due to its outstanding performance. However, our methodology shall be general to other transformer-based NeRFs [50, 26, 38] MoE A Mixture of Experts (MoE) layer typically contains a group of E experts f1, f2, \u00b7 \u00b7 \u00b7 , fE and a router R whose output is an E-dimensional vector. The expert networks are in the form of a multi-layer perception [12, 43] in ViTs. The router R plays the role of expert selection, and we adopt a representative router called top-K gating [47]. With input token x, the resultant output y of MoE layers can be formulated as the summation of the selected top K experts from E expert candidates using a router: y = E X e=1 R(x)e \u00b7 fe(x), R(x) = softmax(TopK(G(x), K)), TopK(v, K)i = \u001a vi if vi is in the top K elements of v 0 otherwise (3) where G represents the learnable network within the router. 3 Figure 2: Cross-scene inconsistency. For similar colors or patterns from different scenes (left: green leaves in Flower and Orchids, right: white background in Drums and Materials), the router selects different experts (visualized with different colors). A permanent expert enforces commodity across scenes to enhance cross-scene consistency. 4. Method Overview. We scale up GNT model with MoE layer in this section. The main pipeline is illustrated in Figure 1. Our design principle is that we only make necessary and minimal modifications to the vanilla GNT to preserve its standardized architecture and ease of use. 4.1. Mixture of View Experts: The Basic Pipeline It is discussed in [53] that GNT leverages the UNet to extract geometry, appearance, and local light transport information from the 2D images, and view transformer will integrate those features to estimate the point-wise rendering parameters (such as occupancy, transparency, and reflectance) on the latent space for the ray transformer. We notice that the natural shading properties are often exclusive to each other and thus sparsely activated (e.g., diffuse reflection vs. specular reflection). Also in typical rendering engines, displaying a scene usually invokes different graphical shaders to handle spatially varying materials. These observations altogether motivate us to plug MoE modules into the view transformer to specialize different components for specific rendering properties. Our pipeline could be seen in Figure 1. As shown in the right sub-figure, in the view transformer, we replace the dense MLP layer with a sparsely activated MoE layer composed of a set of half-sized MLP experts {fe}E e=1. As in Equation 3, the output of each MoE layer is the weighted summation of the outputs from the selected top K experts. Considering the view transformer with L MoE-embedded transformer blocks, we note that the number of possible expert combinations can factually reach \u0000E K \u0001L, which can provide sufficiently broad and diverse coverage. Following many MoE prior arts [77, 28, 44], we also enforce balanced and diverse expert usage to avoid representation collapse. 
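A minimal sketch of the sparsely activated layer described above, assuming the usual implementation of Eq. (3) in which non-selected logits are masked to negative infinity so that their softmax weights vanish; expert widths are halved as in the paper, but module names and tensor shapes are illustrative. The dense routing weights are returned so that regularizers such as the one introduced next can act on them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Sketch of a Top-K sparsely-gated MoE layer with half-width MLP experts."""

    def __init__(self, dim: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, num_experts)                 # G(x) in Eq. (3)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim // 2), nn.GELU(), nn.Linear(dim // 2, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):
        """x: (T, dim) point tokens; returns the MoE output and dense routing weights R(x)."""
        logits = self.router(x)                                   # (T, E)
        top_vals, top_idx = logits.topk(self.k, dim=-1)
        masked = torch.full_like(logits, float("-inf")).scatter(-1, top_idx, top_vals)
        weights = F.softmax(masked, dim=-1)                       # zero outside the top-K
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            routed = weights[:, e] > 0                            # tokens sent to expert e
            if routed.any():
                out[routed] = out[routed] + weights[routed, e:e + 1] * expert(x[routed])
        return out, weights                                       # weights feed the router regularizers
```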
Particularly, within each training batch, we sample 3D points x from ray group R, where the rays are emitted from multiple different views of the same scene, and regularize expert selection via Coefficient of Variation (CV) of the sparse routing [47]: Ldiv = CV ( E r\u223cR E x\u2208r R(x)) CV (g) = mean(g)/var(g), (4) where x is the token embedding of point x, and mean(\u00b7) and var(\u00b7) compute the sample mean and variance of the input vector respectively. The diversity regularizer (4) is a standard idea in MoE. Putting into a NeRF context, it encourages different views to fully exploit the expert space, and different experts to capture nuances of distinct views. 4.2. Fusing Cross-scene Consistency and Spatial Smoothness into MoE However, our experiments show that naively plugging MoE into NeRF cannot guarantee a good balance between cross-scene generalization and per-scene specialization. This is due the absence of cross-scene consistency and spatial smoothness, which are essential priors for generalizable NeRF. We hence introduce two levels of NeRF-specific customizations for MoE: (i) architecture level: a shared permanent expert responsible for cross-scene consistency, and (ii) objective level: a spatial consistency objective to encode geometric-aware smoothness. Permanent Shared Expert As aforementioned, for generalizable NeRF trained on complex and diverse scenes, the employed MoE should keep consistent expert selection on similar appearance patterns or similar materials from different scenes. However, this cross-scene consistency for NeRF can be affected by diversified expert usage in MoE. When we directly plug the MoE layers into GNT, we observe an obvious cross-scene inconsistency: as shown in Figure 2. For similar colors or materials from different scenes, the router selects totally different experts (e.g., leaves in the left sub-figure, white background in the right sub-figure), without considering the sensible cross-scene commodity. Therefore, to enforce said commodity across scenes and improve cross-scene consistency, we propose to modify the MoE layer from an architectural level. This is achieved through the introduction of a shared permanent expert fp responsible for distilling common knowledge across different scenes. The permanent expert has the same structure as other experts in the MoE. As shown in Figure 1, instead of being selected by the router, it is fixed and participates in the token processing by default. Formally, given an input token x to the MoE layer, the output y is computed as: y = fp(x) + E X e=1 R(x)e \u00b7 fe(x) (5) Geometry-Aware Spatial Consistency Along with the cross-scene consistency, spatial smoothness is another essential characteristic for NeRFs due to the view geometry 4 (a) IBRNet (b) GNT (c) GPNR (d) GNT-MOVE (e) Ground Truth Figure 3: Qualitative results for the unseen cross-scene rendering. In the T-Rex scenes (row 1), GNT-MOVE reconstructs the edge details of stairs more accurately. In the Materials scenes (row 2), GNT-MOVE models the complex lighting effects much clearer compared to other methods, showing its stronger generalization ability in modeling different complex scenes. Ground Truth Ours GNT Figure 4: Qualitative comparison on Shiny-6 dataset. From left to right are the ground truth image, and the zoom-in results of GNT-MOVE and GNT, respectively Ground Truth Ours GNT Ground Truth Ours GNT Figure 5: Qualitative comparison on Tanks-and-Temples dataset. 
From left to right are the ground truth image, and the zoom-in results of GNT-MOVE and GNT, respectively constraints. Seeing from different camera poses, the nearby views in the same scene should make a similar or smoothly transiting expert selection. To encourage such a multi-view consistency, we propose a spatial consistency objective that encourages two spatially close points to choose similar exGT Ours GNT GT Ours GNT Figure 6: Qualitative comparison on NMR dataset. From left to right are the ground truth image, and the zoom-in results of GNT-MOVE and GNT, respectively. perts, and we use the geometric distance between them to re-weight the expert selection. Specifically, given two spatially close 3D points xi and xj, the router R takes their token embedding xi and xj as input and maps them to expert selection scores R(xi), R(xj) \u2208RE respectively. Similar expert selection is thereby encouraged through pulling these two distributions closer. However, as we have a huge amount of sampled points from multiple views, it is computationally expensive and inefficient to calculate the pairwise distance between all 3D points. To make it easier to find pairs of close points, we first calculate the pairwise distance between rays based on their location in the image coordinate system. Then we filter out close rays whose pairwise distance is smaller than a predefined threshold \u03f5. For 3D points sampled from two close rays, we compute the Euclidean distance between all the points, denoted as di,j = \u2225xi \u2212xj\u2225. For each point xi, we select its closest points x\u2032 i with dis5 tance di,i\u2032. Therefore, we encourage the consistency of the expert selection between the closest points via a symmetric Kullback\u2013Leibler divergence loss: LKL(xi) = 1 2DKL(R(xi)\u2225R(x\u2032 i))+1 2DKL(R(x\u2032 i)\u2225R(xi)). (6) As closer points are more likely to have higher expert selection similarity, we do not treat all pairs equally. Rather we use their geometric distances to serve as a consistency confidence \u03c1i = e \u2212di,i\u2032 P (xj ,x\u2032 j ) e \u2212dj,j\u2032 . The final spatial consistency loss is hence defined as: Lsc = X (xi,x\u2032 i) \u03c1iLKL(xi). (7) Note that our spatial consistency is enforced on 3D points from multiple views. Therefore, it naturally encourages geometry-aware spatial smoothness in the same scene. 5. Experiments In this section, we conduct extensive experiments with GNT-MOVE to answer two questions: i) Does MoE help GNT scale up in scene coverage and improve generality? ii) Does GNT-MOVE meanwhile improve specialization to different scenes? We compare GNT-MOVE with state-of-theart (SOTA) methods on generalizable novel view synthesis tasks, under both zero-shot and few-shot settings (Section 5.2). We also provide careful analyses on the expert selection in GNT-MOVE to illustrate how MoE divide and conquer to render a challenging scene (Section 5.4). 5.1. Implementation Details Training / Inference Details We choose top K = 2 experts out of E = 4 expert candidates per layer. Note that we scale down the expert size by half compared to the dense MLP layer in standard ViT to make their computation FLOPs equivalent. We train GNT-MOVE end-to-end using the Adam optimizer. The threshold \u03f5 for close rays is set as 20. The loss weights \u03bbsc and \u03bbdiv are set to be 1 \u00d7 104 and 1 \u00d7 10\u22123, respectively. Please refer to our supplementary for additional training details. 
Metrics We adopt three widely-used metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM) [60], and the Learned Perceptual Image Patch Similarity (LPIPS) [73]. We report the average of each metric across multiple scenes in one dataset for crossscene generalization experiments. Following [53], we also report the geometric mean of 10\u2212P SNR/10, \u221a 1 \u2212SSIM, LPIPS, for an easier comparison [3]. 5.2. Main Experiments: Zero-Shot and Few-Shot Cross-Scene Generalization Setting To evaluate the cross-scene generalization performance, we compare our GNT-MOVE with state-of-the-art generalizable NeRF under two important settings: \u2022 Zero-shot: the pre-trained model is directly evaluated on an unseen scene for novel view synthesis. \u2022 Few-shot: the pre-trained model is first finetuned with a few observed views from the target unseen scene, and then applied to the target scene. Datasets We follow the experimental protocol in IBRNet [58] and GNT [53] and use the following training/evaluation datasets: (1) Training Datasets consist of both real and synthetic data, in consistency with GNT [53]. For synthetic data, we use object renderings of 1023 models from Google Scanned Object [8]. For real data, we make use of RealEstate10K [75], 90 scenes from the Spaces dataset [13], and 102 real scenes from handheld cellphone captures [33, 58]. (2) Testing Datasets are the common NeRF benchmarks including Local Light Field Fusion (LLFF) [33] and NeRF Synthetic dataset [34]. Note that these LLFF scenes are not included in the handheld cellphone captures in the training set. We also include three additional datasets: Shiny-6 dataset [62], Tanks-andTemples [42] and NMR [24], which contains complex optical effects, large unbounded scenes, and 360\u00b0 views of various objects from unseen categories, respectively. More dataset details can be found in the supplementary. 5.2.1 Zero-Shot Generalization For LLFF and NeRF Synthetic scenes, we compare our method with PixelNeRF [70], MVSNeRF [5], IBRNet [58], GNT [53], and GPNR [49]. As seen from Table 1a, our method achieves the best performance on both LLFF and NeRF Synthetic datasets in PSNR, LPIPS, and average evaluation metrics. Compared with GPNR, GNT-MOVE achieves a significantly better perceptual score, with up to 38% LPIPS reduction on both datasets. We also outperform GNT on PSNR with notable improvements of 0.16dB and 0.18dB on two datasets. The qualitative results on representative scenes are shown in Figure 3. One could observe that GNT-MOVE renders novel views with clearly better visual quality. It particularly better reconstructs fine details of object edges in T-Rex, and more accurately models complex specular reflection effects in Materials (even our training sets contain only limited lighting variations). We then compare on the more challenging Shiny [62], Tanks-and-Temples [42], and NMR [24] datasets. 
On the non-object-centric Shiny dataset, we observe from Table 1b that, GNT-MOVE clearly surpasses its peers of the generalizable category in all the metrics: outperforming GNT 6 Models Local Light Field Fusion (LLFF) NeRF Synthetic PSNR\u2191 SSIM\u2191 LPIPS\u2193 Avg\u2193 PSNR\u2191 SSIM\u2191 LPIPS\u2193 Avg\u2193 PixelNeRF 18.66 0.588 0.463 0.159 22.65 0.808 0.202 0.078 MVSNeRF 21.18 0.691 0.301 0.108 25.15 0.853 0.159 0.057 IBRNet 25.17 0.813 0.200 0.064 26.73 0.908 0.101 0.040 GPNR 25.72 0.880 0.175 0.055 26.48 0.944 0.091 0.036 GNT 25.86 0.867 0.116 0.047 27.29 0.937 0.056 0.029 Ours 26.02 0.869 0.108 0.043 27.47 0.940 0.056 0.029 (a) NeRF Synthetic dataset and LLFF dataset. Setting Models Shiny-6 Dataset PSNR\u2191 SSIM\u2191 LPIPS\u2193 Avg\u2193 Per-Scene Training NeRF 25.60 0.851 0.259 0.065 NeX 26.45 0.890 0.165 0.049 IBRNet 26.50 0.863 0.122 0.047 NLF 27.34 0.907 0.045 0.029 Generalizable IBRNet 23.60 0.785 0.180 0.071 GPNR 24.12 0.860 0.170 0.063 GNT 27.10 0.912 0.083 0.036 Ours 27.54 0.932 0.072 0.032 (b) Shiny dataset. Models NMR Dataset PSNR \u2191 SSIM \u2191 LPIPS \u2193 Avg \u2193 LFN 24.95 0.870 PixelNeRF 26.80 0.910 0108 0.041 SRT 27.87 0.912 0.066 0.032 GNT 32.12 0.970 0.032 0.015 Ours 33.08 0.972 0.031 0.014 (c) NMR dataset. Setting Models Truck Train M60 Playground PSNR\u2191 SSIM\u2191 LPIPS\u2193 PSNR\u2191 SSIM\u2191 LPIPS\u2193 PSNR\u2191 SSIM\u2191 LPIPS\u2193 PSNR\u2191 SSIM\u2191 LPIPS\u2193 Per-scene Training NeRF 20.85 0.747 0.513 16.64 0.635 0.651 16.86 0.702 0.602 21.55 0.765 0.529 NeRF++ 22.77 0.823 0.298 17.17 0.672 0.523 17.88 0.738 0.435 22.37 0.799 0.391 Generalizable GNT 17.39 0.561 0.429 14.09 0.420 0.552 11.29 0.419 0.605 15.36 0.417 0.558 Ours 19.71 0.628 0.379 16.27 0.499 0.466 13.56 0.495 0.527 19.10 0.501 0.507 (d) Tanks-and-Temples dataset. Table 1: Comparison of GNT-MOVE against SOTA methods for cross-scene generalization under zero-shot setting. Models Local Light Field Fusion (LLFF) NeRF Synthetic 3-shot 6-shot 10-shot 6-shot 12-shot PSNR\u2191 SSIM\u2191 LPIPS\u2193 Avg\u2193 PSNR\u2191 SSIM\u2191 LPIPS\u2193 Avg\u2193 PSNR\u2191 SSIM\u2191 LPIPS\u2193 Avg\u2193 PSNR\u2191 SSIM\u2191 LPIPS\u2193 Avg\u2193 PSNR\u2191 SSIM\u2191 LPIPS\u2193 Avg\u2193 PixelNeRF 17.54 0.543 0.502 0.181 19.00 0.721 0.496 0.148 20.01 0.755 0.333 0.123 19.13 0.783 0.250 0.112 21.90 0.849 0.173 0.075 MVSNeRF 17.05 0.486 0.480 0.189 20.50 0.594 0.384 0.130 22.54 0.673 0.309 0.099 16.74 0.781 0.263 0.138 22.06 0.844 0.185 0.076 IBRNet 16.89 0.539 0.458 0.185 20.61 0.686 0.316 0.115 23.52 0.789 0.226 0.077 18.17 0.812 0.234 0.115 24.69 0.895 0.120 0.051 GNT 19.58 0.653 0.279 0.121 22.36 0.766 0.189 0.081 24.14 0.834 0.133 0.059 22.39 0.856 0.139 0.067 25.25 0.901 0.088 0.044 Ours 19.71 0.666 0.270 0.120 22.53 0.774 0.184 0.078 24.61 0.837 0.132 0.056 22.53 0.871 0.116 0.061 25.85 0.915 0.074 0.038 Table 2: Comparison of GNT-MOVE against SOTA methods in few-shot setting on the LLFF and NeRF Synthetic datasets. by 0.44 dB PSNR, and GPNR/IBRNet by over 3 dB. Even compared to the per-scene fitting category (which puts an unfair disadvantage on us), our PSNR and SSIM still win over strong competitors such as NLF (which has particular optical modeling) and IBRNet. All those results endorse that GNT-MOVE benefits its MoE-based specialization to adapt well to challenging materials and light effects. We further compare GNT-MOVE with SRT [46], another transformer-based renderer pre-trained by novel view synthesis, on the NMR dataset [24]. 
As shown in Table 1c, GNT-MOVE remarkably outperforms SRT by 5.21 dB PSNR that is even more impressive if one considers that SRT is pre-trained with samples from NMR. In contrast, GNT-MOVE can \u201czero-shot\u201d generalize way better. It also outperforms GNT by a remarkable 1 dB PSNR. Table 1d also demonstrates the performance of GNT-MOVE on the Tanks-and-Temples dataset (the four scenes selected in NeRF++[72]). Once again, GNT-MOVE largely outperforms GNT by up to 3 dB PSNR. Those results strongly suggest that GNT-MOVE, with its higher capacity, is indeed more generalizable and robust than vanilla GNT. The qualitative rendering comparison on representative scenes from Shiny-6 dataset [62], Tanks-and-Temples dataset [42], and NMR dataset [24] could be found in Figure 4, Figure 5, and Figure 6, respectively. 5.2.2 Few-Shot Generalization Next under the few-shot setting, we compare our method with PixelNeRF [70], MVSNeRF [5], IBRNet [58], and GNT [53]. On the LLFF dataset that contains forwardfacing scenes, we finetune the pre-trained models using 3, 6, and 10 images. On NeRF Synthetic dataset that contains 360\u25e6scenes, we finetune them on 6 and 12 images, respectively. During inference, images used for finetuning are by default included as source images for novel view synthesis. In Table 2, GNT-MOVE shows a remarkably large performance gain over all the state-of-the-art methods on NeRF Synthetic dataset. Compared to GNT, our model achieves better results in all metrics, with particularly impressive perceptual score gains of 17% and 16% LPIPS on 6-shot and 12-shot, respectively. GNT-MOVE also improves over GNT by a great margin of 0.6 dB PSNR and 0.14 SSIM on 12-shot setting. Similar performance gains are also observed on the LLFF dataset: GNT-MOVE improves the state-of-the-art GNT on PSNR metric by 0.13 dB, 0.17 dB, and 0.47 dB on 3-shot, 6-shot, and 10-shot, respectively. 5.3. Spotlight Comparison: GNT v.s. GNT-MOVE Since GNT-MOVE is an extension of GNT (which is the most recent SOTA), it is naturally of interest to compare the two closely and to understand how much benefits MoE actually brings to GNT (\u201cspecialization\u201d v.s. \u201cgeneralization\u201d ), for the goal of cross-scene generalization. While most aforementioned experiments already demonstrate various solid gains, we feel it worthy of providing a focused summary below. We emphasize that GNT and GNT-MOVE are trained and evaluated in completely fair settings. \u2022 In the zero-shot setting, GNT-MOVE always outper7 GT GNT+na\u00efve MoE GNT-MOVE Ours GT Ours GNT+na\u00efve MoE GNT-MOVE Figure 7: Visualization of expert selection using different colors. Row 1: two quite different views from the Materials scene. Row 2: two slightly different views from the Orchids scene. Row 3: two different scenes, but with similar visual appearances (e.g., stairs, bones). We compare GNT with naive MoE, and our GNT-MOVE solution. forms GNT on the metric of PSNR, with moderate improvements of 0.16 dB and 0.18 dB, on the \u201cstandard\u201d LLFF and NeRF Synthetic datasets, respectively. Yet on the more challenging ones, the PSNR gain of GNTMOVE over GNT becomes larger: 0.44 dB on Shiny, 0.96 dB on NMR, and eventually an impressive 2.63 dB on Tanks-and-Temples (averaged over 4 scenes). \u2022 Same in the zero-shot setting, GNT-MOVE outperforms GNT in all cases on the metrics of LPIPS and Avg scores. It marginally lags behind GPNR on SSIM in NeRF Synthetic and LLFF, but wins on SSIM on other more challenging datasets. 
For example, the SSIM gain of GNT-MOVE over GNT is as large as 0.076 on Tanks-and-Temples (averaged over 4 scenes). \u2022 Then, in the few-shot setting, our results suggest a clean sweep for GNT-MOVE, in all shot settings, under all metrics, on both LLFF and NeRF Synthetic datasets. Generally, as the number of shots increases, the gains of GNT-MOVE over GNT seem to increase as well, ending up with 0.47 dB and 0.60 dB gaps on LLFF and NeRF synthetic, respectively. \u2022 When it comes to visual quality, GNT-MOVE is clearly superior in tackling challenging scenes with complex lighting, e.g., Ship, Materials, and Drums (please refer to the per-scene breakdown results of zero-shot generalization in the supplementary). The experiments on the Shiny dataset in Table 1b demonstrate that GNT-MOVE generalizes better than GNT in the presence of challenging refraction and reflection. \u2022 Also in Table 1d, GNT-MOVE generalizes out of the box on large-scale, unbounded 3D scenes while the vanilla GNT fails. Note that both GNT-MOVE and (a) (b) Figure 8: Expert selection histogram. (a) Similar patterns (e.g., bright) across different scenes have similar expert selections; (b) Different views from the same scene have similar and consistent layer-wise expert selections. GNT are trained only on bounded and forward-facing scenes, implying the stronger compositional generalization potential [25] achieved through MoEs. More comparisons, demonstrating that the solid gain of MoE for generalizable NeRF goes way beyond naively larger model size; yet the gain can only be unleashed with PE and SR, could be found in the supplementary. 5.4. Dive into the Expert Selection GNT-MOVE strikes a good balance between crossscene/view consistency and expert specialization: it can be demonstrated through the visualization of expert maps in Figure 7, where we compare GNT-MOVE with a baseline of GNT + naive MoE (i.e., the basic pipeline described in Sec. 4.1, without enforcing our customized consistency/smoothness). In Row 1, two different views of the Materials scene select the same set of experts for foreground material balls and the background, respectively. That is in contrast to the much more confused/\u201cmixed\u201d selection observed in GNT + naive MoE. In Row 2, one observes the same cross-view consistency, while the subtle differences between two views 8 Models Local Light Field Fusion (LLFF) NeRF Synthetic MoE PE SR PSNR\u2191 SSIM\u2191 LPIPS\u2193 Avg\u2193 PSNR\u2191 SSIM\u2191 LPIPS\u2193 Avg\u2193 GNT \u2013 \u2013 \u2013 25.86 0.867 0.116 0.047 27.29 0.937 0.056 0.029 Ours \u2713 25.46 0.856 0.128 0.051 27.15 0.934 0.057 0.031 Ours \u2713 \u2713 25.88 0.865 0.120 0.049 27.32 0.936 0.058 0.030 Ours \u2713 \u2713 25.93 0.866 0.117 0.046 27.30 0.935 0.059 0.030 Ours \u2713 \u2713 \u2713 26.02 0.869 0.108 0.043 27.47 0.940 0.056 0.029 Table 3: Ablation analyses of our two key proposals: PE indicates permanent expert and SR indicates smoothness regularizer. (e.g., occluded bud) are also modeled differently in the two corresponding expert maps, indicating good expert specialization and diversity. Row 3 indicates an example of crossscene consistency, where the same expert group is selected by GNT-MOVE for similar visual appearances (e.g., stairs, bones) across two different scenes. 
The selection of experts also properly reacts to fine edges (e.g., flower edges in row 2, handrail edge and bone edge in row 3), and is also capable of adapting to complex lighting effects, as shown in the Materials scene (row 1) and the light part of the T-Rex scene (row 3 right). Furthermore, we visualize the expert selection histogram in Figure 8. It aligns well with our observations that GNT-MOVE excels in ensuring both cross-scene consistency and cross-view spatial smoothness. In Figure 8a, by aggregating expert selections from all test frames of the {Trex, Horns} scenes, we discern that experts 1&3 are predominantly chosen for bright patterns, whereas experts 0&2 are favored for darker or dimmer regions. Concurrently, Figure 8b underscores that expert selections across varied views of the same scene exhibit layer-wise similarity and consistency. Besides, following [53], we plot the depth maps computed from the learned attention values in Figure 9. The depth maps show clear physical ground that GNT-MOVE learns the correct geometry without explicit supervision. It also confirms that our geometry-aware smoothness does not distort or oversmooth the geometry. 5.5. Ablation Studies We conduct ablation analysis on our key proposals, permanent expert and smoothness regularizer, on cross-scene generalization under zero-shot setting, and report results on the LLFF [33] and NeRF Synthetic dataset [34] in Table 3. As observed, directly plugging MoE into GNT cannot guarantee a good performance. We witness a performance drop on both datasets after adding the MoE. This is because MoE does not meet NeRF\u2019s cross-view consistency requirements and also does not learn the commodity across different scenes. Evidently, our customized design of permanent expert and smoothness regularizer both aid in improving model generalization capability. On the LLFF dataset, the smoothness regularizer brings the biggest perFigure 9: Geometry visualization. We show the depth maps from GNT-MOVE. Red indicates far and blue is near. formance gain, as cross-view consistency naturally benefits scenes with slightly disturbed views. On the NeRF Synthetic dataset with diverse complex scenes and materials, the permanent expert brings a considerable improvement as it enforces the commodity across scenes, thus contributing to the cross-scene consistency. Qualitative results in Figure 7 also illustrate their gains over the naive plug-in of MoE. 6. Conclusion In this work, we focus on generalizable novel view synthesis on complex scenes and propose a novel learningbased framework, GNT-MOVE, that significantly pushes the frontier of this problem by introducing MoE to the domain of NeRFs. In order to better tailor MoE for generalizable NeRFs, we introduce a shared permanent expert and a spatial consistency objective to enforce cross-scene consistency and geometry-aware smoothness. GNT-MOVE proves its effectiveness by achieving SOTA performance on cross-scene generalization in both zero-shot and few-shot settings, on a broad collection of datasets. Our limitation is that we primarily focus on the view transformer of GNT, while introducing MoE into the ray transformer may be further promising we regard it as future work. 
9" + }, + { + "url": "http://arxiv.org/abs/2301.06014v2", + "title": "Further Exploration of the Effects of Time-varying Covariate in Growth Mixture Models with Nonlinear Trajectories", + "abstract": "Growth mixture modeling (GMM) is an analytical tool for identifying multiple\nunobserved sub-populations of longitudinal processes. In particular, it\ndescribes change patterns within each latent sub-population and examines\nbetween-individual differences in within-individual change for each sub-group.\nOne research interest in utilizing GMMs is to explore how covariates affect\nsuch heterogeneity in change patterns. Liu and Perera (2022c) extended\nmixture-of-experts (MoE) models, which mainly focus on time-invariant\ncovariates, for allowing the covariates to account for within-group and\nbetween-group differences simultaneously and examining the heterogeneity in\nnonlinear trajectories. The present study further extends Liu and Perera\n(2022c) and examines the effects on trajectory heterogeneity of time-varying\ncovariates (TVCs). Specifically, we propose methods to decompose a TVC into a\ntrait feature (e.g., the baseline value of the TVC) and a set of state features\n(e.g., interval-specific slopes or changes). The trait features are allowed to\naccount for within-group differences in growth factors of trajectories (i.e.,\ntrait effect), and the state features are allowed to impact observed values of\na longitudinal process (i.e., state effect). We examine the proposed models\nusing a simulation study and a real-world data analysis. The simulation study\ndemonstrated that the proposed models are capable of separating trajectories\ninto several clusters and generally generating unbiased and accurate estimates\nwith target coverage probabilities. With the proposed models, we showed the\nheterogeneity in the trait and state features of reading ability across latent\nclasses of students' mathematics performance. Meanwhile, the trait and state\neffects on mathematics development of reading ability are also heterogeneous\nacross the clusters of students.", + "authors": "Jin Liu", + "published": "2023-01-15", + "updated": "2023-01-23", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME" + ], + "label": "Original Paper", + "paper_cat": "Mixture AND of AND Experts", + "gt": "Growth mixture modeling (GMM) is an analytical tool to examine heterogeneity in longitudinal processes and describe change patterns of trajectories for each identi\ufb01ed sub-group. This tool stemmed from a modeling framework named \ufb01nite mixture modeling (FMM), which was introduced to the structural equation modeling (SEM) family by Muth\u00e9n and Shedden (1999). As its name suggests, the FMM represents sample heterogeneity by allowing for a \ufb01nite number of unobserved classes and mixing these latent groups using a linear combination. When utilized to examine growth curves, the FMM is then called GMM. Similar to the FMM, the GMM is a model-based clustering method (Bouveyron et al., 2019) that enables researchers to analyze data with domain knowledge and construct models to test hypotheses. Moreover, the GMM is a probability-based clustering approach that allows each trajectory to belong to multiple classes simultaneously with different probabilities (i.e., uncertainty), based on which the algorithm decides the cluster each trajectory enters. The GMM has received considerable attention over the past 20 years. 
Many theoretical and empirical \u2217CONTACT Jin Liu Email: Veronica.Liu0206@gmail.com arXiv:2301.06014v2 [stat.ME] 23 Jan 2023 A PREPRINT - JANUARY 24, 2023 works, such as Bauer and Curran (2003); Muth\u00e9n (2004); Grimm and Ram (2009); Grimm et al. (2010); Nylund et al. (2007), have examined its advantages and disadvantages. In addition to analyzing the heterogeneity of processes, it is also of interest to assess how covariates contribute to the between-group and within-group differences in such within-individual processes when utilizing the GMM. Liu and Perera (2022c) demonstrated that two types of time-invariant covariates (TICs) could be added into the GMM: (1) TICs that suggest group components and (2) TICs that explain variability within each group. Speci\ufb01cally, Liu and Perera (2022c) proposed building up a full mixture model to simultaneously allow for these two types of TICs in a GMM. With an extensive simulation study, this work examined the model performance and how the two types of TICs affect clustering and estimating effects. Moreover, this research work pointed out that the covariates that account for within-group heterogeneity could also be time-varying and showed it with real-world data analysis. The GMM with a time-varying covariate (TVC) in Liu and Perera (2022c) allows for the assessment of the heterogeneity in the TVC and its heterogeneous effects on the longitudinal process across latent classes. However, the model has some limitations. First, the model does not allow the TVC to predict variability in the growth factors in each cluster of the longitudinal process. Second, the model only provides insights regarding how TVC observed values contribute to the heterogeneity in the longitudinal outcome; however, how a change in the TVC affects trajectory heterogeneity could be of more interest in practice. In this present work, we propose adding a decomposed TVC, based on the modeling framework proposed by Liu and Perera (2022a), into the GMM to address the above two challenges. In the following sections, we describe how to add covariates into a GMM in detail and introduce the modeling framework that helps decompose a TVC. Covaraites in Growth Mixture Models As introduced above, adding covariates into a GMM has received lots of attention among researchers who utilize the SEM. In general, a GMM allows for the inclusion of TICs and TVCs. Existing studies have examined when, where, and how to add which TICs into a mixture model. As stated earlier, there are two types of TICs in the GMM: one type of TICs indicates group components, while the other type of TICs accounts for within-group differences in trajectories. In a GMM with the \ufb01rst-type TICs, the longitudinal outcome depends on a latent mixing component variable that, in turn, depends on the TICs. The inclusion of the \ufb01rst-type TICs enables one to understand how the likelihood of a trajectory belonging to an unobserved group changes conditional on the covariates. GMMs with the \ufb01rst-type TICs are popular among researchers employing the SEM. Earlier studies have demonstrated that adding this type of TICs can be achieved in a con\ufb01rmatory approach through one-step models (Bandeen-Roche et al., 1997; Dayton and Macready, 1988; Kamakura et al., 1994; Yamaguchi, 2000), or more exploratory way through stepwise methods, such as two-step (Bakk and Kuha, 2018; Liu et al., 2021b) or three-step models (Vermunt, 2010; Bolck et al., 2004; Asparouhov and Muth\u00e9n, 2014). 
One difference between the one-step and stepwise approaches is that the one-step method generates the estimates of the measurement parameters and the coef\ufb01cients from the TICs to the latent classes simultaneously, while the stepwise method estimates them separately. Existing studies have shown that the one-step approach provides less biased and more accurate estimates, yet a \u2018trial and error\u2019 fashion to construct a whole model and explore which TICs are needed to be included in a GMM requires lots of computational resources. More importantly, adding or removing a TIC may result in different solutions to the number of latent classes obtained from the Bayesian information criterion (BIC) based enumeration process, which has been well-de\ufb01ned and documented in the SEM literature (Nylund et al., 2007; Nylund-Gibson and Masyn, 2016; Diallo et al., 2017). Some recent works have proposed remedies to address such critiques of the one-step methods. For example, Kim et al. (2016); Hsiao et al. (2020); Liu and Perera (2022c,b) have recommended constructing a one-step model in an exploratory fashion (also referred to as \u2018the adjusted one-step approach\u2019). Speci\ufb01cally, they suggested performing the enumeration processes without any TICs to have a stable solution for the number of latent classes and then constructing a whole model with the determined number of latent classes. Moreover, Liu et al. (2021b); Liu and Perera (2022c,b) have demonstrated how to leverage machine learning techniques, such as feature extraction and feature selection in the SEM framework, to include the TICs that have impacts on trajectory heterogeneity in a GMM. In particular, Liu et al. (2021b) proposed utilizing the exploratory factor analysis (EFA) (Spearman, 1904) to transform candidate TICs from a large set to a manageable set with keeping the meaningful information of the original set. Liu and Perera (2022c) recommended employing structural equation model forests (SEM forests) (Brandmaier et al., 2016) to identify the TICs that contributed the most to the trajectory heterogeneity. Liu and Perera (2022b) further explored the two approaches and highlighted the difference in the insights obtained from these methods and the interpretations of such insights. Moreover, multiple existing studies, such as Asparouhov and Muth\u00e9n (2014); Kim et al. (2016); Hsiao et al. (2020); Masyn (2017); Liu and Perera (2022c), also examined mixture models with the second-type TICs. In the GMM with the second-type TICs, the outcome variable depends on both the latent component variable and the TICs: the growth factors of the longitudinal outcome are regressed on the TICs in each latent class, and the class-speci\ufb01c submodels 2 A PREPRINT - JANUARY 24, 2023 are regressed on the latent component variable. Therefore, each submodel can be viewed as a multiple-indicator and multiple-cause (MIMIC) model (J\u00f6reskog and Goldberger, 1975; McArdle and Epstein, 1987). Each MIMIC model has two components: (1) a measurement model where exogenous variables (i.e., repeated measurements of the outcome) indicate latent variables (i.e., growth factors) and (2) a structural model in which TICs are multiple-causal predictors of the latent variables. The inclusion of the second-type TICs allows for (1) identifying the heterogeneity in the TICs and (2) evaluating the heterogenous effects on the growth factors of the TICs across the latent classes of trajectories. 
GMMs with second-type TICs have received less attention than GMMs with \ufb01rst-type TICs. However, these earlier studies have demonstrated severe consequences of ignoring or misspecifying the second-type TICs in a mixture model by performing simulation studies, especially under the scenario that these TICs contribute to cluster separation. Such severe consequences may include misclassifying trajectories and/or generating biased estimates, Liu and Perera (2022c) have demonstrated that the two types of TICs can simultaneously be included in a GMM. In addition, they showed that a covariate in a GMM is not necessarily a TIC by extending a longitudinal model proposed by Grimm (2007) and regressing the development of mathematics ability on the repeated measurements of approach-to-learning in each latent class. The model successfully identi\ufb01ed the heterogeneity in the variable approach- to-learning and the heterogeneous impacts on the mathematics development of the covariate approach-to-learning across students\u2019 latent classes. However, in the built model, the covariate approach-to-learning is not allowed to explain the variability of the growth factors of the development of mathematics ability. Moreover, the insights obtained from the model are limited to the effects of the absolute observations of the development of approach-to-learning. However, the impact of the change in the learning approach on intellectual development could be of more interest in practice. Grimm et al. (2016, Chapter 8) also stated an inherent limitation of regressing a longitudinal outcome on a TVC in the SEM framework2: the full model has many parameters as there is no restricted structure on the TVC. Then the mean vector, variance-covariance matrix, and residuals of TVC are needed to be estimated. The estimation of covariances between repeated measures of TVC is challenging under some undesirable conditions. One solution to this is to \ufb01x the covariances to zero. However, it may not be valid if the TVC is expected to be somewhat stable. This limitation still exists when extending the model to the GMM framework. For example, in Liu and Perera (2022c), the covariances between the repeated measures of approach-to-learning are assumed to be zero in all latent classes, which may not be valid according to the developmental theory. A Framework to Decompose Time-varying Covariates This section introduces a modeling framework to decompose a TVC and address the above three challenges. Liu and Perera (2022a) proposed a novel speci\ufb01cation for latent change score modeling (LCSM) framework (Zhang et al., 2012; Grimm et al., 2013b,a) to \ufb01t longitudinal data. The new speci\ufb01cation for the LCSM views the change that occurs in a time interval as the area under the curve (AUC) of the rate-of-change versus time (r \u2212t) graph. The novel speci\ufb01cation then allows for estimating the individual baseline value and interval-speci\ufb01c slopes. This feature provides a natural method to decompose a longitudinal variable into a trait feature (i.e., the baseline value) and a set of state features (i.e., interval-speci\ufb01c slopes). Note that both the baseline value and the interval-speci\ufb01c slopes are allowed to vary from individual to individual. Based on the interval-speci\ufb01c slopes, one is able to derive other types of state features, such as interval-speci\ufb01c changes and change-from-baseline values, which are also at the individual level. 
The novel speci\ufb01cation and the two possible modi\ufb01cations enable us to model a TVC with a restricted structure, which in turn, reduces the number of parameters. The new speci\ufb01cation for the LCSM is able to model longitudinal data with parametric or nonparametric functional forms (Liu and Perera, 2022a). This article only focuses on the latter one, which can also be viewed as a latent basis growth model (LBGM) in the LCSM framework. We provide its path diagram with six repeated measurements in Figure 1a. In the \ufb01gure, xj and x\u2217 j are the observed and latent true scores at t = tj, respectively. In addition, dxj+1 and \u03b3j are the slope and relative rate in the jth time interval (i.e., from t = tj to t = tj+1), respectively. Figure 1a shows that there are two growth factors, \u03b7[x] 0 and \u03b7[x] 1 , with this model speci\ufb01cation, indicating the intercept and the slope in the \ufb01rst time interval (also referred to as the shape factor3 for a LBGM), respectively. With this speci\ufb01cation, the relative rate during the \ufb01rst time interval (i.e., \u03b31) is speci\ufb01ed as 1 so that dx2 is the slope in this interval (i.e., dx2 = \u03b7[x] 1 ). Similarly, the slope of each other time interval is de\ufb01ned as the product of the shape factor and the corresponding relative rate (i.e., dxj+1 = \u03b7[x] 1 \u00d7 \u03b3j for j \u22652). As demonstrated in Figure 1a, the latent true score 2According to Grimm (2007), a longitudinal model with a TVC could be constructed in the mixed-effects modeling framework and the SEM framework. However, the models built up in the SEM framework are able to provide more insights due to the \ufb02exibility of the framework. The SEM framework allows for estimating the time-varying impacts of a TVC and the covariances between the TVC and the growth factors of the longitudinal outcome. 3In the SEM literature, the slope in the \ufb01rst time interval is also referred to as a shape factor, which can be scaled in multiple ways. 3 A PREPRINT - JANUARY 24, 2023 at baseline is indicated by the growth factor \u03b7[x] 0 (i.e., x\u2217 1 = \u03b7[x] 0 ). The latent true score at each post-baseline time point is de\ufb01ned as the sum of the latent true score at the previous time point and the product of an interval-speci\ufb01c slope and the corresponding interval length (i.e., x\u2217 j = x\u2217 j\u22121 + dxj \u00d7 (tj \u2212t(j\u22121))). We then obtain observed scores by allowing a residual around the corresponding latent true score. Note that each time interval in Figure 1a is in a diamond shape, suggesting a good feature of the speci\ufb01cation: the LBGM is allowed to construct in the framework of individual measurement occasions by using \u2018de\ufb01nition approaches\u2019 (Mehta and West, 2000; Mehta and Neale, 2005; Sterba, 2014). The \u2018de\ufb01nition variables\u2019 are observed variables to adjust model coef\ufb01cients to individual-speci\ufb01c values. For this speci\ufb01cation, such individual-speci\ufb01c values are individual time intervals. As demonstrated in Figure 1a, the model allows for separately estimating the initial status and the interval-speci\ufb01c slopes, which provides a natural way to decompose a TVC. 
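To make the decomposition concrete, the short R sketch below simply applies the relations just described: it builds the interval-specific slopes from the shape factor and the relative rates, accumulates the latent true scores from the baseline value, and returns the trait feature together with the two candidate sets of state features. All numeric values are hypothetical illustrations; this is not the OpenMx code used later in the article.

```r
# Reconstruct the LBGM quantities for one individual (hypothetical values):
# eta0  = baseline true score (the trait feature)
# eta1  = shape factor (slope of the first time interval)
# gamma = relative rates, with gamma[1] fixed at 1
# t     = individual measurement occasions
lbgm_decompose <- function(eta0, eta1, gamma, t) {
  J <- length(t)
  stopifnot(length(gamma) == J - 1, gamma[1] == 1)
  dx    <- eta1 * gamma            # interval-specific slopes: dx_{j+1} = eta1 * gamma_j
  delta <- dx * diff(t)            # interval-specific changes: delta_j = dx_j * (t_j - t_{j-1})
  xstar <- cumsum(c(eta0, delta))  # latent true scores: x*_1 = eta0, x*_j = x*_{j-1} + delta_j
  list(trait = eta0, slopes = dx, changes = delta, true_scores = xstar)
}

dec <- lbgm_decompose(eta0 = 2, eta1 = 1.5,
                      gamma = c(1, 0.8, 0.6, 0.5, 0.4),
                      t = c(0, 1, 2.1, 2.9, 4, 5))
dec$true_scores   # the TVC trajectory implied by the trait and state features
```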
========================= Insert Figure 1 about here ========================= In addition to the growth factors to indicate the initial status and the shape factor, a LBGM with the novel speci\ufb01cation contains other latent variables, including interval-speci\ufb01c slopes (i.e., dxj) and true scores (i.e., x\u2217 j) over time. The interval-speci\ufb01c slopes and true scores are derived from other parameters instead of being freely estimated. These non-estimable latent variables can be added into paths, serving as predictors in a model. With this idea, we are able to assess class-speci\ufb01c trait effects of a TVC by regressing the growth factors of a longitudinal outcome on the initial status in each class and evaluate class-speci\ufb01c state effects of a TVC by regressing each observed value of a longitudinal outcome on the corresponding interval-speci\ufb01c slope in each cluster. A slight modi\ufb01cation by including additional latent variables on the model speci\ufb01ed in Figure 1a enables one to have a different set of state features. For example, we add \u03b4xj in Figure 1b to de\ufb01ne the change that occurs in the time interval from tj\u22121 to tj (i.e., \u03b4xj = dxj \u00d7 (tj \u2212tj\u22121)). This modi\ufb01cation allows for the decomposition of a TVC into the initial status and a collection of interval-speci\ufb01c changes. The explicit inclusion of interval-speci\ufb01c changes in a TVC enables them to serve as predictors of the observed values of a longitudinal outcome4. We consider these two ways to decompose a TVC. The estimated TVC initial status serves as the trait feature in both methods, whereas the interval-speci\ufb01c slopes or changes are two possible sets of state features. We then include a decomposed TVC in a GMM to evaluate the heterogeneity in the trait and state features and their heterogeneous trait and state effects on the longitudinal outcome across latent classes. The proposed models are built in the framework of individual measurement occasions to follow multiple existing studies that illustrate the \u2018de\ufb01nition variables\u2019 approach in the SEM framework (Sterba, 2014; Liu et al., 2021a; Liu and Perera, 2022a) to avoid possible unacceptable solutions (Blozis and Cho, 2008). We organize the rest of this article as follows. We describe the model speci\ufb01cation and model estimation of the GMMs with TICs and a decomposed TVC in the Method section. We then design a Monte Carlo simulation study to evaluate the proposed models. In particular, we examine the clustering and estimating effects of the proposed models. The clustering effects are evaluated by accuracy. The estimating effects are assessed by four performance metrics, including the relative bias, the empirical standard error (SE), the relative root-mean-squared-error (RMSE), and the empirical coverage probability (CP) for a nominal 95% con\ufb01dence interval. We perform a real-world analysis in the Application section to illustrate the proposed models. The Discussion section is crafted with practical considerations, methodological considerations, and future directions.", + "main_content": "Approach to Decomposing a Time-varying Covariate This section presents the statistical methods to decompose a TVC into a trait and a collection of state features introduced above. 
In particular, Liu and Perera (2022a) developed a novel specification for the nonparametric LCSM (i.e., the LBGM), which allows for estimating the initial status and interval-specific slopes and, therefore, provides a natural way to decompose a TVC into a trait and a set of state features. In the proposed specification, the LBGM with J repeated measures is viewed as a linear piecewise function with J \u22121 segments following earlier studies (McArdle, 2001) and 4Another possible modification is adding latent variables to represent the change-from-baseline values as Liu (2022). This modification decomposes a TVC into the baseline value and a set of change-from-baseline values. Based on Liu (2022), regressing the repeated measurements on the change-from-baseline values may result in biased estimates for a single group model. Therefore, we do not consider this modification in the present article. 4 A PREPRINT JANUARY 24, 2023 (Grimm et al., 2016, Chapter 11). The model is speci\ufb01ed for the ith individual as xij = x\u2217 ij + \u03f5[x] ij , (1) x\u2217 ij = ( \u03b7[x] 0i , (j = 1) x\u2217 i(j\u22121) + dxij \u00d7 (tij \u2212ti(j\u22121)), (j = 2, . . . , J) , (2) dxij = \u03b7[x] 1i \u00d7 \u03b3j\u22121 (j = 2, . . . , J). (3) The above three equations de\ufb01ne a LBGM with the novel speci\ufb01cation proposed by Liu and Perera (2022a). Equation 1 writes an observed measurement (i.e., xij) as the sum of a latent true score (i.e., x\u2217 ij) and a residual (i.e., \u03f5[x] ij ) for the ith individual at time tj. Equation 2 de\ufb01nes the latent true scores at baseline and each post-baseline. Speci\ufb01cally, the true score is the growth factor indicating the intercept (i.e., \u03b7[x] 0i ) at baseline (i.e., t = ti1). Meanwhile, at each post-baseline (i.e., j \u22652), the true score is expressed as a linear combination of the score at the prior time point ti(j\u22121) and the amount of true change from time ti(j\u22121) to tij. As suggested in Equation 2, the interval-speci\ufb01c change is the product of the time interval (i.e., tij \u2212ti(j\u22121)) and the slope (i.e., dxij) in that interval. The interval-speci\ufb01c slope is further de\ufb01ned by the product of the slope of the \ufb01rst interval (i.e., the shape factor \u03b7[x] 1i ) and the corresponding relative rate (i.e., \u03b3j\u22121) as demonstrated in Equation 3. Note that the subscript i of time t indicates that the recorded times can be individually different. We can modify the model speci\ufb01ed in Equation 2 slightly to explicitly include additional latent variables for the interval-speci\ufb01c changes, as Equations 4 and 5 x\u2217 ij = ( \u03b7[x] 0i , (j = 1) x\u2217 i(j\u22121) + \u03b4xij, (j = 2, . . . , J) , (4) \u03b4xij = dxij \u00d7 (tij \u2212ti(j\u22121)) (j = 2, . . . , J), (5) where \u03b4xij indicates the amount of change that occurs between ti(j\u22121) and tij of the ith individual. Therefore, Equations 1, 4, 5 and 3 together de\ufb01ne a LBGM with the modi\ufb01ed speci\ufb01cation with explicit interval-speci\ufb01c changes (i.e., the model speci\ufb01ed in Figure 1b). The interval-speci\ufb01c slopes in Equation 3 or interval-speci\ufb01c changes in Equation 5 are allowed to be added to paths and serve as predictors in the SEM framework. 
The above two speci\ufb01cations for the LBGM can be expressed in the same matrix form, which is only based on the freely estimable parameters in the model, xi = \u039b[x] i \u00d7 \u03b7[x] i + \u03f5[x] i where xi is a J \u00d7 1 vector of the repeated measures of the TVC of individual i (in which J is the number of repeated measurements). In addition, \u03b7[x] i is a 2 \u00d7 1 vector of the growth factors of the TVC, representing the baseline value and the slope of the \ufb01rst time interval of the ith individual, respectively, while \u039b[x] i is a J \u00d7 2 matrix of the corresponding factor loadings, \u039b[x] i = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed 1 0 1 \u03b31 \u00d7 (ti2 \u2212ti1) 1 P3 j=2 \u03b3j\u22121 \u00d7 (tij \u2212ti(j\u22121)) . . . . . . 1 PJ j=2 \u03b3j\u22121 \u00d7 (tij \u2212ti(j\u22121)) \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 , of which all elements in the \ufb01rst column are 1 since they are the factor loadings of the initial status of the TVC. The jth element of the second column is the cumulative value of the relative rate (i.e., \u03b3j) over time to tij, so its product with \u03b7[x] 1i represents the amount of change-from-baseline at occasion tj. Additionally, \u03f5[x] i is a J \u00d7 1 vector of residuals of the TVC for the ith individual. The growth factors \u03b7[x] i can be further written as \u03b7[x] i = \u00b5[x] \u03b7 + \u03b6[x] i , where \u00b5[x] \u03b7 is the mean vector of the TVC growth factors, while \u03b6[x] i is the vector of deviations of individual i from the corresponding growth factor means. More technical details are documented in Liu and Perera (2022a). 5 A PREPRINT JANUARY 24, 2023 Model Speci\ufb01cation of Growth Mixture Model with Time-invariant Covariates and A Decomposed Time-varying Covariate This section presents the model speci\ufb01cation for the proposed GMMs with TICs and a decomposed TVC. In this article, we build a GMM with two \ufb01rst-type TICs, one second-type TICs, and a decomposed TVC for illustration purposes. Speci\ufb01cally, the latent mixing component variable of the GMM depends on the two \ufb01rst-type TICs, which divides the sample into multiple latent classes. In each latent class, we regress the growth factors of the longitudinal outcome on the second-type TIC and the trait feature of the TVC and regress each post-baseline value of the longitudinal outcome on the corresponding state feature. The longitudinal outcome may take any function in a GMM; therefore, we only provide a general model speci\ufb01cation instead of pre-specifying them in the equations. Suppose that we have K latent classes in total. The proposed GMM can be expressed as p(yi|zi = k, xgi, xei, xi) = K X k=1 \u03c0(zi = k|xgi) \u00d7 p(yi|zi = k, xei, xi), (6) \u03c0(zi = k|xgi) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1 1+PK k=2 exp(\u03b2(k) g0 +\u03b2(k)T g xgi) reference Group (k = 1) exp(\u03b2(k) g0 +\u03b2(k)T g xgi) 1+PK k=2 exp(\u03b2(k) g0 +\u03b2(k)T g xgi) other Groups (k = 2, . . . , K) , (7) \u0012 xi yi \u0013 |(zi = k) = \u039b[x] i 0 0 \u039b[y] i ! \u00d7 \u03b7[x] i \u03b7[y] i ! |(zi = k) + \u03ba(k) 1 \u00d7 \u0012 0 dxi \u0013 |(zi = k) + \u03f5[x] i \u03f5[y] i ! |(zi = k), (8) \u03b7[y] i |(zi = k) = \u03b1(k)[y] + \u0010 \u03b2(k) TIC \u03b2(k) TVC \u0011 \u00d7 \u0012xei \u03b7[x] 0i \u0013 |(zi = k) + \u03b6[y] i |(zi = k), (9) \u03b7[x] i |(zi = k) = \u00b5(k)[x] \u03b7 + \u03b6[x] i |(zi = k), (10) for the ith individual in the kth latent class. 
Equation 6 de\ufb01nes a GMM that combines K submodels. Following Liu and Perera (2022c), we de\ufb01ne xgi and xei as the \ufb01rst-type TICs and the second-type TIC, respectively. Note that xgi is a vector while xei is a scalar in the equations since we specify two \ufb01rst-type TICs and only one second-type TIC in this illustration. In addition, xi is the vector of the repeated measures of the TVC as de\ufb01ned in the previous section, and zi is the membership of the ith individual. As indicated in Equation 6, the latent mixing component variable \u03c0() depends on the \ufb01rst-type TICs and divides the sample into K latent classes. In each class, the longitudinal outcome depends on the second-type TIC and the TVC. Note that there are two constraints on the mixing component variable: (1) 0 \u2264\u03c0(zi = k|xgi) \u22641 and (2) PK k=1 \u03c0(zi = k|xgi) = 1. Equation 7 de\ufb01nes the mixing component variable as logistic functions of the \ufb01rst-type TICs, where \u03b2(k) 0 and \u03b2(k) are the intercept and coef\ufb01cients of the logistic functions, respectively. Equations 8, 9 and 10 together de\ufb01ne the submodel (i.e., the within-class model). In Equation 8, the longitudinal outcome yi is a J \u00d7 1 vector (where J is the number of measurement occasions) similar to the TVC xi. Moreover, \u03b7[y] i is a C \u00d7 1 vector of growth factors (where C is the number of growth factors of the longitudinal outcome), and \u039b[y] i is a J \u00d7 C matrix of the corresponding factor loadings. Note that the subscript i in the matrices of factor loadings indicates that the GMM is built up in the framework of individual measurement occasions. Additionally, \u03f5[y] i is a J \u00d7 1 vector of residuals of the longitudinal outcome for individual i. In Equation 8, dxi is a J \u00d7 1 vector of interval-speci\ufb01c slopes of the TVC. We can further express the interval-speci\ufb01c slopes as dxi = (0 dxi2 dxi3 . . . dxiJ), where the \ufb01rst element is 0, while dxij is the slope in time interval j \u22121 of individual i. Accordingly, \u03ba(k) 1 is the class-speci\ufb01c state effect of the TVC, characterizing how the measure of the longitudinal outcome at tj is in\ufb02uenced by the slope in the previous time interval (i.e., between tj\u22121 and tj) in the kth latent class. In Equation 8, 0 is also a J \u00d7 1 vector. Equation 9 further regresses the growth factors of the longitudinal outcome on the second-type TIC and the growth factor that indicates the initial status of the TVC. In Equation 9, \u03b1(k)[y] is a C \u00d7 1 vector of class-speci\ufb01c intercepts of the growth factors, \u03b2(k) TIC (\u03b2(k) TVC) is C \u00d7 1 vector of class-speci\ufb01c regression coef\ufb01cients from the second-type TIC (initial status of the TVC) to the growth factors. In addition, xei is the value of the second-type TIC and \u03b7[x] 0i is the TVC initial status of the ith individual, and \u03b6[y] i is a C \u00d7 1 vector of deviations of the individual i from the conditional means of growth factors. Equation 10 further de\ufb01nes the growth factors of the TVC in each latent class as the sum of the class-speci\ufb01c mean vector of the growth factors and the vector of deviations of the ith individual from the corresponding growth factor means. 6 A PREPRINT JANUARY 24, 2023 Similarly, the submodel of the proposed GMM with TICs and a TVC with the other type of decomposition for individual i can be expressed as \u0012 xi yi \u0013 |(zi = k) = \u039b[x] i 0 0 \u039b[y] i ! 
\u00d7 \u03b7[x] i \u03b7[y] i ! |(zi = k) + \u03ba(k) 2 \u00d7 \u0012 0 \u03b4xi \u0013 |(zi = k) + \u03f5[x] i \u03f5[y] i ! |(zi = k), (11) where \u03b4xi is a J \u00d7 1 vector of interval-speci\ufb01c changes of TVC. We further express interval-speci\ufb01c changes as \u03b4xi = (0 \u03b4xi2 \u03b4xi3 . . . \u03b4xiJ). Similar to dxi, the \ufb01rst element of \u03b4xi is 0, and \u03b4xij is the change that occurs in the (j \u22121)th time interval for individual i. Therefore, \u03ba(k) 2 is a class-speci\ufb01c state effect of the TVC, depicting how the value of the longitudinal outcome at tj is affected by the amount of change in the previous time interval in cluster k. So Equations 6, 7, 11, 9 and 10 together de\ufb01ne the GMM with TICs and a decomposed TVC into the initial status and interval-speci\ufb01c changes. In GMMs, the longitudinal outcome yi may take any functional form, either a linear or nonlinear function, and the pre-speci\ufb01ed functions are allowed to vary from class to class. The speci\ufb01cations of functional forms depend on the change patterns demonstrated by raw data and the research questions of particular interest. In Table 1, we list functional forms, growth factors and corresponding interpretations, and factor loadings for the linear growth curve and four commonly used nonlinear trajectories, namely the quadratic, negative exponential, Jenss-Bayley, and linear-linear function with an unknown \ufb01xed knot. ========================= Insert Table 1 about here ========================= Model Estimation This section describes how to estimate the proposed GMMs with TICs and a decomposed TVC. We make the following three assumptions to simplify model estimation. First, the second-type TIC and the growth factors of the TVC in each latent class are normally distributed; that is, xei|(zi = k) \u223cN(\u00b5[k] x , \u03c6[k] x ) and \u03b6[x] i |(zi = k) \u223cMVN(0, \u03a6(k)[x] \u03b7 ), in which \u00b5[k] x and \u03c6[k] x are the class-speci\ufb01c mean and variance of the second-type TIC, while \u03a6(k)[x] \u03b7 is a 2 \u00d7 2 class-speci\ufb01c variance-covariance matrix of the TVC growth factors. Second, we assume that the growth factors of the longitudinal outcome are normally distributed conditional on the second-type TIC and the true score of the TVC initial status in each latent class. Therefore, we have \u03b6[y] i |(zi = k) \u223cMVN(0, \u03a8(k)[y] \u03b7 ) in which \u03a8(k)[y] \u03b7 is a C \u00d7 C class-speci\ufb01c unexplained variance-covariance matrix of the growth factors of the longitudinal outcome. The third assumption is that the variable-speci\ufb01c residuals are identical and independent normal distributions and that the residual correlations are homogeneous over time, then we have \u03f5[x] i \u03f5[y] i ! |(zi = k) \u223cMVN \u0012 \u0012 0 0 \u0013 , \u03b8(k)[x] \u03f5 I \u03b8(k)[xy] \u03f5 I \u03b8(k)[y] \u03f5 I ! \u0013 , where I is a J \u00d7 J identify matrix, \u03b8(k)[x] and \u03b8(k)[y] are the class-speci\ufb01c residual variance of the TVC and the longitudinal outcome, respectively, and \u03b8(k)[xy] is the class-speci\ufb01c residual covariance. We provide the expressions of the model-implied mean vector and variance-covariance structure of the bivariate longitudinal variables in each latent class in Appendix A.1. 
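As a small numerical illustration of the logistic mixing function in Equation 7, the sketch below computes the class-membership probabilities for a single individual with two first-type TICs and K = 3 latent classes, treating class 1 as the reference; the logistic coefficients are hypothetical placeholders rather than estimates from the study.

```r
# Class-membership probabilities implied by Equation 7 (class 1 = reference)
mixing_probs <- function(xg, beta0, B) {
  # xg:    first-type TICs for one individual
  # beta0: intercepts beta_g0^(k) for k = 2, ..., K
  # B:     (K - 1) x length(xg) matrix of slopes beta_g^(k) for k = 2, ..., K
  eta   <- beta0 + as.vector(B %*% xg)   # linear predictors for classes 2..K
  denom <- 1 + sum(exp(eta))
  c(1, exp(eta)) / denom                 # probabilities for classes 1..K (sum to 1)
}

pi_i <- mixing_probs(xg = c(0.3, -1.2),
                     beta0 = c(0.5, -0.2),
                     B = rbind(c(0.8, -0.4),
                               c(0.1,  0.6)))
round(pi_i, 3); sum(pi_i)
```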
The parameters in the proposed GMMs with TICs and a decomposed TVC include the class-speci\ufb01c mean (\u00b5(k) x ) and variance (\u03c6(k) x ) of the second-type TIC, the class-speci\ufb01c mean vector (\u00b5(k)[x] \u03b7 ) and variance-covariance matrix (\u03a6(k)[x] \u03b7 ) of the TVC growth factors, the class-speci\ufb01c intercepts (\u03b1(k)[y]) and unexplained variance-covariance (\u03a8(k)[y] \u03b7 ) of the growth factors of the longitudinal outcome, the class-speci\ufb01c effects on the growth factors of the longitudinal outcome of the TIC (\u03b2(k) TIC), the class-speci\ufb01c trait effects (\u03b2(k) TVC), the class-speci\ufb01c state effect of the TVC (\u03ba(k) 1 or \u03ba(k) 2 for the two types of decomposition, respectively), and the class-speci\ufb01c residual variances (\u03b8(k)[x] and \u03b8(k)[y]) and covariance (\u03b8(k)[xy]). The relationship between the second-type TIC and TVC is captured by the class-speci\ufb01c correlation (\u03c1(k) BL ) between the TIC and the trait feature of the TVC (i.e., the true score of the TVC initial status). In addition, we also 7 A PREPRINT JANUARY 24, 2023 need to estimate the coef\ufb01cients in the logistic regressions. We de\ufb01ne \u0398 as \u0398 ={\u00b5(k) x , \u03c6(k) x , \u00b5(k)[x] \u03b7 , \u03a6(k)[x] \u03b7 , \u03b3(k) 2 , . . . , \u03b3(k) J\u22121, \u03b1(k)[y], \u03a8(k)[y] \u03b7 , \u03b2(k) TIC, \u03b2(k) TVC, \u03ba(k) 1/2, \u03c1(k) BL , \u03b8(k)[x], \u03b8(k)[y], \u03b8(k)[xy], \u03b2(k) 0 , \u03b2(k)} k = 2, . . . , K for \u03b2(k) 0 , \u03b2(k) k = 1, . . . , K for other parameters to list the parameters in the speci\ufb01ed GMMs. In addition, as demonstrated in Table 1, there may exist additional class-speci\ufb01c growth coef\ufb01cients to capture the change patterns of the longitudinal outcome, such as class-speci\ufb01c coef\ufb01cients b(k) in the negative exponential growth curve, c(k) in the Jenss-Bayley trajectory, and \u03b3(k) in the bilinear spline functional form. These additional growth coef\ufb01cients also need to be estimated in the proposed model. We utilize the full information maximum likelihood (FIML) technique in the article to estimate the proposed GMMs with TICs and a decomposed TVC. The FIML approach allows for the heterogeneity of individual contributions to the likelihood. The proposed GMMs are built using the R package OpenMx with the optimizer CSOLNP (Neale et al., 2016; Pritikin et al., 2015; Hunter, 2018; Boker et al., 2020). We provide OpenMx code on the GitHub website (https://github.com/xxxx) (computing code will be uploaded upon acceptance) to demonstrate how to apply the proposed models. The proposed GMMs can also \ufb01t with other SEM software such as Mplus 8; the corresponding code is also provided on the GitHub website for researchers interested in using it. Model Evaluation We evaluate the proposed GMMs with TICs and a decomposed TVC using a simulation study with two goals. The \ufb01rst goal is to assess the performance metrics and clustering effect of the proposed GMMs. The performance measures include relative bias, empirical SE, RMSE, and empirical coverage for the 95% CI. The de\ufb01nitions and estimators of these four measures are provided in Table 2. The clustering effect is characterized by accuracy, de\ufb01ned as the proportion of all correctly classi\ufb01ed instances (Bishop, 2006, Chapter 1) as true memberships are available in the simulation study. 
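The evaluation criteria can be coded along the following lines. The four measures below use the usual Monte Carlo definitions (the exact estimators adopted in the study are the ones listed in Table 2), and, because class labels in a mixture model are arbitrary, the accuracy sketch takes the better of the two possible labelings in the two-class case; that relabeling step is an implementation detail assumed here rather than spelled out in the text.

```r
# Performance measures for one parameter across converged replications
performance_measures <- function(est, se, truth) {
  lower <- est - 1.96 * se
  upper <- est + 1.96 * se
  c(rel_bias = mean(est - truth) / truth,             # relative bias
    emp_se   = sd(est),                               # empirical SE
    rel_rmse = sqrt(mean((est - truth)^2)) / truth,   # relative RMSE
    coverage = mean(lower <= truth & truth <= upper)) # CP of the 95% CI
}

# Accuracy for a two-class solution (classes coded 1/2); take the better of
# the two labelings because mixture labels are arbitrary
accuracy2 <- function(true_class, est_class) {
  max(mean(est_class == true_class), mean((3 - est_class) == true_class))
}

# Hypothetical usage
set.seed(1)
performance_measures(est = rnorm(1000, 2.02, 0.10), se = rep(0.10, 1000), truth = 2)
accuracy2(true_class = rep(1:2, each = 250),
          est_class  = c(rep(1, 230), rep(2, 270)))
```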
With the equation
$$p(z_i = k) = \frac{g(z_i = k \mid \mathbf{x}_{gi})\, p(\mathbf{y}_i \mid z_i = k, x_{ei}, \mathbf{x}_i)}{\sum_{k=1}^{K} g(z_i = k \mid \mathbf{x}_{gi})\, p(\mathbf{y}_i \mid z_i = k, x_{ei}, \mathbf{x}_i)},$$
we are able to estimate membership for each individual. The second goal is to compare the proposed GMMs with the corresponding full mixture model (i.e., GMMs with all TICs but only with the trait feature of the TVC) and finite mixture model (i.e., GMM without any covariates). Note that the finite mixture model can be viewed as a reduced model of the proposed GMM by restricting all coefficients from covariates to 0; the full mixture model, in which the growth factors are regressed on the observed initial status in each latent class, is, however, not a reduced model of the proposed GMM, where the growth factors are regressed on the true initial status in each cluster. The comparison is based on Dumenci's Latent Kappa Statistics (Dumenci, 2011; Dumenci et al., 2019), performance metrics, and clustering effect. ========================= Insert Table 2 about here ========================= Similar to the Method section, we consider two first-type TICs, one second-type TIC, and a decomposed TVC in the simulation design. Moreover, we assume the longitudinal outcome takes the linear-linear functional form with an unknown fixed knot defined in Table 1 for three considerations. First, in each submodel, the variance of the longitudinal outcome is divided into three parts: the variance explained by growth factors, the variance explained by state features of a TVC, and an unexplained variance (i.e., the residual). Therefore, including state features of the TVC in a GMM may affect the estimation of the growth factors of the longitudinal outcome. In practice, the majority of longitudinal processes are nonlinear. One advantage of the linear-linear piecewise function over other parametric trajectories is that it allows for assessing nonlinear change patterns with local linear functions. Therefore, this functional form is capable of explicitly evaluating the effects on the estimation of the slope of the TVC in each linear stage, which is unrealizable by other parametric nonlinear functions where each measurement is a composite of multiple growth factors. Second, Liu and Perera (2022c) have documented the simulation results for GMMs with the two types of TICs, which allows us to focus only on the simulation conditions related to the trait and state effects of the TVC and, in turn, reduces the number of simulation conditions in this project. Third, it is feasible to extend the simulation results observed from the GMMs with the linear-linear functions to other functional forms. The number of repetitions in this simulation design is set as S = 1,000, determined by an empirical method proposed by Morris et al. (2019). In particular, we conducted a pilot simulation and observed that the standard errors of all coefficients, except those related to the initial status of longitudinal variables, were less than 0.15. This suggests that at least 900 repetitions are needed to keep the Monte Carlo standard error of the bias below 0.005 (since $0.15/\sqrt{900} = 0.005$). We then decided to perform a simulation study with 1,000 replications to be more conservative. Design of Simulation Study Table 3 lists all conditions we considered for the proposed GMMs with TICs and a decomposed TVC in the simulation design. As stated earlier, we fixed the conditions examined in existing studies to limit the size of the simulation conditions in this research project.
For example, we selected ten scaled and equally spaced study waves since Liu (2022) have shown that the one-group longitudinal model with linear-linear functional form and a TVC performed decently regarding the four performance metrics and that fewer study waves (e.g., six) of the longitudinal outcome did not affect model performance meaningfully. Following earlier studies, we obtained individual measurement occasions by allowing for a moderate time window (\u22120.25, 0.25) around each study wave (Coulombe et al., 2015). In addition, two allocation ratios were considered in the simulation design, 1:1 and 1:2, which are roughly controlled by the intercept coef\ufb01cients of the logistic functions. The class mixing proportions 1:1 was selected as it is a balanced allocation, while the other level of the allocation ratio helps examine model performance in an unbalanced condition (that is presumably more challenging) and understand how the sample size (of submodels) affects the proposed GMMs. ========================= Insert Table 3 about here ========================= In the simulation design, we manipulated the conditions related to the TVC, cluster separation, and allocation ratio, which presumably affect clustering algorithms and TVC effects. A critical characteristic of a model-based clustering model is its ability to detect sample heterogeneity and estimate parameters of interest. Intuitively, the algorithm should perform better under conditions with greater separation between unobserved groups. The distance between latent classes of a mixture model is captured by the difference between the class-speci\ufb01c density functions, which is mainly gauged by two measures, the Mahalanobis distance between class-speci\ufb01c growth factors of the longitudinal outcome and the difference in the knot locations (Kohli et al., 2015; Liu et al., 2021b; Liu and Perera, 2022c). In this project, we \ufb01x class-speci\ufb01c mean vectors and variance-covariance matrices of growth factors of the longitudinal outcome to keep the Mahalanobis distance as 0.86, a small distance according to Kohli et al. (2015). We set 1.0, 1.5, and 2.0 as small, medium, and large differences between knot locations, respectively (Liu et al., 2021b; Liu and Perera, 2022c). As pointed out by Liu and Perera (2022c), the covariates that explain within-class variability may also affect the distance between two classes, especially when the distributions of covariates are different across latent classes. Therefore, we set the joint distributions of the second-type TIC and the TVC baseline value to be the same across latent classes in the simulation study to minimize their possible impact on the distance between clusters. Moreover, we considered three scenarios of interval-speci\ufb01c slopes. In the \ufb01rst scenario, we set the same state features across classes. In the other two scenarios, the state features were set differently, either by different shape factors or different relative rates. In addition, the trait effects and state effects were set differently in the two latent classes. In addition, we considered two levels of the residual variance of the longitudinal outcome. Data Generation of Simulation Step We utilized a two-step data generation for each condition of the two GMMs listed in Table 3. First, we generated the \ufb01rst-type TICs, based on which we obtained the membership zi for each individual. Second, we simultaneously generate the longitudinal outcome, the second-type TIC, and the TVC for each cluster. 
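A condensed R sketch of this two-step generation for a single replication with two latent classes is given below; the detailed steps are enumerated next. The parameter values, the class-invariant relative rates, and the simple linear outcome trend are illustrative placeholders and do not reproduce the Table 3 conditions or the linear-linear (bilinear spline) outcome used in the study; the trait effect is applied to the outcome intercept only, and the state effect uses the interval-specific changes (the kappa_2 variant).

```r
library(MASS)   # mvrnorm(), as in the study
set.seed(2023)

n <- 500; J <- 10
# Step 1: first-type TICs, then membership through a logistic function
xg <- mvrnorm(n, mu = c(0, 0), Sigma = diag(2))
p2 <- plogis(0.3 + xg %*% c(0.8, -0.5))      # P(class 2 | first-type TICs)
z  <- 1 + rbinom(n, 1, p2)                   # memberships coded 1/2

# Step 2 (per class): second-type TIC, TVC and outcome growth factors,
# individual occasions, and the repeated measures of both processes
t_wave <- 0:(J - 1)
tim    <- t(sapply(1:n, function(i) t_wave + runif(J, -0.25, 0.25)))  # time windows
gamma  <- c(1, rep(0.8, J - 2))              # relative rates (same in both classes here)

gen_class <- function(idx, mu_eta_x, mu_eta_y, kappa) {
  m     <- length(idx)
  eta_x <- mvrnorm(m, mu_eta_x, diag(c(1, 0.1)))   # TVC intercept and shape factor
  eta_y <- mvrnorm(m, mu_eta_y, diag(c(4, 0.5)))   # outcome intercept and slope
  xe    <- rnorm(m)                                # second-type TIC
  x <- y <- matrix(NA, m, J)
  for (r in seq_len(m)) {
    dt <- diff(tim[idx[r], ])
    dx <- eta_x[r, 2] * gamma                      # interval-specific slopes
    x[r, ] <- cumsum(c(eta_x[r, 1], dx * dt)) + rnorm(J, 0, 0.3)  # TVC scores
    # intercept + TIC effect + trait effect of the TVC baseline, a linear time
    # trend, and the state effect (kappa) of the interval-specific changes
    mu_y <- (eta_y[r, 1] + 0.5 * xe[r] + 1.0 * eta_x[r, 1]) +
            eta_y[r, 2] * tim[idx[r], ] + kappa * c(0, dx * dt)
    y[r, ] <- mu_y + rnorm(J, 0, 1)
  }
  list(x = x, y = y, xe = xe)
}

c1 <- gen_class(which(z == 1), mu_eta_x = c(0, 1),     mu_eta_y = c(30, 2), kappa = 1.5)
c2 <- gen_class(which(z == 2), mu_eta_x = c(0.5, 1.5), mu_eta_y = c(35, 3), kappa = 0.8)
```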
The general steps of the simulation study are: 1. Obtain membership zi for the ith individual: (a) Generate data matrix of the \ufb01rst-type TICs xg, (b) Calculate the probability vector for each individual based on the \ufb01rst-type TICs and a collection of pre-speci\ufb01ed logistic coef\ufb01cients with a logic link, and assign each entry to the component with the highest probability, 6According to Morris et al. (2019), the equation of the Monte Carlo standard error of bias can be expressed as Monte Carlo SE(Bias) = q V ar(\u02c6 \u03b8)/S. 9 A PREPRINT JANUARY 24, 2023 2. Generate the second-type TIC, the growth factors of the TVC, and the growth factors of the longitudinal outcome for each latent class using the R package MASS (Venables and Ripley, 2002), 3. Generate a time structure with J = 10 waves tj and obtain individual measurement occasions by allowing for a time window around each wave tij \u223cU(tj \u2212\u2206, tj + \u2206) (\u2206= 0.25), 4. Calculate factor loadings for the TVC and the longitudinal outcome for each latent class, which are functions of the individual measurement occasions and additional class-speci\ufb01c growth coef\ufb01cient(s) (i.e., the relative rates of the TVC or the unknown knot for the longitudinal outcome), 5. Calculate a collection of state features of the TVC for each latent class, which are the interval-speci\ufb01c slopes or changes, 6. Calculate the true scores of repeated measurements for the TVC and the longitudinal outcome: the former is based on the TVC growth factors and the corresponding factor loadings, while the latter is based on the longitudinal outcome growth factors, factor loadings, and the interval-speci\ufb01c slopes or changes of the TVC, then add the residual matrix to the true scores of the TVC and the longitudinal outcome, 7. Implement the proposed GMM with each decomposition method, estimate the parameters, and construct the corresponding 95% Wald con\ufb01dence intervals, 8. Repeat the above steps until achieving 1, 000 convergent solutions. Results Model Convergence This section summarizes the convergence rate7 of the proposed GMMs with two types of TICs and a decomposed TVC. In general, the proposed GMMs converged well across all conditions. Speci\ufb01cally, out of 36 conditions for the GMM with a decomposed TVC into interval-speci\ufb01c slopes, 12 conditions reported 100% of convergence rate. It is noticed that all of these conditions with 100% of convergence rate were those with a large difference in knot locations (i.e., the difference was 2). In addition, 20 of the rest conditions reported a convergence rate above 97%. The worst scenario regarding the non-convergence rate is 37/1037, implying that the simulation process described above needs to be repeated 1037 times to generate 1, 000 replications with a convergent solution. It occurred under the condition of balanced allocation, the small difference between knot locations, the TVC having different shape factors but the same relative rates across clusters, and the small residual variance. The convergence rates of the GMM with a decomposed TVC into interval-speci\ufb01c changes were similar. We only kept the converged replications for further evaluation. Performance Measures This section recapitulates the four performance metrics, including the relative bias, empirical SE, relative RMSE, and CP for each parameter of interest of the proposed GMMs. 
Given the number of parameters and the size of simulation conditions, we \ufb01rst calculated each performance measure across 1, 000 replications for each parameter of interest under each condition. Then we summarized the values of each performance measure for each parameter of interest across all conditions as the corresponding median and range. Based on the result of our simulation study, the magnitudes of the performance metrics of each parameter between the two models were similar. Speci\ufb01cally, the magnitude of relative biases and relative RMSE across conditions and parameters of interest was less than 10% and 0.5, respectively. The CPs of all parameters of interest except for the knots were around 95%. These results suggest that both GMMs are capable of generating unbiased and accurate point estimates with target CPs in general. The summary statistics of the four performance metrics of the two proposed GMMs are provided in Tables S1 and S2 in the Online Supplementary Document. The CPs of class-speci\ufb01ed knots might be unsatisfactory. To examine the patterns, we provided the plots of CPs of cluster-speci\ufb01c knots, strati\ufb01ed by the cluster separation, in Figures S1 and S2 in the Online Supplementary Document. The CPs, which might be conservative under some challenging conditions, were still around 95% under the conditions with the large separation. This \ufb01nding aligns with results of simulation studies in earlier research works such as Liu and Perera (2022c). Clustering Effects In this section, we evaluate the clustering effect of the proposed GMMs, gauged by mean accuracy values across 1000 replications for the conditions listed in Table 3. We calculated the mean accuracy value across 1000 replications for 7In this present article, convergence is de\ufb01ned as the achievement of OpenMx status code 0, indicating a successful optimization with up to 10 attempts with different sets of starting values. 10 A PREPRINT JANUARY 24, 2023 each model under each condition. In the Online Supplementary Document, we provided the plots of the mean accuracy values, strati\ufb01ed by cluster separation and differences in TVCs of two latent classes, in Figures S3 and S4. The proposed GMMs performed well in separating trajectories into latent classes since the accuracy value achieved at least 88% across all conditions. The mean accuracy values were the highest under the conditions with the large cluster separation, followed by those with the medium separation and then the small separation. In addition, allowing different state features across latent classes, either by different shape factors or different relative rates, improved the accuracy values, as shown in Figures S3 and S4. It is within our expectations to see. The difference in state features between clusters increases the cluster separation; the larger the cluster separation is, the easier the clustering algorithm can tell them apart, leading to greater accuracy values. Comparison Among Models This section compares the proposed GMMs with a decomposed TVC to the corresponding full and \ufb01nite mixture models. We \ufb01rst evaluated how the exclusion of the decomposed TVC or its state features affects the clustering membership captured by Dumenci\u2019s Latent Kappa statistics. We \ufb01rst calculated the mean value of Dumenci\u2019s Latent Kappa statistics across 1000 repetitions. 
Then we plotted these values strati\ufb01ed by the cluster separation and the difference in TVC state features between clusters in Figures S5 and S6 for each condition of each GMM with a TVC, respectively. We \ufb01rst noticed that the mean values of latent Kappa between the proposed models and the corresponding full mixture models under the conditions with the same TVC state features were above 0.8, indicating almost perfect agreement (Landis and Koch, 1977; Nakazawa, 2019). It suggests that excluding the TVC state features from a GMM did not affect the membership meaningfully under such conditions. On the contrary, removing TVC state features from a GMM affected the membership to some extent when the state features differed across clusters, and the corresponding mean latent Kappa was less than 0.70. Excluding TICs from a GMM may further affect the clustering membership slightly. In addition, the performance measures and accuracy of the full and \ufb01nite mixture models were worse than the corresponding proposed model. Some estimates from the models without a TVC exhibited some bias greater than 10%. For example, the magnitude of relative bias of the effects of the second-type TICs may achieve 50%. One possible explanation for such unsatisfactory performance of the full mixture model is that we regressed the growth factors of the longitudinal outcome on the observed TVC initial status rather than the corresponding true scores as we did in the proposed GMMs. The four performance metrics of the model without a TVC were also provided in the Online Supplementary Materials (Tables S3-S6). We also plotted the mean accuracy values of the reduced mixture models in Figures S3 and S4. From the \ufb01gures, we observed that the mean accuracy values of the mixture model without a TVC were smaller than those with a TVC, suggesting less satisfactory clustering effects of the models excluding a TVC. Application This section demonstrates how to employ the proposed GMMs with the two types of TICs and a decomposed TVC to perform a real-world analysis with two goals. The \ufb01rst goal is to explore how the inclusion of a TVC when modeling a mixture model affects the estimated membership and growth factors of the longitudinal outcome. Based on this, we aim to provide a feasible set of recommendations for applying the proposed models. For this application, a random sample of 500 students was selected from the Early Childhood Longitudinal Study, Kindergarten Cohort: 2010-2011 (ECLS-K: 2011). The extracted dataset contains non-missing records of repeated reading and mathematics item response theory (IRT) scores with baseline family income, parents\u2019 education, and teacher-reported inhibitory control8. ECLS-K: 2011, which starts from the 2010-2011 school year, is a longitudinal study to gather information on child development from approximately 900 kindergarten programs across the United States. Students\u2019 reading and mathematics skills were evaluated in nine waves, each semester in the \ufb01rst three years (i.e., from Grade K to Grade 2) and then only the spring semester in the rest of three years (i.e., from Grade 3 to Grade 5). Only about one-third of students were assessed in the fall semesters of Grades 1 and 2 (L\u00ea et al., 2011). This survey has two types of time metrics: study waves, indicated by school semesters, and students\u2019 real age (in months). We used students\u2019 real age and converted it to age-in-years for individual measurement occasions. 
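A small sketch of this sample preparation is shown below; the data frame, variable names, and seed are hypothetical stand-ins for the actual ECLS-K: 2011 file, and the reserve codes treated as missing follow the footnote above (NaN/-9/-8/-7/-1).

```r
# Hypothetical stand-in for the ECLS-K:2011 extract (1,000 records, one wave shown)
set.seed(2021)
ecls <- data.frame(id    = 1:1000,
                   math1 = sample(c(rnorm(900, 30, 10), rep(-9, 100))),
                   read1 = sample(c(rnorm(950, 40, 12), rep(-7, 50))),
                   age1  = sample(c(rnorm(980, 66, 4),  rep(-1, 20))))   # age in months

miss_codes <- c(-9, -8, -7, -1)                   # reserve codes treated as missing
vars <- c("math1", "read1", "age1")
keep <- apply(ecls[vars], 1, function(r) all(!is.na(r) & !(r %in% miss_codes)))
complete_cases <- ecls[keep, ]                    # records with no missing values
samp <- complete_cases[sample(nrow(complete_cases), 500), ]   # random subsample of 500

samp$age1_years <- samp$age1 / 12                 # age-in-months -> age-in-years
```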
In this application, we viewed mathematics development as the longitudinal outcome while reading development as the TVC. In addition, the baseline socioeconomic status, including the highest education among parents and family income, and the baseline teacher-reported inhibitory control as the \ufb01rst and second types of covariates, respectively. 8There are n = 18174 participants in ECLS-K: 2011. After removing missing values (i.e., records with any NaN/-9/-8/-7/-1) in the selected variables, 2140 students were held. 11 A PREPRINT JANUARY 24, 2023 Enumeration Process This section demonstrated how to determine the number of latent classes for the proposed GMMs with the two types of TICs and a decomposed TVC. Following the SEM literature convention, we conducted the enumeration process without covariates. We \ufb01rst \ufb01t one-, two-, threeand four-class models for mathematics development. All four models converged, and the corresponding estimated likelihood values, information criteria, including Akaike information criterion (AIC) and BIC, and class-speci\ufb01c residual variances are provided in Table 4. From the table, we noticed that the model with three latent classes has the smallest BIC, leading to the conclusion that the optimal number of latent classes for mathematics development was three from the statistical perspective. The optimal model suggested the estimated mixing proportions of the student clusters are 20.20%, 42.20%, and 37.60%, respectively. ========================= Insert Table 4 about here ========================= Proposed Models We then \ufb01t the proposed GMMs with the two types of TICs and a decomposed TVC to evaluate how the baseline socioeconomic status, including family income and the highest education level among patients, contribute to student clusters of mathematics development. We are also interested in assessing how students\u2019 baseline teacher-reported inhibitory control and reading ability account for within-class differences in mathematics development. We standardized the TIC, baseline teacher-reported inhibitory control. For the TVC, reading IRT scores over time, we \ufb01rst calculated the mean and standard deviation of the baseline reading scores and then standardized the score at each wave with the baseline mean value and standard deviation. We built the proposed GMMs with the optimal number of latent classes determined in the enumeration process and provided their estimated likelihood, information criteria, and class-speci\ufb01c residuals in Table 49. In addition to the two GMMs with a TVC, we also constructed a reference model with TICs and only the trait effect of the TVC (i.e., a GMM with the standardized baseline socioeconomic variables as the \ufb01rst-type TICs, while the standardized baseline reading scores and teacher-reported inhibitory scores as the second-type TICs). The corresponding model summary is also included in Table 4. From the table, we noticed that the inclusion of TICs decreased the estimated likelihood and, therefore, increased AIC and BIC values. Considering the state effects of a TVC reduced the estimated likelihood and raised AIC and BIC values further. However, adding these covariates only slightly affected the classspeci\ufb01c residual variances of mathematics development. Among the two 3-class GMMs with a decomposed TVC, the one with interval-speci\ufb01c changes had the smaller estimated likelihood, AIC, and BIC, indicating that, from the statistical perspective, this model \ufb01t the raw data better. 
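The covariate scaling described here can be sketched as follows: the TIC is fully standardized, while every wave of the reading TVC is centered and scaled by the baseline mean and standard deviation so that its growth pattern and covariance structure are preserved. The simulated matrix below is only a stand-in for the real reading IRT scores.

```r
# 'reading' is a hypothetical n x 9 matrix of reading IRT scores (wave 1 = baseline)
set.seed(1)
n <- 500
reading <- matrix(50 + 5 * (0:8), n, 9, byrow = TRUE) + matrix(rnorm(n * 9, 0, 8), n, 9)
inhibit <- rnorm(n, 5, 1.2)                 # baseline teacher-reported inhibitory control

inhibit_std <- as.numeric(scale(inhibit))   # fully standardized TIC
base_mean <- mean(reading[, 1])
base_sd   <- sd(reading[, 1])
reading_std <- (reading - base_mean) / base_sd   # every wave scaled by the baseline mean/SD

colMeans(reading_std)[1]   # ~0 at baseline by construction; later waves retain their growth
```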
In Figure 2, we plot class-speci\ufb01c model-implied curves on the smooth lines of mathematics development for the four 3-class GMMs and examine how adding covariates affects the student clusters and the estimation of the growth factors of the trajectories mathematics IRT scores. We \ufb01rst observed that the estimated mixing proportions of the two proposed GMMs differed from those obtained from the model without the state effects or the model without covariates. Using the GMM with a decomposed TVC into interval-speci\ufb01c slopes as the reference, the Dumenci latent Kappa statistics of the \ufb01nite mixture model, full mixture model, and the other GMM with a decomposed TVC were 0.26 with 95% CI (0.19, 0.37), 0.33 with 95% CI (0.27, 0.40) and 0.98 with 95% CI (0.96, 0.99), indicating fair, fair and almost perfect agreement. Such \ufb01ndings suggested that adding a TVC in GMMs may change the estimated membership meaningfully, which aligns with the result of the simulation study and that obtained from the application of Liu and Perera (2022c). Yet, the decomposition method of a TVC only affected the estimated membership slightly. In addition, we noticed that adding a TVC helped separate the mathematics trajectories of Class 1 and Class 2, shown in Figure 2. The class-speci\ufb01c estimates of the two proposed GMMs are provided in Tables 5 and 6, respectively. The tables show that the mathematics development, TVC, and its trait and state effects are heterogeneous across latent classes. Two GMMs with a decomposed TVC provided similar class-speci\ufb01c estimated trajectories of mathematics development in general, as demonstrated in Tables 5 and 6. In particular, on average, students in Class 3 performed the best in mathematics tests throughout the entire study duration, followed by those in Class 2 and then in Class 1. Mathematics development slowed down around 8or 9years old for all three classes. In addition, better mathematics performance was associated with better reading ability and better inhibitory control at baseline. As shown in Table 5, the estimated mean values of the standardized baseline reading IRT scores/inhibitory control were negative (\u22120.67/\u22120.44), around 9In Table 4, we also list the estimated likelihood, information criteria, and (class-speci\ufb01c) residual(s) for one-group models with a decomposed TVC and the proposed models with two latent classes. We noticed that the optimal number of latent classes was still three when adding TICs and a decomposed TVC. 12 A PREPRINT JANUARY 24, 2023 zero (\u22120.26/0.10), and positive (0.81/0.19) for class 1, Class 2, and Class 3, respectively. Additionally, the slopes of standardized reading scores during the \ufb01rst time interval were 1.51, 1.96, and 2.55 for three latent classes, respectively. These \ufb01ndings suggest that the region with higher reading scores and inhibitory control values was associated with better mathematics development and vice versa. ========================= Insert Table 5 about here ========================= ========================= Insert Table 6 about here ========================= The two proposed models\u2019 estimated class-speci\ufb01c trait effects of the reading development were also similar. Take the GMM with interval-speci\ufb01c slopes as an example. 
The trait effects on the initial status of mathematics development were 11.17, 6.47, and 6.29 for the three latent classes, suggesting that the initial mathematics IRT scores would increase 11.17, 6.47, and 6.29 with one SD increase in baseline reading scores, respectively. The estimated class-speci\ufb01c state effects were different between the two models as such effects were based on interval-speci\ufb01c slopes and changes, respectively. In particular, the class-speci\ufb01c state effects were 3.38, 1.96, and 1.35 for the GMM with interval-speci\ufb01c slopes, while they were 8.13, 3.71, and 2.76 for the GMM with interval-speci\ufb01c changes. The interpretation of such state effects is also straightforward. Take the effects of the \ufb01rst class as an example. The effect of 3.38 suggests that a one-unit increase in the slope of standardized reading IRT scores in a school semester (grade) led to a 3.38 increase in mathematics \ufb01nal scores that semester (grade) for the students in the \ufb01rst latent class. Similarly, 8.13 indicates that a one-unit increase in the change of standardized reading scores in a school semester (grade) resulted in a 8.13 increase in mathematics \ufb01nal scores that semester (grade) for the students in Cluster 1. We also noticed that the estimated growth factors of mathematics development in the \ufb01rst latent class obtained from the GMM with interval-speci\ufb01c changes were relatively smaller than those from the other model. This led to underestimated trajectories indicated by growth factors in the \ufb01rst class, as shown in Figure 2d. We will discuss this observation in the section Practical Considerations. Discussion This article further explores the effects on trajectory heterogeneity of a covariate. In particular, it examines two possible ways to decompose a TVC into the trait and state features in the GMM framework. We view the true score of the baseline value of a TVC as the trait feature and evaluate class-speci\ufb01c trait effects on the growth factors of the longitudinal outcome. Moreover, we consider either interval-speci\ufb01c slopes or changes as a set of state features and examine class-speci\ufb01c state effects on the measurements of the longitudinal outcome. We evaluated the proposed GMMs through the simulation study and a real-world application. Generally, the GMMs with a decomposed TVC can separate the longitudinal outcome and estimate the class-speci\ufb01c parameters of interest unbiasedly and accurately with 95% CPs. Similar to other GMMs, increased separation between latent classes improved clustering effects indicated by accuracy values. In addition, we noticed that the mixture models with a TVC, especially when the TVC\u2019s state features differed across clusters, improved the accuracy values. It is within our expectations since the inclusion of the TVC increased the separation between latent classes. We also demonstrated the proposed GMMs with real-world data analyses. Practical Considerations This section provides a set of recommendations for empirical researchers based on the simulation study and real-world analysis. Liu (2022) proposed three TVC decomposition methods for the one-group model: (1) a baseline value with interval-speci\ufb01c slopes, (2) a baseline value with interval-speci\ufb01c changes, and (3) a baseline value with changes from baseline at each post-baseline point. All three methods can be extended to the mixture model framework. 
Therefore, along with the existing GMM-TVC model constructed by Liu and Perera (2022c), a TVC can be included in GMMs through four approaches. In the simulation study and the real-world application of the present work, we only considered the first two decomposition methods, which the simulation study in Liu (2022) showed to provide valid estimates for a one-group model. It is not our intention to suggest avoiding the other two approaches, since they can also provide insights if the effects of the observed values or of the changes from baseline of a TVC are of research interest. However, the estimates from the other two methods, especially the parameters related to the growth factors of the longitudinal outcome, need to be interpreted with caution. Adding a TVC to growth models allows for regressions of the longitudinal outcome on the growth factors and the TVC and, therefore, divides the variability of the observed values of the longitudinal outcome at each occasion into three parts: (1) the variability explained by the (class-specific) growth factors, which may be further characterized by (second-type) TICs and the trait feature of the TVC, (2) the variability explained by the TVC, and (3) the residual variances. As demonstrated by Liu (2022) and the present work, the inclusion of a TVC in growth models usually shrinks the estimates of the (class-specific) growth factors while only slightly affecting the estimates of the residual variances. The greater the state effects and/or the corresponding values of the state features, the stronger the squeezing impacts. Conceptually, the observed values at each point and the changes from baseline at each post-baseline point are larger than the interval-specific slopes or changes, which leads to more severely underestimated growth factor means. As we showed in the application section, there are two ways to gauge the gap between the estimated growth factors and those characterized by the raw data: one may construct reference models without a TVC and/or plot the (class-specific) trajectories implied by the estimated growth factors against the smooth lines of the raw data, as shown in Figure 2.

Following Liu (2022), we standardized the TVC at each study wave with its baseline mean value and standard deviation, which keeps its underlying change patterns and variance-covariance structure and allows for transforming back to obtain estimates on the original scale. Similar to the findings in Liu and Perera (2022c), the clustering algorithm also divides the second-type TIC and the TVC into K regions when separating the trajectories of the longitudinal outcome into K latent classes. For example, in our application, the standardized baseline inhibitory control and the standardized reading IRT scores were split into negative, about-zero, and positive regions when classifying mathematics development into three latent classes. This division, along with the heterogeneity in the state effects, also suggests that the squeezing impact of the TVC on the estimation of the growth factors may differ across latent classes. As demonstrated in Figure 2d, the growth factors in Class 1 might be somewhat underestimated due to the relatively greater state effects of the class-specific interval-specific changes and the corresponding feature values. On the contrary, the impacts of the TVC in the other two latent classes were negligible.
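As a reading aid, the class-specific structure described above can be summarized schematically. The equations below are a simplified sketch assembled from the verbal description in this article, not the authors' exact specification: kappa^(k) denotes the class-specific state effect multiplying an interval-specific slope or change s_ij, beta^(k) the trait effect of the baseline true score on the growth factors, and the mixing proportions follow the conventional multinomial logistic form used in GMMs; the covariate vectors for the structural model and the mixing proportions need not be the same set, and the notation in the original model may differ.

```latex
% Schematic, simplified equations for a GMM with a decomposed TVC
% (illustrative notation only; not the authors' exact specification).
\begin{align*}
% Measurement model for individual i at occasion j, given class k:
% the state effect \kappa^{(k)} scales the interval-specific slope or
% change s_{ij} of the TVC, so the TVC absorbs part of the occasion-level
% variability (the "squeezing" of the growth factors discussed above).
  y_{ij} \mid (c_i = k) &= \boldsymbol{\Lambda}_{j}\,\boldsymbol{\eta}_i
      + \kappa^{(k)} s_{ij} + \epsilon_{ij}, \\[4pt]
% Structural model: class-specific growth factors regressed on the trait
% feature x_i (baseline true score of the TVC) and second-type TICs w_i.
  \boldsymbol{\eta}_i \mid (c_i = k) &= \boldsymbol{\alpha}^{(k)}
      + \boldsymbol{\beta}^{(k)} x_i + \boldsymbol{\Gamma}^{(k)}\mathbf{w}_i
      + \boldsymbol{\zeta}_i, \\[4pt]
% Mixing proportions: multinomial logistic regression on covariates z_i.
  \Pr(c_i = k \mid \mathbf{z}_i) &=
      \frac{\exp\!\big(\gamma_{0k} + \boldsymbol{\gamma}_k^{\top}\mathbf{z}_i\big)}
           {\sum_{l=1}^{K}\exp\!\big(\gamma_{0l} + \boldsymbol{\gamma}_l^{\top}\mathbf{z}_i\big)}.
\end{align*}
```

Read this way, the class-specific state effects reported above (e.g., 3.38 for Class 1 under the slope decomposition and 8.13 under the change decomposition) play the role of kappa^(k), and the squeezing arises because the kappa^(k) s_ij term competes with the growth factors in explaining the occasion-level variability.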
In addition, we still recommend building the one-step GMMs in a stepwise fashion for the proposed models, as we demonstrated in the application section. In particular, the class enumeration process should first be performed without covariates to obtain an optimal solution for the number of latent classes, following the convention in the SEM literature. With a stable solution for the number of latent classes, we then construct the entire model and estimate the parameters of interest. The determination of covariates in mixture models could be data-driven, which can be realized by SEM machine learning techniques such as EFA and SEM forests, as shown by Liu et al. (2021b) and Liu and Perera (2022c, 2022b). When applying these machine learning techniques, we recommend only keeping the trait feature of a TVC.

Methodological Considerations and Future Directions

This study has revealed several directions in need of further examination. First, following Liu and Perera (2022c), a TIC is not allowed to account for between-class differences and within-class heterogeneity simultaneously in the present study, for both conceptual and technical considerations that have been elaborated in the existing work. However, a TIC, or even the trait feature of a TVC, may simultaneously contribute to between- and within-class differences. As demonstrated in the application section, the covariate space of the standardized baseline inhibitory control and reading IRT scores was also divided by the clustering algorithm into negative, about-zero, and positive regions. This suggests that functions other than logistic regression, which only allows for a generalized linear relationship between the covariates and the membership, can be developed for the mixing proportions. One possible function is similar to a tree-based model, in which the corresponding covariate-based estimates would be the 'boundary' of each region.

Second, as pointed out in the application section, the underestimated growth factors of the longitudinal outcome are due to the squeezing effects of the TVC, which is also a predictor of the measurements of the longitudinal outcome at each point. As demonstrated in Figures 2c and 2d, such shrinking effects are heterogeneous over time. One may consider relaxing the homogeneous state effects (i.e., κ1 or κ2) over time to allow for varying state effects, adjusted by the magnitude of the corresponding state features. Although the investigation of a GMM with a TVC with heterogeneous state effects over time is beyond the scope of the current article, it is a natural future direction.

Third, as pointed out in earlier sections and summarized in Table 1, the longitudinal outcome may take other functional forms, such as quadratic or negative exponential curves, if the change patterns of the raw data warrant it and/or specific growth coefficients, such as acceleration or growth capacity, are of research interest. The GMMs with a decomposed TVC and the parametric functions listed in Table 1 are also provided on the GitHub website for researchers interested in employing them. The TVC can also take other functional forms with the specification proposed by Liu and Perera (2022a), which allows for the examination of the growth coefficients of the TVC in addition to the interval-specific slopes or changes.

Fourth, the proposed GMMs are illustrated with the same time structure for the longitudinal outcome and the TVC.
Yet, one may extend the models to analyze processes in which the longitudinal variables have different measurement schedules, since the models are constructed in the framework of individual measurement occasions. Last, the proposed models can also be utilized to analyze data with dropout under the missing-at-random assumption, since the full information maximum likelihood (FIML) technique is employed for model estimation.

Concluding Remarks

This article proposes two GMMs with a decomposed TVC to separately evaluate the trait effect and the state effect of the TVC on sample heterogeneity. In the two models, we view the baseline value of the TVC as its trait feature and the interval-specific slopes or changes as a collection of state features. The proposed growth models allow for the examination of heterogeneity in the trait and state features and of their heterogeneous effects. They can be further extended in practice and further investigated methodologically.