| { |
| "url": "http://arxiv.org/abs/2404.16456v1", |
| "title": "Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities", |
| "abstract": "Multimodal sentiment analysis (MSA) aims to understand human sentiment\nthrough multimodal data. Most MSA efforts are based on the assumption of\nmodality completeness. However, in real-world applications, some practical\nfactors cause uncertain modality missingness, which drastically degrades the\nmodel's performance. To this end, we propose a Correlation-decoupled Knowledge\nDistillation (CorrKD) framework for the MSA task under uncertain missing\nmodalities. Specifically, we present a sample-level contrastive distillation\nmechanism that transfers comprehensive knowledge containing cross-sample\ncorrelations to reconstruct missing semantics. Moreover, a category-guided\nprototype distillation mechanism is introduced to capture cross-category\ncorrelations using category prototypes to align feature distributions and\ngenerate favorable joint representations. Eventually, we design a\nresponse-disentangled consistency distillation strategy to optimize the\nsentiment decision boundaries of the student network through response\ndisentanglement and mutual information maximization. Comprehensive experiments\non three datasets indicate that our framework can achieve favorable\nimprovements compared with several baselines.", |
| "authors": "Mingcheng Li, Dingkang Yang, Xiao Zhao, Shuaibing Wang, Yan Wang, Kun Yang, Mingyang Sun, Dongliang Kou, Ziyun Qian, Lihua Zhang", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Distillation", |
| "gt": "Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities", |
| "main_content": "Introduction \u201cCorrelations serve as the beacon through the fog of the missingness.\u201d \u2013Lee & Dicken Multimodal sentiment analysis (MSA) has attracted wide attention in recent years. Different from the traditional unimodal-based emotion recognition task [7], MSA \u00a7Corresponding author. Equal contribution. Modality Content Label Prediction Language Visual Audio Neutral Positive It was a great movie and I loved it. \u2026 Language Visual Audio It was a great movie and I loved it. \u2026 Positive Positive Figure 1. Traditional model outputs correct prediction when inputting the sample with complete modalities, but incorrectly predicts the sample with missing modalities. We define two missing modality cases: (i) intra-modality missingness (i.e., the pink areas) and (ii) inter-modality missingness (i.e., the yellow area). understands and recognizes human emotions through multiple modalities, including language, audio, and visual [28]. Previous studies have shown that combining complementary information among different modalities facilitates the generation of more valuable joint multimodal representations [34, 36]. Under the deep learning paradigm [3, 17, 42, 43, 54, 59, 60], numerous studies assuming the availability of all modalities during both training and inference stages [10, 19, 22, 49\u201353, 55\u201358, 62]. Nevertheless, this assumption often fails to align with real-world scenarios, where factors such as background noise, sensor constraints, and privacy concerns may lead to uncertain modality missingness issues. Modality missingness can significantly impair the effectiveness of well-trained models based on complete modalities. For instance, as shown in Figure 1, the entire visual modality is missing, and some frame-level fea1 arXiv:2404.16456v1 [cs.CV] 25 Apr 2024 \ftures in the language and audio modalities are missing, leading to an incorrect sentiment prediction. In recent years, many works [20, 21, 23, 24, 32, 45, 46, 66] attempt to address the problem of missing modalities in MSA. As a typical example, MCTN [32] guarantees the model\u2019s robustness to the missing modality case by learning a joint representation through cyclic translation from the source modality to the target modality. However, these methods suffer from the following limitations: (i) inadequate interactions based on individual samples lack the mining of holistically structured semantics. (ii) Failure to model cross-category correlations leads to loss of sentiment-relevant information and confusing distributions among categories. (iii) Coarse supervision ignores the semantic and distributional alignment. To address the above issues, we present a Correlationdecoupled Knowledge Distillation (CorrKD) framework for the MSA task under uncertain missing modalities. There are three core contributions in CorrKD based on the tailored components. Specifically, (i) the proposed samplelevel contrastive distillation mechanism captures the holistic cross-sample correlations and transfers valuable supervision signals via sample-level contrastive learning. (ii) Meanwhile, we design a category-guided prototype distillation mechanism that leverages category prototypes to transfer intraand inter-category feature variations, thus delivering sentiment-relevant information and learning robust joint multimodal representations. 
(iii) Furthermore, we introduce a response-disentangled consistency distillation strategy to optimize sentiment decision boundaries and encourage distribution alignment by decoupling heterogeneous responses and maximizing mutual information between homogeneous sub-responses. Based on these components, CorrKD significantly improves MSA performance under uncertain missing-modality and complete-modality testing conditions on three multimodal benchmarks. 2. Related Work 2.1. Multimodal Sentiment Analysis MSA aims to understand and analyze human sentiment utilizing multiple modalities. Mainstream MSA studies [9, 10, 22, 37, 50, 53, 55\u201358] focus on designing complex fusion paradigms and interaction mechanisms to enhance the performance of sentiment recognition. For instance, CubeMLP [37] utilizes three independent multi-layer perceptron units for feature-mixing on three axes. However, these approaches based on complete modalities cannot be deployed in real-world applications. Mainstream solutions for the missing modality problem can be summarized in two categories: (i) generative methods [6, 23, 25, 45] and (ii) joint learning methods [24, 32, 46, 66]. Reconstruction methods generate missing features and semantics in modalities based on available modalities. For example, TFR-Net [63] leverages the feature reconstruction module to guide the extractor to reconstruct missing semantics. MVAE [6] solves the modality missing problem by the semi-supervised multi-view deep generative framework. Joint learning efforts refer to learning joint multimodal representations utilizing correlations among modalities. For instance, MMIN [69] generates robust joint multimodal representations via cross-modality imagination. TATE [66] presents a tag encoding module to guide the network to focus on missing modalities. However, the aforementioned approaches fail to account for the correlations among samples and categories, leading to inadequate compensation for the missing semantics in modalities. In contrast, we design effective learning paradigms to adequately capture potential inter-sample and inter-category correlations. 2.2. Knowledge Distillation Knowledge distillation utilizes additional supervisory information from the pre-trained teacher\u2019s network to assist in the training of the student\u2019s network [11]. Knowledge distillation methods can be roughly categorized into two types, distillation from intermediate features [15, 29, 38, 61] and responses [4, 8, 27, 48, 68]. Many studies [13, 18, 33, 40, 47] employ knowledge distillation for MSA tasks with missing modalities. The core concept of these efforts is to transfer \u201cdark knowledge\u201d from teacher networks trained by complete modalities to student networks trained by missing modalities. The teacher model typically produces more valuable feature presentations than the student model. For instance, [13] utilizes the complete-modality teacher network to implement supervision on the unimodal student network at both feature and response levels. Despite promising outcomes, they are subject to several significant limitations: (i) Knowledge transfer is limited to individual samples, overlooking the exploitation of clear correlations among samples and among categories. (ii) Supervision on student networks is coarse-grained and inadequate, without considering the potential alignment of feature distributions. 
To this end, we propose a correlation-decoupled knowledge distillation framework that facilitates the learning of robust joint representations by refining and transferring the cross-sample, cross-category, and cross-target correlations. 3. Methodology 3.1. Problem Formulation Given a multimodal video segment with three modalities as S = [X_L, X_A, X_V], where X_L \\in \\mathbb{R}^{T_L \\times d_L}, X_A \\in \\mathbb{R}^{T_A \\times d_A}, and X_V \\in \\mathbb{R}^{T_V \\times d_V} denote the language, audio, and visual modalities, respectively. T_m is the sequence length and d_m is the embedding dimension, where m \\in \\{L, A, V\\}. Meanwhile, the incomplete modality is denoted as \\hat{X}_m. We define two missing modality cases to simulate the most natural and holistic challenges in real-world scenarios: (i) intra-modality missingness, which indicates some frame-level features in the modality sequences are missing. (ii) inter-modality missingness, which denotes some modalities are entirely missing. Our goal is to recognize the utterance-level sentiments by utilizing the multimodal data with missing modalities.
Figure 2. The structure of our CorrKD, which consists of three core components: the Sample-level Contrastive Distillation (SCD) mechanism, the Category-guided Prototype Distillation (CPD) mechanism, and the Response-disentangled Consistency Distillation (RCD) strategy.
3.2. Overall Framework Figure 2 illustrates the main workflow of CorrKD. The teacher network and the student network adopt a consistent structure but have different parameters. During the training phase, our CorrKD procedure is as follows: (i) we train the teacher network with complete-modality samples and then freeze its parameters. (ii) Given a video segment sample S, we generate a missing-modality sample \\hat{S} with the Modality Random Missing (MRM) strategy. MRM simultaneously performs intra-modality missing and inter-modality missing, and the raw features of the missing portions are replaced with zero vectors. \\hat{S} and S are fed into the initialized student network and the trained teacher network, respectively. (iii) We input the samples S and \\hat{S} into the modality representation fusion module to obtain the joint multimodal representations H^t and H^s. (iv) The sample-level contrastive distillation mechanism and the category-guided prototype distillation mechanism are utilized to learn the feature consistency of H^t and H^s. (v) These representations are fed into the task-specific fully-connected layers and the softmax function to obtain the network responses R^t and R^s. (vi) The response-disentangled consistency distillation strategy is applied to maintain consistency in the response distribution, and then R^s is used to perform classification. In the inference phase, testing samples are only fed into the student network for downstream tasks. Subsequent sections provide details of the proposed components.
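To make the MRM strategy above concrete, the following is a minimal PyTorch-style sketch of how uncertain missingness could be simulated for one training sample; the function name, the frame-drop ratio, and the per-modality drop probability are illustrative assumptions and not settings reported in the paper.

```python
import torch

def modality_random_missing(x_l, x_a, x_v, frame_ratio=0.3, modality_prob=0.2):
    """Simulate uncertain modality missingness for a single sample.

    With probability `modality_prob` an entire modality is zeroed out
    (inter-modality missingness); otherwise a random subset of its frames
    is zeroed out (intra-modality missingness). Missing positions are
    replaced with zero vectors, as described in the paper."""
    modalities = [x_l.clone(), x_a.clone(), x_v.clone()]   # each: (T_m, d_m)
    for x in modalities:
        if torch.rand(1).item() < modality_prob:
            x.zero_()                                       # drop the whole modality
        else:
            t = x.size(0)
            drop = torch.randperm(t)[: int(frame_ratio * t)]
            x[drop] = 0.0                                   # drop random frame-level features
    return modalities                                       # [x_l_hat, x_a_hat, x_v_hat]
```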
3.3. Modality Representation Fusion We introduce the extraction and fusion processes of modality representations using the student network as an example. The incomplete modality \\hat{X}^s_m \\in \\mathbb{R}^{T_m \\times d_m} with m \\in \\{L, A, V\\} is fed into the student network. Firstly, \\hat{X}^s_m passes through a 1D temporal convolutional layer with kernel size 3 \\times 3 and adds the positional embedding [39] to obtain the preliminary representations, denoted as \\hat{F}^s_m = W_{3 \\times 3}(\\hat{X}^s_m) + PE(T_m, d) \\in \\mathbb{R}^{T_m \\times d}. Each \\hat{F}^s_m is fed into a Transformer [39] encoder \\mathcal{F}^s_\\phi(\\cdot), capturing the modality dynamics of each sequence through the self-attention mechanism to yield representations E^s_m, denoted as E^s_m = \\mathcal{F}^s_\\phi(\\hat{F}^s_m). The representations E^s_m are concatenated to obtain Z^s, expressed as Z^s = [E^s_L, E^s_A, E^s_V] \\in \\mathbb{R}^{T_m \\times 3d}. Subsequently, Z^s is fed into the Global Average Pooling (GAP) to further enhance and refine the features, yielding the joint multimodal representation H^s \\in \\mathbb{R}^{3d}. Similarly, the joint multimodal representation generated by the teacher network is represented as H^t \\in \\mathbb{R}^{3d}. 3.4. Sample-level Contrastive Distillation Most previous studies of MSA tasks with missing modalities [33, 40, 47] are sub-optimal, exploiting only one-sided information within a single sample and neglecting to consider comprehensive knowledge across samples. To this end, we propose a Sample-level Contrastive Distillation (SCD) mechanism that enriches holistic knowledge encoding by implementing contrastive learning between sample-level representations of the student and teacher networks. This paradigm prompts models to sufficiently capture intra-sample dynamics and inter-sample correlations to generate and transfer valuable supervision signals, thus precisely recovering the missing semantics. The rationale of SCD is to take contrastive learning within all mini-batches, constraining the representations in the two networks originating from the same sample to be similar, and the representations originating from different samples to be distinct. Specifically, given a mini-batch with N samples B = \\{S_1, S_2, \\cdots, S_N\\}, we obtain their sets of joint multimodal representations in the teacher and student networks, denoted as \\{H^w_1, H^w_2, \\cdots, H^w_N\\} with w \\in \\{t, s\\}. For the same input sample, we narrow the distance between the joint representations of the teacher and student networks and enlarge the distance between the representations for different samples. The contrastive distillation loss is formulated as follows:
\\mathcal{L}_{SCD} = \\sum_{i=1}^{N} \\sum_{j=1, j \\neq i}^{N} \\mathcal{D}(\\bm{H}^s_i, \\bm{H}^t_i)^2 + \\max\\{0, \\eta - \\mathcal{D}(\\bm{H}^s_i, \\bm{H}^t_j)\\}^2, (1)
where \\mathcal{D}(\\bm{H}^s, \\bm{H}^t) = \\|\\bm{H}^s - \\bm{H}^t\\|_2, \\|\\cdot\\|_2 represents the \\ell_2 norm function, and \\eta is the predefined distance boundary. When negative pairs are distant enough (i.e., greater than the boundary \\eta), the loss is set to 0, allowing the model to focus on other pairs. Since the sample-level representation contains holistic emotion-related semantics, such a contrastive objective facilitates the student network to learn more valuable knowledge from the teacher network.
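To make Eq. (1) concrete, a minimal PyTorch sketch of the sample-level contrastive distillation loss over a mini-batch is given below; the margin value and tensor shapes are assumptions for illustration, not taken from the authors' released code.

```python
import torch

def scd_loss(h_s, h_t, margin=1.2):
    """Sample-level contrastive distillation (Eq. 1).

    h_s, h_t: (N, D) joint multimodal representations from the student and
    teacher networks for the same mini-batch. Representations of the same
    sample are pulled together; those of different samples are pushed apart
    until their distance exceeds the margin (eta)."""
    dist = torch.cdist(h_s, h_t, p=2)                      # (N, N) pairwise L2 distances
    n = dist.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=dist.device)
    pos = dist[eye] ** 2                                   # D(H_i^s, H_i^t)^2, positive pairs
    neg = torch.clamp(margin - dist[~eye], min=0.0) ** 2   # max{0, eta - D(H_i^s, H_j^t)}^2
    # Eq. (1) repeats the positive term once for every negative index j != i.
    return (n - 1) * pos.sum() + neg.sum()
```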
3.5. Category-guided Prototype Distillation MSA data usually suffers from the dilemmas of high intra-category diversity and high inter-category similarity. Previous approaches [13, 18, 33] based on knowledge distillation to address the modality missing problem simply constrain the feature consistency of the teacher and student networks. This rough manner lacks consideration of cross-category correlations and feature variations, leading to ambiguous feature distributions. To this end, we propose a Category-guided Prototype Distillation (CPD) mechanism, with the core insight of refining and transferring knowledge of intra- and inter-category feature variations via category prototypes, which are widely utilized in the field of few-shot learning [35]. The category prototype represents the embedding center of every sentiment category, denoted as:
\\bm{c}_k = \\frac{1}{|\\bm{B}_k|} \\sum_{\\bm{S}_i \\in \\bm{B}_k} \\bm{H}_i, (2)
where \\bm{B}_k denotes the set of samples labeled with category k in the mini-batch, and \\bm{S}_i denotes the i-th sample in \\bm{B}_k. The intra- and inter-category feature variation of the sample \\bm{S}_i is defined as follows:
\\bm{M}_k(i) = \\frac{\\bm{H}_i \\, \\bm{c}_k^\\top}{\\|\\bm{H}_i\\|_2 \\, \\|\\bm{c}_k\\|_2}, (3)
where \\bm{M}_k(i) denotes the similarity between the sample \\bm{S}_i and the prototype \\bm{c}_k. If the sample \\bm{S}_i is of category k, \\bm{M}_k(i) represents the intra-category feature variation. Otherwise, it represents the inter-category feature variation. The teacher and student networks compute the similarity matrices \\bm{M}^t and \\bm{M}^s, respectively. We minimize the squared Euclidean distance between the two similarity matrices to maintain the consistency of the two multimodal representations. The prototype distillation loss is formulated as:
\\mathcal{L}_{CPD} = \\frac{1}{NK} \\sum_{i=1}^{N} \\sum_{k=1}^{K} \\left\\| \\bm{M}_k^s(i) - \\bm{M}_k^t(i) \\right\\|_2, (4)
where K is the category number of the mini-batch.
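A minimal sketch, under assumed tensor shapes, of how the prototype-based objective in Eqs. (2)-(4) could be computed in PyTorch; the handling of categories absent from a mini-batch is an illustrative choice that the paper does not specify.

```python
import torch
import torch.nn.functional as F

def cpd_loss(h_s, h_t, labels, num_classes):
    """Category-guided prototype distillation (Eqs. 2-4).

    h_s, h_t: (N, D) joint representations; labels: (N,) sentiment categories.
    Each network builds per-category prototypes (Eq. 2), measures cosine
    similarity between samples and prototypes (Eq. 3), and the loss matches
    the student and teacher similarity matrices (Eq. 4)."""
    def similarity_matrix(h):
        protos = []
        for k in range(num_classes):
            mask = labels == k
            # Categories missing from the batch fall back to a zero prototype here.
            protos.append(h[mask].mean(dim=0) if mask.any() else torch.zeros_like(h[0]))
        protos = torch.stack(protos)                                   # (K, D), Eq. (2)
        return F.normalize(h, dim=1) @ F.normalize(protos, dim=1).T   # (N, K), Eq. (3)

    m_s, m_t = similarity_matrix(h_s), similarity_matrix(h_t)
    return (m_s - m_t).abs().mean()                                    # Eq. (4), averaged over N*K entries
```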
3.6. Response-disentangled Consistency Distillation Most knowledge distillation studies [15, 29, 38, 61] focus on extracting knowledge from the intermediate features of networks. Although the model\u2019s response (i.e., the predicted probability of the model\u2019s output) presents a higher level of semantics than the intermediate features, response-based methods achieve significantly worse performance than feature-based methods [41]. Inspired by [67], the model\u2019s response consists of two parts: (i) the Target Category Response (TCR), which represents the prediction of the target category and describes the difficulty of identifying each training sample. (ii) the Non-Target Category Response (NTCR), which denotes the prediction of the non-target categories and reflects the decision boundaries of the remaining categories to some extent. The effects of TCR and NTCR in the traditional knowledge distillation loss are coupled, i.e., a high-confidence TCR leads to a low-impact NTCR, thus inhibiting effective knowledge transfer. Consequently, we disentangle the heterogeneous responses and constrain the consistency between the homogeneous responses. From the perspective of information theory, knowledge consistency between responses can be characterized as maintaining high mutual information between the teacher and student networks [1]. This schema captures beneficial semantics and encourages distributional alignment. Specifically, the joint multimodal representations H^w with w \\in \\{t, s\\} of the teacher and student networks pass through fully-connected layers and the softmax function to obtain the responses R^w. Based on the target indexes, we decouple the response R^w to obtain the TCR R^w_T and the NTCR R^w_{NT}. Define Q \\in \\mathcal{Q} and U \\in \\mathcal{U} as two random variables. Formulaically, the marginal probability density functions of Q and U are denoted as P(Q) and P(U), and P(Q, U) is regarded as the joint probability density function. The mutual information between Q and U is represented as follows:
I(\\bm{Q}, \\bm{U}) = \\int_{\\mathcal{Q}} \\int_{\\mathcal{U}} P(\\bm{Q}, \\bm{U}) \\log \\left( \\frac{P(\\bm{Q}, \\bm{U})}{P(\\bm{Q}) P(\\bm{U})} \\right) d\\bm{Q} \\, d\\bm{U}. (5)
The mutual information I(Q, U) can be written as the Kullback-Leibler divergence between the joint probability distribution P_{QU} and the product of the marginal distributions P_Q P_U, denoted as I(Q, U) = D_{KL}(P_{QU} \\| P_Q P_U). For efficient and stable computation, the Jensen-Shannon divergence [12] is employed in our case to estimate the mutual information, which is denoted as follows:
I(\\bm{Q}, \\bm{U}) \\geq \\hat{I}_\\theta^{(\\mathrm{JSD})}(\\bm{Q}, \\bm{U}) = \\mathbb{E}_{P(\\bm{Q}, \\bm{U})}\\left[-\\log\\left(1 + e^{-\\mathcal{F}_\\theta(\\bm{Q}, \\bm{U})}\\right)\\right] - \\mathbb{E}_{P(\\bm{Q}) P(\\bm{U})}\\left[\\log\\left(1 + e^{\\mathcal{F}_\\theta(\\bm{Q}, \\bm{U})}\\right)\\right], (6)
where \\mathcal{F}_\\theta : \\mathcal{Q} \\times \\mathcal{U} \\rightarrow \\mathbb{R} is formulated as an instantiated statistical network with parameters \\theta. We only need to maximize the mutual information without focusing on its precise value. Consequently, the distillation loss based on the mutual information estimation is formulated as follows:
\\mathcal{L}_{RCD} = \\mathcal{L}_{RCD}^{T} + \\mathcal{L}_{RCD}^{NT} = -I(\\bm{R}^t_T, \\bm{R}^s_T) - I(\\bm{R}^t_{NT}, \\bm{R}^s_{NT}). (7)
Finally, the overall training objective \\mathcal{L}_{total} is expressed as \\mathcal{L}_{total} = \\mathcal{L}_{task} + \\mathcal{L}_{SCD} + \\mathcal{L}_{CPD} + \\mathcal{L}_{RCD}, where \\mathcal{L}_{task} is the standard cross-entropy loss.
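A minimal sketch of the Jensen-Shannon mutual-information lower bound in Eq. (6) and the response-disentangled loss in Eq. (7); the MLP statistics network, the shuffling-based approximation of the product of marginals, and all names and shapes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JSDMutualInfo(nn.Module):
    """Jensen-Shannon lower bound on mutual information (Eq. 6).

    F_theta is instantiated as a small MLP over concatenated response pairs;
    samples of the product of marginals are approximated by shuffling one side."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, q, u):
        joint = self.net(torch.cat([q, u], dim=1))                         # samples of P(Q, U)
        shuffled = u[torch.randperm(u.size(0), device=u.device)]
        marginal = self.net(torch.cat([q, shuffled], dim=1))               # samples of P(Q)P(U)
        return (-F.softplus(-joint)).mean() - F.softplus(marginal).mean()  # Eq. (6)

def rcd_loss(r_t, r_s, target_idx, mi_tcr, mi_ntcr):
    """Response-disentangled consistency distillation (Eq. 7).

    r_t, r_s: (N, C) softmax responses of teacher and student; target_idx: (N,)
    ground-truth indexes used to split each response into TCR and NTCR."""
    n, device = r_t.size(0), r_t.device
    rows = torch.arange(n, device=device)
    tcr_t = r_t[rows, target_idx].unsqueeze(1)          # (N, 1) target-category responses
    tcr_s = r_s[rows, target_idx].unsqueeze(1)
    mask = torch.ones_like(r_t, dtype=torch.bool)
    mask[rows, target_idx] = False
    ntcr_t = r_t[mask].view(n, -1)                      # (N, C-1) non-target responses
    ntcr_s = r_s[mask].view(n, -1)
    return -mi_tcr(tcr_t, tcr_s) - mi_ntcr(ntcr_t, ntcr_s)
```

Here mi_tcr and mi_ntcr would be two JSDMutualInfo estimators (input dimensions 1 and C-1) optimized jointly with the student, since only maximizing the bound matters, not its exact value.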
4. Experiments 4.1. Datasets and Evaluation Metrics We conduct extensive experiments on three MSA datasets with word-aligned data, including MOSI [64], MOSEI [65], and IEMOCAP [2]. MOSI is a realistic dataset that comprises 2,199 short monologue video clips. There are 1,284, 229, and 686 video clips in the train, valid, and test data, respectively. MOSEI is a dataset consisting of 22,856 video clips, which has 16,326, 1,871, and 4,659 samples in the train, valid, and test data. Each sample of MOSI and MOSEI is labeled by human annotators with a sentiment score from -3 (strongly negative) to +3 (strongly positive). On the MOSI and MOSEI datasets, we utilize the weighted F1 score computed for positive/negative classification results as the evaluation metric. The IEMOCAP dataset consists of 4,453 samples of video clips. Its predetermined data partition has 2,717, 798, and 938 samples in the train, valid, and test data. As recommended by [44], four emotions (i.e., happy, sad, angry, and neutral) are selected for emotion recognition. For evaluation, we report the F1 score for each category. 4.2. Implementation Details Feature Extraction. The GloVe embedding [31] is used to convert the video transcripts into 300-dimensional vectors for the language modality. For the audio modality, we employ the COVAREP toolkit [5] to extract 74-dimensional acoustic features, including 12 Mel-frequency cepstral coefficients (MFCCs), voiced/unvoiced segmenting features, and glottal source parameters. For the visual modality, we utilize Facet [14] to indicate 35 facial action units, recording facial movements that express emotions. Experimental Setup. All models are built on the PyTorch [30] toolbox with NVIDIA Tesla V100 GPUs. The Adam optimizer [16] is employed for network optimization. For MOSI, MOSEI, and IEMOCAP, the detailed hyper-parameter settings are as follows: the learning rates are {4e-3, 2e-3, 4e-3}, the batch sizes are {64, 32, 64}, the epoch numbers are {50, 20, 30}, the attention heads are {10, 8, 10}, and the distance boundaries \\eta are {1.2, 1.0, 1.4}. The embedding dimension is 40 on all three datasets. The hyper-parameters are determined via the validation set. The raw features at the modality missing positions are replaced by zero vectors. To ensure an equitable comparison, we re-implement the state-of-the-art (SOTA) methods using the publicly available codebases and combine them with our experimental paradigms. All experimental results are averaged over multiple experiments using five different random seeds.
Table 1. Comparison results under inter-modality missing and complete-modality testing conditions on MOSI and MOSEI (weighted F1 score, %).
Dataset | Model | {l} | {a} | {v} | {l, a} | {l, v} | {a, v} | Avg. | {l, a, v}
MOSI | Self-MM [62] | 67.80 | 40.95 | 38.52 | 69.81 | 74.97 | 47.12 | 56.53 | 84.64
MOSI | CubeMLP [37] | 64.15 | 38.91 | 43.24 | 63.76 | 65.12 | 47.92 | 53.85 | 84.57
MOSI | DMD [22] | 68.97 | 43.33 | 42.26 | 70.51 | 68.45 | 50.47 | 57.33 | 84.50
MOSI | MCTN [32] | 75.21 | 59.25 | 58.57 | 77.81 | 74.82 | 64.21 | 68.31 | 80.12
MOSI | TransM [46] | 77.64 | 63.57 | 56.48 | 82.07 | 80.90 | 67.24 | 71.32 | 82.57
MOSI | SMIL [26] | 78.26 | 67.69 | 59.67 | 79.82 | 79.15 | 71.24 | 72.64 | 82.85
MOSI | GCNet [23] | 80.91 | 65.07 | 58.70 | 84.73 | 83.58 | 70.02 | 73.84 | 83.20
MOSI | CorrKD | 81.20 | 66.52 | 60.72 | 83.56 | 82.41 | 73.74 | 74.69 | 83.94
MOSEI | Self-MM [62] | 71.53 | 43.57 | 37.61 | 75.91 | 74.62 | 49.52 | 58.79 | 83.69
MOSEI | CubeMLP [37] | 67.52 | 39.54 | 32.58 | 71.69 | 70.06 | 48.54 | 54.99 | 83.17
MOSEI | DMD [22] | 70.26 | 46.18 | 39.84 | 74.78 | 72.45 | 52.70 | 59.37 | 84.78
MOSEI | MCTN [32] | 75.50 | 62.72 | 59.46 | 76.64 | 77.13 | 64.84 | 69.38 | 81.75
MOSEI | TransM [46] | 77.98 | 63.68 | 58.67 | 80.46 | 78.61 | 62.24 | 70.27 | 81.48
MOSEI | SMIL [26] | 76.57 | 65.96 | 60.57 | 77.68 | 76.24 | 66.87 | 70.65 | 80.74
MOSEI | GCNet [23] | 80.52 | 66.54 | 61.83 | 81.96 | 81.15 | 69.21 | 73.54 | 82.35
MOSEI | CorrKD | 80.76 | 66.09 | 62.30 | 81.74 | 81.28 | 71.92 | 74.02 | 82.16
4.3. Comparison with State-of-the-art Methods We compare CorrKD with seven representative and reproducible SOTA methods, including complete-modality methods: Self-MM [62], CubeMLP [37], and DMD [22], and missing-modality methods: 1) joint learning methods (i.e., MCTN [32] and TransM [46]), and 2) generative methods (i.e., SMIL [26] and GCNet [23]). Extensive experiments are implemented to thoroughly evaluate the robustness and effectiveness of CorrKD in the cases of intra-modality and inter-modality missingness.
Figure 3. Comparison results of intra-modality missingness on IEMOCAP. We comprehensively report the F1 score for the happy, sad, angry, and neutral categories at various missing ratios.
Figure 4. Comparison results of intra-modality missingness on (a) MOSI and (b) MOSEI. We report the F1 score at various ratios.
Robustness to Intra-modality Missingness.
We randomly drop frame-level features in modality sequences with ratio p \u2208{0.1, 0.2, \u00b7 \u00b7 \u00b7 , 1.0} to simulate testing conditions of intra-modality missingness. Figures 3 and 4 show the performance curves of models with various p values, which intuitively reflect the model\u2019s robustness. We have the following important observations. (i) As the ratio p increases, the performance of all models decreases. This phenomenon demonstrates that intra-modality missingness leads to a considerable loss of sentiment semantics and fragile joint multimodal representations. (ii) Compared to the complete-modality methods (i.e., Self-MM, CubeMLP, and DMD), our CorrKD achieves significant performance advantages in the missing-modality testing conditions and competitive performance in the complete-modality testing conditions. The reason is that complete-modality methods are based on the assumption of data completeness, whereas customized training paradigms for missing modalities perform better at capturing and reconstructing valuable sentiment semantics from incomplete multimodal data. (iii) Compared to the missing-modality methods, our CorrKD exhibits the strongest robustness. Benefiting from the decoupling and modeling of inter-sample, inter-category, and inter-response correlations by the proposed correlation decoupling schema, the student network acquires informative knowledge to reconstruct valuable missing semantics and produces robust multimodal representations. Robustness to Inter-modality Missingness. In Table 1 and 2, we drop some entire modalities in the samples to simulate testing conditions of inter-modality missingness. The notation \u201c{l}\u201d indicates that only the language modality is available, while audio and visual modalities are missing. \u201c{l, a, v}\u201d represents the complete-modality testing condition where all modalities are available. \u201cAvg.\u201d indicates the average performance across six missing-modality testing conditions. We present the following significant insights. (i) Inter-modality missingness causes performance degradation for all models, suggesting that the integration of complementary information from heterogeneous modalities enhances the sentiment semantics within joint representations. (ii) In the testing conditions of the inter-modality missingness, our CorrKD has superior performance among 6 \fTable 2. Comparison results under six testing conditions of inter-modality missingness and the complete-modality condition on IEMOCAP. Models Categories Testing Conditions {l} {a} {v} {l, a} {l, v} {a, v} Avg. 
{l, a, v} Self-MM [62] Happy 66.9 52.2 50.1 69.9 68.3 56.3 60.6 90.8 Sad 68.7 51.9 54.8 71.3 69.5 57.5 62.3 86.7 Angry 65.4 53.0 51.9 69.5 67.7 56.6 60.7 88.4 Neutral 55.8 48.2 50.4 58.1 56.5 52.8 53.6 72.7 CubeMLP [37] Happy 68.9 54.3 51.4 72.1 69.8 60.6 62.9 89.0 Sad 65.3 54.8 53.2 70.3 68.7 58.1 61.7 88.5 Angry 65.8 53.1 50.4 69.5 69.0 54.8 60.4 87.2 Neutral 53.5 50.8 48.7 57.3 54.5 51.8 52.8 71.8 DMD [22] Happy 69.5 55.4 51.9 73.2 70.3 61.3 63.6 91.1 Sad 65.0 54.9 53.5 70.7 69.2 61.1 62.4 88.4 Angry 64.8 53.7 51.2 70.8 69.9 57.2 61.3 88.6 Neutral 54.0 51.2 48.0 56.9 55.6 53.4 53.2 72.2 MCTN [32] Happy 76.9 63.4 60.8 79.6 77.6 66.9 70.9 83.1 Sad 76.7 64.4 60.4 78.9 77.1 68.6 71.0 82.8 Angry 77.1 61.0 56.7 81.6 80.4 58.9 69.3 84.6 Neutral 60.1 51.9 50.4 64.7 62.4 54.9 57.4 67.7 TransM [46] Happy 78.4 64.5 61.1 81.6 80.2 66.5 72.1 85.5 Sad 79.5 63.2 58.9 82.4 80.5 64.4 71.5 84.0 Angry 81.0 65.0 60.7 83.9 81.7 66.9 73.2 86.1 Neutral 60.2 49.9 50.7 65.2 62.4 52.4 56.8 67.1 SMIL [26] Happy 80.5 66.5 63.8 83.1 81.8 68.2 74.0 86.8 Sad 78.9 65.2 62.2 82.4 79.6 68.2 72.8 85.2 Angry 79.6 67.2 61.8 83.1 82.0 67.8 73.6 84.9 Neutral 60.2 50.4 48.8 65.4 62.2 52.6 56.6 68.9 GCNet [23] Happy 81.9 67.3 66.6 83.7 82.5 69.8 75.3 87.7 Sad 80.5 69.4 66.1 83.8 81.9 70.4 75.4 86.9 Angry 80.1 66.2 64.2 82.5 81.6 68.1 73.8 85.2 Neutral 61.8 51.1 49.6 66.2 63.5 53.3 57.6 71.1 CorrKD Happy 82.6 69.6 68.0 84.1 82.0 70.0 76.1 87.5 Sad 82.7 71.3 67.6 83.4 82.2 72.5 76.6 85.9 Angry 82.2 67.0 65.8 83.9 82.8 67.3 74.8 86.1 Neutral 63.1 54.2 52.3 68.5 64.3 57.2 59.9 71.5 w/o SCD w/o RCD w/o CPD CorrKD 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 20 30 40 50 60 70 80 90 Missing Ratio F1 Score Figure 5. Ablation results of intra-modality missingness using various missing ratios on MOSI. the majority of metrics, proving its strong robustness. For example, on the MOSI dataset, CorrKD\u2019s average F1 socre is improved by 0.85% compared to GCNet, and in particular by 3.72% in the testing condition where language modality is missing (i.e., {a, v}). The merit stems from the proTable 3. Ablation results for the testing conditions of intermodality missingness on MOSI. Models Testing Conditions {l} {a} {v} {l, a} {l, v} {a, v} Avg. {l, a, v} CorrKD 81.20 66.52 60.72 83.56 82.41 73.74 74.69 83.94 w/o SCD 78.80 64.96 57.49 81.95 80.53 71.05 72.46 82.13 w/o CPD 79.23 63.72 57.83 80.11 79.45 70.53 71.81 82.67 w/o RCD 79.73 65.32 59.21 82.14 81.05 72.18 73.27 83.05 posed framework\u2019s capability of decoupling and modeling potential correlations at multiple levels to capture discriminative and holistic sentiment semantics. (iii) In the unimodal testing conditions, the performance of CorrKD with only the language modality favorably outperforms other cases, with comparable results to the complete-modality 7 \fHappy Sad Angry Neutral (c) GCNet (a) Self-MM (b) MCTN (d) CorrKD Figure 6. Visualization of representations from different methods with four emotion categories on the IEMOCAP testing set. The default testing conditions contain intra-modality missingness (i.e., missing ratio p = 0.5 ) and inter-modality missingness (i.e., only the language modality is available). The red, orange, green, and blue markers represent the happy, angry, neutral, and sad emotions, respectively. case. In the bimodal testing conditions, cases containing the language modality perform the best, even surpassing the complete-modality case in individual metrics. 
This phenomenon proves that the language modality encompasses the richest knowledge information and dominates the sentiment inference and missing semantic reconstruction. 4.4. Ablation Studies To validate the effectiveness and necessity of the proposed mechanisms and strategies in CorrKD, we conduct ablation studies under the two missing-modality cases on the MOSI dataset, as shown in Table 3 and Figure 5. The principal findings are outlined as follows. (i) When SCD is eliminated, there is a noticeable degradation in model performance under both missing cases. This phenomenon suggests that mining and transferring comprehensive cross-sample correlations is essential for recovering missing semantics in the student network. (ii) The worse results under the two missing modality scenarios without CPD indicate that capturing cross-category feature variations and correlations facilitates deep alignment of the feature distributions between both networks to produce robust joint multimodal representations. (iii) Moreover, we substitute the KL divergence loss for the proposed RCD. The resulting performance decline implies that decoupling heterogeneous responses and maximizing mutual information between homogeneous responses motivates the student network to adequately reconstruct meaningful sentiment semantics. 4.5. Qualitative Analysis To intuitively show the robustness of the proposed framework against modality missingness, we randomly choose 100 samples from each emotion category on the IEMOCAP testing set for visualization analysis. The comparison models include Self-MM [62] (i.e., a complete-modality method), MCTN [32] (i.e., a joint learning-based missing-modality method), and GCNet [23] (i.e., a generative-based missing-modality method). (i) As shown in Figure 6, Self-MM cannot address the modality missing challenge, as the representations of different emotion categories are heavily confounded, leading to the least favorable outcomes. (ii) Although MCTN and GCNet somewhat alleviate the issue of indistinct emotion semantics, their effectiveness remains limited since the distribution boundaries of the different emotion representations are generally ambiguous and coupled. (iii) Conversely, our CorrKD ensures that representations of the same emotion category form compact clusters, while representations of different categories are clearly separated. These observations confirm the robustness and superiority of our framework, as it sufficiently decouples inter-sample, inter-category and inter-response correlations. 5. Conclusions In this paper, we present a correlation-decoupled knowledge distillation framework (CorrKD) to address diverse missing modality dilemmas in the MSA task. Concretely, we propose a sample-level contrastive distillation mechanism that utilizes contrastive learning to capture and transfer cross-sample correlations to precisely reconstruct missing semantics. Additionally, we present a category-guided prototype distillation mechanism that learns cross-category correlations through category prototypes, refining sentiment-relevant semantics for improved joint representations. Eventually, a response-disentangled consistency distillation strategy is proposed to encourage distribution alignment between the teacher and student networks. Extensive experiments confirm the effectiveness of our framework. Acknowledgements This work is supported in part by the Shanghai Municipal Science and Technology Committee of Shanghai Outstanding Academic Leaders Plan (No. 21XD1430300), and in part by the National Key R&D Program of China (No.
| 2021ZD0113503).", |
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2403.00336v1", |
| "title": "Never-Ending Embodied Robot Learning", |
| "abstract": "Relying on large language models (LLMs), embodied robots could perform\ncomplex multimodal robot manipulation tasks from visual observations with\npowerful generalization ability. However, most visual behavior-cloning agents\nsuffer from manipulation performance degradation and skill knowledge forgetting\nwhen adapting into a series of challenging unseen tasks. We here investigate\nthe above challenge with NBCagent in embodied robots, a pioneering\nlanguage-conditioned Never-ending Behavior-Cloning agent, which can continually\nlearn observation knowledge of novel robot manipulation skills from\nskill-specific and skill-shared attributes. Specifically, we establish a\nskill-specific evolving planner to perform knowledge decoupling, which can\ncontinually embed novel skill-specific knowledge in our NBCagent agent from\nlatent and low-rank space. Meanwhile, we propose a skill-shared semantics\nrendering module and a skill-shared representation distillation module to\neffectively transfer anti-forgetting skill-shared knowledge, further tackling\ncatastrophic forgetting on old skills from semantics and representation\naspects. Finally, we design a continual embodied robot manipulation benchmark,\nand several expensive experiments demonstrate the significant performance of\nour method. Visual results, code, and dataset are provided at:\nhttps://neragent.github.io.", |
| "authors": "Wenqi Liang, Gan Sun, Qian He, Yu Ren, Jiahua Dong, Yang Cong", |
| "published": "2024-03-01", |
| "updated": "2024-03-01", |
| "primary_cat": "cs.RO", |
| "cats": [ |
| "cs.RO", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Distillation", |
| "gt": "Never-Ending Embodied Robot Learning", |
| "main_content": "Introduction Embodied robot learning (ERL) has attracted growing interests in merging machine learning with robot control system to solve various manipulation tasks. With the success 1State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences 2Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences 3University of Chinese Academy of Sciences 4South China University of Technology. Correspondence to: Gan Sun <sungan1412@gmail.com>. \u673a\u5668\u4eba\u5b66\u56fd\u5bb6\u91cd\u70b9\u5b9e\u9a8c\u5ba4 2 ERL agent NBCagent Sweep garbage Stack blocks Initial training Open drawer Open drawer Generalization Continual learning Frozen Training Figure 1. Illustration comparison between ERL and our NBCagent, where ERL performs novel task generalization over a fixed dataset, and our NBCagent can continually learn novel skill knowledge without catastrophic forgetting. of large language models, language-conditioned embodied robot can understand language instructions of users to achieve complex robot tasks and hold significant prospects for applications in industry, healthcare and home robot (Jiang et al., 2022; Brohan et al., 2022). Current languageconditioned behavior-cloning methods focus on address multi-modal data to efficiently execute complex manipulation tasks with visual observations. For instance, PerAct (Shridhar et al., 2023) utilizes a PerceiverIO Transformer (Jaegle et al., 2021) to encode language goals and RGBD voxel observations, subsequently generating discretized robotic actions. As shown in Fig. 1, they could leverage robust scene understanding capabilities of large language models, and perform task generalization to achieve various novel manipulation tasks. However, the task generalization performance of most existing methods (e.g., PerAct (Shridhar et al., 2023)) is constrained when undertaking never-ending manipulation tasks and handling novel objects with intricate structures. Similar to the way humans acquire skills, this motivates us to enable robots to learn novel challenging manipulation skills in a continual learning manner. For instance, a home robot in the open-ended world is expected to consecutively learn various novel manipulation skills to meet the evolving needs of their owners. A trivial approach for this scenario involves enabling the robot to relearn these complex skills via visual observations, while avoiding forgetting previously acquired skills. However, the majority of existing methods assume training on a fixed dataset and only achieve this via storing old data and retraining model on all data, which leads 1 arXiv:2403.00336v1 [cs.RO] 1 Mar 2024 \fNever-ending Embodied Robot Learning to a large computational burden and high cost of memory storage, thereby being limited in real world. To address the aforementioned scenarios, we consider a practical challenging embodied robot learning problem, i.e., Never-ending Embodied Robot Learning (NERL), where embodied agent can perform continual learning on successive behavior-cloning manipulation skills and efficiently counteract catastrophic forgetting from old skills. Continual learning (Rebuffi et al., 2017; Yang et al., 2022; Sun et al., 2023) aims to continuously acquire knowledge from a successive data stream and has achieved remarkable performance in several computer vision tasks, such as image classification, object detection, image generation and so on. 
A naive solution for the NERL problem is to directly integrate continual learning and embodied robot learning together. However, different from learning category-wise knowledge focused by most existing methods, some attributes about skill-wise knowledge should be considered: \u2022 Skill-Specific Attribute originates from distinct manipulation sequences, object recognition, and scene understanding inherent in each unique skill. However, current continual learning methods neglect this attribute, which constrains their ability to continually learning novel skills. \u2022 Skill-Shared Attribute indicates that different skills possess shared knowledge. For instance, similar object recognition and scene understanding exist between different skills such as stacking bottles and opening bottle caps. Transferring skill-shared knowledge plays a key role in addressing skill-wise knowledge forgetting. To tackle the above-mentioned challenges, we propose a pioneering language-conditioned Never-ending BehaviorCloning agent (i.e., NBCagent), which continually acquire skill-wise knowledge from skill-specific and skill-shared attributes with visual observations. To the best of our knowledge, this is an earlier attempt to explore never-ending embodied robot learning for multi-modal behavior-cloning robotic manipulation. To be specific, we propose a skillspecific evolving planner (SEP) to decouple the skill-wise knowledge in the latent and low-rank space, and focus on skill-specific knowledge learning. Additionally, we design a skill-shared semantics rendering module (SSR) and a skillshared representation distillation module (SRD) to transfer skill-shared knowledge from semantics and representation aspects, respectively. Supervised by Neural Radiance Fields (NeRFs) and a vision foundation model, the SSR can complete skill-shared semantics in 3D voxel space across novel and old skills. Furthermore, the SRD can effectively distill knowledge between old and current models to align skillshared representation. Several major contributions of our work are as follows: \u2022 We take the earlier attempt to explore a novel realworld challenging problem called Never-ending Embodied Robot Learning (NERL), where we propose Never-ending Behavior-Cloning agent (i.e., NBCagent) to address the core challenges of skill-wise knowledge learning from skillspecific and skill-shared attributes. \u2022 We design a skill-specific evolving planner to decouple the skill-wise knowledge and continually embed skill-specific novel knowledge in our NBCagent. Moreover, a skill-shared semantic rendering module and a skill-shared representation distillation module is developed to learn skill-shared knowledge from semantics and representation aspects, respectively, and further overcome catastrophic forgetting. \u2022 We present a continual embodied robot manipulation benchmark for home robotic manipulation, which consists of two manipulation scenes, kitchen and living room. Qualitative experiments demonstrate the effectiveness and robustness of our proposed NBCagent. 2. Related Work Robotic Manipulation. Recent works (Brohan et al., 2023; Chowdhery et al., 2023; Huang et al., 2022; Shah et al., 2023; Driess et al., 2023; Brohan et al., 2022; Zitkovich et al., 2023) have resulted in substantial advancements in the accomplishment of intricate tasks. VIMA (Jiang et al., 2022) proposes a novel multi-modal prompting scheme, which transforms a wide range of robotic manipulation tasks into a sequence modeling problem. 
Compared with directly using images as the manipulation input (Goyal et al., 2022; Shridhar et al., 2022), voxelizing 3D point clouds as a 3D representation (Shridhar et al., 2023; Ze et al., 2023; Goyal et al., 2023; James et al., 2022) can accomplish more complex tasks. PerAct (Shridhar et al., 2023) enables agent to perform better in robotic manipulation by voxelizing RGBD images and discretizing output actions. Continual Learning. Continual learning provides the foundation for the adaptive development of AI systems (Wang et al., 2023). The main approaches of continual learning can be categorized into three directions: Parameter regularization-based methods (Rebuffi et al., 2017; Li & Hoiem, 2017; Derakhshani et al., 2021; Douillard et al., 2020; Dong et al., 2023) balance the old and new tasks by adding more explicit regularization terms. Architecturebased methods (Jung et al., 2020; Wu et al., 2021; Wang et al., 2022; Toldo & Ozay, 2022) construct network parameters for different tasks. Replay-based methods include empirical replays (Bang et al., 2021; Rebuffi et al., 2017; Sun et al., 2022; Tiwari et al., 2022) and generative replays (Li et al., 2022; Xiang et al., 2019). Some works focus on the improvement of robots by continual learning (Ayub & Wagner, 2023; Gao et al., 2021; Hafez & Wermter, 2023; Ayub & Fendley, 2022; Ayub 2 \fNever-ending Embodied Robot Learning \u673a\u5668\u4eba\u5b66\u56fd\u5bb6\u91cd\u70b9\u5b9e\u9a8c\u5ba4 1 \u2026 \u0398\ud835\udc5d \ud835\udc5a\u22121 \u0398\ud835\udc5d \ud835\udc5a Voxelize Voxel Encoder Voxel Encoder 3D Voxel CA Block SA Block SA Block ... CA Block CA Block ... 3D Voxel CA Block SA Block SA Block ... CA Block CA Block ... Patch \u2026 Patch Old Perceiver Model Current Perceiver Model Q-function Q-function \u2112\ud835\udc60\ud835\udc5f\ud835\udc51 \u2112\ud835\udc50\ud835\udc52 \u2026 Skill-Specific Evolving Planner NeRF Skill-Shared Semantics Rendering Diffusion \u2112\ud835\udc50 \u2112s Language instruction Language encoder \u2026 \uf056 Novel Task Detection Latent Space Low-rank Space CA Layer Concatenate Residual Skip MLP Layer \ud835\udc44 \ud835\udc3e \ud835\udc49 CA Block Selected elements \u2295 \u2295 RGB-D Input Task \ud835\udc5a\u22121 Task \ud835\udc5a Task \ud835\udc5a+ 1 \u2026 \u2026 Figure 2. Overview of the proposed NBCagent, where the perceiver model can continually learn novel manipulation skills. A skill-specific evolving planner is designed to learn skill-specific knowledge from latent and low-rank space. Meanwhile, we develop a skill-shared semantics rendering module and skill-shared representation distillation loss Lsrd to transfer skill-shared knowledge from semantics and representation aspects. et al., 2023). LOTUS (Wan et al., 2023) stores a few human demos of novel tasks into the growing skill library, enabling lifelong learning ability for robots. However, these methods cannot be applied to perform language-conditioned behaviour-cloning manipulation as they are incapable of handling multi-modal skill data. Furthermore, they neglect two inherent attributes in learning skill-wise knowledge, and suffer catastrophic forgetting on old skills. 3. Methodology 3.1. Problem Definition and Overview Problem Definition. Following traditional continual learning methods (Rebuffi et al., 2017; Douillard et al., 2020), we define a multi-modal skill data stream as T = {T m}M m=1, where M denotes the number of incremental tasks. 
Each incremental task consists of various robotic manipulation skills, resulting in a total of N m manipulation skills. The m-th incremental task T m = {Dm i }N d i=1 consists of N d skill demonstrations. Specifically, each skill demonstration can be extracted to a set of keyframe actions, i.e., Dm i = {km,i j }N k j=1, km,i j = {am,i j , rm,i j , lm,i}, where N k is the total keyframe action quantity. Given the current state in action space a, the structured observation r and language instruction l, the agent is expected to predict the next best keyframe action, which can be served as an action classification task (James et al., 2022). Additionally, the structured observation r is composed of the RGB-D images captured by a single front camera. An action state a can be divided into a discretized translation atran \u2208R3, rotation arot \u2208R(360/5)\u00d73, gripper open state agrip \u2208{0, 1}, and collision avoidance acol \u2208{0, 1}. Here the rotation parameter arot entails the discretization of each rotation axis into a set of R = 5 bins. The collision avoidance parameter acol provides guidance to the agent regarding the imperative need to avoid collisions. Overview. The overview of our NBCagent to learn skillwise knowledge is shown in Fig. 2. When observing a novel incremental task T m, we initialize the perceiver model \u0398m p for the current task utilizing the model \u0398m\u22121 p obtained from the last task and store \u0398m\u22121 p as a teacher model to perform our SRD module (we refer to PerceiverIO (Jaegle et al., 2021) as perceiver model for brevity). Following ER (Chaudhry et al., 2019), we build a memory buff M to replay only few samples form the previous tasks. In the m-th task, our NBCagent aims to execute all learned multi-modal robotic manipulation skills, achieved through iteratively optimizing the model on T m and M. Specifically, as shown in Fig. 2, given a RGB-D and language input, \u0398m p first encode RGB-D input to obtain a deep 3D voxel utilizing a scaled-down voxel encoder Em v . Then, we design a SSR module to transfer and complete skill-shared semantics across novel and old skills, which can effectively address catastrophic forgetting on old skills. After that, the patched voxel and language embeddings are sent to crossattention blocks and self-attention blocks to perform feature extraction and semantics fusion, where we propose SEP to learn skill-specific knowledge from latent and low-rank space. Finally, we utilize a Q-function head to predict the state of the next keyframe in voxel space, where we develop a SRD loss Lsrd to tackle catastrophic forgetting by aligning skill-shared representation. 3 \fNever-ending Embodied Robot Learning 3.2. Skill-Specific Evolving Planner Aiming at learning category-wise knowledge, existing continual learning methods (Rebuffi et al., 2017; Douillard et al., 2020) assume that the knowledge acquired from novel tasks and that from previous tasks are mutually independent. Differently, for learning skill-wise knowledge, we consider decoupling the knowledge to the skill-shared knowledge and skill-specific knowledge. For instance, an embodied agent aims to stack the wine bottle after learning to open the wine bottle. It does not require relearning the skill-shared knowledge of scene understanding and object recognition; instead, the agent is expected to focus on acquiring the skill-specific knowledge related to the operation sequence of stacking and reducing forgetting on skill-shared knowledge. 
Considering this motivation, we design a skill-specific evolving planner (SEP) to perform knowledge decoupling, enabling effectively continual learning of novel skills. Specifically, we first develop an adaptive language semantic bank to retrieve the skill-specific language semantic information. In light of this information, SEP can encode the multi-modal input from skill-wise latent and low-rank space, and learn the novel knowledge through a skill-specific network. First of all, when observing a novel skill, we utilize a language encoder Ec from CLIP (Radford et al., 2021) to encode the language instruction. This can obtain a skillspecific language semantic information ls \u2208RDc, and text token information lx \u2208RN c\u00d7Dx, i.e., ls, lx = Ec(l), where Ds and Dx represent the dimension of ls and lx, and N c denotes the token length. Then, we compensate semantic information for our adaptive language semantic bank B during training via an exponential moving average strategy: B[I, :] = (1 \u2212Cmax)B[I, :] + Cmaxls, (1) where B \u2208RN b\u00d7Dc is initialized by N b zero vectors. C \u2208 RN b is the cosine similarity matrix between the sentence information ls and each vector in B. if Cmax > \u03b4, we set I = argmax(C); otherwise, we set I = nonzero(B) + 1 and Cmax = 1, where \u03b4 is a fixed threshold, and nonzero(\u00b7) is a operation employed to calculate the number of nonzero vectors in B. To this end, we can obtain the skill-wise code I for the mini-batch training data, which plays a key role in performing skill-specific network training in the following. Furthermore, different from existing multi-modal ERL methods (Shridhar et al., 2023; Ze et al., 2023) that only encode multi-modal input from a skill-shared latent space, our NBCagent considers to develop a dynamic skill-specific latent space S \u2208RN s\u00d7N l\u00d7Ds, where N s denotes the number of learned skills, and N l represents the learnable latent vector quantity. NBCagent encodes these latent vectors with the multi-modal input utilizing a cross-attention layer to obtain the final feature. Specifically, following (Shridhar et al., 2023), we first to apply a scaled-down 3D convolution enAlgorithm 1 Pipeline of Our ESP. Require: Initialized adaptive language semantic bank B with N b zero vectors; Initialized dynamic skill-specific latent space S = \u2205; Initialized low-rank space W = \u2205; The hyper-parameter \u03b4; Input: language embedding ls; Output: S[I, :], W[I, :]; 1: Compute cosine similarity matrix C between B and ls; 2: if Cmax > \u03b4 then 3: I \u2190 \u2212argmax(C); 4: Update B by Eq. (1); 5: Return: S[I, :], W[I, :]; 6: else 7: I \u2190 \u2212nonzero(B + 1); 8: Expand B by Eq. (1); 9: Randomly initialize S[I, :], W[I, :]; 10: Return: S[I, :], W[I, :]; 11: end if coder to patch and encode a voxel observation v to obtain \u02c6 v, where v is obtained by voxelization process from r. Then, we concatenate the encoded proprioception of agent in current state a and voxel observation \u02c6 v to obtain p \u2208RN p\u00d7D. In light of this, we employ a cross-attention layer to perform semantics interaction and obtain the cross-attention feature Fc \u2208R(N p+N c)\u00d7D: Fc = \u03c1(Cat(p, cx)Wq(S[I, :]Wk)\u22a4 \u221a d )(S[I, :]Wv), (2) where Wq, Wk, Wv \u2208RD\u00d7D are linear projection layers, and Cat(\u00b7) denotes the concatenation operation. \u03c1 indices the softmax function and d is a scaling factor. 
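To ground Eq. (2), the following is a minimal PyTorch sketch of a skill-specific cross-attention layer in which the queries come from the concatenated voxel/proprioception and text tokens and the keys and values come from the per-skill latent set S[I, :]; the module name, dimensions, and initialization are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkillSpecificCrossAttention(nn.Module):
    """Cross-attention between multimodal tokens and a per-skill latent set (Eq. 2).

    The latent bank holds N_l learnable vectors for every skill learned so far;
    indexing it with the skill code I selects the skill-specific latent space."""
    def __init__(self, dim, num_skills, num_latents):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_skills, num_latents, dim) * 0.02)
        self.w_q = nn.Linear(dim, dim, bias=False)
        self.w_k = nn.Linear(dim, dim, bias=False)
        self.w_v = nn.Linear(dim, dim, bias=False)

    def forward(self, tokens, skill_id):
        # tokens: (N_p + N_c, dim), i.e., Cat(p, c_x) from the paper.
        s = self.latents[skill_id]                              # (N_l, dim) skill-specific latents
        q = self.w_q(tokens)
        k, v = self.w_k(s), self.w_v(s)
        attn = F.softmax(q @ k.T / q.size(-1) ** 0.5, dim=-1)   # (N_p + N_c, N_l)
        return attn @ v                                         # cross-attention feature F_c
```

In an NBCagent-style model, one such latent set would be selected by the skill code I returned from the adaptive language semantic bank described above.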
Then, we apply two additional linear projection layers Wo to handle the cross-attention feature Fc. Similarly, we apply a series of self-attention blocks and a cross-attention decoder to further extract feature. In light of this, NBCagent can continually encode the novel multi-modal input from a skill-specific latent space, which is beneficial to learn some skill-specific knowledge from latent space. Considering the limitation in representing skill-specific knowledge from latent space, we further explore to learn skill-specific knowledge from low-rank space. Specifically, we introduce a low-rank adaptation layer (LoRA) (Hu et al., 2021) that can learn skill-specific knowledge in an efficient manner. For Wq, Wk, Wv, Wo in each attention block, we design a set of skill-specific LoRA layers to perform skill-specific forward and obtain the final output feature Fx as follows: Fx = XW + XWr[I, :] = XW + XWa[I, :]Wb[I, :], (3) where W \u2208RD\u00d7D represents one of Wq, Wk, Wv, Wo and X \u2208R(N v+N p)\u00d7D denotes the input feature for these 4 \fNever-ending Embodied Robot Learning Algorithm 2 Optimization Pipeline of Our NBCagent. Require: Robotics incremental tasks {T m}M m=1 with datasets T m = {Dm}N d i=1; Initialized perceiver model: \u03980 p; Initialized NeRF model: \u03980 n; Pre-trained diffusion model: \u0398u; Pre-trained CLIP language encoder: Ec; Initialized memory buff: M = \u2205; Iterations: {Im}M m=1. 1: #While observing a new task T m: 2: for z = 1, 2, \u00b7 \u00b7 \u00b7 , Im do 3: Randomly select keyframe km j from {Dm i }N d i=1 \u222aM; 4: Obtain vm, vm\u22121, ls, lx utilizing Em v , Em\u22121 v and Ec; 5: Compute Lssr by SSR (vm, vm\u22121, lx, \u0398u) using Eq. (4) and Eq. (12); 6: S[I, :], W[I, :] \u2190 \u2212ESP (ls); 7: Compute Lce, Lsrd utilizing S[I, :], W[I, :], \u0398m p , \u0398m\u22121 p ; 8: Update \u0398m p by Eq. (15); 9: end for 10: Store few samples from {Dm i }N d i=1 in M; 11: Return: \u0398m p , M. projection layers. Wr[I, :] = Wa[I, :]Wb[I, :] is a lowrank decomposition, where Wa \u2208RN l\u00d7D\u00d7N r is initialized in a random Gaussian manner, Wb \u2208RN l\u00d7N r\u00d7D is initialized by zero and N r is a hyper-parameter controlling the size of LoRA layers. On the one hand, obviously, W is shared among all skills and expected to learn skill-shared knowledge. On the other hand, each Wr is executed to learn different novel skills and perform skill-specific forward, resulting in continually embedding skill-specific knowledge to our NBCagent. Specifically, we summary the process of our SEP in Algorithm 1. 3.3. Skill-Shared Semantics Rendering Module For language-conditional behaviour-cloning manipulation, a comprehensive semantics understanding of the 3D scene (Driess et al., 2022) plays a key role in enabling agent to perform complicated manipulation skills. Especially in NERL problem, there exist skill-shared semantics across various skills, such as 3D object and scene semantics. The existence of forgetting on semantic space makes these semantics incomplete, further resulting in catastrophic forgetting on old skills. Considering this motivation, we develop a skill-shared semantics rendering module (SSR) to transfer skill-shared semantic information of 3D voxel space, where a NeRF model and vision foundation model provide semantics supervision to effectively enrich the 3D voxel semantics. 
Drawing inspiration from 3D visual representation learning methods (Shim et al., 2023; Ze et al., 2023), we leverage a latent-conditioned NeRF architecture (Yu et al., 2021) not only to synthesizes RGB color c of a novel image views like traditional NeRF (Mildenhall et al., 2021), but also to render the semantic feature s from 3D voxel space as follows: F\u0398m n (x, d, vs) = (\u03c3, c, s), (4) where F\u0398m n denotes the neural rendering function of NeRF model \u0398m n . The 3D voxel feature vs is obtained by a grid sample method based on trilinear interpolation from the 3D voxel observation v. x, \u03c3 is the 3D input point and differential density, and d represents unit viewing direction. The camera ray r can be obtained by: r = o + td, where o indicates the camera origin. By adding field-wise branches, \u0398m n performs the same neural rendering function F\u0398m n to estimate RGB color c and semantic feature s. Thus, the same accumulated transmittance T(t) is shared to predict the two different fields and is defined as follows: T(t) = exp(\u2212 Z t tn \u03c3(s)ds). (5) In light of this, a RGB image C and 2D semantic map S can be rendered as : C(r, vs) = Z tf tn T(t)\u03c3(r, vs)c(r, d, vs)dt, (6) S(r, vs) = Z tf tn T(t)\u03c3(r, vs)s(r, d, vs)dt. (7) To distill skill-shared knowledge, we initialize a NeRF model acquired from the last task and denote it as \u0398m\u22121 n . Then, we feed the same input to obtain the pseudo ground truth \u02c6 C by Eq. 6. We design a loss function to supervise the reconstruction process as follows: Lc = X r\u2208R \u2225C(r, vs) \u2212Yc(r)\u22252 2 + \u03b2 X r\u2208R \u2225C(r, vs) \u2212\u02c6 C(r, vs)\u22252 2 \u00b7 Ivs / \u2208T m, (8) where Yc indicates the ground truth color and R is the set of all camera rays. \u03b1 is the hyper-parameter to control the weight of loss function. Ivs / \u2208T m is defined such that Ivs / \u2208T m = 1 when the condition vs / \u2208T m is satisfied, and Ivs / \u2208T m = 0 otherwise. Considering the insufficiency in capturing skill-shared semantics by reconstructing novel views, we introduce a pretrained visual foundation model that contains robust scene semantics to provide supervision. Relying on being pretrained on large-scale vision-language dataset, Stable Diffusion model (Rombach et al., 2022) can possess robust intrinsic representational capabilities and are consequently utilized for semantic representation in segmentation and classification tasks (Xu et al., 2023; Li et al., 2023). In light of this, we employ a text-to-image Stable Diffusion model \u0398u to extract vision-language semantic feature for supervising. Specifically, given a input view, i.e., Yc, we perform 5 \fNever-ending Embodied Robot Learning \u673a\u5668\u4eba\u5b66\u56fd\u5bb6\u91cd\u70b9\u5b9e\u9a8c\u5ba4 1 Initial state: Ours-w/o SEP&SRD: Ours-w/o SRD: Ours: Stack wine bottle Water plant Pick cup up Take plate off Hang frame on Figure 3. Visualization of prediction results on various manipulation skills between Ours, Ours-w/oSRD and Ours-w/oSEP&SRD. a one-step noise adding process to obtain a a noisy image Yc,t. Then we utilize our diffusion model \u0398u to collect vision-language semantic feature as the ground truth \u02c6 S: Yc,t(r) := \u221a\u03b1tEv(Yc(r)) + \u221a 1 \u2212\u03b1t\u03f5, (9) \u02c6 S(r, lp) = \u0398u(Yc,t(r), Ec(lp)), (10) where Ev is a VAE encoder to encode image Yc from pixel space to latent semantic space. 
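A minimal numerical sketch of the shared rendering in Eqs. (5)–(7) is given below: a single accumulated transmittance weights both the color field and the semantic field along each camera ray. The discretized quadrature and variable names are assumptions; the paper's continuous integrals are approximated by per-sample sums.

```python
# Discretized volume rendering of RGB and semantic maps along one ray.
import torch

def render_ray(sigma, color, sem, deltas):
    """sigma: (S,) densities, color: (S, 3), sem: (S, D) semantic features,
    deltas: (S,) distances between consecutive samples along the ray."""
    alpha = 1.0 - torch.exp(-sigma * deltas)              # opacity per sample
    # T_i = exp(-sum_{j<i} sigma_j * delta_j): the shared transmittance of Eq. (5)
    trans = torch.exp(-torch.cumsum(
        torch.cat([torch.zeros(1), sigma[:-1] * deltas[:-1]]), dim=0))
    weights = trans * alpha                               # (S,)
    C = (weights[:, None] * color).sum(dim=0)             # rendered RGB, Eq. (6)
    S = (weights[:, None] * sem).sum(dim=0)               # rendered semantics, Eq. (7)
    return C, S
```

Because the same weights serve both fields, the color reconstruction loss and the semantic alignment loss supervise a single underlying density, which is what lets the old-task NeRF act as a pseudo ground truth for skill-shared semantics.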
t represents the diffusion process step, \u03f5 \u223cN(0, 1) and \u03b1t is designed to control the noise schedule. lp denotes the language prompt modified from task description l. To this end, we align the rendered semantic feature S and diffusion feature \u02c6 S to perform semantics transfer as follows: Ls = X r\u2208R \u2225S(r, vs) \u2212\u02c6 S(r, lp)\u22252 2. (11) In summary, the major objective of our SSR module to complete skill-shared semantics can be expressed as: Lsrr = Lc + \u03bb1Ls, (12) where \u03bb1 is the hyper-parameter. 3.4. Skill-Shared Representation Distillation Module To address catastrophic forgetting on old skills, we develop a skill-shared representation distillation module (SRD) to align skill-shared representation, as presented in Fig. 2. Specifically, given a multi-modal keyframe input km j = {am j , rm j , lm} from a mini batch, we can obtain a keyframe prediction Pm(km j , \u0398m p ) = {Pm j,tran, Pm j,rot, Pm j,grip, Pm j,col}. To supervise our NERagent to learn skill-wise knowledge, we follow (Shridhar et al., 2023) to introduce a cross-entropy loss for optimizing as follows: Lce = \u22121 B B X j=1 Ym j log(Pm(km j , \u0398m p )), (13) where B represents the batch size, and Ym j = {Ym j,tran, Ym j,rot, Ym j,grip, Ym j,col} is the ground truth for predicting the next keyframe. In light of this, NBCagent can continually learn skill-wise knowledge from current dataset T m and memory buff M. However, due to the limited amount of data available from old skills in memory buff M, the data imbalance occurs between novel and old skills, further resulting in overfitting to learn novel skillwise knowledge and forgetting skill-shared knowledge on old skills. To address the aforementioned problems, we take an attempt to employ knowledge distillation in NERL problem. Specifically, we initialize a teacher model with the perceiver model \u0398m\u22121 p from the last task to extract the soft label: \u02c6 Ym j = Pm(km j , \u0398m\u22121 p ), \u02c6 Ym j = { \u02c6 Ym j,tran, \u02c6 Ym j,rot, \u02c6 Ym j,grip, \u02c6 Ym j,col} and apply the KullbackLeibler divergence to align the outputs of two agents as follows: Lsrd = 1 \u02c6 B B X j=1 \u03c1( \u02c6 Ym j /\u03c4)log( \u03c1( \u02c6 Ym j /\u03c4) \u03c1(Pm(km j , \u0398m p )/\u03c4)) \u00b7 Ikm j / \u2208T m, (14) 6 \fNever-ending Embodied Robot Learning Table 1. Comparisons of success rate (%) on Kitchen and Living Room. Red and Blue represents the highest results and runner-up. Comparison Methods 5-5 (2 steps) 5-1 (6 steps) 6-3 (3 steps) 6-2 (4 steps) 1-5 6-10 All Avg. For. 1-5 6-10 All Avg. For. 1-6 7-12 All Avg. For. 1-6 7-12 All Avg. For. 
PerAct 58.9 30.1 44.5 \u2212 \u2212 58.9 30.1 44.5 \u2212 \u2212 34.7 27.3 31.0 \u2212 \u2212 34.7 27.3 31.0 \u2212 \u2212 GNFactor 56.3 32.3 44.3 \u2212 \u2212 56.3 32.3 44.3 \u2212 \u2212 48.0 32.0 40.0 \u2212 \u2212 48.0 32.0 40.0 \u2212 \u2212 Fine-Tuning 15.7 26.4 21.1 38.9 41.1 3.2 20.0 9.6 13.8 39.1 4.4 29.8 17.1 29.0 44.1 6.2 19.1 12.7 24.2 43.9 ER 56.0 25.6 40.8 50.0 3.2 53.6 29.6 41.6 49.7 9.8 43.8 31.1 37.4 42.2 7.7 40.7 30.2 35.4 38.1 17.6 Ours-w/oSEP&SRD 56.0 26.7 41.3 49.1 0.8 56.8 30.4 43.6 50.2 7.1 42.0 33.1 37.6 45.3 12.6 38.4 35.6 37.0 40.9 16.5 Ours-w/oSRD 41.1 38.1 39.6 49.8 18.9 48.0 41.6 44.8 53.9 14.7 41.1 34.4 37.8 45.9 15.6 40.4 39.1 39.8 43.3 19.1 Ours 53.6 36.3 44.9 52.5 6.4 54.4 37.6 46.0 55.2 8.9 44.9 42.2 43.6 47.6 7.7 45.8 35.3 40.6 43.5 10.9 \u673a\u5668\u4eba\u5b66\u56fd\u5bb6\u91cd\u70b9\u5b9e\u9a8c\u5ba4 3 Language: Press the red button with the green base Input View RGB GT RGB RD Semantics GT Semantics RD Language: Slide the bottom drawer open Language : Sweep the dirt up into the short dustpan Figure 4. Visualization of rendering results in our SSR module. RGB GT denotes the color ground truth, and Semantics GT represents the semantics ground truth extracted by Stable Diffusion model. RGB RD and Semantics RD are the rendering novel view and semantic feature. where \u02c6 B = PB j=1 Ikm j / \u2208T m, and \u03c4 is a temperature hyperparameter. In conclusion, to perform skill-specific knowledge learning, we first develop SEP to accumulate skill-specific knowledge on latent and low-rank space, thereby effectively learning novel skills. Furthermore, SSR and SRD modules are designed to transfer skill-shared knowledge from semantics and representation aspects, resulting in efficiently tackling old skill forgetting. The optimization of our NBCagent can be simplified as: Ltotal = Lce + Lsrr + \u03bb2Lsrd. (15) 4. Experiments 4.1. Implementation Details Dataset. Following PerAct, we conduct our experiments on RLBench (James et al., 2020) and simulate in CoppelaSim (Rohmer et al., 2013). To simulate the working scenarios of NERL, we design two NERL benchmark datasets, called Kitchen and Living Room. Specifically, Kitchen is constructed by gathering 10 manipulation skills pertinent to kitchen environments, and Living Room consists of 12 manipulation skills associated with living room scenarios. Each manipulation skill includes a training set of 20 episodes and a test set of 25 episodes. Furthermore, these skills involve various variations encompassing randomly sampled attributes such as colors, sizes, counts, placements, and object categories, resulting in a total of 101 distinct variations. We provide comprehensive details of two benchmark datasets in Appendix A. Baselines. We conduct the comprehensive evaluation between our NBCagent and the following four methods: PerAct (Shridhar et al., 2023), GNFactor (Ze et al., 2023), Fine-Tuning and ER (Chaudhry et al., 2019). PerAct and GNFactor joint train all manipulation skills within one-stage training, for two datasets respectively, referred to as the upperbound. Fine-Tuning achieves continual learning by fine-tuning all parameters in perceiver model on novel skills. ER randomly stores old skill data to memory buff and replay them when detecting novel skills. Training Details. In NERL, we assume that the agent undergoes initial learning through a set of manipulation skills, referred to as base task, while characterizing novel skills as incremental tasks. 
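For the SRD objective in Eq. (14), a hedged sketch of the temperature-softened, indicator-masked KL term is shown below; the function signature and the Boolean replay mask standing in for the indicator over keyframes outside the current task are assumptions.

```python
# Hedged sketch of the skill-shared representation distillation loss, Eq. (14):
# KL(teacher || student) on temperature-softened outputs, averaged only over
# replayed keyframes from old skills (the indicator / B-hat normalization).
import torch
import torch.nn.functional as F

def srd_loss(student_logits, teacher_logits, is_replay, tau: float = 2.0):
    """student_logits, teacher_logits: (B, K); is_replay: (B,) bool mask."""
    p_teacher = F.softmax(teacher_logits / tau, dim=-1)
    log_p_student = F.log_softmax(student_logits / tau, dim=-1)
    # per-sample KL divergence between teacher and student distributions
    kl = (p_teacher * (p_teacher.clamp_min(1e-8).log() - log_p_student)).sum(-1)
    n_replay = is_replay.float().sum().clamp_min(1.0)     # B-hat in Eq. (14)
    return (kl * is_replay.float()).sum() / n_replay
```

In the full objective of Eq. (15) this term is weighted by lambda_2 and added to the behaviour-cloning cross-entropy and the skill-shared rendering loss.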
On Kitchen dataset, the base task includes 5 manipulation skills, and each incremental task consists of 1 manipulation skill (total 6 steps) and 5 manipulation skills (total 2 steps), marked as 5-1 and 5-5. Likewise, on Living Room dataset, 12 manipulation skills are divided into two NERL settings: 6-3 (total 3 steps) and 6-2 (total 4 steps). In addition, the LAMB (You et al., 2019) optimizer is applied for all methods with a initial learning rate of 5.0 \u00d7 10\u22124 and a batch size of 2. We utilize 100K training iterations for PerAct and GNFactor, 80K for base task training, 20K for incremental task training. We store a fixed 4 episodes of each old skill in M for ER and Ours. Additional training details are available in Appendix B. Evaluation Metric. Following ERL methods (Ze et al., 2023; Goyal et al., 2023), we use the success score (%) Im i as the basic indicator for evaluation, where Im i represents the success score of i-th manipulation skill at m-th incremental task. Specifically, we first compute the mean 7 \fNever-ending Embodied Robot Learning Table 2. Comparison results on Kitchen under the setting of 5-1. Red and Blue represents the highest results and runner-up. Comparison Methods 1 2 3 4 5 6 7 8 9 10 All Imp. PerAct 96.0 77.3 53.3 30.7 37.3 52.0 2.7 18.7 4.0 73.3 44.5 \u21d11.5 GNFactor 92.0 60.0 70.7 12.0 46.7 52.0 5.3 22.7 10.7 70.7 44.3 \u21d11.7 Fine-Tuning 17.3 0.0 0.0 0.0 0.0 8.0 0.0 0.0 9.3 64.0 9.7 \u21d136.3 ER 90.7 54.7 37.3 54.7 28.0 60.0 5.3 0.0 24.0 62.7 41.6 \u21d14.4 Ours-w/oSEP&SRD 86.7 49.3 34.7 65.3 48.0 66.7 9.3 24.0 5.3 46.7 43.6 \u21d12.4 Ours-w/oSRD 68.0 65.3 50.7 25.3 33.3 76.0 18.7 24.0 22.7 64.0 44.8 \u21d11.2 Ours 92.0 61.3 44.0 34.7 41.3 76.0 17.3 26.7 14.7 52.0 46.0 \u2212 Table 3. Comparison results on Living Room under the setting of 6-3. Red and Blue represents the highest results and runner-up. Comparison Methods 1 2 3 4 5 6 7 8 9 10 11 12 All Imp. PerAct 5.3 66.7 0.0 84.0 38.7 13.3 16.0 2.7 41.3 0.0 97.3 6.7 31.0 \u21d112.6 GNFactor 2.7 92.0 1.3 84.0 80.0 28.0 8.0 1.3 46.7 32.0 100 0.0 40.0 \u21d13.6 Fine-Tuning 8.0 18.7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 68.0 100 10.7 17.1 \u21d125.6 ER 10.7 58.7 24.0 81.3 76.0 12.0 5.3 4.0 78.7 8.0 89.3 1.3 37.4 \u21d16.2 Ours-w/oSEP&SRD 13.3 74.7 28.0 76.0 52.0 8.0 9.3 12.0 53.3 25.3 96.0 2.7 37.6 \u21d16.0 Ours-w/oSRD 8.0 78.7 6.7 81.3 60.0 12.0 9.3 20.0 52.0 9.3 90.7 25.3 37.8 \u21d15.8 Ours 17.3 74.7 9.3 82.7 62.7 22.7 22.7 9.3 85.3 9.3 98.7 28.0 43.6 \u2212 Table 4. Comparison results in terms of success rate (%) on Living Room dataset when setting the various size of memory buff M. Buffer size 6-3 (2 steps) 6-2 (4 steps) 1-6 7-12 All Avg. For. 1-6 7-12 All Avg. For. |M| = 2 42.0 41.3 41.7 44.9 4.4 32.0 30.7 31.3 37.2 18.8 |M| = 4 44.9 42.2 43.6 47.6 7.7 45.8 35.3 40.6 43.5 10.9 |M| = 6 50.0 36.7 43.3 45.0 2.2 50.0 34.7 42.3 45.4 8.0 success score after the last step for the base task (Base) IM B , incremental tasks (Novel) IM N and all manipulation skills (All) IM A . These metrics respectively reflect the robustness of old skill forgetting , the capacity of novel skill learning, as well as its overall performance. Additionally, we introduce a Avg. metric A and For. metric F to measure average performance and skill-wise forgetting rate over the whole NERL process, where A = 1 M PM m=1 Im A and F = 1 N m PN m i=1 max m\u2208{1,...,M\u22121}(Im i \u2212IM i ). 4.2. Comparison Performance We present comparison results between our NBCagent and other methods on Kitchen and Living Room datasets in Tabs. 
1, 2 and 3. As shown in Tab. 1, NBCagent significantly outperforms compared methods by 1.2% \u223c51.2% in terms of success score on base task and 5.1% \u223c17.6% on incremental tasks. This indicates that NBCagent can learn skillspecific and skill-shared knowledge, thereby effectively addressing old skill forgetting and novel skill learning. Furthermore, as presented in Tabs. 2 and 3, our model exhibits the highest mean success rate across all manipulation skills, improving by 1.2% \u223c36.3% and 3.6% \u223c25.6% respectively when compared to other methods. This demonstrates the effectiveness of our model in addressing the NERL problem. Additionally, our NBCagent achieves a large improvements about 2.5% \u223c41.4% and 0.1% \u223c36.4% in terms of Avg. and For. metrics, which suggests the robust and significant performance of NBCagent over the whole NERL process. Surprisingly, as shown in Tabs. 2 and 3, our NBCagent performs better than joint training methods, specifically PerAct and GNFactor, providing additional evidence to support the efficacy of our model. The comparison results under other settings can be found in Appendix C. 4.3. Ablation Studies To evaluate effectiveness of each module in our NBCagent, we eliminate them one by one and present results in Tabs. 1, 2 and 3. Compared to Ours, the scores of Ours-w/oSRD on both base task and all tasks are dropped by 3.8% \u223c12.5% and 0.8% \u223c5.8% respectively. This indicates that SRD can effectively learn skill-shared knowledge to tackle old task forgetting. In addition, Ours-w/oSRD outperforms Oursw/oSEP&SRD on incremental tasks by 1.3% \u223c11.4%, which demonstrates that SEP benefits our model in learning novel skills by performing skill-specific knowledge learning. Furthermore, to evaluate the effectiveness of SSR module, we visualize the rendering results in Fig. 4. It suggests that our SSR module can efficiently complete skill-shared semantics under the supervision of novel view and diffusion features, thereby achieving an improvement about 0.5% \u223c 3.1% in terms of Avg. compared with ER. The visualization results in Fig. 3 also shows the effectiveness of our model to tackle the NERL problem. We also explore the impact of various sizes of memory buffer M. As shown in Tab. 4, with |M| = 6, the forgetting rate of our model notably reduced by 1.1% \u223c10.8% in comparison to |M| = 4 and 2, respectively. This indicates that increasing the memory size significantly addresses catastrophic forgetting but also incurs a larger memory load. 8 \fNever-ending Embodied Robot Learning 5. Conclusion In this paper, we explore a pioneering Never-ending Embodied Robot Learning (NERL) problem and propose a novel NBCagent to continually learn skill-wise knowledge. Specifically, we propose a skill-specific evolving planner to decouple the skill-wise knowledge to effectively learning novel skills. In addition, we design a skill-shared semantics rendering module and skill-shared representation distillation module to tackle catastrophic forgetting on old skills from semantics and representation aspects. We develop two NERL benchmarks and expensive experiments on them verify the effectiveness of our NBCagent against baselines." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.13322v1", |
| "title": "MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities", |
| "abstract": "In this study, we focus on heterogeneous knowledge transfer across entirely\ndifferent model architectures, tasks, and modalities. Existing knowledge\ntransfer methods (e.g., backbone sharing, knowledge distillation) often hinge\non shared elements within model structures or task-specific features/labels,\nlimiting transfers to complex model types or tasks. To overcome these\nchallenges, we present MergeNet, which learns to bridge the gap of parameter\nspaces of heterogeneous models, facilitating the direct interaction,\nextraction, and application of knowledge within these parameter spaces. The\ncore mechanism of MergeNet lies in the parameter adapter, which operates by\nquerying the source model's low-rank parameters and adeptly learning to\nidentify and map parameters into the target model. MergeNet is learned\nalongside both models, allowing our framework to dynamically transfer and adapt\nknowledge relevant to the current stage, including the training trajectory\nknowledge of the source model. Extensive experiments on heterogeneous knowledge\ntransfer demonstrate significant improvements in challenging settings, where\nrepresentative approaches may falter or prove less applicable.", |
| "authors": "Kunxi Li, Tianyu Zhan, Shengyu Zhang, Kun Kuang, Jiwei Li, Zhou Zhao, Fei Wu", |
| "published": "2024-04-20", |
| "updated": "2024-04-20", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Distillation", |
| "gt": "MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities", |
| "main_content": "Introduction In an era where edge computing devices are ubiquitous, the deployment of deep neural networks (DNNs) on such devices is often constrained by limited computational resources and storage capacity. This limitation typically necessitates the utilization of smaller neural network architectures, which, while computationally economical, often compromise performance. A promising approach to mitigate this limitation involves the strategic transfer of knowledge from larger, more capable models to these constrained counterparts, aiming to elevate their performance. A quintessential embodiment of this approach is knowledge distillation [Hinton et al., 2015; Beyer et al., 2022; Shen et al., 2021]. This technique entails training a compact student model to emulate the output \u2217Corresponding author 61 63 65 67 Top-1 Acc(%) Base Parameter Sharing KD MergeNet Figure 1: The parameter sharing method is ineffective for heterogeneous knowledge transfer, and in fact, may lead to a loss of accuracy due to the incompatibility of knowledge. logic or intermediate layer representations of a more comprehensive teacher model, thereby enhancing its performance. Transfer learning [Zhuang et al., 2020; He et al., 2021; HassanPour Zonoozi and Seydi, 2023] offers another avenue, leveraging insights and patterns gleaned from one task or dataset to enhance performance on another related yet distinct task or dataset. However, typical methods in these veins require similar task formulations or partially shareable modules. These constraints potentially limit the application scope, particularly for complex model types or tasks. For instance, transferring knowledge between models with different architectures and tasks. Therefore, how to facilitate heterogeneous knowledge transfer across entirely different model architectures, tasks, and modalities is an urgent challenge to address. Unlike previous knowledge transfer methods, we consider knowledge transfer between models from a different perspective. We pivot on the intrinsic properties of model parameters, regarding them as the natural carriers of knowledge. This perspective transcends the confines of specific tasks, encapsulating knowledge within the parameters themselves. An intuitive method is to adopt the idea of parameter sharing, similar to [Cai et al., 2021] by putting the smaller model into the larger one, making it a subset of the larger model for additional supervision. We test the effectiveness of parameter sharing on two models with different architectures, ResNet50 [He et al., 2016] and MobileNetV2 [Sandler et al., 2018], on the CIFAR-100 [Krizhevsky et al., 2009]. As shown in Figure 1, the performance of MobileNetV2 suggests that diarXiv:2404.13322v1 [cs.LG] 20 Apr 2024 \frect parameter sharing may not be the optimal conduit for heterogeneous knowledge transfer. We speculate that the reasons might be: (i) Direct parameter sharing might disrupt the knowledge of the original modules when heterogeneous modules have significantly different functionalities, like sharing between linear layers and attention mechanism modules. (ii) Typically, larger models contain more advanced knowledge than smaller ones, which the latter might not directly comprehend, leading to potential incompatibility in knowledge between models due to direct parameter sharing. In this paper, we propose Knowledge Migration across Heterogeneous Models, Tasks, and Modalities, abbreviated as MergeNet, which is a universal framework for knowledge transfer. 
To address the issue of knowledge incompatibility between heterogeneous models, we introduce parameter adapters between the source and target models to refine and summarize the knowledge within the parameters. Specifically, we re-encode model parameters through low-rank decomposition, obtaining low-rank matrices that effectively encapsulate the comprehensive knowledge of the original parameters. Next, using our proposed Low-rank Parametric Knowledge Adapter (LPKA), we facilitate interactions within the low-rank parameter space and generate parameters that contain knowledge from both models, thus achieving knowledge transfer. This process can be likened to the target model extracting the knowledge it currently needs from the source model, based on its existing knowledge. We conduct on an extensive exploration of our methodology across a diverse array of scenarios, showcasing the versatility and robustness of our approach. For instance, we validate our method\u2019s proficiency in cross-structure knowledge transfer by facilitating knowledge migration between the linear classifier of one model and the convolutional layer of another. Furthermore, We demonstrate the transfer of knowledge between models operating in different modalities, as well as between models dedicated to distinct tasks. Moreover, our exploration extends into the domain of single-model scenarios. Inspired by the concept of self-distillation [Zhang et al., 2019], we investigate the internal knowledge transfer within a single model, channeling the profound insights from its deeper layers to the shallower ones. Overall, our contributions can be summarized as follows: \u2022 We introduce a novel method for knowledge transfer between heterogeneous models, specifically by using the model parameter matrix as the carrier of knowledge. This approach offers a new perspective on knowledge transfer and is independent of specific model structures and tasks. \u2022 We propose MergeNet. Unlike conventional direct knowledge transfer, this method ingeniously orchestrates the mapping of model parameters into a lowdimensional parameter space, thereby harmonizing and aligning this space to facilitate a seamless and efficient knowledge transfer. \u2022 We conduct extensive experiments in multiple challenging scenarios to validate the effectiveness of our method. The results demonstrate that MergeNet significantly improves model performance and surpasses the widelyused knowledge distillation techniques. 2 Related Work In the intricate landscape of artificial intelligence, knowledge transfer emerges as a pivotal mechanism, enabling the assimilation and application of insights gleaned from specific tasks, domains, or models to novel and related contexts. Among these, transfer learning [Zhuang et al., 2020; He et al., 2021] is the most common form of knowledge transfer, mainly involving the extraction of knowledge from one task and applying it to another different but related task to enhance the model\u2019s generalization ability and learning efficiency. This typically includes pre-training a model on large datasets or complex tasks, followed by fine-tuning it to adapt to new, more specific tasks. Domain adaptation [Farahani et al., 2021; HassanPour Zonoozi and Seydi, 2023; Ding et al., 2023; Kim et al., 2021], another facet of knowledge transfer, meticulously addresses the challenge of adapting models trained on one or multiple source domains to perform effectively in a distinct target domain. 
Multi-task learning [Crawshaw, 2020; Standley et al., 2020; Kurin et al., 2022; Groenendijk et al., 2021] aims to train models simultaneously to solve multiple related tasks, facilitating cross-task information flow through shared representations, allowing each task to learn useful features and patterns from other tasks. These cross-task learning methods not only improve data utilization and learning efficiency but also enhance the adaptability and robustness of models when faced with new tasks. Knowledge distillation (KD) [Hinton et al., 2015; Beyer et al., 2022; Shen et al., 2021] is another class of knowledge transfer methods used to transfer knowledge from large neural teacher networks to small student models. This process is achieved by training the student model to mimic the output logic [Hinton et al., 2015; Zhao et al., 2022; Yang et al., 2021; Zhou et al., 2021] or intermediate layer features [Zagoruyko and Komodakis, 2016; Yang et al., 2022a] of the teacher model. Recent research has increasingly focused on optimizing the distillation process, such as by improving loss functions and distillation strategies to enhance the performance of the student model. By artfully masking the features of the student model, MGD [Yang et al., 2022b] compels it to reconstruct the characteristic features of the teacher model, thereby ensuring a more faithful and robust transfer of knowledge. NKD [Yang et al., 2023] allows the student model to leverage the rich information embedded in these soft labels more effectively, by normalizing the non-target logits. Unlike previous knowledge transfer methods, our approach is designed to address scenarios of heterogeneous knowledge transfer across entirely different model architectures, tasks, and modalities. Technically, our method does not rely on output logic or intermediate layer features specific to any task or model architecture. Instead, it utilizes universal model parameters as carriers of knowledge. By bridging the differences in parameter spaces of heterogeneous models, our method fuses knowledge within the parameter space and maps it onto the parameters of the target model. \fA Large Model\u00a0 \u00a0 \u00a0\u00a0 A Small Model WL N \u00d7 M WS n \u00d7 m SA WS n \u00d7 m WS n \u00d7 m ... Knowledge Transfer Layer Input/ Output Copy parameters/ Load parameters SA SA SA SA Softmax Attention Weighted Sum Q K, V WL N \u00d7 M (a) MergeNet \u00a0\u00a0 (c) Parameter Re-Encode ... WL N \u00d7 M WS n \u00d7 m WS r \u00d7 m WS r \u00d7 m WL r \u00d7 M R L R L WS n \u00d7 r WS n \u00d7 r WS n \u00d7 m R Expand by row L Expand by column Matrix multiplication ReEncode ReEncode LPKA Low-rank Parametric Knowledge Adapter(LPKA) WS r \u00d7 m \uff08b\uff09Knowledge Transfer Layer (d) LPKA Copy Figure 2: Overview of MergeNet. In (a), the MergeNet takes parameters from different models as inputs and generates parameters that integrate knowledge from these models, where more knowledge transfer layers indicate a greater amount of knowledge transferred. In (c), the parameter re-encoding module adaptively adjusts the size of the parameter matrix. In (d), the Low-rank Parametric Knowledge Adapter (LPKA) acts on the re-encoded parameters, facilitating an efficient knowledge transfer. It is important to note that the descriptions in (c) and (d) are based on the knowledge transfer from Ml to Ms, but the process from Ms to Ml is completely symmetrical. 
3 Method 3.1 Problem Formulation The objective of heterogeneous knowledge transfer is to facilitate knowledge transfer between models with different architectures and tasks. We consider two models: a larger capacity model Ml and a smaller capacity model Ms, each with their own independent datasets Dl = {(x(i) l , y(i) l )}|Dl| i=1 and Ds = {(x(i) s , y(i) s )}|Ds| i=1 . We denote the weights of two models as Wl and Ws, and divide the parameters of each model into two parts: those that participate in knowledge transfer Wt and those that are uninvolved in knowledge transfer Wu, e.g., Wl = {Wl,t, Wl,u}, Ws = {Ws,t, Ws,u}. We aim to learn a model M(\u00b7) that can receive these model parameters and generate parameters that integrate knowledge from heterogeneous models, which can be represented as: { \u02dc Wl,t, \u02dc Ws,t} = M({Wl,t, Ws,t}), (1) where \u02dc Wl,t and \u02dc Ws,t represent the parameters of the two models after knowledge transfer. Since the knowledge transfer process from a smaller model to a larger model is structurally symmetrical to the process from a larger model to a smaller model, we only demonstrate the knowledge transfer from the larger model to the smaller model here. Specifically, we consider Ml as the source model and Ms as the target model. 3.2 Implementation Initially, we consider using a simple model structure, a MultiLayer Perceptron (MLP), to transform the parameter matrix of the source model into that of the target model. Specifically, we define the parameter matrices of the two models as Wl,t \u2208RN\u00d7M and Ws,t \u2208Rn\u00d7m, respectively. The generation process of Ws,t is as follows: Ws,t = \u03be2((\u03be1(Wl,t))T ), (2) where \u03be1(\u00b7) and \u03be2(\u00b7) both represent the structures of MLP. However, we notice some issues with using such a simple network structure for knowledge transfer between heterogeneous models: (i) This method directly uses the generated parameters to overwrite the existing parameters of the target model, thereby overlooking the knowledge accumulated in the original parameters of the target model. (ii) The vectors of the generated Ws,t are produced independently, which may lead to a loss of information between the vectors. \fTo address the aforementioned issues, we propose MergeNet, which can generate hybrid parameters containing knowledge from heterogeneous models based on parameters from these models, thus elegantly facilitating knowledge transfer between different models. As shown in Figure 2(a), MergeNet is composed of multiple knowledge transfer layers (KTL), where each layer receives the parameters generated by the previous layer and produces new parameters based on them: {W (i) l,t , W (i) s,t } = Mi({W (i\u22121) l,t , W (i\u22121) s,t }), \u2200i = 1, ..., L, (3) where Mi represents the i-th KTL, W (i) l,t and W (i) s,t respectively denote the parameters generated by the i-th KTL, and L signifies the number of layers in the KTL. Parameter Re-Encode. Similar to LoRA [Hu et al., 2021], we re-encode the parameter matrices Wl,t \u2208RN\u00d7M and Ws,t \u2208Rn\u00d7m involved in knowledge transfer through lowrank decomposition: Wl,t = Bl,tAl,t, (4) Ws,t = Bs,tAs,t, (5) where Bl,t \u2208RN\u00d7rl, Al,t \u2208Rrl\u00d7M, Bs,t \u2208Rn\u00d7rs and As,t \u2208Rrs\u00d7m with rl \u226a{N, M} and rs \u226a{n, m}. 
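As a concrete illustration of the re-encoding in Eqs. (4)–(5), the sketch below factors a parameter matrix into low-rank matrices B and A; the paper does not state how the factors are obtained, so the truncated SVD used here is an assumption.

```python
# Hedged sketch of the parameter re-encoding: W (N x M) ~= B (N x r) @ A (r x M).
import torch

def low_rank_reencode(W: torch.Tensor, r: int):
    """Returns B (N, r) and A (r, M) such that W is approximated by B @ A."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    B = U[:, :r] * S[:r]          # absorb the singular values into B
    A = Vh[:r, :]
    return B, A

# usage: factor a (hypothetical) linear classifier's weights before passing the
# low-rank factor A to the parameter adapter
W_src = torch.randn(100, 2048)    # e.g. a 100-way classifier over 2048-d features
B_src, A_src = low_rank_reencode(W_src, r=16)
```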
Our approach significantly differs from LoRA in terms of usage and objectives: (i) LoRA aims to model the incremental updates \u2206of parameters, whereas our method directly performs a low-rank decomposition of the parameter matrices. (ii) LoRA decomposes \u2206into two low-rank matrices to reduce the computational cost of fine-tuning. In contrast, we use low-rank decomposition to more effectively represent the complete information in the original matrix. In our approach, the low-rank matrices are considered as fundamental units containing model knowledge, differing from the potential information loss due to the independence of vectors in MLP-based parameter adapter. Low-rank Parametric Knowledge Adapter. MLP-based parameter adapter directly maps the knowledge from the source model to the target model, without considering the existing knowledge in the target model. To address this issue, we introduce Low-rank Parametric Knowledge Adapter (LPKA). This mechanism is used to extract knowledge from the low-rank matrices and merge knowledge from different models to generate new parameters. Figure 2(d) illustrates the specific process of knowledge transfer through LPKA. Taking into account the knowledge from both models, we unfold the address matrices Al,t and As,t, obtained from low-rank decomposition, by rows/columns and then use the attention mechanism to integrate the knowledge from the source model into the target model: \u02dc As,t = 4 X i=1 \u03c9iAttn(\u03d5R/L(As,t), \u03d5R/L(Al,t), \u03d5R/L(Al,t)), (6) where Attn represents the softmax attention mechanism, while \u03d5R(\u00b7) and \u03d5L(\u00b7) denote the operations of unfolding the low-rank matrices by rows and columns, respectively. For the low-rank matrices of both models, there is a choice of unfolding either by rows or by columns, resulting in four possible combinations. Additionally, \u03c9i is a learnable parameter used to balance the proportion of the i-th part of the attention mechanism. Training Process. During the training process, the optimization updates of a single model through the gradient descent algorithm constitute self-learning, while knowledge transfer between two models involves learning from each other. We believe that solely conducting knowledge transfer will not yield the best results; self-learning should be interleaved with the knowledge transfer process, akin to a selfconsolidation phase under the guidance of a teacher. Therefore, we divide the entire model training process into two parts: the knowledge transfer phase and the self-learning phase. Specifically, we define the knowledge transfer cycle as Tcycle, during the time step t: \u02dc As,t = \u001aM({As,t, Al,t}) if t mod Tcycle = 0, As,t otherwise. (7) To demonstrate this, in the experimental section, we conduct an analysis to compare the effects of different knowledge transfer cycles on model performance. We define the loss function of the smaller model as Ls. During the knowledge transfer phase, the parameters Ws of the smaller model are optimized to minimize Ls with gradient updates: Ws = Ws \u2212\u03b7s\u2207WsLs({ \u02dc Ws,t, Ws,u}), where \u03b7s is the learning rate. In our work, the parameter adapter is trained concurrently with the network involved in knowledge transfer, and its parameters are defined as \u03c6. 
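A hedged sketch of one term of the LPKA fusion in Eq. (6) is given below: the target's low-rank factor queries the source's factor with both unfolded by rows, similar to the "Individual Attn" variant examined in the ablation. The full module additionally mixes the column-unfolded combinations with the learnable weights omega_i, and the projection that aligns the source dimension M with the target's m is an assumption of this sketch.

```python
# One row-unfolding term of LPKA (Eq. 6): tilde A_s = Attn(A_s, proj(A_l), proj(A_l)).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LPKARowTerm(nn.Module):
    def __init__(self, m: int, M: int):
        super().__init__()
        self.proj = nn.Linear(M, m, bias=False)   # assumed: align source rows to dim m

    def forward(self, A_s: torch.Tensor, A_l: torch.Tensor) -> torch.Tensor:
        """A_s: (r_s, m) target low-rank factor (queries); A_l: (r_l, M) source factor."""
        kv = self.proj(A_l)                               # (r_l, m) keys/values
        scores = A_s @ kv.t() / (A_s.shape[-1] ** 0.5)    # (r_s, r_l) affinities
        attn = F.softmax(scores, dim=-1)
        # knowledge extracted from the source, expressed in the target's space
        return attn @ kv                                   # (r_s, m) ~ tilde A_s
```

Per Eq. (7), this fused factor overwrites the target's own factor only every T_cycle steps; in between, the target model trains on its own, which the paper describes as the self-learning phase.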
We employ a more general update rule: \u03c6 = \u03c6 \u2212\u03b7(\u2207\u03c6Ws,t)T \u2206Ws,t, where \u2206Ws,t represents the change after a gradient descent update step.In the testing phase, the parameter adapters are removed, resulting in zero additional overhead. 4 Experiments Our method is designed to facilitate heterogeneous knowledge transfer, independent of the architecture and specific tasks of the models involved. To assess the effectiveness of our approach, we conduct extensive experiments in several challenging scenarios: cross-structure, cross-modal, crosstask, and self knowledge transfer. To ensure a fair comparison, all experiments are conducted under the same setup conditions. 4.1 Cross-Structural Knowledge Transfer Implementation Details. In our experiment, we conduct cross-structure knowledge transfer using the CIFAR-100 [Krizhevsky et al., 2009] dataset for image classification tasks. This dataset comprises 100 categories, with training and validation sets containing 50k and 10k images, respectively. We use ResNet50 [He et al., 2016] as the larger model and MobileNetV2 [Sandler et al., 2018] as the smaller model. Notably, these models are not pre-trained on ImageNet. The aim of cross-structure knowledge transfer is to move the knowledge from a module of one model to a different structural module of another model. Specifically, our experiments involve transferring the knowledge from the linear \fTable 1: Performance comparison of the proposed method and baselines on CIFAR-100. R and M mean the ResNet50 and MobileNetV2, respectively. R\u2192M denotes knowledge transfer from ResNet50 to MobileNetV2, and R\u2194M represents mutual knowledge transfer between ResNet50 and MobileNetV2. The best results for each setting are highlighted in bold. Top-1 Acc(%) Top-5 Acc(%) MobileNetV2 63.87 88.77 KD [Hinton et al., 2015] 64.32 88.62 RKD [Park et al., 2019] 65.48 88.9 DKD [Zhao et al., 2022] 65.23 89.01 NKD [Yang et al., 2023] 65.09 88.9 MergeNet(R\u2192M) 66.23 89.66 MergeNet(R\u2194M) 66.51 89.75 ResNet50 68.11 89.61 KD[Hinton et al., 2015] 68.36 89.9 RKD[Park et al., 2019] 68.6 90.21 DKD [Zhao et al., 2022] 69.03 90.25 NKD[Yang et al., 2023] 69.27 90.18 MergeNet(R\u2194M) 69.84 90.57 classifier of one model to the convolutional layer of another model. Cross-Structural Knowledge Transfer Results. We compare our method with several baseline methods and present the experimental results on the CIFAR-100 dataset in Table 1. We find that our method consistently outperform the baselines. For example, on MobileNetV2, our method achieve a 1.02% improvement in Top-1 accuracy. Furthermore, we explore transferring knowledge from smaller models to larger models. As suggested by [Yuan et al., 2020], student models can enhance the performance of teacher models through reverse distillation. Following this perspective, we compare knowledge distillation with our method. The results indicate that our method surpasses various distillation approaches. The improvement brings about by our method can be attributed to the fact that different models focus on different aspects of a task. Through knowledge transfer, a model can learn the focus points of other models. Additionally, transferring knowledge from the linear classifier to the convolutional layer allows the convolutional layer to learn the focus points of the linear classifier, thus generating representations more aligned with the linear classifier. 4.2 Cross-Modal Knowledge Transfer Implementation Details. 
We conduct experiments in cross-modal knowledge transfer on two distinct tasks: employing the VQA v2.0 [Goyal et al., 2017] dataset for the Visual Question Answering task, and the MSCOCO [Lin et al., 2014] dataset for the Image-Text Retrieval task, using X-VLM [Zeng et al., 2021] as the experimental model. For the Image-Text Retrieval task, we use R@10 as the evaluation metric. Given the large size of the datasets and limited computational resources, which led to lengthy training times, we opt to train the model using only 10% of the training set Table 2: Performance of cross-modal knowledge transfer is as follows: \u2019V\u2192T\u2019 represents the knowledge transfer from visual to textual modality, while \u2019T\u2192V\u2019 signifies the transfer from textual to visual modality. The last row indicates the scenario where both visualto-textual and textual-to-visual knowledge transfers are conducted simultaneously. The best results for each setting are highlighted in bold. VQA ITR overall other number TR IR X-VLMsmall 45.78 31.33 28.71 41.48 37.64 MergeNet(V\u2192T) 46.33 33.29 31.33 44.72 39 MergeNet(T\u2192V) 45.96 31.99 31.15 44.58 38.93 MergeNet 46.51 33.84 31.54 44.78 39.26 Table 3: Performance of cross-task knowledge transfer: BERT executing a question answering task and DistilBERT performing a sentiment classification task. \u2191(\u2193)denotes that a higher (lower) result corresponds to better performance. The best results for each setting are highlighted in bold. SQuAD v2.0 IMDb EM\u2191 F1\u2191 Err\u2193 DistilBERT 8.02 BERT-base 70.17 73.06 MergeNet 71.89 75.43 7.5 and assess the effectiveness of cross-modal knowledge transfer on the complete test set. Our goal is to use knowledge from one modality to guide the learning of another modality. Specifically, we explore transferring knowledge from the parameters of the visual encoder to the textual encoder and vice versa, transferring knowledge from the textual encoder to the visual encoder. Cross-Modal Knowledge Transfer Results. We conduct cross-modal knowledge transfer experiments in various ways, including unidirectional transfers between modalities and bidirectional transfers where modalities mutually transfer knowledge. The results of these experiments are summarized in Table 2. It is evident from the results that our method provides significant improvements in accuracy across different settings. We speculate that transferring knowledge between modal encoders allows for the integration of different modal information before it enters the modal interactor, thereby reducing the complexity for the modal interactor in merging information from various modalities. 4.3 Cross-Task Knowledge Transfer Implementation Details. We study the cross-task knowledge transfer effectiveness of our method on the following tasks: a classification task (IMDb sentiment classification [Maas et al., 2011]) and a question answering task (SQuAD v2.0 [Rajpurkar et al., 2018]). We utilized BERT [Devlin et al., 2018] and DistilBERT [Sanh et al., 2019], respectively, to perform these tasks. DistilBERT is a distilled version of BERT, maintaining the general architecture of BERT but with half the number of layers. Due to the difference in dataset \fsizes, we arranged to perform knowledge transfer between the question-answering task and the classification task after every two batches in the question-answering task (unless otherwise specified, knowledge transfer is conducted after every batch by default). Cross-Task Knowledge Transfer Results. 
The results of cross-task knowledge transfer on the SQuAD v2.0 and IMDb datasets are shown in Table 3. Our method achieve performance improvements in both tasks. For instance, in the knowledge transfer from the classification task to the question-answering task, BERT\u2019s Exact Match (EM) and F1 scores improved by 1.72% and 2.37%, respectively. Conversely, in the knowledge transfer from the questionanswering task to the classification task, DistilBERT\u2019s error rate decreased by 0.52%. We believe that in similar tasks, the forms of knowledge expression are likely similar, and models performing different tasks can enhance their own task performance by learning the knowledge from other tasks. 4.4 More Challenging Cross-Task Knowledge Implementation Details. We conduct cross-structure modal task knowledge transfer experiments in questionanswering and image classification tasks. For the questionanswering task, we use the BERT model on the SQuAD v2.0 dataset, and for image classification, we utilize MobileNetV2 on the CIFAR-100 dataset. In our approach, we choose to transfer knowledge between the Value matrix of the attention module in the last layer of BERT and the linear classifier of MobileNetV2. As in Section 4.3, due to the difference in dataset sizes for the two tasks, we adopt a specific strategy to balance the knowledge transfer process. Specifically, in the image classification task, we arranged to perform knowledge transfer with the question-answering task after every two batches. Integrated Knowledge Transfer Results. We conduct cross-structure modal task knowledge transfer experiments, which can be seen as a more challenging form of cross-task knowledge transfer. We choose two significantly different tasks for our experiments: question-answering and image classification. As shown in Table 4, our method is effective in transferring and applying knowledge learned in one task to another, significantly different task. For instance, for MobileNetV2, there is a 2.09% improvement in Top-1 accuracy, and for BERT, there is a 1.79% increase in the F1 score. We believe that despite the significant differences between the tasks used in our experiments, there may be some common information processing mechanisms shared among different tasks. By learning knowledge relevant to their own tasks from other tasks, models can improve their performance. 4.5 Self Knowledge Transfer. Implementation Details. To comprehensively evaluate the broad applicability of our method, we conduct a series of self knowledge transfer experiments similar to self-distillation on the CIFAR-100 dataset using MobilenetV2. Specifically, we attempt to transfer knowledge from the linear classifier to the 4th, 8th, 12th, and 16th Inverted Residual Blocks out of a total of 17, to test the self-knowledge transfer capability of our Table 4: Performance of integrated knowledge transfer: MobileNetV2 executing an image classification task and BERT performing a question answering task. The best results for each setting are highlighted in bold. CIFAR-100 SQuAD v2.0 Top-1 Acc Top-5 Acc EM F1 MobileNetV2 63.87 88.77 BERT 70.17 73.06 MergeNet 65.96 90.06 71.49 74.85 Table 5: Performance of self knowledge transfer on the CIFAR-100 dataset. Here, \u2019IRB-x\u2019 denotes the x-th Inverted Residual Block in MobileNetV2. The best results for each setting are highlighted in bold. 
Top-1 Top-5 Acc Acc MobileNetV2 63.87 88.77 Tf-KD [Yuan et al., 2020] 65.43 88.56 USKD [Yang et al., 2023] 65.66 86.61 Linear Classifier\u2192IRB-4 63.51 88.64 Linear Classifier\u2192IRB-8 64.02 88.71 Linear Classifier\u2192IRB-12 64.42 88.95 Linear Classifier\u2192IRB-16 66.48 89.49 method. Furthermore, we compare our approach with stateof-the-art self-distillation methods, including Tf-KD [Yuan et al., 2020] and USKD [Yang et al., 2023]. These methods obtain additional supervisory signals by setting manual soft labels. Self Knowledge Transfer Results. The results of the selfknowledge transfer are shown in Table 5. It can be observed that: (i) in terms of single-model self knowledge transfer, our method outperforms existing self-distillation methods, bringing significant improvements to the model. For example, compared to self-distillation methods, the knowledge transfer from the linear classifier to the 16th Inverted Residual Block results in a 0.82% increase in top-1 accuracy, and a 0.93% increase in top-5 accuracy. (ii) The deeper the knowledge is transferred to an Inverted Residual Block, the better the performance. This is because deeper Inverted Residual Blocks have stronger expressive capabilities and can better understand the knowledge of the linear classifier. In contrast, shallower Inverted Residual Blocks find it more challenging to comprehend the knowledge of the linear classifier. For instance, the knowledge transfer from the linear classifier to the 4th Inverted Residual Block is less effective than not performing knowledge transfer. (iii) The knowledge transfer from the linear classifier to the 16th Inverted Residual Block significantly outperforms other settings. A possible reason is that the linear classifier and the 16th Inverted Residual Block are closer in terms of the amount of parameters, and their proximity in terms of location leads to similar mean and variance in the parameters, thereby facilitating easier knowledge transfer. \fTable 6: The results of knowledge transfer across different layers on the SQuAD v2.0 dataset. \u2019x\u2192y\u2019 denotes transferring knowledge from the x-th layer of BERT to the y-th layer of DistilBERT. The best results for each setting are highlighted in bold. EM F1 Layer 6\u2192Layer 3 57.69 60.42 Layer 12\u2192Layer 3 54.77 58.5 Layer 6\u2192Layer 6 64.89 68.26 Layer 12\u2192Layer 6 66.98 70.44 Figure 3: Ablation with respect to Tcycle. 4.6 Knowledge Transfer Across Different Layers We explore the performance of our method in transferring knowledge across different levels of information. Similar to Section 4.3, we use knowledge transfer from BERT to DistilBERT as our experimental case. The results, as shown in Table 6, indicate that the most significant performance improvement occurs when both models select their last layers for knowledge transfer. It is generally believed that deeper layers in neural networks contain more advanced semantic information and are capable of understanding higher-level semantics. Transferring knowledge from deeper layers can convey richer semantic information to DistilBERT. However, we observe an exception: the knowledge transfer between the 6th layer of BERT and the 3rd layer of DistilBERT performs better than that between the 12th layer of BERT and the 3rd layer of DistilBERT. 
A plausible explanation is that while higher-level information typically enhances model performance, overly advanced information being transferred might be difficult for the model to comprehend and can even disrupt the existing knowledge structure of the model. 4.7 Ablation Study Knowledge Transfer Cycle Tcycle. We study the impact of the knowledge transfer cycle Tcycle, which controls the proportion of self-learning during the training process. Figure 3 shows the experimental results of knowledge transfer for MobileNetV2 and ResNet50 on the CIFAR-100 dataset under different coefficients of Tcycle. We observe consistent performance improvements when interspersing self-learning throughout the training process. For instance, in the case of MobileNetV2, when the self-learning ratio is set to 1, the performance of the model reaches 65.61%. This represents a 5.52% improvement compared to the performance without Table 7: Ablation study of the effect of individual module. MobileNetV2 ResNet50 Top-1 Acc Top-1 Acc MLP-based 64.69 68.53 LPKA(Individual Attn) 65.76 69.38 LPKA(Avg Attn) 66.02 69.66 MergeNet 66.51 69.84 self-learning, which is 60.09%, further indicating the differences in knowledge between heterogeneous models. Additionally, when the self-learning ratio is high, the performance of both models tends to decline. This is due to the lower frequency of knowledge transfer, preventing the auxiliary module from being adequately trained, thus affecting the effectiveness of the knowledge transfer. Notably, compared to MobileNetV2, ResNet50 is less affected by changes in the selflearning ratio, suggesting that larger models are more adept at assimilating knowledge transferred from smaller models. Effectiveness of Each Component. We conduct an ablation study, as shown in Table 7, to demonstrate the effectiveness of each component in MergeNet. As mentioned in Section 3.2, we could also use MLP as the backbone for the parameter adapter. In this case, the target model directly adopts the knowledge from the source model but ignores the accumulated knowledge of the target model, which may lead to training instability. We compared MergeNet with the MLPbased parameter adapter (row 1 vs row 4), and the results showed that the MLP-based parameter adapter caused a significant decrease in performance of both models by 1.82% and 1.31%, respectively. Furthermore, we compared MergeNet with different variants of LPKA: (1) LPKA(Individual Attn), which only uses the row unfolding form of the lowrank matrix for computation; (2) LPKA(Avg Attn), which does not use trainable weight parameters and averages each softmax attention module. The results indicate that using only the row unfolding form is less effective than considering both row and column unfoldings, suggesting that column vectors in the parameter matrix also contain crucial model knowledge. Additionally, the performance of MergeNet may decline without trainable weight parameters. These results validate the contribution of each component to enhancing model performance. 5 Conclusion We propose a novel knowledge transfer framework named MergeNet. This framework utilizes model parameters as the natural carriers of knowledge for the transfer process, independent of specific model architectures and tasks. MergeNet adaptively extracts the required knowledge from the source model based on the knowledge needs of the target model. We hope that MergeNet will provide new insights into the field of knowledge transfer." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.03574v1", |
| "title": "TinyVQA: Compact Multimodal Deep Neural Network for Visual Question Answering on Resource-Constrained Devices", |
| "abstract": "Traditional machine learning models often require powerful hardware, making\nthem unsuitable for deployment on resource-limited devices. Tiny Machine\nLearning (tinyML) has emerged as a promising approach for running machine\nlearning models on these devices, but integrating multiple data modalities into\ntinyML models still remains a challenge due to increased complexity, latency,\nand power consumption. This paper proposes TinyVQA, a novel multimodal deep\nneural network for visual question answering tasks that can be deployed on\nresource-constrained tinyML hardware. TinyVQA leverages a supervised\nattention-based model to learn how to answer questions about images using both\nvision and language modalities. Distilled knowledge from the supervised\nattention-based VQA model trains the memory aware compact TinyVQA model and low\nbit-width quantization technique is employed to further compress the model for\ndeployment on tinyML devices. The TinyVQA model was evaluated on the FloodNet\ndataset, which is used for post-disaster damage assessment. The compact model\nachieved an accuracy of 79.5%, demonstrating the effectiveness of TinyVQA for\nreal-world applications. Additionally, the model was deployed on a Crazyflie\n2.0 drone, equipped with an AI deck and GAP8 microprocessor. The TinyVQA model\nachieved low latencies of 56 ms and consumes 693 mW power while deployed on the\ntiny drone, showcasing its suitability for resource-constrained embedded\nsystems.", |
| "authors": "Hasib-Al Rashid, Argho Sarkar, Aryya Gangopadhyay, Maryam Rahnemoonfar, Tinoosh Mohsenin", |
| "published": "2024-04-04", |
| "updated": "2024-04-04", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV", |
| "cs.AI", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Distillation", |
| "gt": "TinyVQA: Compact Multimodal Deep Neural Network for Visual Question Answering on Resource-Constrained Devices", |
| "main_content": "INTRODUCTION Tiny Machine Learning (tinyML) is rapidly transforming how we deploy machine learning models to resource-constrained devices at the edge, particularly in remote or disaster-stricken areas. Its minimal power consumption and independence from internet connectivity make it ideal for scenarios where real-time insights are crucial, but resources are limited. While traditionally focused on analyzing single data modalities, tinyML now faces the exciting challenge of integrating multimodal deep neural networks (M-DNNs) [17, 18, 21\u201325]. These networks leverage data from multiple sources, such as text, audio, and image, to provide richer insights and more robust predictions for tasks like Visual Question Answering (VQA). VQA technology presents a powerful solution for many computer vision tasks by combining vision and language modalities to extract high-level scene information from images. Unlike traditional computer vision tasks, VQA\u2019s ability to answer questions about images using natural language streamlines the decision-making process, providing critical insights for immediate and long-term response efforts [27\u201330]. However, implementing M-DNNs (i.e. VQA) on resource constrained tiny devices presents significant challenges. Their high computational complexity and memory footprint, coupled with the intricacies of optimizing them for resource-constrained platforms, pose formidable barriers. Fortunately, recent advancements in model compression techniques, including parameter pruning, knowledge distillation, and quantization, offer promising solutions [1, 2, 9, 10, 12, 14, 20] . These techniques have been effective in reducing the size and complexity of unimodal models, paving the way for adapting M-DNNs for edge deployment. MobiVQA [3] proposed on-device VQA, focusing on early exit and selective processing and optimizing existing VQA models for mobile devices. However, their implementation is not suitable for tinyML hardware deployment. The integration of TinyML and VQA holds immense potential to revolutionize disaster preparedness, response, and recovery efforts. This transformative technology empowers communities with realtime, on-site intelligence, ultimately saving lives and minimizing the impact of catastrophic events. However, current state-of-the-art VQA models rely heavily on cloud servers and GPUs due to their resource-intensive nature, rendering them impractical in disaster arXiv:2404.03574v1 [cs.CV] 4 Apr 2024 \ftinyML Research Symposium\u201924, April 2024, San Jose, CA Hasib-Al Rashid, Argho Sarkar, Aryya Gangopadhyay, Maryam Rahnemoonfar, and Tinoosh Mohsenin On-Board Processing Q: Is the area flooded? Rescuer interact with UAV through VQA A: Yes TinyVQA System UAV captures images from affected area (a) (b) Figure 1: (a) Highlevel overview of proposed TinyVQA system. Rescuer can acquire effective information about the affected area by asking questions when a drone coupled with a VQA system captures images from the hurricane-stricken area from a high altitude. (b) The flow diagram of proposed TinyVQA system. Proposed TinyVQA is the sequential combination of the steps shown in the diagram. scenarios with limited resources or compromised infrastructure. As the integration of TinyML with multimodal Visual Question Answering (VQA) is a burgeoning area of research, it presents an untapped opportunity for innovation in disaster management. 
The potential for developing refined M-DNN models tailored for resource-limited platforms like TinyML heralds a promising future in effectively mitigating the impacts of natural disasters. In this paper, we introduce TinyVQA, a compact multimodal deep neural network specifically designed for Visual Question Answering (VQA) tasks and tailored for deployment on resource-constrained TinyML hardware. A high-level overview of the proposed TinyVQA is presented in Figure 1(a). To the best of our knowledge, this represents the first attempt to deploy a VQA task on extremely resource-limited hardware. The main contributions of this paper can be summarized as follows: • We propose TinyVQA, a novel multimodal (vision- and language-based) deep neural network for the visual question answering task, deployable on resource-constrained tinyML hardware. • We designed a supervised attention-based visual question answering framework and distilled its knowledge to train our compact, memory-aware TinyVQA model. • We evaluated TinyVQA on the FloodNet [19, 26] dataset, which is used for post-disaster damage assessment, demonstrating the efficacy of tinyML during disaster management. • We deployed our compact TinyVQA model on the resource-constrained Crazyflie 2.0 drone, equipped with an AI-deck that operates using the GAP8 microprocessor, and analyzed the onboard latency and power consumption of the proposed TinyVQA architecture. 2 TinyVQA MODEL ARCHITECTURE Figure 1(b) delineates the workflow of the TinyVQA system, which consists of two main components: the Baseline VQA Model and the Memory-Aware Compact VQA Model. 2.1 Baseline VQA Model Design We designed a vision-based question-answering model following the approach of previous work [26]. This model allows decision-makers to pose image-specific questions in natural language, enabling the model to generate answers derived from visual content. We adopted an enhanced visual attention mechanism by introducing an auxiliary visual mask, which refines the model's estimated attention weights. This augmentation is designed to work in harmony with the cross-entropy loss used for classification, thereby improving not only the model's accuracy but also its interpretability. The architecture of the model is structured into four distinct stages: Visual Feature Extraction, Textual Feature Extraction, Fine-Grained Fusion, and Classification, with the comprehensive architecture of this baseline model illustrated in Figure 2. In the visual feature extraction stage, an image is fed into a Convolutional Neural Network (CNN), specifically VGG-16, to extract the grid feature matrix from the final convolutional layer of the CNN model. When the image $I$ has an input size of 224 × 224, the CNN model generates a grid feature matrix $f_I$ with dimensions 14 × 14 × 1024. This matrix captures the relevant visual features of the image, forming a semantic visual representation that serves as a foundation for the subsequent stages of the processing pipeline. $f_I = \mathrm{VGG}(I) \in \mathbb{R}^{h \times w \times k}$ (1) Here, $h$, $w$, and $k$ represent the height, width, and number of channels of the image feature matrix $f_I$, respectively.
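To make the grid-feature extraction above concrete, the following is a minimal sketch using a stock Keras VGG-16. Note that a stock VGG-16 yields a 14 × 14 × 512 grid at its last convolutional layer on a 224 × 224 input, so the 14 × 14 × 1024 shape reported above presumably comes from the authors' specific backbone configuration; the layer choice here is an illustrative assumption.

```python
# Minimal sketch: extracting the grid feature matrix f_I = VGG(I) from Eq. (1).
# Assumption: a stock Keras VGG-16 is used for illustration; its last conv layer
# ("block5_conv3") yields a 14x14x512 grid on a 224x224 input, whereas the paper
# reports 14x14x1024 from its own backbone configuration.
import numpy as np
import tensorflow as tf

def build_grid_feature_extractor(input_size=224):
    # weights=None keeps the sketch self-contained; use weights="imagenet" in practice.
    base = tf.keras.applications.VGG16(
        weights=None, include_top=False,
        input_shape=(input_size, input_size, 3))
    # Take the final convolutional feature map as the h x w x k grid features.
    grid = base.get_layer("block5_conv3").output
    return tf.keras.Model(inputs=base.input, outputs=grid)

extractor = build_grid_feature_extractor()
image = np.random.rand(1, 224, 224, 3).astype("float32")  # placeholder image I
f_I = extractor(tf.keras.applications.vgg16.preprocess_input(image * 255.0))
print(f_I.shape)  # (1, 14, 14, 512) with a stock VGG-16
```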
Within the textual feature extraction stage, a two-layer Long Short-Term Memory (LSTM) architecture is employed to extract the 1024-dimensional semantic question feature $f_T$ from the input question $Q$. This feature is taken from the last cell of the final LSTM layer. Before being input into the LSTM layer, all questions are tokenized and padded with 0 to guarantee a consistent length. $f_T = \mathrm{LSTM}(Q) \in \mathbb{R}^{k}$ (2) Within the fine-grained fusion stage, the Multi-Modal Factorized Bilinear (MFB) pooling technique is employed to integrate each grid-feature vector from the image feature matrix with the semantic textual feature extracted from the corresponding question. This fusion process is structured into two stages, the expand stage and the squeeze stage. In the expand stage, the image grid vector and question feature vector undergo element-wise multiplication, followed by a dropout layer. The subsequent squeeze stage involves sum pooling, followed by power and $\ell_2$ normalization layers. The resulting matrix is then passed through a Softmax function to compute the spatial visual attention weights. Notably, an auxiliary visual mask, derived from ground-truth visual attention based on a semantically segmented image, is employed to guide the estimated spatial attention weights in this phase. Following the estimation of the spatial attention weights, they are multiplied with the image feature matrix and summed channel-wise. Figure 2: (a) Overview of our proposed TinyVQA model, where the baseline VQA model uses VGG-16 and a one-layer LSTM to obtain the image feature matrix and question feature, respectively. We then consider MFB pooling to obtain a fine-grained multimodal representation. A softmax function is applied to that joint representation to estimate attention weights from the images for given questions. Finally, we calculate two loss functions: one minimizes the distance between the visual mask and the estimated visual attention weights, and the other minimizes the loss between the ground-truth answer and the predicted answer from the VQA classifier. The memory-aware compact VQA model is designed with 3 CNN layers and 1 LSTM layer for the image and text modality feature extraction, respectively; distilled knowledge from the baseline model is used to obtain the final result. (b) Detailed structure of the MFB Fusion block (expand stage: element-wise multiplication and dropout; squeeze stage: sum pooling, power normalization, and L2 normalization).
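As an illustrative companion to the expand-squeeze fusion and spatial attention just described, the following NumPy sketch traces the computation end to end. The feature dimensions, MFB factor size, projection matrices, and scoring head are assumptions made for demonstration, not the paper's hyperparameters, and dropout is omitted (inference-time view).

```python
# Minimal NumPy sketch of MFB expand/squeeze fusion and spatial attention,
# following the textual description above. Dimensions and weights are
# illustrative assumptions, not the paper's exact configuration.
import numpy as np

rng = np.random.default_rng(0)
h, w, c_img, c_txt = 14, 14, 512, 1024   # grid size and feature dims (assumed)
k_f, o = 5, 256                          # MFB factor and output sizes (assumed)

f_I = rng.standard_normal((h * w, c_img))     # grid feature vectors
f_T = rng.standard_normal((c_txt,))           # question feature

U = rng.standard_normal((c_img, k_f * o)) * 0.01  # image projection (expand)
V = rng.standard_normal((c_txt, k_f * o)) * 0.01  # question projection (expand)

# Expand stage: element-wise product of the projected features.
z = (f_I @ U) * (f_T @ V)                      # (h*w, k_f*o)

# Squeeze stage: sum pooling over the factor dimension, then power + l2 norm.
z = z.reshape(h * w, o, k_f).sum(axis=-1)      # (h*w, o)
z = np.sign(z) * np.sqrt(np.abs(z))            # power (signed sqrt) normalization
z = z / (np.linalg.norm(z, axis=-1, keepdims=True) + 1e-8)  # l2 normalization

# Spatial attention: softmax over the h*w grid locations (assumed scoring head).
att_logits = z @ rng.standard_normal((o,)) * 0.1
att = np.exp(att_logits - att_logits.max())
att = att / att.sum()                          # (h*w,) attention weights

# Apply attention: weight the grid features and sum over spatial locations.
# This is the summed vector that is then added to the question vector.
attended_visual = (att[:, None] * f_I).sum(axis=0)   # (c_img,)
print(attended_visual.shape)
```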
The resultant summed vector is subsequently added to the question vector, forming the input to the classification layer responsible for generating the final prediction. This fusion strategy enhances the model's ability to capture high-level relationships between visual and textual features for accurate and meaningful predictions. 2.2 Memory-Aware Compact VQA Model Design We designed a novel compact model introducing memory-aware model compression of the large, complex M-DNN model. Our proposed memory-aware compression technique for Multimodal Deep Neural Networks (M-DNNs) employs off-the-shelf knowledge distillation and quantization to minimize the memory footprint of M-DNNs while preserving their accuracy. The method trains a compact student M-DNN to emulate a more complex teacher model, aiming to fit the student model within the constrained on-chip memories (L1 and L2) of tiny processors. The focus is on striking a balance between model size reduction and performance retention, making the model suitable for deployment on resource-limited devices. To this end, we considered several factors in memory-aware model compression: Memory Hierarchy of the Deployment Hardware: Memory-aware model compression in TinyVQA takes into account the memory hierarchy of the target hardware platform, such as on-chip SRAM and off-chip DRAM, to ensure that the compressed model can be stored and executed within the lower levels of the memory hierarchy for faster and more efficient deployment. Model Compression Techniques: To compress the large multimodal neural network models, TinyVQA uses off-the-shelf model compression techniques: knowledge distillation, uniform 8-bit quantization, and compact network architecture design for inference. Knowledge distillation involves training a smaller student model to mimic the behavior of a larger, more accurate teacher model. The student model is designed to be smaller than the teacher model, with fewer layers, parameters, or connections, to reduce memory requirements for deployment on memory-constrained devices. Various distillation techniques, such as soft targets, attention transfer, or feature map matching, can be used for transferring knowledge from the teacher to the student model. We have chosen to use soft targets for this purpose. Soft targets are generated by the larger model and used as training labels for the smaller model. They are obtained by applying a temperature scaling factor, $T$, to the logits of the larger model before the softmax, which smooths the peaks in the distribution and makes it more spread out. The soft targets, denoted as $q_i$, are defined by the following equation: $q_i = \frac{\exp(z_i / T)}{\sum_j \exp(z_j / T)}$ (3) where $z_i$ is the logit (unnormalized log-probability) output of the larger model for class $i$. The temperature scaling factor, $T$, determines the "softness" of the targets, with higher values leading to softer targets. The smaller model is then trained to minimize the Kullback-Leibler (KL) divergence between its output probabilities, $p'_i$, and the soft targets, $q_i$.
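Before the formal distillation loss given next, the following minimal sketch illustrates the temperature-scaled soft targets of Eq. (3) and the KL-style objective the student minimizes; the temperature value and the toy logits are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of Eq. (3): temperature-scaled soft targets from teacher logits,
# and the KL-style distillation loss the student is trained to minimize.
# The temperature T = 4.0 is an illustrative choice, not the paper's setting.
import numpy as np

def soft_targets(teacher_logits, T):
    """q_i = exp(z_i / T) / sum_j exp(z_j / T)"""
    z = teacher_logits / T
    z = z - z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_probs, T=4.0, eps=1e-12):
    """KL(q || p') = sum_i q_i * log(q_i / p'_i), averaged over the batch."""
    q = soft_targets(teacher_logits, T)
    return float(np.mean(np.sum(q * np.log((q + eps) / (student_probs + eps)), axis=-1)))

teacher_logits = np.array([[6.0, 2.0, 1.0], [0.5, 4.0, 0.5]])   # toy VQA answer logits
student_probs = np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2]])    # toy student outputs
print(distillation_loss(teacher_logits, student_probs))
```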
The distillation objective is expressed by the following loss function: $\mathcal{L} = \sum_i q_i \log \frac{q_i}{p'_i}$ (4) where $p'_i$ represents the output probability of the smaller model for class $i$. This KL-divergence loss encourages the smaller model to learn a probability distribution similar to that of the larger model. Designing compact and efficient network architectures can further help create models with smaller memory footprints without sacrificing accuracy. We used depthwise separable convolution layers instead of regular convolution layers to further compress the student model without significant loss of accuracy. Reducing the bit-width of the model parameters and activations can also significantly reduce the memory footprint of the model. We quantized our models with TensorFlow Lite (TFLite) post-training quantization, adopting full integer quantization. This method quantizes both the weights and activations to 8-bit integers, resulting in a model that performs only integer arithmetic. The hyperparameters of the compact model are fixed empirically. 3 TinyVQA EVALUATION 3.1 Dataset Description In this study, we consider the FloodNet-VQA dataset [19] for the experiment. The data collection process occurred in the aftermath of Hurricane Harvey, a devastating Category 4 hurricane that struck Texas and Louisiana in August 2017, resulting in widespread flooding and a tragic loss of over 100 lives. Leveraging an unmanned aerial vehicle (UAV) platform, images and videos of the affected regions were captured; specifically, DJI Mavic Pro quadcopters were employed for this data collection initiative. Numerous flights were conducted to cover primarily Fort Bend County, Texas, and other directly affected areas. The dataset comprises a total of 2348 images and 10,480 question-answer (QA) pairs. Within the dataset, there are 7 distinct question categories, including Simple Counting, Complex Counting, Road Condition Recognition, Density Estimation, Risk Assessment, Building Condition Recognition, and Entire Image Condition. These diverse question categories are crucial for a comprehensive understanding of the damage scenario, ultimately contributing to the effectiveness of rescue missions. Notably, the dataset encompasses questions of varying complexity, with the longest question containing 11 words. 3.2 Evaluation Results and Analysis Figure 3 presents the evaluation results for the baseline VQA model and TinyVQA on the FloodNet [19] dataset. The baseline model achieved 81% accuracy with a 479 MB model size, whereas the final TinyVQA model achieved 79.5% accuracy with a 339 KB model size. Figure 3 also provides a comparative analysis of the baseline VQA and TinyVQA models, focusing on the trade-off between accuracy and model size. When the baseline VQA model is both distilled and quantized into the TinyVQA model, memory usage decreases by roughly 1412× with a corresponding 1.5% drop in accuracy, indicating an optimization well suited to memory-constrained deployments. The downward trajectory of the dashed line highlights the inverse relationship between model size and accuracy, emphasizing the efficiency gains of the TinyVQA model at a slight cost in accuracy. Figure 3: Accuracy and model size correlation for the baseline VQA model and TinyVQA on the FloodNet [19] dataset (1412× decrease in memory usage, 1.5% accuracy drop). The baseline model achieved 81% accuracy with a 479 MB model size, whereas the final TinyVQA model achieved 79.5% accuracy with a 339 KB model size.
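As a concrete illustration of the full-integer post-training quantization step described in Section 2.2 (the step that yields the deployable 339 KB model discussed above), a minimal TensorFlow Lite sketch is given below; the tiny two-input Keras network and the random representative dataset are placeholders rather than the actual TinyVQA student model.

```python
# Minimal sketch of TensorFlow Lite full-integer post-training quantization, as
# used to shrink the student model. The two-input toy network and the random
# representative dataset below are placeholders, not the actual TinyVQA model.
import numpy as np
import tensorflow as tf

# Toy multimodal student: a small image branch and a small question branch.
img_in = tf.keras.Input(shape=(32, 32, 3), name="image")
x = tf.keras.layers.Conv2D(8, 3, activation="relu")(img_in)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
q_in = tf.keras.Input(shape=(16,), name="question")
y = tf.keras.layers.Dense(8, activation="relu")(q_in)
out = tf.keras.layers.Dense(10, activation="softmax")(
    tf.keras.layers.concatenate([x, y]))
model = tf.keras.Model([img_in, q_in], out)

def representative_dataset():
    # A few calibration samples covering the expected input ranges.
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32),
               np.random.rand(1, 16).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization: both weights and activations become int8.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KB")
```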
Figure 4: Derived visual attentions for given questions from the TinyVQA model (example questions: "What is the level of density in this image?", predicted: High, ground truth: High; "Do the rescuers need to provide help urgently?", predicted: Yes, ground truth: Yes). The yellowish tone in the image denotes higher attention weight. Attention learned with visual supervision (the last column) emphasizes the relevant image portions (buildings and roads in this case) to address the questions from the top and bottom images, respectively. Figure 4 presents qualitative results from the TinyVQA model. It shows that the derived visual attention of the TinyVQA model focuses on the relevant image portions depending on the question. These qualitative results support the trustworthiness of the predictions from our model. 4 TinyVQA DEPLOYMENT ON RESOURCE-CONSTRAINED HARDWARE 4.1 Hardware Architecture The Crazyflie 2.0 tiny drone equipped with the AI-deck, powered by the GAP8 RISC-V microprocessor [4, 5], has become a popular platform for deploying tinyML models on UAVs [8, 13, 15, 16]. We have also chosen the Crazyflie 2.0 as the deployment platform testbed for our tinyML model. Figure 5: (a) Detailed block diagram of the Crazyflie AI-deck powered by the GAP8 microprocessor. (b) Memory hierarchy of the GAP8 microprocessor: 100 KB of L1 memory (80 KB shared in the compute engine plus 20 KB for the low-power MCU), 512 KB of L2 memory, and 8 MB of DRAM. (c) TinyVQA flow; left: the DMA manages L2-L1 communication using double-buffering; right: the cluster executes PULP-NN on the tile stored in one of the L1 buffers. The GAP8 microprocessor emerges as a powerful contender for edge AI applications, wielding a multi-core RISC-V architecture optimized for parallel processing and hardware-accelerated deep learning. A single RISC-V core, the Fabric Controller (FC), acts as the conductor, orchestrating operations, scheduling tasks, and managing peripheral connections. Eight additional RISC-V cores in the Compute Cluster work in tandem, tackling compute-intensive tasks like image and audio processing. For CNNs, the dedicated Hardware Convolution Engine (HWCE) steps in, boosting AI performance without compromising energy efficiency. A layered memory hierarchy ensures smooth data flow: private L1 caches for each core and a shared L2 pool of 512 KB offer ample storage for data and code.
Independent DMA units, the Micro-DMA and ClusterDMA, handle complex I/O tasks and data transfers between memory levels, further fueling efficient operation. Extensive peripheral interfaces provide seamless integration with sensors, cameras, and other external devices, enabling versatile edge applications. Moreover, GAP8 excels in power management, utilizing power-gating for idle components and dynamic voltage/frequency scaling to adapt to varying workloads. This intelligent power handling, coupled with integrated DC/DC regulators and clock generators, allows for rapid reconfiguration and maximizes energy efficiency. In conclusion, with its focus on parallel processing, dedicated AI acceleration, and robust memory management, the GAP8 architecture establishes itself as a compelling platform for realizing AI-powered edge applications, even under tight energy constraints. Figure 5 (a) and (b) shows the memory hierarchy and detailed block diagram for GAP8 microprocessor. 4.2 Hardware Implementation Details The GAPFlow toolchain, comprising PULP-based Neural Network Tool (NNTooL) and AutoTiler, played a pivotal role in deploying the DNN onto the GAP8 architecture (Fig. 1(b)) [6, 11]. NNTooL meticulously adapted the DNN architecture to ensure compatibility with AutoTiler and meticulously transformed weights for optimal execution on GAP8. AutoTiler then took the reins, algorithmically optimizing memory layout and generating GAP8-compatible C code, streamlining the deployment process. Despite the toolchain\u2019s automation, manual intervention was occasionally required to address DNN-specific memory requirements. This included adjusting maximum stack sizes and fine-tuning heap space allocation to prevent potential heap overflows, data corruption, and stack-related issues. Notably, AutoTiler\u2019s default behavior of allocating the entirety of L1 and L2 memory could lead to such challenges. Further considerations involved the GAP8\u2019s Real-Time Operating System (RTOS), which pre-allocates heap memory, potentially reducing available space for DNN operations, necessitating careful memory management. The GAP8 architecture presents a promising platform for efficient deployment of multi-modal models at the edge, thanks to its unique blend of hardware acceleration and parallel processing capabilities. To fully leverage these features, we employed a strategic partitioning of model components: 1. Hardware Acceleration: The Hardware Convolution Engine (HWCE) was tasked with handling computationally intensive convolutional layers, capitalizing on its dedicated hardware optimization for accelerated execution. The remaining layers, including fully connected layers, activation functions, dropout layers, and the LSTM layer, were executed by the Compute Cluster, comprising eight RISC-V cores operating in parallel. This division of labor effectively harnessed the strengths of each processing unit. 2. Data Flow and Memory Optimization: Image data was strategically stored in L2 memory for direct accessibility by the HWCE, streamlining convolutional processing. Intermediate feature maps, generated during model execution, were cached in L1 memory for rapid reuse, optimizing data transfer. Textual input processing was assigned to the RISC-V cores, potentially utilizing general-purpose matrix multiplication libraries for embedding and LSTM computations. 
To address model size constraints imposed by GAP8\u2019s L2 memory capacity, model quantization was employed, reducing data precision to 8-bit integers and shrinking the model\u2019s memory footprint. Additionally, the Cluster-DMA unit proactively prefetched data from L2 to L1, anticipating upcoming computations and minimizing memory stalls. 3. Parallelism for Enhanced Performance: Parallelism emerged as a cornerstone of GAP8\u2019s performance advantage. The Fabric Controller (FC) core masterfully orchestrated concurrent operations, expertly dividing neural network tasks between the Compute Cluster and HWCE for seamless collaboration. Layer parallelism further amplified efficiency, allowing different model layers to execute simultaneously on separate cores, unlocking significant speed gains. The FC core also demonstrated ingenuity in offloading data preparation and post-processing tasks to idle cores, effectively overlapping \ftinyML Research Symposium\u201924, April 2024, San Jose, CA Hasib-Al Rashid, Argho Sarkar, Aryya Gangopadhyay, Maryam Rahnemoonfar, and Tinoosh Mohsenin Crazyflie 2.0 with AI-deck INA 219 Arduino Gap8 Figure 6: Crazyflie Ai-deck board power measurement setup. INA219 and Arduino measure the GAP8 power consumption. them with model computations and streamlining the overall process. Furthermore, the Micro-DMA unit independently managed I/O operations, enabling parallel data transfers and minimizing processing bottlenecks. Figure 5 (b) and (c) elucidates the memory hierarchy and computational flow orchestrated within the GAP8 processor for efficient execution of CNN-LSTM multimodal models. It highlights the interplay between L2 memory, L1 memory, and the processing cluster, demonstrating how data flows seamlessly across these components over three distinct computation cycles. L2 memory serves as the primary repository for the input feature map, filter weights, and post-processed output feature map. L1 memory, strategically divided into two buffers (L1 Buffer I and L1 Buffer II), employs a double-buffering technique to facilitate concurrent data loading and processing. Each buffer is further partitioned into dedicated areas for inputs, outputs, and weights. The processing cluster, comprising seven cores, forms the heart of the computation. It ingests inputs and weights and executes neural network operations in parallel. The figure 5 (c) meticulously depicts this processing over three cycles, effectively illustrating the temporal dynamics of the dataflow: Cycle 0: The initial loading of the input feature map from L2 memory to L1 Buffer I is marked by solid lines. Cycle 1: As indicated by dashed lines, L1 Buffer I processes the first data set while L1 Buffer II concurrently commences loading the subsequent set, showcasing a seamless overlap. Cycle 2: Dotted lines visualize the transfer of L1 Buffer I outputs back to L2 memory and the initiation of processing within L1 Buffer II, ensuring uninterrupted operation and eliminating idle time. Arrows of varying styles delineate the data movement across cycles, with color-coded arrows (blue, red, and green) corresponding to the distinct paths for inputs, weights, and outputs, respectively. 4.3 Deployment Results and Analysis We deployed the TinyVQA model on the GAP8 processor. Table 1 reports resource utilization of TinyVQA for post-disaster damage assessment purposes. 
TinyVQA uses around 49 KB of L1 memory and 312 KB of L2 memory, which are 93% and 78% of the available L1 and L2 memories, respectively. The inference model does not require off-chip DRAM to store its weights and activations, which ensures minimal latency.
Table 1: Resource utilization data of TinyVQA implemented on the GAP8 processor.
Resources: L1 Memory / L2 Memory / DRAM
Available for use (KB): 52.7 / 400 / 8000
TinyVQA utilization (KB): 49 (93%) / 290 (73%) / 0
Figure 6(a) displays the power measurement setup used in this work for the Crazyflie 2.0 with AI-deck, using an INA219 sensor and an Arduino board. Table 2 provides a comparative analysis of two architectures, the proposed TinyVQA and MobiVQA [3], designed for multimodal (image and text) question answering tasks.
Table 2: Implementation results of the proposed TinyVQA and comparison with previous work.
Architecture: TinyVQA (this work) / MobiVQA [3]
Dataset: FloodNet-VQA [19] / VQAv2 [7]
Modality used: Image + Text / Image + Text
Deployment device: GAP8 processor / Nvidia TX2 board
Frequency (MHz): 175 / –
Latency (ms): 56 / 213
Power (W): 0.7 / –
Energy (J): 0.2 / 5.6
TinyVQA is deployed on a GAP8 processor, while MobiVQA, optimized with PyTorch Mobile, utilizes an Nvidia TX2 board. TinyVQA runs at a lower operating frequency of 175 MHz but achieves significantly lower latency (56 ms vs. 213 ms for MobiVQA), suggesting real-time suitability. Additionally, TinyVQA shines in energy consumption per query (0.2 J vs. 5.6 J), making it ideal for edge deployments. This represents a significant advantage in scenarios where energy efficiency is paramount, such as battery-powered or remote devices. 5 CONCLUSION The proposed TinyVQA system represents a significant advancement in the realm of tinyML, successfully addressing the challenges of integrating multiple data modalities into compact, low-power devices. Through meticulous evaluation on the FloodNet dataset, TinyVQA has proven its effectiveness by achieving 79.5% accuracy in post-disaster scenarios, highlighting its real-world applicability. Furthermore, the deployment of TinyVQA on a power-efficient Crazyflie 2.0 drone equipped with an AI-deck and GAP8 microprocessor exemplifies the system's operational proficiency. The tiny drone-based TinyVQA model, with its low latency of 56 ms and minimal power consumption of 0.7 W, underscores the potential of tinyML to revolutionize disaster assessment and response, opening new avenues for autonomous, intelligent systems in critical, resource-limited environments. 6 ACKNOWLEDGMENT We acknowledge the support of the U.S. Army Grant No. W911NF2120076." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2403.17001v1", |
| "title": "VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation", |
| "abstract": "Recent innovations on text-to-3D generation have featured Score Distillation\nSampling (SDS), which enables the zero-shot learning of implicit 3D models\n(NeRF) by directly distilling prior knowledge from 2D diffusion models.\nHowever, current SDS-based models still struggle with intricate text prompts\nand commonly result in distorted 3D models with unrealistic textures or\ncross-view inconsistency issues. In this work, we introduce a novel Visual\nPrompt-guided text-to-3D diffusion model (VP3D) that explicitly unleashes the\nvisual appearance knowledge in 2D visual prompt to boost text-to-3D generation.\nInstead of solely supervising SDS with text prompt, VP3D first capitalizes on\n2D diffusion model to generate a high-quality image from input text, which\nsubsequently acts as visual prompt to strengthen SDS optimization with explicit\nvisual appearance. Meanwhile, we couple the SDS optimization with additional\ndifferentiable reward function that encourages rendering images of 3D models to\nbetter visually align with 2D visual prompt and semantically match with text\nprompt. Through extensive experiments, we show that the 2D Visual Prompt in our\nVP3D significantly eases the learning of visual appearance of 3D models and\nthus leads to higher visual fidelity with more detailed textures. It is also\nappealing in view that when replacing the self-generating visual prompt with a\ngiven reference image, VP3D is able to trigger a new task of stylized\ntext-to-3D generation. Our project page is available at\nhttps://vp3d-cvpr24.github.io.", |
| "authors": "Yang Chen, Yingwei Pan, Haibo Yang, Ting Yao, Tao Mei", |
| "published": "2024-03-25", |
| "updated": "2024-03-25", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV", |
| "cs.MM" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Distillation", |
| "gt": "VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation", |
| "main_content": "Introduction Generative Artificial Intelligence (especially for vision content generation) has aroused great attention in computer vision field [5, 6, 20, 26], leading to impressive advancements in text-to-image [30\u201332] and text-to-video generation [10, 14, 34]. These accomplishments can be attributed to the availability of large-scale image-text and video-text pair data [1, 33] and the emergence of robust diffusion-based generative models [12, 13, 25, 35]. Recently, researchers Text prompt: \u201cA florist is making a bouquet with fresh flowers\u201d (a) Magic3D (b) ProlificDreamer (c) VP3D (Ours) Visual prompt Figure 1. Exisiting text-to-3D generation techniques (e.g., Magic3D [17] and ProlificDreamer [39]) often suffer from degenerated results (e.g., over-saturated appearances or inaccurate geometries). Our VP3D novelly integrates a visual prompt to strength score distillation sampling, leading to better 3D results. have gone beyond text-driven image/video generation, and begun exploring diffusion models for text-driven content creation of 3D assets (e.g., text-to-3D generation). This direction paves a new way for practical 3D content creation and has a great potential impact for numerous applications like virtual reality, gaming and Metaverse. Compared to image generation, text-to-3D generation, however, is more challenging, due to the complexities associated with intricate 3D geometry and appearance (i.e., object shapes and textures). Moreover, the collection and annotation of 3D data are somewhat resourcefully expensive and thus cannot be easily scaled up to billion level as image data. To tackle this issue, a pioneering text-to-3D work (DreamFusion [27]) presents the first attempt of exploiting an off-the-shelf text-to-image diffusion model to generate promising 3D assets in a zero-shot fashion. The key design behind such success is Score Distillation Sampling (SDS), which directly optimizes the implicit 3D model of arXiv:2403.17001v1 [cs.CV] 25 Mar 2024 \fNeural Radiance Field (NeRF) with prior knowledge distilled from 2D diffusion model. Nevertheless, such distilled prior knowledge is merely driven by the input text prompt, and it is non-trivial to learn high-quality NeRF with distilled SDS supervision. Although several subsequent works [4, 17, 22, 38, 39] further upgrade SDS, this kind of SDSbased solution still results in degenerated 3D models with unrealistic/less-detailed textures, especially when feeding intricate text prompts (as seen in Figure 1 (a-b)). In this work, we propose to mitigate this limitation through a unique design of visual prompt-guided text-to3D diffusion model, namely VP3D. Intuitively, \u201ca picture is worth a thousand words.\u201d That is, a single image can convey human intentions of visual content creation (e.g., the visual appearance or semantic structure) more effectively than textual sentences. This motivates us to introduce additional guidance of visual prompt, and thus decompose the typical single-shot text-to-3D process into two cascaded processes: first text-to-image generation, and then (text plus image)to-3D generation. In particular, VP3D first leverages offthe-shelf text-to-image diffusion models to produce a highfidelity image that reflects extremely realistic appearance with rich details. In the latter process, this synthetic image further acts as 2D visual prompt to supervise SDS optimization of NeRF, coupled with the input text prompt. 
At the same time, a differentiable reward function is additionally utilized to encourage the rendering images of NeRF to be better aligned with 2D visual prompt (visual appearance consistency) and text prompt (semantic consistency). As illustrated in Figure 1 (c), we show that the novel visual prompt-guided diffusion process in VP3D significantly enhances the visual fidelity of 3D assets with realistic and rich texture details. Meanwhile, when easing the learning of visual appearance of 3D assets via visual prompt guidance, the optimization of NeRF will focus more on the modeling of geometry, leading to better 3D sharps with cross-view consistency. We believe that the ability of unleashing highquality visual knowledge in 2D visual prompt is potentially a new paradigm of text-to-3D generation. As a by-product, we also demonstrate that our VP3D can be readily adapted for a new task of stylized text-to-3D generation. Intriguingly, we simply replace the self-generating image in VP3D with a user-given reference image, and treat it as a new visual prompt to trigger (text plus image)-to-3D generation. In this way, our VP3D is able to produce a stylized 3D asset, which not only semantically aligns with text prompt but also shares similar geometric & visual style as the reference image. 2. Related Work Text-to-3D generation. Significant advancements have been witnessed in text-to-image generation with 2D diffusion models in recent years [3, 12, 13, 25, 30\u201332, 35]. However, extending these capabilities to 3D content generation poses a substantial challenge, primarily due to the absence of large-scale paired text-3D datasets. To mitigate the reliance on extensive training data, recent works try to accomplish zero-shot text-to-3D generation [4, 7, 8, 17, 22, 27, 38, 39, 42]. Specifically, the pioneering work DreamFusion [27] showcased remarkable achievements in text-to-3D generation through pre-trained text-to-image diffusion models. SJC [38] concurrently addressed the out-ofdistribution problem in lifting 2D diffusion models to perform text-to-3D generation. Following these, several subsequent works have strived to enhance text-to-3D generation further. For instance, Latent-NeRF [22] proposed to incorporate a sketch shape to guide the 3D generation directly in the latent space of a latent diffusion model. Magic3D [17] presented a coarse-to-fine strategy that leverages both lowand high-resolution diffusion priors to learn the underlying 3D representation. Control3D [8] proposed to enhance user controllability in text-to-3D generation by incorporating additional hand-drawn sketch conditions. ProlificDreamer [39] presented a principled particle-based variational framework to improve the generation quality. Unlike previous works, we formulate the text-to-3D generation process from a new perspective. We first leverage the off-the-shelf text-to-image diffusion models to generate a high-quality image that faithfully matches the input text prompt. This synthetic reference image then serves as a complementary input alongside the text, synergistically guiding the 3D learning process. Moreover, we showcase the remarkable versatility of this novel architecture by effortlessly extending its capabilities to the realm of stylized text-to-3D generation. The resulting 3D asset not only exhibits semantic alignment with the provided text prompt but also masterfully captures the visual style of the reference image. 
This capability marks another pivotal distinction between our VP3D and previous text-to-3D approaches. Image-to-3D generation. Recently, prior works RealFusion [21], NeuralLift-360 [40] and NeRDi [9] leverage 2D diffusion models to achieve image-to-3D generation. The follow-up work Make-IT-3D [37] proposed a two-stage optimization framework to further improve the generation quality. Zero-1-to-3 [19] finetuned the Stable Diffusion model to enable generating novel views of the input image. It can then be used as a 3D prior model to achieve high-quality image-to-3D generation. Inspired by this, Magic123 [28] proposed to use 2D and 3D priors simultaneously to generate faithful 3D content from the given image. One-2-3-45 [18] integrated Zero-1-to-3 and a multi-view reconstruction model to accelerate the 3D generation process. It is worth noting that our work does not target image-to-3D generation. We utilize a reference image to guide the text-to-3D learning process, instead of directly turning the reference image into 3D content. 3. VP3D In this section, we elaborate on the architecture of our VP3D, which introduces a novel visual prompt-guided text-to-3D diffusion model. An overview of our VP3D architecture is depicted in Figure 2. 3.1. Background Text-to-Image Diffusion Models. Diffusion models are a family of generative models that are trained to gradually transform Gaussian noise into samples from a target distribution [13]. Given a target data distribution $q(\mathbf{x})$, a forward diffusion process is defined to progressively add a small amount of Gaussian noise to the data $\mathbf{x}_0$ sampled from $q(\mathbf{x})$. This process follows a Markov chain $q(\mathbf{x}_{1:T}) = \prod_{t=1}^{T} q(\mathbf{x}_t | \mathbf{x}_{t-1})$ and produces a sequence of latent variables $\mathbf{x}_1, \dots, \mathbf{x}_T$ after $T$ time steps. The marginal distribution of latent variables at time step $t$ is given by $q(\mathbf{x}_t | \mathbf{x}) = \mathcal{N}(\mathbf{x}_t; \alpha_t \mathbf{x}, \sigma_t^2 \mathbf{I})$. Thus the noisy sample $\mathbf{x}_t$ can be directly generated through the equation $\mathbf{x}_t = \alpha_t \mathbf{x} + \sigma_t \boldsymbol{\epsilon}$, where $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, and $\alpha_t$ and $\sigma_t$ are chosen parameters such that $\alpha_t^2 + \sigma_t^2 = 1$. After $T$ noise-adding steps, $\mathbf{x}_T$ is equivalent to an isotropic Gaussian distribution. Then, a reverse generative process is defined to gradually "denoise" $\mathbf{x}_T$ to reconstruct the original sample. This can be described by a Markov process $p_\phi(\mathbf{x}_{0:T}) = p(\mathbf{x}_T) \prod_{t=1}^{T} p_\phi(\mathbf{x}_{t-1} | \mathbf{x}_t)$, with the conditional probability $p_\phi(\mathbf{x}_{t-1} | \mathbf{x}_t) = \mathcal{N}(\mathbf{x}_{t-1}; \mu_\phi(\mathbf{x}_t, t), \Sigma_\phi(\mathbf{x}_t, t))$. Commonly, a U-Net neural network $\boldsymbol{\epsilon}_\phi(\mathbf{x}_t; t)$ with parameters $\phi$ is used to predict the noise that was used to produce $\mathbf{x}_t$ at time step $t$. Text-to-image diffusion models build upon the above theory to condition the diffusion process on a given text prompt $y$ using classifier-free guidance (CFG) [12]. The corresponding noise predictor is remodeled as: $\hat{\boldsymbol{\epsilon}}_\phi(\mathbf{x}_t; t, \mathbf{z}_y) = \boldsymbol{\epsilon}_\phi(\mathbf{x}_t; t, \emptyset) + s \cdot \bigl( \boldsymbol{\epsilon}_\phi(\mathbf{x}_t; t, \mathbf{z}_y) - \boldsymbol{\epsilon}_\phi(\mathbf{x}_t; t, \emptyset) \bigr)$, (1) where $s$ is a scale that denotes the classifier-free guidance weight, $\mathbf{z}_y$ is the corresponding text embedding of the text prompt $y$, and $\emptyset$ indicates the noise prediction without conditioning.
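A small sketch of the classifier-free guidance rule in Eq. (1) is given below; the `noise_pred` function is a stand-in for the pre-trained U-Net noise predictor, and its behavior is purely illustrative.

```python
# Minimal sketch of classifier-free guidance (Eq. 1): the guided noise estimate
# mixes conditional and unconditional U-Net predictions with guidance scale s.
# `noise_pred` is a toy stand-in for the pre-trained epsilon_phi network.
import numpy as np

rng = np.random.default_rng(0)

def noise_pred(x_t, t, z=None):
    # Placeholder for the U-Net; z=None plays the role of the empty condition.
    bias = 0.0 if z is None else 0.1 * z.mean()
    return 0.5 * x_t + bias

def cfg_noise(x_t, t, z_y, s=7.5):
    eps_uncond = noise_pred(x_t, t, None)       # epsilon_phi(x_t; t, emptyset)
    eps_cond = noise_pred(x_t, t, z_y)          # epsilon_phi(x_t; t, z_y)
    return eps_uncond + s * (eps_cond - eps_uncond)

x_t = rng.standard_normal((4, 4))               # toy noisy latent
z_y = rng.standard_normal(8)                    # toy text embedding
print(cfg_noise(x_t, 0.5, z_y).shape)
```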
The diffusion model $\boldsymbol{\epsilon}_\phi$ is typically optimized by a simplified variant of the variational lower bound of the log data likelihood, which is a mean squared error criterion: $\mathcal{L}_{\mathrm{diff}}(\phi) = \mathbb{E}_{\mathbf{x}, t, \boldsymbol{\epsilon}} \bigl[ w(t) \, \| \hat{\boldsymbol{\epsilon}}_\phi(\mathbf{x}_t; t, \mathbf{z}_y) - \boldsymbol{\epsilon} \|_2^2 \bigr]$, (2) where $w(t)$ is a weighting function that depends on the timestep $t \sim \mathcal{U}(0, 1)$ and $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. Score Distillation Sampling. A recent pioneering work called DreamFusion [27] introduces Score Distillation Sampling (SDS), which enables leveraging the priors of pre-trained text-to-image diffusion models to facilitate text-to-3D generation. Specifically, let $\theta$ be the learnable parameters of a 3D model (e.g., NeRF) and $g$ be a differentiable rendering function that can render an image $\mathbf{x} = g(\theta; c)$ from the 3D model $\theta$ at a camera viewpoint $c$. SDS introduces a loss function $\mathcal{L}_{SDS}$ to optimize the parameters $\theta$. Its gradient is defined as follows: $\nabla_\theta \mathcal{L}_{SDS} = \mathbb{E}_{t, \boldsymbol{\epsilon}} \bigl[ w(t) \, \bigl( \hat{\boldsymbol{\epsilon}}_\phi(\mathbf{x}_t; t, \mathbf{z}_y) - \boldsymbol{\epsilon} \bigr) \frac{\partial \mathbf{x}}{\partial \theta} \bigr]$, (3) where $\mathbf{x}_t$ is obtained by perturbing the rendered image $\mathbf{x}$ with a Gaussian noise $\boldsymbol{\epsilon}$ corresponding to the $t$-th timestep of the forward diffusion process, and $\mathbf{z}_y$ is the conditioned text embedding of the given text prompt $y$. Intuitively, the SDS loss estimates an update direction in which the noised version of the rendered image $\mathbf{x}$ should be moved towards a denser region in the distribution of real images (aligned with the conditional text prompt $y$). By randomly sampling views and backpropagating the gradient in Eq. 3 to the parameters $\theta$ through the differentiable parametric function $g$, this approach eventually results in a 3D model that resembles the text. 3.2. Visual-prompted Score Distillation Sampling Visual Prompt Generation. As aforementioned, score distillation sampling plays a key role in text-to-3D generation. Nevertheless, empirical observations [11, 27, 39] reveal that SDS still results in degenerated 3D models, especially when feeding intricate text prompts. First, SDS-generated results often suffer from over-saturation issues. These issues are, in part, attributed to the necessity of employing a large CFG value (i.e., 100) within the SDS framework [27, 39]. A large CFG value narrows down the score distillation space to more text-relevant areas. This can mitigate the divergence of diffusion priors in the optimization process, thereby fostering enhanced stability in 3D representation learning. However, this comes at the cost of less realistic and less diverse generation results, as large CFG values are known to yield over-saturated results [39]. Second, results generated by SDS still face the risk of text-3D misalignment, such as missing key elements in the scene, especially when text prompts contain multiple objects with specific attributes. A fundamental reason behind the aforementioned issues may lie in the substantial distribution gap between the text and 3D modalities. Thus it is non-trivial to directly learn a meaningful 3D scene solely based on a single text prompt. This insight motivates us to introduce an additional visual prompt as a bridge to explicitly establish a connection between the text input and the desired 3D output.
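Before moving to the visual-prompted variant, the following schematic sketch walks through one plain SDS update in the spirit of Eq. (3); the renderer, noise schedule, and noise predictor are toy placeholders, and the weighting w(t) is folded into the learning rate.

```python
# Schematic sketch of one SDS step (Eq. 3): render, add noise at a random
# timestep, query the (placeholder) diffusion model, and push the residual
# (eps_hat - eps) back through the rendering as the parameter gradient.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal(16) * 0.01          # toy 3D-model parameters

def render(theta, cam):
    # Placeholder differentiable renderer g(theta; c): a linear "image"
    # whose column j equals cam * theta[j].
    return np.outer(np.ones(8), theta[:8]) * cam

def noise_pred(x_t, t, z_y):
    return 0.6 * x_t                             # placeholder epsilon_phi

def sds_step(theta, z_y, lr=1e-2):
    cam = rng.uniform(0.5, 1.5)                  # random camera viewpoint
    x = render(theta, cam)
    t = rng.uniform(0.02, 0.98)                  # random diffusion timestep
    alpha_t, sigma_t = np.cos(t), np.sin(t)      # toy schedule with alpha^2 + sigma^2 = 1
    eps = rng.standard_normal(x.shape)
    x_t = alpha_t * x + sigma_t * eps
    residual = noise_pred(x_t, t, z_y) - eps     # (eps_hat - eps); w(t) folded into lr
    # dx/dtheta for this toy renderer: column j depends on theta[j] by factor cam.
    grad = np.zeros_like(theta)
    grad[:8] = cam * residual.sum(axis=0)
    return theta - lr * grad

z_y = rng.standard_normal(8)                     # toy text embedding
for _ in range(3):
    theta = sds_step(theta, z_y)
print(theta[:4])
```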
Particularly, we leverage off-the-shelf text-to-image diffusion models (e.g., Stable Diffusion) to produce a high-fidelity image that faithfully matches the input text prompt and has an extremely realistic appearance. This image is then used as a visual prompt in conjunction with the input text prompt to jointly supervise the 3D generation process. Figure 2. An overview of the proposed VP3D framework for visual-prompted text-to-3D generation. Score Distillation Sampling with Visual Prompt. We now present visual-prompted score distillation sampling, which distills knowledge from a pre-trained diffusion model to optimize a 3D model by considering inputs not only from a text prompt $y$ but also from a visual prompt $v$. To be clear, we restructure the standard SDS-based text-to-3D pipeline by utilizing an image-conditioned diffusion model [43] to trigger visual prompt-guided text-to-3D generation. Technically, the visual prompt is first converted to a global image embedding $\mathbf{z}_v$ by the CLIP image encoder [29] and a following projection network. This image embedding represents the rich content and style of the visual prompt and has the same dimension as the text embedding $\mathbf{z}_y$ used in the pre-trained text-to-image diffusion model (Stable Diffusion). Following SDS, we first add noise $\boldsymbol{\epsilon}$ to the rendered image of the underlying 3D model according to a randomly sampled time step $t$ to get a noised image $\mathbf{x}_t$. Then $\mathbf{x}_t$ is input to the diffusion model along with the conditional visual prompt embedding $\mathbf{z}_v$ and text prompt embedding $\mathbf{z}_y$ to estimate the added noise as follows: $\tilde{\boldsymbol{\epsilon}}_\phi(\mathbf{x}_t; t, \mathbf{z}_y, \mathbf{z}_v) = \boldsymbol{\epsilon}_\phi(\mathbf{x}_t; t, \emptyset, \emptyset) + s \cdot \bigl( \boldsymbol{\epsilon}_\phi(\mathbf{x}_t; t, \mathbf{z}_y, \lambda \cdot \mathbf{z}_v) - \boldsymbol{\epsilon}_\phi(\mathbf{x}_t; t, \emptyset, \emptyset) \bigr)$, (4) where $s$ is the classifier-free guidance weight, $\lambda \in [0, 1]$ is the visual prompt condition weight, $\phi$ denotes the parameters of the pre-trained noise predictor $\boldsymbol{\epsilon}_\phi$, and $\boldsymbol{\epsilon}_\phi(\mathbf{x}_t; t, \emptyset, \emptyset)$ denotes the noise prediction without conditioning. In this way, our proposed method explicitly incorporates the visual prompt and text prompt in a unified fashion for text-to-3D generation. Consequently, the final gradient of our introduced visual-prompted score distillation sampling (VP-SDS) loss with respect to $\theta$ is expressed as: $\nabla_\theta \mathcal{L}_{VP\text{-}SDS} = \mathbb{E}_{t, \boldsymbol{\epsilon}} \bigl[ w(t) \, \bigl( \tilde{\boldsymbol{\epsilon}}_\phi(\mathbf{x}_t; t, \mathbf{z}_y, \mathbf{z}_v) - \boldsymbol{\epsilon} \bigr) \frac{\partial \mathbf{x}}{\partial \theta} \bigr]$, (5) where $w(t)$ is a scheduling coefficient. Comparison with SDS. Comparing the update gradients of SDS (Eq. 3) and VP-SDS (Eq. 5), SDS is a special case of our VP-SDS obtained by setting $\lambda = 0$, where the visual prompt condition is neglected.
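To make the comparison with SDS concrete, the following sketch mirrors the VP-SDS noise estimate of Eq. (4); the image-conditioned noise predictor is again a placeholder, and setting the visual prompt weight λ to 0 zeroes out the visual condition, recovering the text-only estimate used by SDS.

```python
# Minimal sketch of the VP-SDS noise estimate (Eq. 4). `noise_pred` stands in
# for the image-conditioned diffusion U-Net epsilon_phi; `None` plays the role
# of the empty condition. With lam = 0 the visual-prompt embedding is zeroed
# out, so the estimate reduces to the text-only CFG form used by plain SDS.
import numpy as np

rng = np.random.default_rng(0)

def noise_pred(x_t, t, z_y=None, z_v=None):
    # Placeholder epsilon_phi(x_t; t, z_y, z_v): a toy linear response.
    out = 0.5 * x_t
    if z_y is not None:
        out = out + 0.05 * z_y.mean()
    if z_v is not None:
        out = out + 0.05 * z_v.mean()
    return out

def vp_sds_noise(x_t, t, z_y, z_v, s=7.5, lam=0.5):
    eps_uncond = noise_pred(x_t, t)                 # epsilon_phi(x_t; t, empty, empty)
    eps_cond = noise_pred(x_t, t, z_y, lam * z_v)   # text + lambda-scaled visual prompt
    return eps_uncond + s * (eps_cond - eps_uncond)

x_t = rng.standard_normal((4, 4))   # toy noised rendering
z_y = rng.standard_normal(8)        # toy text embedding
z_v = rng.standard_normal(8)        # toy CLIP image embedding of the visual prompt
print(vp_sds_noise(x_t, 0.3, z_y, z_v, lam=0.5).mean())  # visual prompt active
print(vp_sds_noise(x_t, 0.3, z_y, z_v, lam=0.0).mean())  # reduces to text-only (SDS) case
```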
In accordance with the theoretical analysis presented in [27, 39], the mode-seeking nature of SDS necessitates a large CFG to ensure that the pre-trained diffusion model \u03f5\u03d5 delivers a \u201csharp\u201d updating direction for the underlying 3D model. Nevertheless, a large CFG, in turn, results in poor-quality samples and thus a \u201cdegraded\u201d update direction. In contrast, VP-SDS leverages the additional visual prompt to narrow down the distillation space of \u03f5\u03d5 into a more compact region that aligns tightly with the visual prompt. Meanwhile, the distillation space is also refined by the visual prompt as it reflects realistic appearances with rich details. Therefore, the updating direction derived from our VP-SDS is not only \u201csharp\u201d but also \u201cfine\u201d, which can obtain much better 3D generation results than SDS. Notably, a recent work ProlificDreamer [39] presents variational score distillation (VSD) to address the aforementioned issues in SDS. However, VSD needs to train an additional diffusion model using LoRA [15] during the optimization process, which incurs a considerable computational overhead compared to SDS. Instead, the additional computational cost of our VP-SDS is nearly negligible, making it computationally more efficient than VSD. View-dependent Visual Prompting. Apart from the oversaturation problem discussed above, existing text-to-3D methods are known to also suffer from the multi-view inconsistency problem (e.g., the multi-face Janus problem). This arises from the fact that the underlying prior diffusion model is exclusively trained on individual 2D images and therefore lacks 3D awareness. To alleviate this issue, existing text-to-3D methods [17, 27, 38, 39] always employ diffusion loss with view-dependent text conditioning, which is to append \u201cfront view\u201d, \u201cside view\u201d, or \u201cback view\u201d to the input text based on the location of the randomly sampled camera. Inspired by this, we devise a view-dependent visual prompting strategy to further mitigate the view incon\fsistency problem in collaboration with our introduced VPSDS. Technically, given the input visual prompt (assuming it is shot from the front view), we use a view-conditioned 2D diffusion model, Zero-1-to-3 [19], to transform it into left-side, right-side and backward views. Then we fed different visual prompts into VP-SDS (Eq. 5) depending on the corresponding sampled camera viewpoint. For instance, when the azimuth angle \u03b3cam \u2208[0\u25e6, 360\u25e6] of the camera position falls in the range near 180\u25e6(0\u25e6denotes the front view), we feed the generated back view counterpart of the input visual prompt into Eq 5. In this way, the inherent 3D geometry information contained in the multi-view visual prompts is encoded into the 3D representation learning through view-dependent VP-SDS, leading to better view consistency in the 3D generation. 3.3. Learning with Reward Feedback To further encourage rendered images of the underlying 3D model that are high fidelity and well aligned with the input visual prompt and text prompt, we devise two types of differentiable reward functions that complement the aforementioned VP-SDS objective. Human Feedback Reward. Recent practice has shown the capability of improving text-to-image models with human feedback [41]. Particularly, it first trains a reward model on a large dataset comprised of human assessments of textimage pairs. 
Such a reward model thus has the ability to measure the quality of the generated samples in terms of both image fidelity and image-text alignment. Consequently, it can be used to fine-tune diffusion models to maximize the predicted scores of the reward model through differentiable reward functions, leading to better generation results. Motivated by this, we go one step further and utilize the open-sourced reward model $\mathbf{r}$ from ImageReward [41] for text-to-3D generation. Specifically, we introduce a human feedback reward loss as follows: $\mathcal{L}_{hf\text{-}reward} = \mathbb{E}_{\mathbf{c}} \bigl[ \psi \bigl( \mathbf{r}(\mathbf{x}, y) \bigr) \bigr]$, (6) where $\mathbf{x} = g(\theta; \mathbf{c})$ is an image rendered by the underlying 3D model $\theta$ from an arbitrary viewpoint $\mathbf{c}$, $y$ is the conditional text prompt, and $\psi$ is a differentiable reward-to-loss mapping function as in [41]. Intuitively, minimizing the loss in Eq. 6 encourages the rendered image $\mathbf{x}$ to obtain a higher reward score from the reward model $\mathbf{r}$, which means the underlying 3D model should be updated towards a refined direction where the renderings have high appearance fidelity and faithfully match the input text prompt. Visual Consistency Reward. Given that the above human feedback reward only takes into account the input text prompt, we further devise a visual consistency reward to fully leverage the visual prompt as well, since text prompts cannot capture all appearance details. Technically, we adopt a pre-trained self-supervised vision transformer, DINO-ViT [2], to extract the visual features $F_{dino}(\mathbf{v})$ and $F_{dino}(\mathbf{x})$ of the input visual prompt $\mathbf{v}$ and the rendered image $\mathbf{x}$, respectively. Then we penalize the feature-wise difference between them at the visual prompt viewpoint: $\mathcal{L}_{vc\text{-}reward} = \| F_{dino}(\mathbf{x}) - F_{dino}(\mathbf{v}) \|^2$. (7) By imposing such a visual consistency loss, we encourage the underlying 3D model to adhere to the plausible shape and appearance properties conveyed by the visual prompt. 3.4. 3D Representation and Training Inspired by [17], we adopt a two-stage coarse-to-fine framework for text-to-3D generation with two different 3D scene representations. At the coarse stage, we leverage Instant-NGP [24] as the 3D representation, which is much faster to optimize compared to the vanilla NeRF [23] and can recover complex geometry. In the fine stage, we leverage DMTet as the 3D representation to further optimize a high-fidelity mesh and texture. Specifically, the 3D shape and texture represented in DMTet are first initialized from the density field and color field of the coarse stage, respectively [17]. During the optimization process in each stage, we first render images from the underlying 3D model through differentiable rasterizers at arbitrary camera poses and optimize the 3D model with a combination of losses: $\mathcal{L}_{fine} = \mathcal{L}_{VP\text{-}SDS} + \lambda_1 \mathcal{L}_{vc\text{-}reward} + \lambda_2 \mathcal{L}_{hf\text{-}reward}$, (8) where $\lambda_1$ and $\lambda_2$ are trade-off parameters. 4. Experiments In this section, we evaluate the effectiveness of our VP3D for text-to-3D generation via extensive empirical evaluations. We first show both quantitative and qualitative results of VP3D in comparison to existing techniques on the newly released text-to-3D benchmark (T3Bench [11]). Next, we conduct ablation studies to validate each design in VP3D. Finally, we demonstrate the extended capability of VP3D for stylized text-to-3D generation. 4.1.
Experimental Settings Implementation Details. In the coarse and fine stage, the underlying 3D models are both optimized for 5000 iterations using Adam optimizer with 0.001 learning rate. The rendering resolutions are set to 128\u00d7128 and 512\u00d7512 for coarse and fine stage, respectively. We implement the underlying Instant-NGP and DMTet 3D representation mainly based on the Stable-DreamFusion codebase [36]. \u03bb1 is set to 0.1 in the coarse stage and 0.01 in the fine stage. \u03bb2 is linearly increased from 0.001 to 0.01 during the optimization process. The visual prompt condition weight is set to 0.5 in all experiments. \fTable 1. The quantitative results of our method and baselines on T3Bench [11]. Method Single Object Single Object with Surroundings Multiple Objects Quality Alignment Average Quality Alignment Average Quality Alignment Average DreamFusion [27] 24.9 24.0 24.4 19.3 29.8 24.6 17.3 14.8 16.1 SJC [38] 26.3 23.0 24.7 17.3 22.3 19.8 17.7 5.8 11.7 LatentNeRF [22] 34.2 32.0 33.1 23.7 37.5 30.6 21.7 19.5 20.6 Fantasia3D [4] 29.2 23.5 26.4 21.9 32.0 27.0 22.7 14.3 18.5 ProlificDreamer [39] 51.1 47.8 49.4 42.5 47.0 44.8 45.7 25.8 35.8 Magic3D [17] 38.7 35.3 37.0 29.8 41.0 35.4 26.6 24.8 25.7 VP3D (Ours) 54.8 52.2 53.5 45.4 50.8 48.1 49.1 31.5 40.3 Evaluation Protocol. Existing text-to-3D generation works commonly examine their methods over the CLIP RPrecision score [16], which is an automated metric for the consistency of rendered images with respect to the input text. However, this text-image alignment-based metric cannot faithfully represent the overall 3D quality. For example, CLIP-based text-to-3D methods can also achieve high CLIP R-Precision scores even if the resulting 3D scenes are unrealistic and severely distorted [27]. Taking this into account, we instead conduct experiments on a newly open-sourced benchmark: T3Bench [11], which is the first comprehensive text-to-3D benchmark containing 300 diverse text prompts of three categories (single object, single object with surroundings, and multiple objects). T3Bench provides two automatic metrics (quality and alignment) based on the rendered multi-view images to assess the subjective quality and text alignment. The quality metric utilizes a combination of multi-view text-image scores and regional convolution to effectively identify quality and view inconsistency. The alignment metric employs a 3D captioning model and a Large Language Model (i.e., GPT-4) to access text-3D consistency. Following this, we also leverage the quality and alignment metric to quantitatively compare our VP3D against baseline methods. Baselines. To evaluate our method, we compare our VP3D with six state-of-the-art text-to-3D generation methods: DreamFusion [27], SJC [38], LatentNeRF [22], Fantasia3D [4], Magic3D [17] and ProlificDreamer [39]. Specifically, DreamFusion [27] firstly introduces score distillation sampling (SDS) that enables leveraging 2D diffusion model (Imagen [14]) to optimize a NeRF [23]. SJC [38] concurrently addresses the out-of-distribution problem in SDS and utilizes an open-sourced diffusion model (Stable Diffusion) to optimize a voxel NeRF. Latent-NeRF [22] first brings NeRF to the latent space to harmonize with latent diffusion models, then refines it in pixel space. Magic3D [17] extends DreamFusion with a coarse-to-fine framework that first optimizes a low-resolution NeRF model and then a high-resolution DMTet model via SDS. Fantasia3D [4] disentangles the SDS-based 3D learning into geometry and appearance learning. 
ProlificDreamer [39] upgrades DreamFusion by a variational score distillation (VSD) loss that treats the underlying 3D scene as a random variable instead of a single point as in SDS. 4.2. Quantitative Results The quantitative performance comparisons of different methods for text-to-3D generation are summarized in Table 1. Overall, our VP3D consistently achieves better performances against existing techniques across all evaluation metrics and prompt categories. Remarkably, VP3D achieves an absolute quality-alignment average score improvement of 4.1%, 3.3%, and 4.5% against the best competitor ProlificDreamer across the three text prompt categories, respectively, which validates the effectiveness of our overall proposals. More importantly, while VP3D employs the same NeRF & DMTet 3D representation and coarseto-fine training scheme as the baseline method Magic3D, it significantly outperforms Magic3D by achieving 53.5%, 48.1%, and 40.3% average scores, representing a substantial improvement over Magic3D\u2019s average scores of 37.0%, 35.4%, and 25.7%. The results generally highlight the key advantage of introducing visual prompts in lifting 2D diffusion models to perform text-to-3D generation. Specifically, DreamFusion and SJC enable the zero-shot learning of implicit 3D models by distilling prior knowledge from 2D diffusion models. However, the generated 3D scenes have relatively low quality and alignment scores, especially in complex scenarios where the text prompt contains multiple objects or surroundings. Latent-NeRF employs score distillation in the latent space and then back to pixel space to further refine the 3D model, leading to better results. The aforementioned three methods only utilize implicit 3D representations (NeRFs). In contrast, Magic3D adopts textured mesh DMTet as 3D representation for enabling high-resolution optimization and exhibits better performances across all three prompt categories. Fantasia3D also capitalizes on DMTet for geometry learning and then leverages BRDF for appearance learning in a disentangled manner. While Fantasia3D achieves better average scores than DreamFusion and SJC, it fails to create high-fidelity results in complex scenes (e.g., \u201cmultiple objects\u201d). Pro\fDreamFusion Magic3D Latent-NeRF Fantasia3D SJC ProlificDreamer VP3D (ours) (a) (b) (c) (d) (e) (f) Figure 3. Comparisons on qualitative results of our VP3D with other text-to-3D techniques on T3Bench [11]. The prompts are (a) \u201cA fuzzy pink flamingo lawn ornament\u201d, (b) \u201cA blooming potted orchid with purple flowers\u201d, (c) \u201cA blue butterfly on a pink flower\u201d,(d) \u201cA lighthouse on a rocky shore\u201d,(e) \u201cHot popcorn jump out from the red striped popcorn maker\u201d,(f) \u201cA chef is making pizza dough in the kitchen\u201d. (a-b), (c-d), (e-f) belongs to the Single Object, Single Object with Surr and Multi Objects category in T3Bench, respectively. lificDreamer further boosts the performance by training an additional diffusion model during the optimization process to realize a principled particle-based variational score distillation loss. However, our VP3D still outperforms ProlificDreamer across all evaluation metrics and prompt sets, which confirms the effectiveness of our VP3D. 4.3. Qualitative Results The qualitative comparisons for text-to-3D generation are presented in Figure 3. As can be seen, our VP3D generally produces superior 3D scenes with plausible geometry and realistic textures when compared with baseline methods. 
4.3. Qualitative Results
Figure 3. Comparisons on qualitative results of our VP3D with other text-to-3D techniques on T3Bench [11]. The prompts are (a) “A fuzzy pink flamingo lawn ornament”, (b) “A blooming potted orchid with purple flowers”, (c) “A blue butterfly on a pink flower”, (d) “A lighthouse on a rocky shore”, (e) “Hot popcorn jump out from the red striped popcorn maker”, and (f) “A chef is making pizza dough in the kitchen”. (a-b), (c-d), and (e-f) belong to the Single Object, Single Object with Surroundings, and Multiple Objects categories of T3Bench, respectively.
The qualitative comparisons for text-to-3D generation are presented in Figure 3. As can be seen, our VP3D generally produces superior 3D scenes with plausible geometry and realistic textures compared with the baseline methods. Specifically, DreamFusion suffers from a severe over-saturation problem and has difficulty generating complex geometry. Magic3D and Latent-NeRF slightly alleviate these issues through their higher-resolution DMTet representation and pixel-space refinement, respectively. While Fantasia3D and SJC can generate richer textures than DreamFusion, the geometric quality of their generated 3D scenes falls short of expectations. Notably, ProlificDreamer trains an additional diffusion model during the optimization process to perform variational score distillation (VSD) instead of SDS, achieving satisfactory results on single objects. However, the use of VSD at times introduces excessive irrelevant information or geometry noise in more complex scenarios. In contrast, we can clearly observe that the 3D scenes generated by VP3D faithfully match the input text prompt with plausible geometry and realistic appearance, which demonstrates the superiority of VP3D over state-of-the-art methods and its ability to generate high-quality 3D content.
4.4. Ablation Study
Figure 5. Comparisons on qualitative results of different ablated runs of our VP3D. The text prompts are (a) “A broken tablespoon lies next to an empty sugar bowl” and (b) “A chameleon perched on a tree branch”.
Here we investigate how each design in our VP3D influences the overall generation performance. We depict the qualitative results of each ablated run in Figure 5. $L_{SDS}$ denotes our baseline model that employs the vanilla score distillation sampling loss. As can be seen, the generated 3D scene is over-saturated and geometrically implausible. When $L_{VP-SDS}$ is employed instead, the generation quality is clearly enhanced in terms of both geometry and appearance. This highlights the critical effectiveness of our proposed visual-prompted score distillation sampling. Nevertheless, the 3D scenes produced by $L_{VP-SDS}$ alone are still not satisfying enough. By utilizing the additional visual consistency and human feedback reward functions $L_{vc-reward}$ (Eq. 7) and $L_{hf-reward}$ (Eq. 6), the generation quality is gradually improved. These results validate the effectiveness of the two complementary factors.
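In terms of the overall training objective, the ablated terms are presumably combined additively with the λ weights given in the implementation details; the pairing of weights to reward terms below is an assumption, since Eq. 6 and Eq. 7 are not reproduced in this excerpt.

    import torch

    def vp3d_objective(l_vp_sds, l_hf_reward, l_vc_reward, lambda1=0.1, lambda2=0.001):
        # lambda1: 0.1 in the coarse stage, 0.01 in the fine stage; lambda2: linearly
        # increased from 0.001 to 0.01. Which weight pairs with which reward term is an
        # assumption made only for this sketch.
        return l_vp_sds + lambda1 * l_hf_reward + lambda2 * l_vc_reward

    print(vp3d_objective(torch.tensor(1.0), torch.tensor(0.5), torch.tensor(0.2)))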
4.5. Extension to Stylized Text-to-3D Generation
Figure 4. Stylized text-to-3D generation results of our VP3D (text prompt: “a rabbit, high detail 3d model”).
In this section, we demonstrate that another advantage of our VP3D is its remarkable versatility in 3D generation, as it can be readily adapted to the new task of stylized text-to-3D generation. The main difference is that the visual prompt is no longer generated from the text prompt but comes from a user-specified reference image. We also empirically discard the loss in Eq. 6 to eliminate the strict text-image alignment constraint. In this way, our VP3D can integrate the visual cues contained in the reference image into text-to-3D generation and produce a stylized 3D asset. This asset not only semantically aligns with the text prompt but also reflects the visual and geometric properties of the reference image. Figure 4 shows our stylized text-to-3D generation results. Our VP3D can generate diverse and stylized 3D assets by pairing different visual prompts with the same text prompt. As shown in Figure 4 (a-b), the generated result is semantically a rabbit that adheres to the text prompt while also inheriting some visual cues of the visual prompt. More concretely, the generated 3D rabbits exhibit a geometry pose and appearance texture that are broadly consistent with the object in the visual prompt. For example, in Figure 4 (b), the generated rabbit mirrors the “hugging pose” of the reference image and also shows the same style of “crescent-shaped eyebrows” and “yellow plaid jacket” as in the reference image. In Figure 4 (c-d), we showcase the versatility of our VP3D by seamlessly blending styles from different visual prompts. Taking Figure 4 (d) as an instance, we use the leopard image as the visual prompt in the coarse stage and then replace it with an oil painting image in the fine stage. VP3D then produces a 3D rabbit that not only shares a consistent pose with the leopard but also exhibits a colorful oil-painting-style texture. This stylized 3D generation ability distinguishes our VP3D from previous text-to-3D approaches and can lead to more creative and diverse 3D content creation.
5. Conclusion
In this work, we propose VP3D, a new paradigm for text-to-3D generation that leverages 2D visual prompts. We first capitalize on 2D diffusion models to generate a high-quality image from the input text. This image then acts as a visual prompt to strengthen 3D model learning with our devised visual-prompted score distillation sampling. Meanwhile, we introduce additional human feedback and visual consistency reward functions to encourage semantic and appearance consistency between the 3D model and the input visual and text prompts. Both qualitative and quantitative comparisons on the T3Bench benchmark demonstrate the superiority of our VP3D over existing SOTA techniques.
| }, |
| { |
| "url": "http://arxiv.org/abs/2402.18169v2", |
| "title": "MIKO: Multimodal Intention Knowledge Distillation from Large Language Models for Social-Media Commonsense Discovery", |
| "abstract": "Social media has become a ubiquitous tool for connecting with others, staying\nupdated with news, expressing opinions, and finding entertainment. However,\nunderstanding the intention behind social media posts remains challenging due\nto the implicitness of intentions in social media posts, the need for\ncross-modality understanding of both text and images, and the presence of noisy\ninformation such as hashtags, misspelled words, and complicated abbreviations.\nTo address these challenges, we present MIKO, a Multimodal Intention Kowledge\nDistillatiOn framework that collaboratively leverages a Large Language Model\n(LLM) and a Multimodal Large Language Model (MLLM) to uncover users'\nintentions. Specifically, we use an MLLM to interpret the image and an LLM to\nextract key information from the text and finally instruct the LLM again to\ngenerate intentions. By applying MIKO to publicly available social media\ndatasets, we construct an intention knowledge base featuring 1,372K intentions\nrooted in 137,287 posts. We conduct a two-stage annotation to verify the\nquality of the generated knowledge and benchmark the performance of widely used\nLLMs for intention generation. We further apply MIKO to a sarcasm detection\ndataset and distill a student model to demonstrate the downstream benefits of\napplying intention knowledge.", |
| "authors": "Feihong Lu, Weiqi Wang, Yangyifei Luo, Ziqin Zhu, Qingyun Sun, Baixuan Xu, Haochen Shi, Shiqi Gao, Qian Li, Yangqiu Song, Jianxin Li", |
| "published": "2024-02-28", |
| "updated": "2024-02-29", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Distillation", |
| "gt": "MIKO: Multimodal Intention Knowledge Distillation from Large Language Models for Social-Media Commonsense Discovery", |
| "main_content": "INTRODUCTION Social media platforms serve as a cornerstone in our daily lives, which trigger various data mining and Natural Language Processing (NLP) tasks that require deep understanding of users\u2019 behaviors [3, 14, 22, 47]. However, according to psychological theories [4, 46], the interrelation of intention re\ufb02ecting human motivation signi\ufb01cantly in\ufb02uences behavioral patterns. Intentions are mental states or processes of planning, directing, or aiming towards a desired outcome or goal [5]. It is widely acknowledged in scholarly discourse that intention is interwoven with a form of desire, thereby rendering intentional behavior as inherently valuable or desirable [63]. For example, in Figure 1, users\u2019 intentions are strongly correlated to the contents of their social media posts. Thus, in social media, accurately understanding users\u2019 intentions in their posts has the potential to motivate downstream tasks as they provide a more cognitively shaped observation of the posts. In recent years, there has been a surge in the development and enhancement of :DQW\u0003 D\u0003 PRELOH\u0003 SKRQH\u0003 WKDW\u0003 LV\u0003 ERWK\u0003 JRRG\u0010ORRNLQJ\u0003DQG\u0003FRVW\u0010HIIHFWLYH 9HU\\\u0003DWWUDFWHG\u0003WR\u0003WKH\u0003DSSHDUDQFH\u0003RI\u0003 $SSOH\u0003PRELOH\u0003SKRQH 8VHU\u0003\u0014 8VHU\u0003\u0015 8VHU\u0003,QWHQWLRQ 8VHU\u0003,QWHQWLRQ +HOOR\u0003 IHOORZ\u0003 YWXEHUV\u000f\u0003 'RHV\u0003 DQ\\RQH\u0003 KDYH\u0003 DQ\u0003 L3KRQH\u0003UHFRPPHQGDWLRQ\u0003 IRU\u0003WKH\u0003IDFH\u0003WUDFNLQJ\u0011 8VHU\u0003SRVW RQ\u0003 \u0006\u0003 PDPEDGD\\\u0003 WKH\u0003 \\RXQJ\u0003 ODNHU\u0003 VTXDG\u0003 FRXOGQ\u0003 W\u0003HYHQ\u0003JHW\u0003WR\u0003\u001b\u0014\u0003 SRLQWV\u0003\u0011\u0003WKH\\\u0003GHVHUYH\u0003WKH\u0003 FU\\LQJ\u0003PM\u0003IDFH\u0003\u0011 8VHU\u0003SRVW )HHO\u0003GLVDSSRLQWPHQW\u0003IRU\u0003WKH\u0003\\RXQJ\u0003 /DNHU\u0003VTXDG V\u0003SRRU\u0003SHUIRUPDQFH :DQW\u0003WR\u0003ULGLFXOH\u0003WHDP\u0003PHPEHUV\u0003LQ\u0003 D\u0003KXPRURXV\u0003ZD\\ Figure 1: Examples of users\u2019 intentions in their social media posts. User 1\u2019s intention is to buy a cost-e\ufb00ective iPhone, while User 2\u2019s intention is to be disappointed with the performance of the young Lakers players. intention discovery algorithms, with applications spanning various \ufb01elds such as sentiment analysis[66], online shopping[62] with conceptualizations [57], and social good[1], which aim to improve the performance of downstream tasks by gaining insights into user intentions. Given the existence of the \u201cdark side\u201d of social media, characterized by the dissemination of harmful content [2, 12], the analysis of social media content to discern underlying motives and intentions is an imperative and pressing issue. However, identifying users\u2019 intentions in large-scale social media platforms remains nontrivial. Several challenges stand out throughout this process. First, intentions in the text are often implied rather than explicitly stated, which makes it impossible for heuristically or semantically designed extraction methods to retrieve from opendomain data. Furthermore, the inherently multimodal nature of social media data, which encompasses a rich tapestry of textual, visual, and auditory elements, signi\ufb01cantly magni\ufb01es this complexity. This diversity in user-generated content demands more advanced and nuanced methods of analysis. 
Last but not least, the prevalent presence of \u201cnoise\u201d in social media posts, including hashtags, misspelled words, and complex abbreviations, poses substantial interpretative challenges for existing analytical models. Despite ongoing research e\ufb00orts, there remains a discernible gap in methodologies for social intention analysis, particularly within the context of social media. As a result, our research is primarily motivated by the exploration of automated techniques for identifying multimodal social intentions within open domains. \fPreprint, , F. Lu, W. Wang, Y. Luo, Z. Zhu, Q. Sun, B. Xu, H. Shi, S. Gao, Q. Li, Y. Song, J. Li Owing to the abundant knowledge and robust reasoning abilities of Large Language Models (LLMs) [8, 37, 41, 42, 52, 53], an increasing number of researchers have shown their superior performances on various tasks[10, 19, 32], such as productrecommendation[39], sentiment analysis[54], and mental health analysis[61]. However, several concerns exist when leveraging them to reveal the intentions of social media posts. First, content generated by LLMs, especially when solely relying on social media posts, can be unreliable. They may generate hallucinatory outputs, such as the generation of uncontrollable, inaccurate content, and the misinterpretation of irrelevant input information. Moreover, social media posts often comprise both textual and visual elements, necessitating an in-depth understanding of each modality and the ability to perform cross-modal reasoning. For instance, as depicted in Figure1, user 2\u2019s intention is to express dissatisfaction and anger with the Lakers\u2019 recent performance, which requires a joint understanding of the text and image in the post. To tackle all issues above, in this paper, we present Miko, a Multimodal Intention Kowledge DistillatiOn framework, to acquire intention knowledge based on large-scale social media data. Specifically, Miko originates from analyzing extensive user behaviors indicative of sustainable intentions, such as various posting activities. Given a social media post and its accompanying image, we use a Multimodal Large Language Model (MLLM) to generate descriptions of the input images based on the textual content of the post. Following this, we then instruct a Large Language Model (LLM) to extract key information from both the input text and image descriptions to minimize the impact of noisy information in text. After both processing steps, we \ufb01nally instruct a powerful LLM, such as ChatGPT [41], to generate potential intentions underlying these posting behaviors as viable candidates. We align our prompts with 9 speci\ufb01c commonsense relations in ATOMIC [48], a popular commonsense knowledge base for social interactions, to make the intentions comprehensive in a commonsense manner. Another openprompted relation is also used to maintain knowledge diversity. We evaluate Miko from bothintrinsic and extrinsic perspectives. Intrinsically, we compile a series of publically available social media datasets and apply Miko to them to obtain the intentions in their social media posts. A two-stage annotation is then conducted to evaluate the plausibility and typicality of the generated contents. We then leverage intentions with top ratings in annotations as benchmark data to evaluate the capability of other generative LLMs. 
Experiment results show that (1) Miko is capable of generating intentions that are highly plausible and typical to the user\u2019s original post and (2) most LLMs fail at generating high-quality intentions while \ufb01ne-tuning on Miko generated intentions resolve this issue. Extrinsically, we evaluate the downstream bene\ufb01ts of generated intentions by applying them to a sarcasm detection task, showcasing that incorporating intentions in current methods leads to state-of-the-art performances. In summary, this paper\u2019s contributions can be summarized as follows. \u2022 We present Miko , a novel distillation framework designed to automatically obtain intentions behind social media posts with the assistance of LLMs and MLLMs. Miko stands out with its unique design in bridging the gap between understanding text and image in a social media post simultaneously with two large generative models. \u2022 We conduct extensive human annotations to show the superiority of the generated intentions in terms of both plausibility and typically. Further experiments show that most large generative models face challenges when prompted to generate intentions directly while \ufb01ne-tuning them on Miko generated intentions helps signi\ufb01cantly. \u2022 We further conduct experiments to show that intentions generated by Miko can bene\ufb01t the sarcasm detection task, which highlights the importance of distilling intentions in social media understanding tasks. The rest of the paper is organized as follows: In Section 2, we provide a detailed introduction to and analysis of the current related work on social media intentions research and knowledge distillation technology. In Section 3, we describe the de\ufb01nitions of the social intention generation task and the dataset used in this study. In Section 4, we detail the social intention generation framework Miko proposed in this study. In Section 5, we manually annotated the intentions generated by the Miko framework, benchmarked the quality of intentions generated by the Miko and other Large Language Models (LLMs), and demonstrated a case generated by the Miko . Following this, in Section 6, we conducted a detailed evaluation of the impact of generated intentions on the sarcasm detection task. Finally, our conclusions are stated in Section 7. 2 RELATED WORK 2.1 Intentions in Social Media Intention is closely related to psychological states, such as beliefs and desires[4, 46]. It is generally believed that intention involves some form of desire: the behavior of intention is considered good or desirable in a certain sense[63]. This aspect enables intentions to inspire current human behavior, among which users\u2019 posting behavior is a typical behavior driven by intentions. In social media platform, the positive social posts (such as charity, mutual help, etc.) will promote social development and progress, while negative social posts (such as ridicule, abuse, oppositional remarks, etc.) can cause harm to people\u2019s hearts and hinder social peace. Recently, the widespread application of social media in daily life has aroused the interest of scholars, which use intentional knowledge to tackle the task of sentiment analysis[23, 66], hate speech detection[58], recommendation system[16, 21, 31] et al. It aims to enhance downstream task performance by leveraging insights into user intentions. Thus, analyzing social media content to discern underlying motives and intentions is an imperative and valuable issue. 
In sentiment analysis tasks, understanding user content ideas is crucial, enabling a deeper insight into their emotional states and potential needs. This aspect, as elaboratedin the work of Zhou et al. [66], is fundamental for accurately classifying user sentiments. Through meticulous extraction and analysis of contextual clues and posting \fMiko: Multimodal Intention Knowledge Distillation from Large Language Models for Social-Media Commonsense Discovery Preprint, , intentions in user-generated content, sentiment analysis tools signi\ufb01cantly enhance their ability to categorize sentiments into wellde\ufb01ned categories such as positive, negative, or neutral. In recommendation systems, existing works often use user repurchase intentions to analyze customer needs and achieve more accurate recommendations. As [26] says the consumer\u2019s purchase intention is the propensity of consumers to continue participating in retailers\u2019 or suppliers\u2019 commercial activities. Hellier et al. [24] de\ufb01ned it as the individual\u2019s decision to continue purchasing a particular product or service from the same company after considering his/her situation. Zeithaml et al. [64] pointed out two manifestations of repurchase intentions: One is that consumers have the idea of buying a particular product or service; the other is that a consumer actively communicates to create positive word of mouth on the product or actively recommends the product or service to others. However, the task of identifying user intentions within the vast, open-domain web and analyzing the conveyed information presents signi\ufb01cant challenges. These challenges stem from the sheer volume of data produced across numerous websites. It is di\ufb03cult for traditional algorithm models to accurately locate key information and extract the accurate intentions of users. This di\ufb03culty stems from the complexity and diversity of user-generated content, which requires more advanced and nuanced analysis methods. We are the \ufb01rst to propose the open-domain social intention generation framework to extracting accurate and reasonable social intentions from mutilmodal social posts. 2.2 Knowledge Distillation Knowledge distillation [27] is a strategy in which a pre-trained model (known as the teacher model) facilitates the training of a secondary model (termed the student model). With the development of Large Language Models (LLMs), more and more researchers are trying to guide and re\ufb01ne domain-speci\ufb01c knowledge from LLMs into small models, thereby enhancing the generalization capabilities of small models [9, 20, 35, 50, 55, 56]. Liu et al. [36] attempts to distill time series information from LLMs into small models, where the student network is trained to mimic the features of the LLMbased teacher network that is pre-trained on large-scale datasets. Sun et al. [49] design an e\ufb00ective and e\ufb03cient Pre-trained Recommendation Models (PRM) distillation framework in multi-domain recommendation to accelerate the practical usage of PRMs. However, the above-mentioned studies concentrate on extracting direct information from large language models (LLMs) but overlook a hierarchical analysis to identify pertinent information and are primarily applied in speci\ufb01c \ufb01elds without analyzing the motives or intentions of social users. Our framework, referred to as Miko, can be seen as the \ufb01rst attempt to utilize LLMs for the distillation and analysis of social intentions. 
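For reference, the classical response-based formulation of knowledge distillation mentioned above is a weighted sum of a softened KL term and the ordinary task loss. The sketch below is the textbook Hinton-style objective, not necessarily the exact objective used by the cited works or by the student model trained later in this paper.

    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Soft-target term: KL divergence between temperature-softened teacher and student outputs.
        soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                        F.softmax(teacher_logits / T, dim=-1),
                        reduction='batchmean') * (T * T)
        # Hard-target term: ordinary cross-entropy against the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    print(kd_loss(torch.randn(4, 3), torch.randn(4, 3), torch.tensor([0, 1, 2, 0])))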
3 DEFINITIONS AND DATASETS
3.1 Task Definitions
In the context of analyzing a post, denoted as $t$, and its accompanying image, denoted as $m$, the objective of the intention knowledge distillation task is to extract a set of intentions, represented as $k$, from both the post $t$ and the image $m$. Aligning with most current research in intention analysis, this task is approached as an open-domain generation problem. Let $t = (t_1, t_2, \ldots, t_n)$ denote the sequence of input words in the post and $k = (k_1, k_2, \ldots, k_l)$ the set of intentions deduced from various aspects of both the post text and the image, where $n$ is the length of the post and $l$ is the number of top-$l$ most relevant intentions retained.
3.2 Datasets
For the task of intention generation, we utilize four well-known, publicly available datasets to address the challenges posed by the diversity of social media posts, providing a more robust and comprehensive analysis of social media interactions. The datasets are Twitter2015 [65], Twitter2017 [40], Twitter100k [28], and Twitter Sarcasm [6], comprising 8,357 sentences for Twitter2015, 4,819 sentences for Twitter2017, 100,000 sentences for Twitter100k, and 24,635 sentences for Twitter Sarcasm. Notably, due to the absence of the image modality in some samples of the Twitter2017 and Twitter Sarcasm datasets, we excluded these samples to curate a cleaned dataset version; the details are presented in Table 1.
4 METHOD
In this section, we present Miko, a Multimodal Intention Knowledge Distillation framework, which is shown in Figure 2. Miko can be summarized in three steps. Given an image-text pair from a social media post, we start by instructing an MLLM to generate a natural-language description of the image, bridging the vision and text modalities (Section 4.1). An LLM is then instructed to analyze the text of each post by extracting key information along five pre-defined dimensions (Section 4.2). Utilizing this intermediate information, we finally instruct the LLM again to generate the underlying intentions of the user's post and to construct multi-perspective intention profiles (Section 4.3).
4.1 Image Captioning
When users post, the images attached to the posts often carry their underlying posting motivations, which is mainly reflected in two aspects. First, when the content cannot be directly expressed in text form, such as sarcastic remarks, it is usually because what the user wants to express may violate the speech restrictions of the platform. In this case, images become an alternative means of expression, allowing users to bypass the limitations of text and convey their true intentions. Second, users may use images to further explain or strengthen the message of the text, making the original post richer and clearer. Such images not only supplement the text but in many cases help the public understand the intention and emotion behind the post more deeply and accurately. To this end, we utilize the advanced Multimodal Large Language Model LLava [37] for image captioning. With the help of a specially designed prompt, LLava is used to derive detailed descriptions of the image information from the raw image-text pairs.
This approach ensures a richer and more nuanced interpretation of the social media post. The structured prompt we employ is as follows: Based on the following text “<Post text information>”, please describe the current image in detail. Note that for posts containing only text, we do not perform this step.
Figure 2: The overall architecture of our work, which encompasses three core components: multi-information reasoning, intention distillation, and multi-view intention effectiveness evaluation. We leverage the LLava and ChatGPT models, employing a novel hierarchical prompt guidance approach to extract the image description (Section 4.1), key information (Section 4.2), and intentions (Section 4.3) from user posts. Following this, we annotate the derived intentions based on rationality and credibility, create a benchmark (Section 5.1), benchmark the performance of various LLMs (Section 5.3), and measure the benefit of intentions on the sarcasm detection task (Section 5.4).
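Sketched in code, this captioning step looks roughly as follows; call_llava is a hypothetical wrapper around an LLava inference call, and only the prompt template comes from the text above.

    def call_llava(image_path, prompt):    # hypothetical stand-in for an LLava inference call
        return f'A detailed description of {image_path}, conditioned on the post text.'

    def describe_image(post_text, image_path=None):
        if image_path is None:             # text-only posts skip this step, as noted above
            return None
        prompt = (f'Based on the following text “{post_text}”, '
                  f'please describe the current image in detail.')
        return call_llava(image_path, prompt)

    print(describe_image('Sergio Ramos: Real Madrid as motivated as two years ago', 'post.jpg'))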
4.2 Chain-of-Key Information Reasoning
Social media posts frequently contain noisy elements such as hashtags, misspellings, and complex abbreviations, which can hurt the performance of intention analysis. In addition, because an LLM has difficulty accurately describing and extracting useful information from the original posts, which may lead to hallucinations, it is necessary to further extract the most crucial information from both the original post and the corresponding image description, so as to eliminate the influence of this noisy information. We design a key-information prompting strategy to guide ChatGPT [41] in obtaining the concept, action, object, emotion, and keywords, covering different dimensions of the original post. The structured prompt we employ is as follows: Please extract the concept, action, object, emotion, and three to five keywords based on the following information. Note: remove the person's name and other information, retain only the key information. The information is <Text information>/<Image description>.
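On the output side, the reply to this prompt is easy to post-process into structured fields; the small parser below is an illustrative assumption (the paper does not prescribe one), applied to a toy reply.

    import re

    FIELDS = ['Concept', 'Action', 'Object', 'Emotion', 'Keywords']
    FIELD_ALT = '|'.join(FIELDS)

    def parse_key_information(reply):
        # Pull out each field's text up to the next field label (or end of reply).
        out = {}
        for field in FIELDS:
            m = re.search(rf'{field}\s*:\s*(.+?)(?=(?:{FIELD_ALT})\s*:|$)', reply, re.S)
            out[field.lower()] = m.group(1).strip() if m else ''
        return out

    reply = ('Concept: community charity run. Action: running for a cause. '
             'Object: race bibs. Emotion: excitement. '
             'Keywords: charity, 5k, weekend, donations.')
    print(parse_key_information(reply))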
4.3 Intention Distillation
Employing LLMs directly to extract users' posting intentions can lead to challenges, including superficial comprehension and inaccuracies in understanding. To mitigate these issues and improve the capacity of models to accurately and fully grasp the intentions behind social media posts, we develop an intention distillation strategy that combines the original post, the image description, and the key information to generate a more accurate, comprehensive, open-domain, and standardized description of the original posting intention. In addition, users' posting intentions are open and diverse. Therefore, in order to analyze users' posting intentions comprehensively and accurately from multiple perspectives, and inspired by ATOMIC [48], we categorize social intentions into two distinct types: one representing the effect after posting (“effect”), and the other reflecting the intended influence or impact of the user's social media posts on the viewers (“agent”). The structured prompt employed for this step is illustrated in Figure 5, and more detailed information is given in Appendix A (Distillation Prompt).
Figure 3: An example illustrating the generated image description, key information, and intentions. “P” stands for plausibility and “T” stands for typicality. Generated tails with good quality (in green) and bad quality (in red) are highlighted; “H” and “L” indicate high and low plausibility and typicality scores, respectively.
Table 1: Statistics of the datasets used. “Statistics” refers to the number of posts included in each dataset, whereas “Sample” denotes a randomly selected example from each dataset. All four datasets are multimodal.
Dataset | Statistics | Sample
Twitter2015 | 8,257 | RT @luxury _ _ travel: 5 must-sees in Ayrshire and Arran, Scotland
Twitter2017 | 4,395 | Sergio Ramos: Real Madrid as motivated as two years ago
Twitter100k | 100,000 | Hey, @HelloAlfred. You ruined a pair of my shoes. Not cool. Goodyear?
Twitter Sarcasm | 24,635 | hell yeah !#funny # sleepwell # dreamon # fai
5 INTRINSIC EVALUATIONS
In this study, we conduct intrinsic evaluations of the generated intentions. To assess the quality of intention generation, we randomly selected 1,000 posts and performed manual annotation as described in Section 5.1; we then extracted accurate and comprehensive intentions consistent with human logic and added them to the benchmark. Furthermore, we conducted a comprehensive evaluation of the intentions generated by the Miko framework, encompassing aspects such as the knowledge quality analysis and case study (Section 5.3). Subsequently, based on the intentions obtained in Section 4.3, we trained a local LLM (Section 5.2) and used the benchmark to evaluate the performance of other LLMs in generating intentions, as well as the intention generation performance of the trained model (Section 5.4).
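Before moving to the evaluations, the distillation stage of Section 4.3 can be summarized in code as follows. The relation inventory follows the ten intention aspects listed in Table 2 (reading the duplicated "oEffect" there as "xEffect"), call_chatgpt is a hypothetical stand-in, and the prompt wording is paraphrased rather than the exact template from Appendix A.

    RELATIONS = ['xWant', 'xEffect', 'xAttr', 'xIntent', 'xReact',
                 'oReact', 'oWant', 'oEffect', 'xNeed', 'Open']

    def call_chatgpt(prompt):        # stand-in for the actual LLM call
        return 'a candidate intention'

    def distill_intentions(post, image_desc, key_info):
        context = f'Post: {post}\nImage description: {image_desc}\nKey information: {key_info}'
        return {rel: call_chatgpt(
                    f'Based on the information below, guess the intention of why the user '
                    f'posted this, phrased from the {rel} perspective.\n{context}')
                for rel in RELATIONS}

    print(distill_intentions('Sergio Ramos: Real Madrid as motivated as two years ago',
                             'A footballer speaking at a press conference.',
                             {'emotion': 'motivation', 'keywords': ['Real Madrid', 'Ramos']}))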
5.1 Two-stage Annotation
Since the generated intentions can be incorrect or irrational, we follow the approach of FolkScope [62] and apply human annotation to obtain high-quality assertions and to determine the rationality of the generated intentions, which then serve as a benchmark for evaluating the intention-generation ability of other models. We use Label Studio [51] to annotate the intention data. In this stage, annotators are provided with the generated candidate intentions and the raw text-image pairs, so as to ensure a more reliable and consensus-based evaluation. To acquire high-quality intention data as a benchmark for evaluating other models, our initial strategy involves assessing typicality. We randomly selected 1,000 Twitter posts along with their respective intention data from our dataset for human assessment. As detailed in Section 4.3, each post is associated with 10 distinct types of intention data. We evaluate the intention information of each post individually, assigning scores based on the following criteria: 1 point for high typicality, 0 points for low typicality, and -1 point for an implausible intention. Following the evaluation of the generated intentions' typicality, it is crucial to assess whether each annotated post should be added to the benchmark. This step ensures that the annotations are not only rational but also objective. Moving beyond the basic typicality judgments for each aspect of the intention, our second step introduces more nuanced and precise measures of typicality, focusing on informativeness and comprehensiveness. In this phase, we conduct a statistical analysis of the results marked in the previous step and calculate the total score of the different generated intentions for each post. For posts with a total score exceeding 5, we further screen the intentions manually. Ultimately, we retain those that conform to human logic and possess comprehensive intent information, adding them to the benchmark. This benchmark serves as a basis for evaluating other knowledge distillation and intention generation methods. The number of posting intentions from different perspectives in the benchmark is shown in Table 2.
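In code form, the first-stage filter amounts to a simple thresholded sum over the ten per-aspect scores; the sketch below follows the rule described above, with a toy annotation.

    def first_stage_total(scores):
        # scores: +1 (high typicality), 0 (low typicality), or -1 (implausible)
        # for each of the 10 intention aspects of a single post.
        assert len(scores) == 10 and all(s in (-1, 0, 1) for s in scores)
        return sum(scores)

    post_scores = [1, 1, 1, 0, 1, 1, -1, 1, 1, 1]     # toy annotation for one post
    if first_stage_total(post_scores) > 5:            # threshold for the second, manual pass
        print('forward this post to the second-stage manual check')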
5.2 Distillation Evaluations
5.2.1 Knowledge Quality. The primary objective of the knowledge quality evaluation is to accurately identify and recognize high-quality knowledge. In this context, our focus is on assessing whether the generated intentions are indeed of superior quality. For this purpose, we conducted a human evaluation on the data labeled in Section 5.1, as shown in Figure 4. It is evident that the majority of instances generated by the Miko framework demonstrate a “high degree of correlation” with human cognition. This indicates that the intention information produced by Miko largely aligns with the process and manner of human cognition and thinking, which involves first identifying key information from raw data and then conducting a more in-depth analysis of the original content under the guidance of that key information. However, it is noteworthy that, despite the high quality of most intentions, certain categories of intention, such as “xReact”, show some deviation from human understanding. This suggests that even LLMs struggle to fully comprehend users' feelings and perceptions, marking an important area for future research.
Figure 4: Average typicality score of each aspect of intentions. The vertical axis represents the proportion of the three different categories within the manually annotated intentions, while the horizontal axis displays the ten different aspects of intentions.
5.2.2 Case Study. We show an example of a raw text-image pair and its corresponding knowledge, namely the image description (Section 4.1), key information (Section 4.2), and different aspects of the generated intentions (Section 4.3), in Figure 3. We use plausibility and typicality to measure the quality of the generated information, and we can observe that the majority of the generated intentions are both reasonable and comprehensive, aligning with human intuitive understanding. For instance, intentions like “After posting this Tweet, the user aims to inform their followers about the tragic incident at Dubai airport” and “Upon viewing this Tweet, others will be updated on the situation at Dubai airport and become aware of any potential delays or cancellations” are examples of such. Likewise, some of the open intentions are very good as well, and only a very small number of examples are of low quality.
5.3 Benchmarking Other LLMs
We are interested in whether using different types of language models, without the Miko framework, has a significant impact on the generated intentions. Hence, we empirically analyze the plausibility of generation using eleven LLMs: LLama2-7B [53], LLama2-13B [53], Mistral-7B-Instruct-v0.1 [29], Mistral-7B-Instruct-v0.2 [29], Falcon-7B [44], Flan-T5-xxl-11B [11], GLM3 [15], GLM4 [15], LLava-v1.5-13B [37], and LLava-v1.6-vicuna-7B [37]. Besides, to enhance the efficacy of intention generation from social posts using a locally deployed model, we leverage LLama2-7B, as its effectiveness has been demonstrated in several open-source language-only instruction-tuning works [17, 45]. The LLama2-7B model's selection was motivated by its balance of computational efficiency and linguistic capability, making it a pragmatic choice for local deployment scenarios where resource constraints are a consideration. At this stage, the user intentions $k$ identified in Section 4.3 are leveraged to craft instruction pairs with which to instruction-finetune the LLM. In detail, for each post $t$ and associated image $m$, the LLM furnishes the image description $X_v = g(m)$ and key information $X_i = g(t, m)$, where $g(\cdot)$ represents the text generated by the LLM. Subsequently, we formulate the training instruction $X_{instruct}^{w} = (t_q^1, t_a^1, \ldots, t_q^N, t_a^N)$ for each post, where $N$ denotes the total number of intentions for the post and $t_q$ signifies the outcome derived from integrating $t$, $X_v$, $X_i$, and the specific intent-generating prompts. These intentions are arranged sequentially, with all answers treated as responses from the assistant. As shown in Table 3, the multimodal large models outperform text-based LLMs such as LLama2-7B, GLM3, and GLM4.
This suggests that the inclusion of image information in user posts may reveal latent purposes and psychological activities, thereby enabling the model to analyze and identify users\u2019 posting intentions more accurately. Furthermore, training the LLama27B model with distilled intention knowledge signi\ufb01cantly enhances its capability in intention analysis, underscoring the e\ufb00ectiveness and validity of our extracted intention knowledge in guiding the model\u2019s extraction of intention knowledge accurately. Moreover, an intriguing observation is made: the performance of GLM4 is inferior to that of GLM3. This discrepancy is hypothesized to be due to GLM4\u2019s training on a substantially larger dataset of Chinese language materials, which may result in its reduced pro\ufb01ciency in interpreting English social media posts compared to GLM3. \fMiko: Multimodal Intention Knowledge Distillation from Large Language Models for Social-Media Commonsense Discovery Preprint, , Table 2: Statistics on the number of used intentions in the benchmark we constructed. Relation xWant oE\ufb00ect xAttr xIntent xReact oReact oWant oE\ufb00ect xNeed Open Average Total Numbers 853 837 799 818 654 772 828 758 717 832 787 7,868 Table 3: Average BERTscore (reported as percentages) for the 10 di\ufb00erent aspects of the generated intentions. Note that the results presented herein have been adjusted to exclude pre\ufb01xes like \u201cAfter posting this Tweet, the user wants to\u201d. \u201cLoRA Finetuned\u201d indicates a model trained using intentions via instruction \ufb01netuning. Model xWant oE\ufb00ect xAttr xIntent xReact oReact oWant oE\ufb00ect xNeed Open Average LLama2-7B 62.13 60.05 57.39 57.61 54.27 55.99 58.73 53.26 58.92 54.17 57.25 LLama2-13B 62.51 59.72 57.27 55.96 56.00 54.33 56.94 52.28 59.49 52.20 56.67 Mistral-7B-Instruct-v0.1 63.06 60.51 56.48 57.99 52.83 57.12 60.58 52.91 57.85 53.18 57.25 Mistral-7B-Instruct-v0.2 61.47 59.97 55.85 58.94 54.76 56.40 57.90 54.55 58.40 53.15 57.14 Falcon-7B 63.97 58.66 58.01 56.79 55.19 57.09 57.21 52.35 57.10 53.91 57.03 Flan-T5-xxl-11B 63.53 60.25 55.46 57.03 53.01 56.97 56.86 51.61 57.97 54.78 56.75 GLM3 66.09 59.99 60.44 58.16 57.87 58.61 59.09 58.17 57.89 67.83 60.41 GLM4 64.76 59.33 57.17 52.84 53.82 53.79 56.87 56.13 54.77 65.56 57.50 LLava-v1.5-13B 69.24 62.79 56.00 50.99 57.40 59.31 61.05 61.98 57.32 69.67 60.58 LLava-v1.6-vicuna-7B 67.66 61.14 63.03 56.50 58.03 58.51 60.72 56.17 58.91 69.87 61.05 LLama2-7B (LoRA Fine-tuned) 69.60 64.89 66.56 61.39 62.25 62.45 63.08 62.44 60.23 57.67 63.06 In addition, we also evaluated the generalization of the large model after training, and the details of the MMLU [25] evaluation for the Llama2-7B LoRA Fine-tuned models are reported in Table 4. Our \ufb01ndings reveal that after intentional \ufb01ne-tuning, the performance of the model not only remained stable but also surpassed that of the baseline LLama2-7B model across several domains, including STEM, Social Sciences, and Others. This suggests that our training approach has not compromised the model\u2019s inference capabilities or its generalizability. 6 EXTRINSIC EVALUATION To further validate the e\ufb00ectiveness of the generated intentions and their ability to enhance the accuracy of downstream tasks, we have appended the generated intentions to the sarcasm detection task and conducted an evaluation. 
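Schematically, this augmentation boils down to concatenating the post text with the generated image description and intentions before feeding a BERT-style encoder; the [SEP]-joined string below is an illustrative assumption (the paper only says the knowledge is appended), and the exact setup is described next.

    def build_input(text, img_desc=None, intentions=None):
        parts = [text]
        if img_desc:                    # the BERT-(Text+IMGDES) variant
            parts.append(img_desc)
        if intentions:                  # the BERT-(Text+IMGDES+INTE) variant
            parts.append(' '.join(intentions))
        return ' [SEP] '.join(parts)    # separator choice is an assumption

    print(build_input('hell yeah !#funny # sleepwell # dreamon # fai',
                      'A person lying in bed smiling at their phone.',
                      ['The user wants to joke about staying up too late.']))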
For the original image-text data in the sarcasm detection data, we initially apply the prompt design from \ud446\ud452\ud450\ud461\ud456\ud45c\ud45b4.1 to obtain descriptions of the current input images and to distill the key information(\ud446\ud452\ud450\ud461\ud456\ud45c\ud45b4.2) contained in the image-text pairs. Subsequently, we use the raw texts, image descriptions, and key information as inputs, employing the generated intentions designed in \ud446\ud452\ud450\ud461\ud456\ud45c\ud45b4.3 to extract the posting intentions of the users. These intentions are then appended to the raw posts and image descriptions, serving as inputs for training the model and evaluating test data. In this scenario, LLMs can be used to discover and re\ufb01ne the hidden intentions in the original posts, which are closely related to the users\u2019 posting purposes in the Sarcasm Detection task. This approach e\ufb00ectively enhances the task\u2019s accuracy, thereby demonstrating the e\ufb00ectiveness of social intentions. 6.1 Setup We conducted experiments on the twitter sarcasm dataset, which is collected by[6]. This dataset contains English tweets expressing sarcasm labeled as \u201c1\u201d and those expressing non-sarcasm labeled as \u201c0\u201d. For a fair comparison, we meticulously cleaned our dataset, removing instances with missing image modality data. Then, we reproduce our Miko framework on the cleaned dataset to obtain image descriptions and intentions of the source data. For knowledge extraction, we employed LLava[37] for extracting image descriptions and leveraged ChatGPT[41] for intentions extraction. To determine if the methodologies inspired by Miko genuinely improve sarcasm detection accuracy, we adopted the pre-trained BERT-base-uncased model[13] as the textual backbone network. This setup was used to obtain initial embeddings for texts and knowledge. We then enhanced the original text by appending image descriptions and the extracted intentions. This approach enabled us to assess whether the social intention knowledge extracted by Miko contributes additional valuable insights to the sarcasm detection task. \fPreprint, , F. Lu, W. Wang, Y. Luo, Z. Zhu, Q. Sun, B. Xu, H. Shi, S. Gao, Q. Li, Y. Song, J. Li Table 4: Five-shot performance on the Massive Multitask Language Understanding (MMLU) benchmark. Humanities STEM Social Sciences Other Average LLama2-7B 42.9 36.4 51.2 52.2 45.3 LLama2-7B (LoRA Fine-tuned) 40.9 36.8 52.0 53.0 45.3 Table 5: Comparison results for sarcasm detection. \u2020 indicates ResNet backbone and \u2021 indicates ViT backbone. Additionally, \u201cINT\u201d represents the social intention derived from Miko, and \u201cIMGDES\u201d refers to the image descriptions generated via LLava.\u201cText\u201d refers to only use raw posts. 
Model Acc(%) P(%) R(%) F1(%) Text TextCNN 80.03 74.29 76.39 75.32 Bi-LSTM 81.90 76.66 78.42 77.53 SMSD 80.90 76.46 75.18 75.82 BERT-(Text) 83.85 78.72 82.27 80.22 Image Image 64.76 54.41 70.80 61.53 ViT 67.83 57.93 70.07 63.43 BERT-(IMGDES) 75.15 67.45 72.46 69.86 Multi-Modal HFM\u2020 83.44 76.57 84.15 80.18 D&R Net\u2020 84.02 77.97 83.42 80.60 Att-BERT\u2020 86.05 80.87 85.08 82.92 InCrossMGs\u2021 86.10 81.38 84.36 82.84 CMGCN\u2021 86.54 \u2013 \u2013 82.73 HKE\u2020 87.02 82.97 84.90 83.92 HKE\u2021 87.36 81.84 86.48 84.09 BERT-(Text+IMGDES) 86.89 82.06 85.76 83.87 BERT-(Text+INTE) 87.14 82.43 85.97 84.16 BERT-(Text+IMGDES+INTE) 87.22 82.08 86.81 84.38 6.2 Baselines In our study, we utilize bothtext-based and multimodal approaches as baseline frameworks to evaluate the impact of generated intentions. For text-based methods, we integrate TextCNN [30], BiLSTM [18], and SMSD [59], which employs self-matching networks and low-rank bi-linear pooling for sarcasm detection. Additionally, we adopt BERT [13], a robust baseline in sarcasm detection. In the multi-modal domain, our baselines encompass HFM [7], D&R Net [60], Att-BERT [43], InCrossMGs [33], and a modi\ufb01ed version of CMGCN [34] that excludes external knowledge. HKE [38] signi\ufb01es a hierarchical framework, leveraging both atomic-level congruities through a multi-head cross-attention mechanism and composition-level congruity via graph neural networks, while a post exhibiting low congruity is identi\ufb01ed as sarcastic. 6.3 Results and Analysis In our preliminary evaluation, we assessed the e\ufb03cacy of our proposed framework against established baseline models. The corresponding accuracy (Acc), precision (P), recall (R) and F1 score (F1) are shown in Table 5. The outcomes indicate that the BERT model achieves state-of-the-art performance with the help of intention data. We also have the following \ufb01ndings: 1) Text-based models exhibit superior performance over imagebased methods, highlighting that text is easier to interpret and more information-dense than images. This \ufb01nding con\ufb01rms the validity of our approach in enhancing textual information through the extraction of image descriptions using MLLM. 2) Conversely, multimodalmethod outperformthe unimodal counterparts, which underlining the bene\ufb01t of leveraging information from multi-modalities. Through the fusion and alignment of multimodal information, the detection capabilities of the model are signi\ufb01cantly enhanced. 3) As illustrated in Table 5, the \u201cBERT-(Text+INTE+IMGDES)\u201d yielded the highest performance, which validates the utility of incorporating intentions derived from social media. Social intentions provide a more comprehensive view of users\u2019 psychological states and immediate posting motivations. Therefore, enriching the model with these insights information can signi\ufb01cantly enhance its ability to identify sarcastic remarks. 6.4 Ablation Study In this stage, we conducted an ablation experiment to assess the impact of image descriptions and intention information on sarcasm detection tasks. The experimental outcomes, as depicted in Table 5, lead to several insightful observations. First, image description and intentions contributes signi\ufb01cantly to sarcasm detection. This is evidenced by the enhanced performance of the BERT(Text+IMGDES) compared to their counterparts, BERT, which do not incorporate image descriptions. 
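As a quick consistency check on the metrics in Table 5, F1 is the harmonic mean of precision and recall, e.g. for the best configuration BERT-(Text+IMGDES+INTE):

    p, r = 82.08, 86.81          # precision / recall of BERT-(Text+IMGDES+INTE) in Table 5
    f1 = 2 * p * r / (p + r)     # harmonic mean of precision and recall
    print(round(f1, 2))          # 84.38, matching the reported F1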
A noteworthy \ufb01nding is that BERT-(Text+INIT) outperformsthe BERT-(Text+IMGDES) because the intention is based on further re\ufb01nement of the original text, image description, and key information, which contains more information that is useful and consistent with human information activities, this information is more helpful for sarcasm detection tasks. Besides, the integration of both image description and intention resulted in the most e\ufb00ective result, surpassing the stateof-the-art in multimodal sarcasm detection. This emphasizes the e\ufb00ectiveness of extracting intention information from large-scale models to grasp the user\u2019s underlying thoughts, which means that the recognition e\ufb00ect of sarcasm detection data depends on the ability to accurately understand the user\u2019s thoughts and motivations. 7 CONCLUSIONS In this paper, we introduce Miko, an innovative framework tailored for acquiring social intention knowledge from multimodal social media posts. Our approach incorporates a hierarchical methodology to extract essential social information and intentions. This process leverages a large language model and well-designed prompts to e\ufb00ectively capture users\u2019 posting intentions from social posts. \fMiko: Multimodal Intention Knowledge Distillation from Large Language Models for Social-Media Commonsense Discovery Preprint, , Furthermore, we meticulously annotate the typical scores of selected assertions, enriching them with human knowledge to establish a robust benchmark. We have conducted comprehensive evaluations to validate the e\ufb00ectiveness and utility of the distilled intention knowledge extracted by our framework. In the future, we aim to broaden the scope of Miko by adapting it to diverse domains, behavioral types, languages, and temporal contexts. This expansion is anticipated to signi\ufb01cantly enhance the capabilities of various social media applications." |
| } |
| ] |
| } |