| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:04:55.662916Z" |
| }, |
| "title": "Low Rank Fusion based Transformers for Multimodal Sequences", |
| "authors": [ |
| { |
| "first": "Saurav", |
| "middle": [], |
| "last": "Sahay", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Anticipatory Computing Lab", |
| "institution": "Intel Labs", |
| "location": { |
| "country": "USA" |
| } |
| }, |
| "email": "saurav.sahay@intel.com" |
| }, |
| { |
| "first": "Eda", |
| "middle": [], |
| "last": "Okur", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Anticipatory Computing Lab", |
| "institution": "Intel Labs", |
| "location": { |
| "country": "USA" |
| } |
| }, |
| "email": "eda.okur@intel.com" |
| }, |
| { |
| "first": "Shachi", |
| "middle": [ |
| "H" |
| ], |
| "last": "Kumar", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Anticipatory Computing Lab", |
| "institution": "Intel Labs", |
| "location": { |
| "country": "USA" |
| } |
| }, |
| "email": "shachi.h.kumar@intel.com" |
| }, |
| { |
| "first": "Lama", |
| "middle": [], |
| "last": "Nachman", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Anticipatory Computing Lab", |
| "institution": "Intel Labs", |
| "location": { |
| "country": "USA" |
| } |
| }, |
| "email": "lama.nachman@intel.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Our senses individually work in a coordinated fashion to express our emotional intentions. In this work, we experiment with modeling modality-specific sensory signals to attend to our latent multimodal emotional intentions and vice versa, expressed via low-rank multimodal fusion and multimodal transformers. The low-rank factorization of multimodal fusion amongst the modalities helps represent approximate multiplicative latent signal interactions. Motivated by the work of (Tsai et al., 2019) and (Liu et al., 2018), we present our transformer-based cross-fusion architecture without any over-parameterization of the model. The low-rank fusion helps represent the latent signal interactions, while the modality-specific attention helps focus on relevant parts of the signal. We present two methods for Multimodal Sentiment and Emotion Recognition on the CMU-MOSEI, CMU-MOSI, and IEMOCAP datasets and show that our models have fewer parameters, train faster, and perform comparably to many larger fusion-based architectures.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Our senses individually work in a coordinated fashion to express our emotional intentions. In this work, we experiment with modeling modality-specific sensory signals to attend to our latent multimodal emotional intentions and vice versa, expressed via low-rank multimodal fusion and multimodal transformers. The low-rank factorization of multimodal fusion amongst the modalities helps represent approximate multiplicative latent signal interactions. Motivated by the work of (Tsai et al., 2019) and (Liu et al., 2018), we present our transformer-based cross-fusion architecture without any over-parameterization of the model. The low-rank fusion helps represent the latent signal interactions, while the modality-specific attention helps focus on relevant parts of the signal. We present two methods for Multimodal Sentiment and Emotion Recognition on the CMU-MOSEI, CMU-MOSI, and IEMOCAP datasets and show that our models have fewer parameters, train faster, and perform comparably to many larger fusion-based architectures.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The field of Emotion Understanding involves the computational study of subjective elements such as sentiments, opinions, attitudes, and emotions towards other objects or persons. Subjectivity is an inherent part of emotion understanding that comes from the contextual nature of the natural phenomenon. Defining the metrics and disentangling the objective assessment of those metrics from the subjective signal makes the field quite challenging and exciting. Sentiments and emotions are attached to the language, audio, and visual modalities at different rates of expression and granularity, and are useful in deriving social, psychological, and behavioral insights about various entities such as movies, products, people, or organizations. Emotions are defined as brief, organically synchronized evaluations of major events, whereas sentiments are considered more enduring beliefs and dispositions towards objects or persons (Scherer, 1984). The field of Emotion Understanding has a rich literature with many interesting models of understanding (Plutchik, 2001; Ekman, 2009; Posner et al., 2005). Recent studies on tensor-based multimodal fusion explore regularizing tensor representations and polynomial tensor pooling (Hou et al., 2019).", |
| "cite_spans": [ |
| { |
| "start": 916, |
| "end": 931, |
| "text": "(Scherer, 1984)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1035, |
| "end": 1051, |
| "text": "(Plutchik, 2001;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1052, |
| "end": 1064, |
| "text": "Ekman, 2009;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1065, |
| "end": 1085, |
| "text": "Posner et al., 2005)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1211, |
| "end": 1229, |
| "text": "(Hou et al., 2019)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we combine ideas from (Tsai et al., 2019) and (Liu et al., 2018) and explore the use of Transformer (Vaswani et al., 2017) based models for both aligned and unaligned signals, without extensive over-parameterization of the models through multiple modality-specific transformers. We utilize a Low Rank Matrix Factorization (LMF) based fusion method for representing the multimodal fusion of the modality-specific information. Our main contributions can be summarized as follows:", |
| "cite_spans": [ |
| { |
| "start": 40, |
| "end": 58, |
| "text": "(Liu et al., 2018)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 94, |
| "end": 116, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 The recently proposed Multimodal Transformer (MulT) architecture uses at least 9 Transformer-based models for the crossmodal representation of the language, audio, and visual modalities (3 parallel modality-specific standard Transformers with self-attention and 6 parallel bimodal Transformers with crossmodal attention). These models utilize several parallel unimodal and bimodal transformers and do not capture the full trimodal signal interplay in any single transformer model in the architecture. In contrast, our method uses fewer Transformer-based models and fewer parallel models for the same multimodal representation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We look at two methods for leveraging the multimodal fusion into the transformer architecture. In one method (LMF-MulT), the fused multimodal signal is reinforced using attention from the individual modalities; in the other (Fusion-Based-CM-Attn-MulT), the individual modalities are reinforced using the fused multimodal signal. The ability to use unaligned sequences for modeling is advantageous, since we rely on learning-based methods instead of methods that force signal synchronization (requiring extra timing information) to mimic the coordinated nature of human multimodal language expression. The LMF method aims to capture all unimodal, bimodal, and trimodal interactions amongst the modalities via an approximate Tensor Fusion method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We develop and test our approaches on the CMU-MOSI, CMU-MOSEI, and IEMOCAP datasets as reported in (Tsai et al., 2019). CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) (Zadeh et al., 2018) is a large dataset for multimodal sentiment analysis and emotion recognition on YouTube video segments. The dataset contains more than 23,500 sentence utterance videos from more than 1000 online YouTube speakers. The dataset has several interesting properties, such as being gender balanced and containing various topics and monologue videos from people with different personality traits. The videos are manually transcribed and properly punctuated. Since the dataset comprises natural audio-visual opinionated expressions of the speakers, it provides an excellent test-bed for research in emotion and sentiment understanding. The videos are cut into continuous segments, and the segments are annotated with 7-point scale sentiment labels and 4-point scale emotion categories corresponding to Ekman's 6 basic emotion classes (Ekman, 2002). The opinionated expressions in the segments contain visual cues and audio variations in the signal, as well as textual expressions showing various subtle and non-obvious interactions across the modalities for both sentiment and emotion classification. CMU-MOSI (Zadeh et al., 2016) is a smaller dataset (2199 clips) of YouTube videos with sentiment annotations. IEMOCAP (Busso et al., 2008) consists of 10K videos with sentiment and emotion labels. We use the same setup as (Tsai et al., 2019) with 4 emotions (happy, sad, angry, neutral).", |
| "cite_spans": [ |
| { |
| "start": 996, |
| "end": 1009, |
| "text": "(Ekman, 2002)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1265, |
| "end": 1284, |
| "text": "(Zadeh et al., 2016", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1375, |
| "end": 1395, |
| "text": "(Busso et al., 2008)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In Fig 1, we illustrate our ideas by showing the fused signal representation attending to different parts of the unimodal sequences. There is no need to align the signals, since the attention computation over different parts of the modalities acts as a proxy for multimodal sequence alignment. The fused signal is computed via Low Rank Matrix Factorization (LMF). The other model we propose uses a swapped configuration, where the individual modalities attend to the fused signal in parallel.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 9, |
| "text": "Fig 1,", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we describe our models and methods for Low Rank Fusion of the modalities for use with Multimodal Transformers with cross-modal attention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "LMF is a Tensor Fusion method that models the unimodal, bimodal, and trimodal interactions without using an expensive 3-fold Cartesian product (Zadeh et al., 2017) of modality-specific embeddings. Instead, the method leverages unimodal features and weights directly to approximate the full tensor outer product operation. This low-rank matrix factorization operation easily extends to problems where the interaction space (feature space or number of modalities) is very large. We utilize the method as described in (Liu et al., 2018). Similar to the prior work, we compress the time-series information of the individual modalities using an LSTM (Hochreiter and Schmidhuber, 1997) and extract the hidden state context vector for modality-specific fusion. We depict the LMF method in Fig 2, similar to the illustration in (Liu et al., 2018). It shows how the unimodal tensor sequences are appended with 1s before taking the outer product, making the result equivalent to the tensor representation that captures the unimodal and multimodal interaction information explicitly (top right of Fig 2). As shown, the compressed representation (h) is computed using batch matrix multiplications of the low-rank modality-specific factors and the appended modality representations. All the low-rank products are further multiplied together to get the fused vector.", |
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 162, |
| "text": "(Zadeh et al., 2017)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 521, |
| "end": 539, |
| "text": "(Liu et al., 2018)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 652, |
| "end": 686, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 826, |
| "end": 844, |
| "text": "(Liu et al., 2018)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 788, |
| "end": 802, |
| "text": "Fig 2 similar", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 948, |
| "end": 956, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1123, |
| "end": 1129, |
| "text": "Fig 2)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Low Rank Fusion", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We build on Transformer (Vaswani et al., 2017) based sequence encoding and utilize the ideas from (Tsai et al., 2019) for multiple crossmodal attention blocks followed by self-attention for encoding multimodal sequences for classification. While the earlier work focuses on latent adaptation of one modality to another, we focus on adaptation of the latent multimodal signal itself, using single-head cross-modal attention to the individual modalities. This helps us reduce the excessive parameterization that comes from using all pairwise combinations of modality-to-modality cross-modal attention; instead, we only utilize a linear number of cross-modal attention blocks between each modality and the fused signal representation. We add Temporal Convolutions after the LMF operation to ensure that the input sequences have sufficient awareness of their neighboring elements. We show the overall architecture of our two proposed models in Fig 3 and Fig 4. In Fig 3, we show the fused multimodal signal representation, after a temporal convolution, enriching the individual modalities via cross-modal transformer attention. In Fig 4, we show the architecture with the fewest Transformer layers, where the individual modalities attend to the fused convoluted multimodal signal.", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 54, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 924, |
| "end": 951, |
| "text": "Fig 3 and Fig 4. In Fig 3,", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 1107, |
| "end": 1116, |
| "text": "In Fig 4,", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Multimodal Transformer", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We present our early experiments to evaluate the performance of the proposed models on the standard multimodal datasets used by (Tsai et al., 2019). We run our models on the CMU-MOSI, CMU-MOSEI, and IEMOCAP datasets and present the results for the proposed LMF-MulT and Fusion-Based-CM-Attn-MulT models. Late Fusion (LF) LSTM is a common baseline for all datasets, with results (pub) reported together with MulT in (Tsai et al., 2019). We include the results we obtain (our run) for the MulT model for a direct comparison. Table 1, Table 2, and Table 3 show the performance of the various models on the sentiment analysis and emotion classification datasets. We do not observe any trend suggesting that our methods can achieve better accuracies or F1-scores than the original MulT method. However, we do note that on some occasions, our methods achieve higher results than the MulT model, in both the aligned (see LMF-MulT results for IEMOCAP in Table 3) and unaligned (see LMF-MulT results for CMU-MOSEI in Table 2) cases. We plan to do an exhaustive grid search over the hyper-parameters to understand whether our methods can learn to classify the multimodal signal better than the original competitive method. Although the results are comparable, below are the advantages of using our methods:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 481, |
| "end": 497, |
| "text": "Table 1, Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 504, |
| "end": 511, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 902, |
| "end": 909, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 965, |
| "end": 972, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Our LMF-MulT model does not use multiple parallel self-attention transformers for the different modalities, and it uses the fewest transformers of the three models compared. Given the same training infrastructure and resources, we observe a consistent training speedup with this method. See Table 4 for the average time per epoch (in seconds), measured with fixed batch sizes for all three models.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 304, |
| "end": 311, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 As summarized in Table 5, we observe that our models use fewer trainable parameters than the MulT model, yet achieve similar performance.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 19, |
| "end": 26, |
| "text": "Table 5", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this paper, we present our early investigations towards utilizing low-rank representations of multimodal sequences in multimodal transformers, with cross-modal attention to the fused signal or to the modalities. Our methods build on the work of (Tsai et al., 2019) and apply transformers to the fused multimodal signal, which aims to capture all inter-modal signals via Low Rank Matrix Factorization (Liu et al., 2018). This method is applicable to both aligned and unaligned sequences. Our methods train faster and use fewer parameters to learn classifiers with similar SOTA performance. We are exploring methods to compress the temporal sequences without using the hidden state context vectors from LSTMs, which lose the temporal information; we currently recover the temporal information with a Convolution layer. We believe these models can be deployed in low-resource settings with further optimizations. We are also interested in using richer features for the audio, text, and vision pipelines for other use-cases where we can utilize more resources.", |
| "cite_spans": [ |
| { |
| "start": 390, |
| "end": 408, |
| "text": "(Liu et al., 2018)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We have built this work on the code-base released for MulT at https://github.com/yaohungt/Multimodal-Transformer. In this work, we have not focused on further hyper-parameter tuning of our models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Iemocap: interactive emotional dyadic motion capture database", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Busso", |
| "suffix": "" |
| }, |
| { |
| "first": "Murtaza", |
| "middle": [], |
| "last": "Bulut", |
| "suffix": "" |
| }, |
| { |
| "first": "Chi-Chun", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Abe", |
| "middle": [], |
| "last": "Kazemzadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Mower", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeannette", |
| "middle": [ |
| "N" |
| ], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Sungbok", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Shrikanth", |
| "middle": [ |
| "S" |
| ], |
| "last": "Narayanan", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Language Resources and Evaluation", |
| "volume": "42", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/s10579-008-9076-6" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N. Chang, Sungbok Lee, and Shrikanth S. Narayanan. 2008. Iemocap: interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4):335.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Facial action coding system (facs). A Human Face", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Ekman", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Ekman. 2002. Facial action coding system (facs). A Human Face.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage (Revised Edition)", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Ekman", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Ekman. 2009. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage (Revised Edition). WW Norton & Company.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural Computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/neco.1997.9.8.1735" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Deep multimodal multilinear fusion with high-order polynomial pooling", |
| "authors": [ |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Hou", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiajia", |
| "middle": [], |
| "last": "Tang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianhai", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wanzeng", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Qibin", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "32", |
| "issue": "", |
| "pages": "12136--12145", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ming Hou, Jiajia Tang, Jianhai Zhang, Wanzeng Kong, and Qibin Zhao. 2019. Deep multimodal multilinear fusion with high-order polynomial pooling. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 12136-12145. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Learning representations from imperfect time series data via tensor rank regularization", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [ |
| "Pu" |
| ], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yao-Hung Hubert", |
| "middle": [], |
| "last": "Tsai", |
| "suffix": "" |
| }, |
| { |
| "first": "Qibin", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Louis-Philippe", |
| "middle": [], |
| "last": "Morency", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1569--1576", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P19-1152" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Pu Liang, Zhun Liu, Yao-Hung Hubert Tsai, Qibin Zhao, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2019. Learning representations from imperfect time series data via tensor rank regularization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1569-1576, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Efficient lowrank multimodal fusion with modality-specific factors", |
| "authors": [ |
| { |
| "first": "Zhun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ying", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Varun", |
| "middle": [], |
| "last": "Bharadhwaj Lakshminarasimhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [ |
| "Pu" |
| ], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Amirali", |
| "middle": [], |
| "last": "Bagher Zadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Louis-Philippe", |
| "middle": [], |
| "last": "Morency", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/p18-1209" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhun Liu, Ying Shen, Varun Bharadhwaj Lakshminarasimhan, Paul Pu Liang, AmirAli Bagher Zadeh, and Louis-Philippe Morency. 2018. Efficient low-rank multimodal fusion with modality-specific factors. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The nature of emotions human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Plutchik", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Plutchik. 2001. The nature of emotions human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. American Scientist.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Posner", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "A" |
| ], |
| "last": "Russell", |
| "suffix": "" |
| }, |
| { |
| "first": "Bradley", |
| "middle": [ |
| "S" |
| ], |
| "last": "Peterson", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Development and Psychopathology", |
| "volume": "17", |
| "issue": "", |
| "pages": "715--734", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Posner, James A Russell, and Bradley S Peterson. 2005. The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. Development and Psychopathology, 17(03):715-734.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Emotion as a multicomponent process: A model and some cross-cultural data", |
| "authors": [ |
| { |
| "first": "Klaus", |
| "middle": [ |
| "R" |
| ], |
| "last": "Scherer", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "Review of Personality & Social Psychology", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Klaus R Scherer. 1984. Emotion as a multicomponent process: A model and some cross-cultural data. Review of Personality & Social Psychology.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Multimodal transformer for unaligned multimodal language sequences", |
| "authors": [ |
| { |
| "first": "Yao-Hung Hubert", |
| "middle": [], |
| "last": "Tsai", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaojie", |
| "middle": [], |
| "last": "Bai", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [ |
| "Pu" |
| ], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "Zico" |
| ], |
| "last": "Kolter", |
| "suffix": "" |
| }, |
| { |
| "first": "Louis-Philippe", |
| "middle": [], |
| "last": "Morency", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "30", |
| "issue": "", |
| "pages": "5998--6008", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Tensor fusion network for multimodal sentiment analysis", |
| "authors": [ |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Zadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Minghai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Soujanya", |
| "middle": [], |
| "last": "Poria", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis. CoRR, abs/1707.07250.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph", |
| "authors": [ |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Zadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [ |
| "Pu" |
| ], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jon", |
| "middle": [], |
| "last": "Vanbriesen", |
| "suffix": "" |
| }, |
| { |
| "first": "Soujanya", |
| "middle": [], |
| "last": "Poria", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Cambria", |
| "suffix": "" |
| }, |
| { |
| "first": "Minghai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Louis-Philippe", |
| "middle": [], |
| "last": "Morency", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amir Zadeh, Paul Pu Liang, Jon Vanbriesen, Soujanya Poria, Erik Cambria, Minghai Chen, and Louis-Philippe Morency. 2018. Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph. In Association for Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages", |
| "authors": [ |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Zadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Rowan", |
| "middle": [], |
| "last": "Zellers", |
| "suffix": "" |
| }, |
| { |
| "first": "Eli", |
| "middle": [], |
| "last": "Pincus", |
| "suffix": "" |
| }, |
| { |
| "first": "Louis-Philippe", |
| "middle": [], |
| "last": "Morency", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "IEEE Intelligent Systems", |
| "volume": "31", |
| "issue": "6", |
| "pages": "82--88", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis-Philippe Morency. 2016. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems, 31(6):82-88.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "Modality-specific Fused Attention", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "text": "Low Rank Matrix Factorization", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "text": "Low Rank Fusion Transformer", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "html": null, |
| "text": "Performance Results for Multimodal Sentiment Analysis on CMU-MOSEI dataset with aligned and unaligned multimodal sequences.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Metric</td><td>Acc h 7</td><td>Acc h 2</td><td colspan=\"2\">F1 h MAE l Corr h</td></tr><tr><td colspan=\"4\">(Aligned) CMU-MOSEI Sentiment</td><td/></tr><tr><td>LF-LSTM (pub)</td><td>48.8</td><td>80.6</td><td>80.6 0.619</td><td>0.659</td></tr><tr><td>MulT (Tsai et al., 2019) (pub)</td><td>51.8</td><td>82.5</td><td>82.3 0.580</td><td>0.703</td></tr><tr><td>MulT (Tsai et al., 2019) (our run)</td><td>49.3</td><td>80.5</td><td>81.1 0.625</td><td>0.663</td></tr><tr><td>Fusion-Based-CM-Attn-MulT (ours)</td><td>49.6</td><td>79.9</td><td>80.7 0.616</td><td>0.673</td></tr><tr><td>LMF-MulT (ours)</td><td>50.2</td><td>80.3</td><td>80.3 0.616</td><td>0.662</td></tr><tr><td colspan=\"4\">(Unaligned) CMU-MOSEI Sentiment</td><td/></tr><tr><td>LF-LSTM (pub)</td><td>48.8</td><td>77.5</td><td>78.2 0.624</td><td>0.656</td></tr><tr><td>MulT (Tsai et al., 2019) (pub)</td><td>50.7</td><td>81.6</td><td>81.6 0.591</td><td>0.694</td></tr><tr><td>MulT (Tsai et al., 2019) (our run)</td><td>50.4</td><td>80.7</td><td>80.6 0.617</td><td>0.677</td></tr><tr><td>Fusion-Based-CM-Attn-MulT (ours)</td><td>49.3</td><td>79.4</td><td>79.2 0.613</td><td>0.674</td></tr><tr><td>LMF-MulT (ours)</td><td>49.3</td><td>80.8</td><td>81.3 0.620</td><td>0.668</td></tr></table>" |
| }, |
| "TABREF2": { |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF3": { |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Emotion</td><td/><td colspan=\"2\">Happy</td><td>Sad</td><td/><td colspan=\"2\">Angry</td><td colspan=\"2\">Neutral</td></tr><tr><td>Metric</td><td colspan=\"5\">Acc h (Aligned) IEMOCAP Emotions</td><td/><td/><td/></tr><tr><td>LF-LSTM (pub)</td><td/><td>85.1</td><td>86.3</td><td>78.9</td><td>81.7</td><td>84.7</td><td>83.0</td><td>67.1</td><td>67.6</td></tr><tr><td>MulT (Tsai et al., 2019) (pub)</td><td/><td>90.7</td><td>88.6</td><td>86.7</td><td>86.0</td><td>87.4</td><td>87.0</td><td>72.4</td><td>70.7</td></tr><tr><td>MulT (Tsai et al., 2019) (our run)</td><td/><td>86.4</td><td>82.9</td><td>82.3</td><td>82.4</td><td>85.3</td><td>85.8</td><td>71.2</td><td>70.0</td></tr><tr><td colspan=\"2\">Fusion-Based-CM-Attn-MulT (ours)</td><td>85.6</td><td>83.7</td><td>83.6</td><td>83.7</td><td>84.6</td><td>85.0</td><td>70.4</td><td>69.9</td></tr><tr><td>LMF-MulT (ours)</td><td/><td>85.3</td><td>84.1</td><td>84.1</td><td>83.4</td><td>85.7</td><td>86.2</td><td>71.2</td><td>70.8</td></tr><tr><td colspan=\"6\">(Unaligned) IEMOCAP Emotions</td><td/><td/><td/></tr><tr><td>LF-LSTM (pub)</td><td/><td>72.5</td><td>71.8</td><td>72.9</td><td>70.4</td><td>68.6</td><td>67.9</td><td>59.6</td><td>56.2</td></tr><tr><td>MulT (Tsai et al., 2019) (pub)</td><td/><td>84.8</td><td>81.9</td><td>77.7</td><td>74.1</td><td>73.9</td><td>70.2</td><td>62.5</td><td>59.7</td></tr><tr><td>MulT (Tsai et al., 2019) (our run)</td><td/><td>85.6</td><td>79.0</td><td>79.4</td><td>70.3</td><td>75.8</td><td>65.4</td><td>59.2</td><td>44.0</td></tr><tr><td colspan=\"2\">Fusion-Based-CM-Attn-MulT (ours)</td><td>85.6</td><td>79.0</td><td>79.4</td><td>70.3</td><td>75.8</td><td>65.4</td><td>59.3</td><td>44.2</td></tr><tr><td>LMF-MulT (ours)</td><td/><td>85.6</td><td>79.0</td><td>79.4</td><td>70.3</td><td>75.8</td><td>65.4</td><td>59.2</td><td>44.0</td></tr></table>" |
| }, |
| "TABREF4": { |
| "html": null, |
| "text": "Performance Results for Multimodal Emotion Recognition on IEMOCAP dataset with aligned and unaligned multimodal sequences.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Dataset</td><td colspan=\"2\">CMU-MOSI</td><td colspan=\"2\">CMU-MOSEI</td><td colspan=\"2\">IEMOCAP</td></tr><tr><td>Model</td><td colspan=\"6\">Aligned Unaligned Aligned Unaligned Aligned Unaligned</td></tr><tr><td>MulT (Tsai et al., 2019)</td><td>18.87</td><td>19.25</td><td>191.40</td><td>216.32</td><td>36.20</td><td>37.93</td></tr><tr><td>Fusion-Based-CM-Attn (ours)</td><td>14.53</td><td>15.80</td><td>140.95</td><td>175.68</td><td>26.10</td><td>29.16</td></tr><tr><td>LMF-MulT (ours)</td><td>11.01</td><td>12.03</td><td>106.15</td><td>137.35</td><td>20.57</td><td>23.53</td></tr></table>" |
| }, |
| "TABREF5": { |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td colspan=\"3\">: Number of Model Parameters</td><td/></tr><tr><td>Dataset</td><td colspan=\"3\">CMU-MOSI CMU-MOSEI IEMOCAP</td></tr><tr><td>MulT (Tsai et al., 2019)</td><td>1071211</td><td>1073731</td><td>1074998</td></tr><tr><td>Fusion-Based-CM-Attn (ours)</td><td>512121</td><td>531441</td><td>532078</td></tr><tr><td>LMF-MulT (ours)</td><td>836121</td><td>855441</td><td>856078</td></tr></table>" |
| }, |
| "TABREF6": { |
| "html": null, |
| "text": "Number of Model Parameters", |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |