{
"paper_id": "P19-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:26:03.897656Z"
},
"title": "Divide, Conquer and Combine: Hierarchical Feature Fusion Network with Local and Global Perspectives for Multimodal Affective Computing",
"authors": [
{
"first": "Sijie",
"middle": [],
"last": "Mai",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Songlong",
"middle": [],
"last": "Xing",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a general strategy named 'divide, conquer and combine' for multimodal fusion. Instead of directly fusing features at holistic level, we conduct fusion hierarchically so that both local and global interactions are considered for a comprehensive interpretation of multimodal embeddings. In the 'divide' and 'conquer' stages, we conduct local fusion by exploring the interaction of a portion of the aligned feature vectors across various modalities lying within a sliding window, which ensures that each part of multimodal embeddings are explored sufficiently. On its basis, global fusion is conducted in the 'combine' stage to explore the interconnection across local interactions, via an Attentive Bi-directional Skipconnected LSTM that directly connects distant local interactions and integrates two levels of attention mechanism. In this way, local interactions can exchange information sufficiently and thus obtain an overall view of multimodal information. Our method achieves state-ofthe-art performance on multimodal affective computing with higher efficiency.",
"pdf_parse": {
"paper_id": "P19-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a general strategy named 'divide, conquer and combine' for multimodal fusion. Instead of directly fusing features at a holistic level, we conduct fusion hierarchically so that both local and global interactions are considered for a comprehensive interpretation of multimodal embeddings. In the 'divide' and 'conquer' stages, we conduct local fusion by exploring the interactions of portions of the aligned feature vectors across the various modalities lying within a sliding window, which ensures that each part of the multimodal embeddings is explored sufficiently. On this basis, global fusion is conducted in the 'combine' stage to explore the interconnections across local interactions, via an Attentive Bi-directional Skip-connected LSTM that directly connects distant local interactions and integrates two levels of attention mechanism. In this way, local interactions can exchange information sufficiently and thus obtain an overall view of the multimodal information. Our method achieves state-of-the-art performance on multimodal affective computing with higher efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Multimodal machine learning, as prior research shows (Baltru\u0161aitis et al., 2019), generally yields higher performance on multimodal tasks than approaches involving only one modality. In this paper, we address the multimodal machine learning problem, with an emphasis on multimodal affective computing, where the task is to infer a speaker's opinion from the given language, visual and acoustic modalities (Poria et al., 2017a).",
"cite_spans": [
{
"start": 53,
"end": 80,
"text": "(Baltru\u0161aitis et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 411,
"end": 432,
"text": "(Poria et al., 2017a)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finding a feasible and effective way to learn inter-modality dynamics has been an intriguing and important problem in multimodal learning (Baltru\u0161aitis et al., 2019), where inter-modality dynamics represent the complementary information contained across the involved modalities, which must be detected and analyzed for a more accurate comprehension. For this purpose, a large body of prior work treats the feature vectors of the modalities as the smallest units and fuses them at a holistic level (Barezi et al., 2018; Poria et al., 2016a, 2017b). A typical example is the tensor-based fusion method that fuses the feature vectors of three modalities using the Cartesian product. Despite the effectiveness these methods have achieved, they give little consideration to the variations across different portions of a feature vector, which may contain disparate aspects of information, and thus fail to render the fusion procedure more specialized. Additionally, they conduct fusion within one step, which can be intractable in scenarios where the fusion method is susceptible to high computational complexity. Recently, Convolutional Neural Networks (CNNs) have achieved compelling successes in computer vision (Krizhevsky et al., 2012; Mehta et al., 2019). One of the core ideas of CNNs lies in the use of the convolutional operation to process feature maps, i.e., a series of local operations with kernels sliding over the object. Inspired by this, we propose local fusion to explore local interactions in multimodal embeddings, which is similar in spirit to convolution but is essentially a general strategy towards multimodal fusion with multiple concrete fusion methods to choose from. Specifically, as shown in Fig. 1, we align the feature vectors of the three modalities to obtain multimodal embeddings and apply a sliding window that slides through them. The parallel portions of the feature vectors within each window are then fused by a specific fusion method. By considering local interactions we achieve three advantages: 1) the fusion procedure becomes more specialized, since intuitively each portion of the modality embeddings contains a specific aspect of information; 2) proper weights can be assigned to different portions; 3) computational complexity and the number of parameters drop substantially by dividing holistic fusion into multiple local ones. Many approaches can be adapted into our strategy for local fusion, and we empirically apply the outer product. While the outer product (bilinear pooling) often incurs heavy time and space complexity (Lin et al., 2015; Wu et al., 2017), we show that our method achieves much higher efficiency.",
"cite_spans": [
{
"start": 146,
"end": 173,
"text": "(Baltru\u0161aitis et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 500,
"end": 521,
"text": "(Barezi et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 522,
"end": 541,
"text": "Poria et al., 2016a",
"ref_id": "BIBREF38"
},
{
"start": 542,
"end": 563,
"text": "Poria et al., , 2017b",
"ref_id": "BIBREF36"
},
{
"start": 1139,
"end": 1183,
"text": "Recently, Convolution Neural Networks (CN-N)",
"ref_id": null
},
{
"start": 1238,
"end": 1263,
"text": "(Krizhevsky et al., 2012;",
"ref_id": "BIBREF20"
},
{
"start": 1264,
"end": 1283,
"text": "Mehta et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 2559,
"end": 2577,
"text": "(Lin et al., 2015;",
"ref_id": "BIBREF24"
},
{
"start": 2578,
"end": 2594,
"text": "Wu et al., 2017)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [
{
"start": 1741,
"end": 1747,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nonetheless, local fusion alone is not adequate for a comprehensive analysis of opinion. In fact, local interactions may contain information complementary to each other, which should be drawn upon for overall comprehension. Moreover, a small sliding window may not be able to cover a complete interaction. Thus, we propose global fusion to explore the interconnections of local interactions and mitigate these problems. In practice, RNN variants (Goudreau et al., 1994), especially LSTM (Hochreiter and Schmidhuber, 1997), are suitable for global fusion given their impressive power in modeling interrelations. However, in the vanilla RNN architecture only consecutive time steps are linked through hidden states, which may not be adequate for conveying information between local interactions that are far apart. Recently, some works have focused on introducing residual learning into RNNs (Tao and Liu, 2018; Wang and Wang, 2018; Wang and Tian, 2016; He et al., 2016). Motivated by these efforts, we propose an Attentive Bi-directional Skip-connected LSTM (ABS-LSTM) that introduces bidirectional skip connections of memory cells and hidden states into LSTM, which ensures sufficient multi-way information flow and alleviates the long-term dependency problem (Bengio et al., 1994). In the transmission process of ABS-LSTM, the previous interactions are not equally correlated with the current local interaction, i.e., they vary in the amount of complementary information to be delivered. In addition, given that the local interactions, which do not contain equally valuable information, are used as input to ABS-LSTM across time steps, the produced states understandably do not contribute equally to recognizing emotion. Thus, we incorporate two levels of attention mechanism into ABS-LSTM, i.e., Regional Interdependence Attention and Global Interaction Attention. The former takes effect in the process of delivering complementary information between local interactions, identifying the varying correlations of the previous t local interactions to the current one. The latter serves to allocate more attention to states that are more informative, so as to aid a more accurate prediction.",
"cite_spans": [
{
"start": 447,
"end": 470,
"text": "(Goudreau et al., 1994)",
"ref_id": "BIBREF10"
},
{
"start": 489,
"end": 523,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF16"
},
{
"start": 885,
"end": 904,
"text": "(Tao and Liu, 2018;",
"ref_id": "BIBREF43"
},
{
"start": 905,
"end": 925,
"text": "Wang and Wang, 2018;",
"ref_id": "BIBREF45"
},
{
"start": 926,
"end": 946,
"text": "Wang and Tian, 2016;",
"ref_id": "BIBREF48"
},
{
"start": 947,
"end": 963,
"text": "He et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 1272,
"end": 1293,
"text": "(Bengio et al., 1994)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To sum up, we propose a Hierarchical Feature Fusion Network (HFFN) for multimodal affective analysis. The main contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a generic hierarchical fusion strategy, termed 'divide, conquer and combine', to explore both local and global interactions in multiple stages, each focusing on different dynamics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Instead of conducting fusion on a holistic level, we leverage a sliding window to explore inter-modality dynamics locally. In this way, our model can take into account the variations across portions of a feature vector. This setting also brings an impressive bonus, i.e., a significant drop in computational complexity compared to other tensor-based methods, which is proven empirically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose global fusion to obtain an overall view of multimodal embeddings via a specifically designed ABS-LSTM, in which we integrate two levels of attention mechanism: Regional Interdependence Attention and Global Interaction Attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous research on affective analysis focuses on the text modality (Liu and Zhang, 2012; Cambria and Hussain, 2015), a hot research topic in the NLP community. However, recent research suggests that information from text alone is not sufficient for mining human opinions (Poria et al., 2017a; D'Mello and Kory, 2015; Cambria, 2016), especially when sarcasm or ambiguity occurs. If accompanying information such as the speaker's facial expressions and tone is available, it becomes much easier to figure out the real sentiment (Pham et al., 2018, 2019). Therefore, multimodal affective analysis has attracted increasing attention, and its major challenge is how to fuse features from the various modalities. Earlier feature fusion strategies can be roughly categorized into feature-level and decision-level fusion. The former extracts features of the various modalities and conducts fusion at the input level, mapping them into the same embedding space simply using concatenation (Wollmer et al., 2013; Rozgic et al., 2012; Morency et al., 2011; Poria et al., 2016a, 2017b; Gu et al., 2017). The latter, by contrast, draws tentative decisions from each involved modality separately and takes a weighted average of the decisions, realizing cross-modal fusion (Wu and Liang, 2010; Nojavanasghari et al., 2016; Zadeh et al., 2016a). These two lines of work do not effectively model cross-modal or modality-specific dynamics.",
"cite_spans": [
{
"start": 65,
"end": 86,
"text": "(Liu and Zhang, 2012;",
"ref_id": "BIBREF25"
},
{
"start": 87,
"end": 113,
"text": "Cambria and Hussain, 2015)",
"ref_id": "BIBREF5"
},
{
"start": 276,
"end": 297,
"text": "(Poria et al., 2017a;",
"ref_id": "BIBREF35"
},
{
"start": 298,
"end": 321,
"text": "D'Mello and Kory, 2015;",
"ref_id": "BIBREF8"
},
{
"start": 322,
"end": 336,
"text": "Cambria, 2016)",
"ref_id": "BIBREF4"
},
{
"start": 571,
"end": 589,
"text": "(Pham et al., 2019",
"ref_id": "BIBREF33"
},
{
"start": 590,
"end": 610,
"text": "(Pham et al., , 2018",
"ref_id": "BIBREF34"
},
{
"start": 1035,
"end": 1057,
"text": "(Wollmer et al., 2013;",
"ref_id": "BIBREF49"
},
{
"start": 1058,
"end": 1078,
"text": "Rozgic et al., 2012;",
"ref_id": "BIBREF41"
},
{
"start": 1079,
"end": 1100,
"text": "Morency et al., 2011;",
"ref_id": "BIBREF30"
},
{
"start": 1101,
"end": 1120,
"text": "Poria et al., 2016a",
"ref_id": "BIBREF38"
},
{
"start": 1121,
"end": 1142,
"text": "Poria et al., , 2017b",
"ref_id": "BIBREF36"
},
{
"start": 1143,
"end": 1159,
"text": "Gu et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 1318,
"end": 1338,
"text": "(Wu and Liang, 2010;",
"ref_id": "BIBREF50"
},
{
"start": 1339,
"end": 1367,
"text": "Nojavanasghari et al., 2016;",
"ref_id": "BIBREF31"
},
{
"start": 1368,
"end": 1388,
"text": "Zadeh et al., 2016a;",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, word-level fusion methods have received substantial research attention and been widely acknowledged for the effective exploration of time-dependent interactions (Wang et al., 2019; Zadeh et al., 2018a,b,c; Gu et al., 2018a; Rajagopalan et al., 2016). For example, Gu et al. (2018b) leverage word-level alignment between modalities and explore time-restricted cross-modal dynamics. Liang et al. (2018a) propose the Recurrent Multistage Fusion Network (RMFN), which decomposes multimodal fusion into three stages and uses an LSTM to perform local fusion. RMFN adopts the strategy of 'divide and conquer', while our method extends it by adding a 'combine' stage to learn the relations between local interactions. Liang et al. (2018b) conduct emotion recognition using local-global emotion intensity rankings and Bayesian ranking algorithms. However, 'local' and 'global' there are entirely different from our usage: their 'local' refers to an utterance of a video, whereas our 'local' represents a feature chunk of an utterance.",
"cite_spans": [
{
"start": 167,
"end": 186,
"text": "(Wang et al., 2019;",
"ref_id": "BIBREF47"
},
{
"start": 187,
"end": 211,
"text": "Zadeh et al., 2018a,b,c;",
"ref_id": null
},
{
"start": 212,
"end": 229,
"text": "Gu et al., 2018a;",
"ref_id": "BIBREF12"
},
{
"start": 230,
"end": 255,
"text": "Rajagopalan et al., 2016)",
"ref_id": "BIBREF40"
},
{
"start": 275,
"end": 292,
"text": "Gu et al. (2018b)",
"ref_id": "BIBREF13"
},
{
"start": 391,
"end": 411,
"text": "Liang et al. (2018a)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Tensor fusion has also become increasingly popular. The Tensor Fusion Network (TFN) adopts the outer product to conduct fusion at a holistic level, and is later extended by Barezi et al. (2018), who try to improve efficiency and reduce redundant information by decomposing the weights of the high-dimensional fused tensors. HFFN mainly applies the outer product as its local fusion method, and it improves efficiency by dividing the modality embeddings into multiple local chunks before fusion, which prevents a high-dimensional fused tensor from being created. In fact, HFFN can adopt any fusion strategy in the local fusion stage, not only the outer product, showing high flexibility and applicability.",
"cite_spans": [
{
"start": 169,
"end": 189,
"text": "Barezi et al. (2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As shown in Fig. 2, HFFN consists of: 1) a Local Fusion Module (LFM) that fuses the features of different modalities at every local chunk; 2) a Global Fusion Module (GFM) that explores global inter-modality dynamics; 3) an Emotion Inference Module (EIM) that obtains the predicted emotion.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 18,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3"
},
{
"text": "At the local fusion stage, we apply a sliding window that slides through the aligned feature vectors synchronously. At each step, local fusion is conducted on the portions of the feature vectors within the window. In this way, the features of all modalities within the same window can fully interact with one another to produce locally confined interactions in a more specialized way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divide and Conquer: Local Fusion",
"sec_num": "3.1"
},
{
"text": "Assume that we have the feature vectors of three modalities as input, namely language l \u2208 R^k, visual v \u2208 R^k and acoustic a \u2208 R^k (we only consider the situation where all modalities share the same feature length k, since they can easily be mapped into the same embedding space via simple transformations). In the 'divide' stage, we align these feature vectors to form the multimodal embedding M \u2208 R^{3\u00d7k} and leverage a sliding window of size 3 \u00d7 d to explore inter-modality dynamics. Through the sliding window, each feature vector can be seen as segmented into multiple portions, each termed a local portion. The segmentation procedure for the feature vector of one modality is equivalent to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divide and Conquer: Local Fusion",
"sec_num": "3.1"
},
{
"text": "m_i = [m_{s\u2022(i-1)+1}, m_{s\u2022(i-1)+2}, ..., m_{s\u2022(i-1)+d}] (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divide and Conquer: Local Fusion",
"sec_num": "3.1"
},
{
"text": "where m \u2208 {l, v, a} denotes modality m, d is the window size, s is the stride, and m_i denotes the i-th local portion of modality m (i \u2208 [1, n], where n is the number of local portions for each modality). Obviously, for each modality we have n = (k-d)/s + 1 local portions in total, provided that k-d is divisible by s; otherwise the feature vectors are padded with 0s to guarantee divisibility. For descriptive convenience, we also term the parallel local portions corresponding to all modalities within the sliding window a local chunk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divide and Conquer: Local Fusion",
"sec_num": "3.1"
},
{
"text": "Many fusion methods can be chosen for fusing the features within each local chunk to explore inter-modality dynamics in the 'conquer' stage. In practice, we apply the outer product, as it provides the best results in our experiments. Firstly, each local portion is padded with a 1 to retain the interactions of any subset of modalities, as in prior work:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divide and Conquer: Local Fusion",
"sec_num": "3.1"
},
{
"text": "m'_i = [m_i, 1], 1 \u2264 i \u2264 n, m \u2208 {l, v, a} (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divide and Conquer: Local Fusion",
"sec_num": "3.1"
},
{
"text": "Then we perform the outer product over the feature vectors padded with 1s, defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divide and Conquer: Local Fusion",
"sec_num": "3.1"
},
{
"text": "X^f_i = \u2297_m m'_i, m'_i \u2208 R^{d+1} (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divide and Conquer: Local Fusion",
"sec_num": "3.1"
},
{
"text": "where \u2297_m denotes the tensor outer product over the padded vectors of all modalities m \u2208 {l, v, a}. The final locally fused tensor for the i-th local chunk is X^f_i \u2208 R^{(d+1)^3}, which represents the i-th local interaction. We group all n locally fused tensors to obtain the overall fused tensor sequence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divide and Conquer: Local Fusion",
"sec_num": "3.1"
},
{
"text": "X^f = [X^f_1; X^f_2; ...; X^f_n] \u2208 R^{n\u00d7(d+1)^3}. A tensor fusion diagram is shown in the LFM module of Fig. 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divide and Conquer: Local Fusion",
"sec_num": "3.1"
},
{
"text": "Compared with other models adopting the outer product, our model achieves a marked improvement in efficiency by dividing holistic tensor fusion into multiple local ones, as shown in Section 4.3.3. In fact, we can apply other fusion methods that are suitable for local information extraction, which demonstrates the broad applicability of our strategy and is left for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divide and Conquer: Local Fusion",
"sec_num": "3.1"
},
{
"text": "In the 'combine' stage, we model global interactions by exploring the interconnections (complementary information) and context dependency across the locally fused tensors, so as to obtain a comprehensive interpretation of the interactions. In addition, the limited and fixed size of the sliding window may divide a complete emotional expression across different local portions, in which case sufficient information flow between local chunks is needed to compensate. Therefore, we design ABS-LSTM, an RNN variant, to make sense of the cross-modality dynamics from an integral perspective. In ABS-LSTM, we introduce bidirectional residual connections of memory cells and hidden states, and integrate attention mechanisms to transmit information and learn overall representations more effectively, as shown in Fig. 2. Given the locally fused tensor sequence X^f from the LFM, global interaction learning can be expressed as:",
"cite_spans": [],
"ref_spans": [
{
"start": 843,
"end": 849,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Combine: Global Fusion",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X^g = ABS-LSTM(X^f)",
"eq_num": "(4)"
}
],
"section": "Combine: Global Fusion",
"sec_num": "3.2"
},
{
"text": "where ABS-LSTM is activated by the tanh nonlinear function, X^g = [X^g_1; X^g_2; ...; X^g_n] \u2208 R^{n\u00d72o} is the globally fused tensor sequence, and 2o is the dimensionality of ABS-LSTM's output. A detailed illustration of ABS-LSTM follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combine: Global Fusion",
"sec_num": "3.2"
},
{
"text": "ABS-LSTM is specifically designed for modeling the interconnections of the locally fused tensors to distill complementary information. Since local interactions within a certain distance range are mutually correlated, it is necessary for ABS-LSTM to operate in a bidirectional way. As opposed to conventional bidirectional RNNs, ABS-LSTM shares an identical set of parameters for the forward and backward passes, which reduces the number of parameters. Further, ABS-LSTM directly connects the current interaction with several of its neighbors so that information can be sufficiently exchanged. Given its ability to bidirectionally transmit information through multiple connections, it is powerful in modeling long-term dependency, which is crucial for long sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "Firstly we illustrate the pipeline of ABS-LSTM in the forward pass. Assuming that the t previous local interactions are directly connected to the current one (t is set to 3 in our experiments), it is beneficial to identify the varying correlations between the previous t interactions and the current interaction. To this end, we integrate Regional Interdependence Attention (RIA) into ABS-LSTM, so that previous local interactions containing more complementary information to the current one are given more importance in the information transmission process. The equations for fusing the previous cells and states for the l-th interaction in the forward pass are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s^c_{l-i} = tanh(W^c(\u2192c_{l-i} \u2295 X^f_l)), s^h_{l-i} = tanh(W^h(\u2192h_{l-i} \u2295 X^f_l)), 1 \u2264 i \u2264 t (5); s^c = [\u2016s^c_{l-t}\u2016_2, \u2016s^c_{l-t+1}\u2016_2, ..., \u2016s^c_{l-1}\u2016_2], s^h = [\u2016s^h_{l-t}\u2016_2, \u2016s^h_{l-t+1}\u2016_2, ..., \u2016s^h_{l-1}\u2016_2]",
"eq_num": "(6)"
}
],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b3^c = softmax(s^c), \u03b3^h = softmax(s^h) (7); c\u0303_l = a(\u2211_{i=1}^{t} \u03b3^c_{l-i} \u2192c_{l-i}), h\u0303_l = a(\u2211_{i=1}^{t} \u03b3^h_{l-i} \u2192h_{l-i})",
"eq_num": "(8)"
}
],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "where \u2295 denotes vector concatenation and W^h, W^c \u2208 R^{o\u00d7(o+(d+1)^3)} are parameter matrices that determine the importance of the previous cells \u2192c_{l-i} and states \u2192h_{l-i}, respectively. Eq. 5 maps the cell and state at the (l-i)-th time step into two o-dimensional vectors. Instead of merely using \u2192c_{l-i} or \u2192h_{l-i} to obtain their importance towards the local interaction at the current time step, we also utilize the current time step's input X^f_l to reflect the correlation between the cell and state of the (l-i)-th interaction and the current l-th time step's input, which provides a better measurement of the attention score by learning the inter-dependency between interactions. We take the 2-norm of each vector in Eq. 6 as the importance score of each previous cell and state, forming a t-dimensional importance score vector for all states and cells, respectively. In Eq. 7 we use a softmax layer to normalize both vectors and obtain the final attention scores, which, according to Eq. 8, are used as weights for the combination of the previous t local interactions. The function a in Eq. 8 is a nonlinear activation function that helps to improve the expressive power of ABS-LSTM, for which we empirically choose ReLU. Overall, Eq. 5 to Eq. 8 realize the transmission of information from multiple previous local interactions to the current one, using the first level of attention mechanism, i.e., RIA, which properly distributes attention across the previous t local interactions to focus on the ones containing the information most relevant to the current local interaction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "After the combination of previous information, we further define:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "f_l = \u03c3(W^f_1 X^f_l + W^f_2 h\u0303_l) (9); i_l = \u03c3(W^i_1 X^f_l + W^i_2 h\u0303_l) (10); \u2192c_l = f_l \u2299 c\u0303_l + i_l \u2299 tanh(W^m_1 X^f_l + W^m_2 h\u0303_l) (11); \u2192h_l = \u03c3(W^o_1 X^f_l + W^o_2 h\u0303_l) \u2299 tanh(\u2192c_l) (12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "where \u03c3 denotes the sigmoid function. Eq. 9 - Eq. 12 follow the routine procedure of LSTM, except that \u2192h_{l-1} and \u2192c_{l-1} are replaced with the fused h\u0303_l and c\u0303_l, respectively. The output of the l-th time step in the forward pass is \u2192h_l (1 \u2264 l \u2264 n). To make ABS-LSTM bidirectional, in the backward pass we reverse the input X^f so that the last interaction arrives first, and again feed it into Eq. 5 - Eq. 12, whose output becomes \u2190h_l. The output of ABS-LSTM at the l-th time step is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "h_l = \u2190h_l \u2295 \u2192h_l \u2208 R^{2o}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "Global Interaction Attention (GIA): Inherently, LSTM has the capability to 'memorize', and uses this memory to sequentially model long-term dependency. Thus, the hidden states output by ABS-LSTM synthesize the information from the current time step's input interaction and from previous inputs. In this sense, at each time step new information is processed while previous information still exists but is 'diluted' in the hidden state (due to the forget gate). Therefore, when local interactions that are more informative, e.g., revealing a sharp tone or a sudden alteration of facial expressions, are input to ABS-LSTM, the produced states should be given more importance over others, since they have just synthesized an informative interaction and have not yet been 'diluted'. Hence, it is justifiable to employ a specifically designed attention mechanism, termed Global Interaction Attention (GIA), to properly assign importance across states. GIA is formulated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c9^h = ReLU(W^h h_l + b^h)",
"eq_num": "(13)"
}
],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c9^x = ReLU(W^x X^f_l + b^x)",
"eq_num": "(14)"
}
],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h^a_l = tanh((W^h_2 \u03c9^h) \u2022 h_l + W^x_2 \u03c9^x)",
"eq_num": "(15)"
}
],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "where W^h \u2208 R^{o\u00d72o} and W^x \u2208 R^{o\u00d7(d+1)^3} are two parameter matrices and b^h, b^x \u2208 R^o are two bias vectors to be learned. W^h_2, W^x_2 \u2208 R^{1\u00d7o} are two parameter matrices that determine the final importance scores. Through the affine transforms and nonlinearities in Eq. 13 and Eq. 14, the l-th state h_l and the corresponding input X^f_l are embedded into two o-dimensional vectors \u03c9^h and \u03c9^x that contain information regarding the importance of the l-th state and local interaction, respectively. In Eq. 15, W^h_2 and \u03c9^h first form a scalar via matrix multiplication, which reflects the importance of the l-th hidden state and is used as its weight. Meanwhile, we pre-multiply \u03c9^x by W^x_2 and obtain a scalar that is added to each entry of the weighted state, functioning as a bias containing input information. By this means, the attended state at the current time step is able to focus more on the information from the current interaction instead of the previous ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
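To make Eqs. 13-15 concrete, here is a minimal NumPy sketch of GIA for a single time step. The shapes and the random stand-ins for the learned parameters W_h, W_x, W_{h2}, W_{x2} are hypothetical, and X^f_l is assumed to be the flattened (d+1)^3-dimensional local fusion tensor; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
o, d = 3, 2                              # hidden size o, window size d (hypothetical)
x_dim = (d + 1) ** 3                     # flattened 3-way outer-product tensor (assumption)
h_l = rng.standard_normal(2 * o)         # l-th bidirectional state, in R^{2o}
x_l = rng.standard_normal(x_dim)         # flattened local interaction X^f_l

# Learned parameters (random stand-ins): W_h in R^{o x 2o}, W_x in R^{o x (d+1)^3}
W_h, b_h = rng.standard_normal((o, 2 * o)), np.zeros(o)
W_x, b_x = rng.standard_normal((o, x_dim)), np.zeros(o)
W_h2, W_x2 = rng.standard_normal((1, o)), rng.standard_normal((1, o))

relu = lambda v: np.maximum(v, 0.0)
omega_h = relu(W_h @ h_l + b_h)          # Eq. 13: importance embedding of the state
omega_x = relu(W_x @ x_l + b_x)          # Eq. 14: importance embedding of the input

# Eq. 15: a scalar weight for the whole state, plus a scalar bias from the input
a_l = np.tanh((W_h2 @ omega_h) * h_l + (W_x2 @ omega_x))
assert a_l.shape == (2 * o,)             # attended state keeps the state's shape
```

Note how the input contributes only a single scalar that shifts every entry of the weighted state, matching the design argument in the following paragraph.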
{
"text": "Considering that X f l and h l are two intrinsically disparate sources of information, we only formulate the impact of X f l as a scalar that biases the state, rather than as a vector which has much more complex influence to the state and empirically degrades performance. In this way, if X f l is more important, the l th attended state h a l will receive a more significant shift towards a higher position with respect to all high-dimensional coordinates, and thus h a l is more attended. In a sense, every element of the original state undergoes a transformation, with a specifically determined weight and a fixed bias across all entries. GIA enables ABS-LSTM to enhance the states of greater importance, aiding a more accurate classification. The final output of ABS-LSTM is the concatenation of attended states:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "X g = n l=1 h a l \u2208 R n\u00d72o .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABS-LSTM",
"sec_num": "3.2.1"
},
{
"text": "After obtaining the global interactions, the final emotion is obtained by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion Inference Module",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E = f (W e1 X g + b e1 )",
"eq_num": "(16)"
}
],
"section": "Emotion Inference Module",
"sec_num": "3.3"
},
{
"text": "I = sof tmax(W e2 E) (17) where f contains a tanh activation function and a dropout layer of dropout rate 0.5, W e1 \u2208 R 50\u00d7n\u20222o , b e1 \u2208 R 50 and W e2 \u2208 R N \u00d750 are the learnable parameters, and I \u2208 R N is the final emotion inference (N is the number of categories).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion Inference Module",
"sec_num": "3.3"
},
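A hedged NumPy sketch of the inference head in Eq. 16 and Eq. 17 (shapes and the random stand-ins for the learned parameters are hypothetical; dropout is omitted, as it is inactive at inference):

```python
import numpy as np

rng = np.random.default_rng(1)
n, o, N = 5, 3, 2                        # n local windows, hidden size o, N classes (hypothetical)
X_g = rng.standard_normal(n * 2 * o)     # flattened concatenation of attended states

# Learned parameters (random stand-ins), matching the shapes stated after Eq. 17
W_e1, b_e1 = rng.standard_normal((50, n * 2 * o)), np.zeros(50)
W_e2 = rng.standard_normal((N, 50))

E = np.tanh(W_e1 @ X_g + b_e1)           # Eq. 16: f = tanh (dropout skipped)
z = W_e2 @ E                             # Eq. 17: class logits
I = np.exp(z - z.max()) / np.exp(z - z.max()).sum()   # numerically stable softmax

assert I.shape == (N,)
```

The softmax output I is a probability distribution over the N emotion categories.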
{
"text": "4.1 datasets CMU-MOSI (Zadeh et al., 2016b) includes 93 videos with each video padded to 62 utterances. We consider positive and negative sentiments in our paper. We use 49 videos for training, 13 for validation and 31 for testing. CMU-MOSEI (Zadeh et al., 2018c) has 2928 videos, and each video is padded to 98 utterances. Each utterance has been scored on two perspectives: sentiment intensity (ranges between [-3, 3] ) and emotion (six classes). We consider positive, negative and neutral sentiments in the paper. We utilize 1800, 450 and 678 videos respectively for training, validation and testing. IEMOCAP (Busso et al., 2008) contains 151 videos and each video has at most 110 utterances. IEMOCAP contains following labels: anger, happiness, sadness, neutral, excitement, frustration, fear, surprise and other. We take the first four emotions so as to compare with previous models. The training, validation and testing sets contain 96, 24 and 31 videos respectively.",
"cite_spans": [
{
"start": 22,
"end": 43,
"text": "(Zadeh et al., 2016b)",
"ref_id": "BIBREF58"
},
{
"start": 242,
"end": 263,
"text": "(Zadeh et al., 2018c)",
"ref_id": "BIBREF56"
},
{
"start": 412,
"end": 419,
"text": "[-3, 3]",
"ref_id": null
},
{
"start": 612,
"end": 632,
"text": "(Busso et al., 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "HFFN is implemented using the framework of Keras, with tensorflow as backend. The input dimensionality k for CMU-MOSI and CMU-MOSEI datasets is 50, while for IEMOCAP, k is set to 100. We use RMSprop for optimizing the network, with cosine proximity as objective function. The output dimension 2o of ABS-LSTM is set to 6 for CMU-MOSI and CMU-MOSEI but 2 for IEMOCAP. Note that ABS-LSTM is activated by tanh and followed by a dropout layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental details",
"sec_num": "4.2"
},
{
"text": "For feature pre-extraction, our setting on CMU-MOSI and IEMOCAP datasets are identical to that in (Poria et al., 2017b) 1 . The features are extracted from each utterances separately. For language feature, a text-CNN is applied. Each word is first embedded into a vector using word2vec tool (Mikolov et al., 2013) . Then the vectorized representations for all words in an utterance are concatenated, which afterwards is processed by CNNs (Karpathy et al., 2014) . For acoustic feature, an open-source tool openS-MILE (Eyben, 2010) is utilized to generate high dimensional vectors comprised of low-level descriptors (LLD). 3D-CNN (Ji et al., 2013) is applied for visual feature pre-extraction. It learns relevant features from each frame and the alterations across consecutive frames. By contrast, on CMU-MOSEI dataset we follow the setting as in 2 . GloVe (Pennington et al., 2014) , Facet (iMotions, 2017) and COVAREP (Degottex et al., 2014) are applied for extracting language, visual and acoustic features respectively. Word-level alignment is performed using P2FA (Yuan and Liberman, 2008) across modalities. Eventually the unimodal features are generated as the average of their feature values over word time interval .",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(Poria et al., 2017b)",
"ref_id": "BIBREF36"
},
{
"start": 291,
"end": 313,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF29"
},
{
"start": 438,
"end": 461,
"text": "(Karpathy et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 629,
"end": 646,
"text": "(Ji et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 856,
"end": 881,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 919,
"end": 942,
"text": "(Degottex et al., 2014)",
"ref_id": "BIBREF7"
},
{
"start": 1068,
"end": 1093,
"text": "(Yuan and Liberman, 2008)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental details",
"sec_num": "4.2"
},
{
"text": "Subsequent to pre-extraction, similar to BC-LSTM (Poria et al., 2017b) , we devise a Unimodal Feature Extraction Network (UFEN): R u\u00d7d j \u2192 R u\u00d7k , which consists of a bidirectional LSTM layer followed by a fully connected (FC) layer, for each separate modality. Here, u denotes the number of utterances that constitute a video and d j is the dimensionality of raw feature vector for j th modality. Through UFEN, feature vectors of all modalities are mapped into the same embedding space (have the same dimensionality k). UFEN for each modality, is individually trained followed by a FC layer: R k \u2192 R N using Adadelta (Zeiler, 2012) as optimizer and with categorical crossentropy as loss function. The precessed feature vectors of each utterance will be sent into HFFN.",
"cite_spans": [
{
"start": 49,
"end": 70,
"text": "(Poria et al., 2017b)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental details",
"sec_num": "4.2"
},
{
"text": "We compare HFFN with following multimodal algorithms: RMFN (Liang et al., 2018a) , MFN (Zadeh et al., 2018a) , MCTN (Pham et al., 2019) , BC-LSTM (Poria et al., 2017b) , TFN , MARN (Zadeh et al., 2018b) , LMF ), MFM (Tsai et al., 2019 , MR-RF (Barezi et al., 2018) , FAF (Gu et al., 2018b) , RAVEN (Wang et al., 2019) , GMFN (Zadeh et al., 2018c) , Memn2n (Sukhbaatar et al., 2015) , MM-B2 , CHFusion (Majumder et al., 2018) , SVM Trees (Rozgic et al., 2012) , CMN , C-MKL (Poria et al., 2016b) and CAT-LSTM (Poria et al., 2017c) .",
"cite_spans": [
{
"start": 59,
"end": 80,
"text": "(Liang et al., 2018a)",
"ref_id": "BIBREF22"
},
{
"start": 87,
"end": 108,
"text": "(Zadeh et al., 2018a)",
"ref_id": "BIBREF22"
},
{
"start": 116,
"end": 135,
"text": "(Pham et al., 2019)",
"ref_id": "BIBREF33"
},
{
"start": 146,
"end": 167,
"text": "(Poria et al., 2017b)",
"ref_id": "BIBREF36"
},
{
"start": 181,
"end": 202,
"text": "(Zadeh et al., 2018b)",
"ref_id": null
},
{
"start": 209,
"end": 234,
"text": "), MFM (Tsai et al., 2019",
"ref_id": null
},
{
"start": 237,
"end": 264,
"text": "MR-RF (Barezi et al., 2018)",
"ref_id": null
},
{
"start": 271,
"end": 289,
"text": "(Gu et al., 2018b)",
"ref_id": "BIBREF13"
},
{
"start": 298,
"end": 317,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF47"
},
{
"start": 325,
"end": 346,
"text": "(Zadeh et al., 2018c)",
"ref_id": "BIBREF56"
},
{
"start": 356,
"end": 381,
"text": "(Sukhbaatar et al., 2015)",
"ref_id": "BIBREF42"
},
{
"start": 401,
"end": 424,
"text": "(Majumder et al., 2018)",
"ref_id": "BIBREF27"
},
{
"start": 437,
"end": 458,
"text": "(Rozgic et al., 2012)",
"ref_id": "BIBREF41"
},
{
"start": 473,
"end": 494,
"text": "(Poria et al., 2016b)",
"ref_id": "BIBREF39"
},
{
"start": 508,
"end": 529,
"text": "(Poria et al., 2017c)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Baselines",
"sec_num": "4.3.1"
},
{
"text": "As presented in Table 1 , HFFN shows improvement over typical approaches, setting new stateof-the-art record. Compared with the tensor fusion approaches TFN , M-RRF (Barezi et al., 2018) and LMF , HFFN achieves improvement by about 4%, which demonstrates its superiority. It is reasonable because these methods conduct tensor fusion at holistic level and ignore modeling local interactions, while ours has a well-designed LFM module. Compared to the word-level fusion approaches RAVEN (Wang et al., 2019) , RMFN (Liang et al., 2018a) and FAF (Gu et al., 2018b) , etc., HFFN achieves improvement by about 2%. We argue that it is because they ignore explicitly connecting locally-constrained interactions to obtain a general view of multimodal signals, while we explore global interactions by applying ABS-LSTM.",
"cite_spans": [
{
"start": 485,
"end": 504,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF47"
},
{
"start": 512,
"end": 533,
"text": "(Liang et al., 2018a)",
"ref_id": "BIBREF22"
},
{
"start": 542,
"end": 560,
"text": "(Gu et al., 2018b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Comparison with Baselines",
"sec_num": "4.3.1"
},
{
"text": "The results on IEMOCAP and CMU-MOSEI datasets are shown in Table 2 and Table 3 , respectively. We can conclude from Table 2 that HFFN achieves consistent improvements on accuracy and F1 score in IEMOCAP 4-way and individual emotion recognition tasks compared with other methods. Specifically, HFFN outperforms other methods by a significant margin on the recognition of Angry and Neutral emotions. For CMU-MOSEI dataset, as shown in Table 3 , the accuracy of HFFN is lower than that of BC-LSTM and CAT-LSTM, but it achieves the highest F1 s- core with slight margin. HFFN still achieves stateof-the-art performance on these two datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 78,
"text": "Table 2 and Table 3",
"ref_id": "TABREF2"
},
{
"start": 116,
"end": 123,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 433,
"end": 440,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Comparison with Baselines",
"sec_num": "4.3.1"
},
{
"text": "To explore the underlying information of each modality, we carry out an experiment to compare the performance among unimodal, bimodal and trimodal models. For unimodal models, we can infer from Table 4 that language modality is the most predictive for emotion prediction, outperforming acoustic and visual modalities with significant margin. When coupled with acoustic and visual modalities, the trimodal HFFN performs best, whose result is 1% \u223c 2% better than the language-HFFN, indicting that acoustic and visual modalities actually play auxiliary roles while language is dominant. However, in our model, when conducting outer product, all three modalities are treated equally, which is probably not the optimal choice. In the future, we aim to develop a fusion technique paying more attention to the language modality, while the other two modalities only serve as accessory sources of information.",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 201,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Discussion on Modality Importance",
"sec_num": "4.3.2"
},
{
"text": "Interestingly, the bimodal HFFNs do not necessarily outperform the language-HFFN. Contrarily, sometimes it even lowers the performance when language is combining with acoustic or visual modality. Nevertheless, when three modalities are available, the performance is undoubtedly the best. It indicates that a great deal of information hidden in a single modality can be interpreted only by combining all the three modalities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion on Modality Importance",
"sec_num": "4.3.2"
},
{
"text": "Contrast experiments are conducted to analyze the efficiency of TFN , BC-LSTM (Poria et al., 2017b) 3 and HFFN. We compare the number of parameters and FLOPs after fusion (the FLOPs index is used to measure time complexity), and the inputs for all methods are the same to make a fair comparison. The trainable layers in TFN include two FC layers of 32 ReLU activation units and a decision layer: R 32 \u2192 R 2 . We adopt this setting to match the code released by the authors 4 . BC-LSTM's trainable layers contain a bidirectional LSTM with input and output dimension being 3 \u2022 50 and 600 respectively, and two FC layers of 500 and 2 units respectively. Table 5 shows that in terms of the number of parameters, TFN is around 511 times larger than our HFFN, even under the situation where we adopt a more complex module after tensor fusion, demonstrating the high efficiency of HFFN. Note that if TFN adopts the original setting as stated in where the FC layers have 128 units, it would even have more parameters than our version of TFN. Compared to BC-LSTM, HFFN has about 166 times fewer parameters and the FLOPs of HFFN is over 79 times fewer than that of BC-LSTM. Moreover, BC-LSTM is over 6 times faster than TFN in time complexity measured by FLOPs and the number of parameters is over 3 times smaller. These results demonstrate that outer product applied in TFN results in heavy computational complexity and a substantial number of param- eters compared with other methods such as BC-LSTM, while HFFN can avoid these two problems and is even more efficient than other approaches adopting low-complexity fusion methods.",
"cite_spans": [
{
"start": 78,
"end": 99,
"text": "(Poria et al., 2017b)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 651,
"end": 658,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Comparative Analysis on Efficiency",
"sec_num": "4.3.3"
},
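The parameter gap reported above stems mostly from the first FC layer that TFN places after its 3-way outer product. A quick back-of-the-envelope check, assuming TFN's usual convention of appending a constant 1 to each 50-dimensional modality vector so the fused tensor has 51^3 dimensions:

```python
def fc_params(n_in, n_out):
    """Parameter count of a fully connected layer R^n_in -> R^n_out: weights + biases."""
    return n_in * n_out + n_out

# TFN's first 32-unit FC layer operates on the 51*51*51-dimensional fused tensor
tensor_dim = 51 ** 3
first_fc = fc_params(tensor_dim, 32)
assert tensor_dim == 132651
assert first_fc == 4244864   # ~4.2M parameters in a single layer
```

A single layer on the fused tensor already carries millions of parameters, which is why fusing at the holistic level is so costly compared with HFFN's local windows.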
{
"text": "To demonstrate the superiority of ABS-LSTM on learning global interactions and the impact of the proposed attention mechanism, we conduct an experiment to compare the performance of model under different settings of global fusion. We can infer from Table 6 that ABS-LSTM reaches best results among all tested LSTM variants. Besides, vanilla LSTM achieves lowest performance, showing the necessity of delivering information bidirectionally. Bidirectional LSTM slightly outperforms no-attention variant of ABS-LSTM, possibly due to the use of two sets of independent learnable parameters for forward and backward passes, respectively, which allows more flexibility. However, as ABS-LSTM with attention outperforms bidirectional LSTM, it demonstrates the efficacy of ABS-LSTM. In terms of the effectiveness of attention mechanisms, interestingly, both RIA and GIA, when used alone, only bring about slight improvement (0.2%\u223c0.3%) compared to the no-attention version of ABS-LSTM. However, it further boosts the performance when RIA and GIA are concurrently used, achieving more improvement than that caused by RIA and GIA alone added together. This shows some potential positive link between the two levels of attention mechanism. Specifically, RIA can provide more refined information during transmission between local interactions, so that the output states to be processed by GIA are more focused on useful information and freer of noise, maximizing the effect of GIA.",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Discussion on Global Fusion",
"sec_num": "4.3.4"
},
{
"text": "To investigate the influence of the size d and the stride s of sliding window on learning local interactions, we conduct experiments on IEMOCAP where s changes incrementally from 1 to 10 and d takes on four values, namely 1, 2, 5 and 10. The results are shown in Fig. 3 . It can be observed that for all values of d, the accuracy fluctuates within a limited range as the stride s changes incrementally, showing robustness with respect to the stride. Overall, the model fares best when d is set to 2, demonstrating that a moderate size of sliding window is important for ensuring high performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 269,
"text": "Fig. 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussion on Sliding Window",
"sec_num": "4.3.5"
},
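The 'divide' stage studied here can be sketched as a simple 1-D sliding window over each aligned feature vector. This is a hypothetical helper for illustration, not the authors' exact implementation:

```python
def sliding_windows(x, d, s):
    """Divide a feature vector x into local chunks of size d with stride s.

    When s > d some dimensions are skipped entirely, mirroring the
    experiment above where the stride exceeds the window size.
    """
    return [x[i:i + d] for i in range(0, len(x) - d + 1, s)]

x = list(range(10))                              # a toy 10-dimensional feature vector
assert len(sliding_windows(x, d=2, s=2)) == 5    # non-overlapping cover of all dims
assert len(sliding_windows(x, d=2, s=1)) == 9    # overlapping windows
assert len(sliding_windows(x, d=2, s=3)) == 3    # s > d: some dims are left out
```

Local fusion then operates on the aligned chunks of the three modalities within each window.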
{
"text": "We conjecture that the reason behind the decline in performance when d is assigned an overly large value (greater than 2), is that the effect of local fusion is lessened, leading to less specialized exploration of feature portions. This in turn verifies the central importance of local fusion in our strategy. In addition, an unreasonably small d may lead to disintegration of the feature correlation that could be capitalized on and scatter complete information, thus hurting overall performance. Furthermore, it is surprising that when the stride s is greater than d (some dimensions of feature vectors are left out in local fusion), the accuracy does not significantly suffer. This shows that there may be a deal of redundant information in the feature vectors, implying that more advanced extraction techniques are needed for more refined representations, which we will explore as part of future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion on Sliding Window",
"sec_num": "4.3.5"
},
{
"text": "We propose an efficient and effective framework HFFN that adopts a novel fusion strategy called 'divide, conquer and combine'. HFFN learns local interactions at each local chunk and explores global interactions by conveying information across local interactions using ABS-LSTM that integrates two levels of attention mechanism. Our fusion strategy is generic for other concrete fusion methods. In future work, we intend to explore multiple local fusion methods within our framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/soujanyaporia/multimodalsentiment-analysis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/A2Zadeh/CMU-MultimodalSDK",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/soujanyaporia/multimodalsentiment-analysis 4 https://github.com/Justin1904/TensorFusionNetworks",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multimodal machine learning: A survey and taxonomy",
"authors": [
{
"first": "Tadas",
"middle": [],
"last": "Baltru\u0161aitis",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Ahuja",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "41",
"issue": "2",
"pages": "423--443",
"other_ids": {
"DOI": [
"10.1109/TPAMI.2018.2798607"
]
},
"num": null,
"urls": [],
"raw_text": "Tadas Baltru\u0161aitis, Chaitanya Ahuja, and Louis- Philippe Morency. 2019. Multimodal machine learning: A survey and taxonomy. IEEE Transac- tions on Pattern Analysis and Machine Intelligence, 41(2):423-443.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modality-based factorization for multimodal fusion",
"authors": [
{
"first": "J",
"middle": [],
"last": "Elham",
"suffix": ""
},
{
"first": "Peyman",
"middle": [],
"last": "Barezi",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Momeni",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Wood",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elham J. Barezi, Peyman Momeni, Ian Wood, and Pas- cale Fung. 2018. Modality-based factorization for multimodal fusion. CoRR, abs/1811.12624.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning long-term dependencies with gradient descent is difficult",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Frasconi",
"suffix": ""
}
],
"year": 1994,
"venue": "IEEE Transactions on Neural Networks",
"volume": "5",
"issue": "2",
"pages": "157--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y Bengio, P Simard, and P Frasconi. 1994. Learn- ing long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Iemocap: interactive emotional dyadic motion capture database. Language Resources and Evaluation",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Busso",
"suffix": ""
},
{
"first": "Murtaza",
"middle": [],
"last": "Bulut",
"suffix": ""
},
{
"first": "Chi",
"middle": [
"Chun"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Kazemzadeh",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Mower",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jeannette",
"middle": [
"N"
],
"last": "Chang",
"suffix": ""
},
{
"first": "Sungbok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [
"S"
],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "42",
"issue": "",
"pages": "335--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos Busso, Murtaza Bulut, Chi Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jean- nette N. Chang, Sungbok Lee, and Shrikanth S. Narayanan. 2008. Iemocap: interactive emotion- al dyadic motion capture database. Language Re- sources and Evaluation, 42(4):335-359.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Affective computing and sentiment analysis",
"authors": [
{
"first": "E",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Intelligent Systems",
"volume": "31",
"issue": "02",
"pages": "102--107",
"other_ids": {
"DOI": [
"10.1109/MIS.2016.31"
]
},
"num": null,
"urls": [],
"raw_text": "E. Cambria. 2016. Affective computing and sentiment analysis. IEEE Intelligent Systems, 31(02):102-107.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Senticnet. Sentic Computing",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Hussain",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "23--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Cambria and Amir Hussain. 2015. Senticnet. Sen- tic Computing, pages 23-71.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multimodal sentiment analysis with word-level fusion and reinforcement learning",
"authors": [
{
"first": "Minghai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Tadas",
"middle": [],
"last": "Baltrus\u01ceitis",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2017,
"venue": "19th ACM International Conference on Multimodal Interaction (ICMI'17)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrus\u01ceitis, Amir Zadeh, and Louis Philippe Morency. 2017. Multimodal sentiment analysis with word-level fusion and reinforcement learning. 19th ACM International Conference on Multimodal Inter- action (ICMI'17).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Covarep: A collaborative voice analysis repository for speech technologies",
"authors": [
{
"first": "Gilles",
"middle": [],
"last": "Degottex",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Kane",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Drugman",
"suffix": ""
},
{
"first": "Tuomo",
"middle": [],
"last": "Raitio",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Scherer",
"suffix": ""
}
],
"year": 2014,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gilles Degottex, John Kane, Thomas Drugman, Tuomo Raitio, and Stefan Scherer. 2014. Covarep: A col- laborative voice analysis repository for speech tech- nologies. In ICASSP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A review and meta-analysis of multimodal affect detection systems",
"authors": [
{
"first": "K",
"middle": [],
"last": "Sidney",
"suffix": ""
},
{
"first": "Jacqueline",
"middle": [],
"last": "D'mello",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kory",
"suffix": ""
}
],
"year": 2015,
"venue": "Acm Computing Surveys",
"volume": "47",
"issue": "3",
"pages": "1--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sidney K. D'Mello and Jacqueline Kory. 2015. A re- view and meta-analysis of multimodal affect detec- tion systems. Acm Computing Surveys, 47(3):1-36.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Opensmile:the munich versatile and fast open-source audio feature extractor",
"authors": [
{
"first": "Florian",
"middle": [
"Eyben"
],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "ACM International Conference on Multimedia",
"volume": "",
"issue": "",
"pages": "1459--1462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Eyben. 2010. Opensmile:the munich versa- tile and fast open-source audio feature extractor. In ACM International Conference on Multimedia, pages 1459-1462.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "First-order versus second-order singlelayer recurrent neural networks",
"authors": [
{
"first": "M W",
"middle": [],
"last": "Goudreau",
"suffix": ""
},
{
"first": "C L",
"middle": [],
"last": "Giles",
"suffix": ""
},
{
"first": "S T",
"middle": [],
"last": "Chakradhar",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 1994,
"venue": "IEEE Transactions on Neural Networks",
"volume": "5",
"issue": "3",
"pages": "511--513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M W Goudreau, C L Giles, S T Chakradhar, and . Chen, D. 1994. First-order versus second-order single- layer recurrent neural networks. IEEE Transactions on Neural Networks, 5(3):511-513.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Speech intention classification with multimodal deep learning",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shuhong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Marsic",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Canadian Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Gu, Xinyu Li, Shuhong Chen, Jianyu Zhang, and Ivan Marsic. 2017. Speech intention classification with multimodal deep learning. Proceedings of Canadian Conference on Artificial Intelligence.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Human conversation analysis using attentive multimodal networks with hierarchical encoder-decoder",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kaixiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Kangning",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shuhong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Moliang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Marsic",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Multimedia",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Gu, Xinyu Li, Kaixiang Huang, Shiyu Fu, Kangn- ing Yang, Shuhong Chen, Moliang Zhou, and Ivan Marsic. 2018a. Human conversation analysis us- ing attentive multimodal networks with hierarchical encoder-decoder. In ACM Multimedia.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multimodal affective analysis using hierarchical attention strategy with word-level alignment",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Kangning",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Shuhong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Marsic",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, and Ivan Marsic. 2018b. Multimodal af- fective analysis using hierarchical attention strategy with word-level alignment. ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Conversational memory network for emotion recognition in dyadic dialogue videos",
"authors": [
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Zimmermann",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "2122--2132",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1193"
]
},
"num": null,
"urls": [],
"raw_text": "Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, and Roger Zimmermann. 2018. Conversational memory net- work for emotion recognition in dyadic dialogue videos. pages 2122-2132.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "Jürgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Facial expression analysis",
"authors": [],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "iMotions. 2017. Facial expression analysis.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "3d convolutional neural networks for human action recognition",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "35",
"issue": "",
"pages": "221--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Ji, M. Yang, and K. Yu. 2013. 3d convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221-231.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Large-scale video classification with convolutional neural networks",
"authors": [
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Toderici",
"suffix": ""
},
{
"first": "Sanketh",
"middle": [],
"last": "Shetty",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Sukthankar",
"suffix": ""
},
{
"first": "Fei Fei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1725--1732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Fei Fei Li. 2014. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition, pages 1725-1732.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Imagenet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In International Conference on Neural Information Processing Systems.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Strong and simple baselines for multimodal utterance embeddings",
"authors": [
{
"first": "Paul Pu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Yao Chong",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Y",
"middle": [
"H"
],
"last": "Tsai",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Pu Liang, Yao Chong Lim, Y. H. Tsai, Ruslan R. Salakhutdinov, and Louis-Philippe Morency. 2019. Strong and simple baselines for multimodal utterance embeddings. In NAACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multimodal language analysis with recurrent multistage fusion",
"authors": [
{
"first": "Paul Pu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ziyin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Pu Liang, Ziyin Liu, Amir Zadeh, and Louis Philippe Morency. 2018a. Multimodal language analysis with recurrent multistage fusion. EMNLP.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Multimodal local-global ranking fusion for emotion recognition",
"authors": [
{
"first": "Paul Pu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Multimodal Interaction (ICMI'18)",
"volume": "",
"issue": "",
"pages": "472--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Pu Liang, Amir Zadeh, and Louis Philippe Morency. 2018b. Multimodal local-global ranking fusion for emotion recognition. 2018 International Conference on Multimodal Interaction (ICMI'18), pages 472-476.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bilinear cnn models for fine-grained visual recognition",
"authors": [
{
"first": "Tsung Yu",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Aruni",
"middle": [],
"last": "Roychowdhury",
"suffix": ""
},
{
"first": "Subhransu",
"middle": [],
"last": "Maji",
"suffix": ""
}
],
"year": 2015,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "1449--1457",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung Yu Lin, Aruni Roychowdhury, and Subhransu Maji. 2015. Bilinear cnn models for fine-grained visual recognition. ICCV, pages 1449-1457.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A Survey of Opinion Mining and Sentiment Analysis",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "415--463",
"other_ids": {
"DOI": [
"10.1007/978-1-4614-3223-4_13"
]
},
"num": null,
"urls": [],
"raw_text": "Bing Liu and Lei Zhang. 2012. A Survey of Opinion Mining and Sentiment Analysis, pages 415-463. Springer US, Boston, MA.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Efficient lowrank multimodal fusion with modality-specific factors",
"authors": [
{
"first": "Zhun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "2247--2256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhun Liu, Ying Shen, Paul Pu Liang, Amir Zadeh, and Louis Philippe Morency. 2018. Efficient low-rank multimodal fusion with modality-specific factors. ACL, pages 2247-2256.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Multimodal sentiment analysis using hierarchical fusion with context modeling",
"authors": [
{
"first": "N",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
}
],
"year": 2018,
"venue": "Knowledge-Based Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N Majumder, D Hazarika, Alexander Gelbukh, Erik Cambria, and Soujanya Poria. 2018. Multimodal sentiment analysis using hierarchical fusion with context modeling. Knowledge-based systems.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Espnetv2: A lightweight, power efficient, and general purpose convolutional neural network",
"authors": [
{
"first": "Sachin",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Rastegari",
"suffix": ""
},
{
"first": "Linda",
"middle": [],
"last": "Shapiro",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition(CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sachin Mehta, Mohammad Rastegari, Linda Shapiro, and Hannaneh Hajishirzi. 2019. Espnetv2: A lightweight, power efficient, and general purpose convolutional neural network. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Computer Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. Computer Science.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Towards multimodal sentiment analysis: harvesting opinions from the web",
"authors": [
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Payal",
"middle": [],
"last": "Doshi",
"suffix": ""
}
],
"year": 2011,
"venue": "International Conference on Multimodal Interfaces",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis Philippe Morency, Rada Mihalcea, and Payal Doshi. 2011. Towards multimodal sentiment analysis: harvesting opinions from the web. In International Conference on Multimodal Interfaces, pages 169-176.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Deep multimodal fusion for persuasiveness prediction",
"authors": [
{
"first": "Behnaz",
"middle": [],
"last": "Nojavanasghari",
"suffix": ""
},
{
"first": "Deepak",
"middle": [],
"last": "Gopinath",
"suffix": ""
},
{
"first": "Jayanth",
"middle": [],
"last": "Koushik",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACM International Conference on Multimodal Interaction",
"volume": "",
"issue": "",
"pages": "284--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Behnaz Nojavanasghari, Deepak Gopinath, Jayanth Koushik, and Louis Philippe Morency. 2016. Deep multimodal fusion for persuasiveness prediction. In Proceedings of ACM International Conference on Multimodal Interaction, pages 284-288.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Found in translation: Learning robust joint representations by cyclic translations between modalities",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Manzini",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
},
{
"first": "Barnabás",
"middle": [],
"last": "Póczos",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Pham, Paul Pu Liang, Thomas Manzini, Louis Philippe Morency, and Barnabás Póczos. 2019. Found in translation: Learning robust joint representations by cyclic translations between modalities. AAAI.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Seq2seq2sentiment: Multimodal sequence to sequence models for sentiment analysis",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Manzini",
"suffix": ""
},
{
"first": "Paul Pu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Barnabás",
"middle": [],
"last": "Póczos",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL 2018 Grand Challenge and Workshop on Human Multimodal Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Pham, Thomas Manzini, Paul Pu Liang, and Barnabás Póczos. 2018. Seq2seq2sentiment: Multimodal sequence to sequence models for sentiment analysis. In ACL 2018 Grand Challenge and Workshop on Human Multimodal Language.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A review of affective computing: From unimodal analysis to multimodal fusion",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [],
"last": "Bajpai",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Hussain",
"suffix": ""
}
],
"year": 2017,
"venue": "Information Fusion",
"volume": "37",
"issue": "",
"pages": "98--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Erik Cambria, Rajiv Bajpai, and Amir Hussain. 2017a. A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion, 37:98-125.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Context-dependent sentiment analysis in user-generated videos",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "873--883",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis Philippe Morency. 2017b. Context-dependent sentiment analysis in user-generated videos. In ACL, pages 873-883.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Multi-level multiple attentions for contextual multimodal sentiment analysis",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Mazumder",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of IEEE International Conference on Data Mining (ICDM)",
"volume": "",
"issue": "",
"pages": "1033--1038",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Mazumder, Amir Zadeh, and Louis Philippe Morency. 2017c. Multi-level multiple attentions for contextual multimodal sentiment analysis. In Proceedings of IEEE International Conference on Data Mining (ICDM), pages 1033-1038.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Convolutional mkl based multimodal emotion recognition and sentiment analysis",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Iti",
"middle": [],
"last": "Chaturvedi",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Hussain",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of IEEE International Conference on Data Mining (ICDM)",
"volume": "",
"issue": "",
"pages": "439--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Iti Chaturvedi, Erik Cambria, and Amir Hussain. 2016a. Convolutional mkl based multimodal emotion recognition and sentiment analysis. In Proceedings of IEEE International Conference on Data Mining (ICDM), pages 439-448.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Convolutional mkl based multimodal emotion recognition and sentiment analysis",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Iti",
"middle": [],
"last": "Chaturvedi",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Hussain",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "439--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Iti Chaturvedi, Erik Cambria, and Amir Hussain. 2016b. Convolutional mkl based multimodal emotion recognition and sentiment analysis. In IEEE International Conference on Data Mining, pages 439-448.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Extending long short-term memory for multi-view structured learning",
"authors": [
{
"first": "Shyam Sundar",
"middle": [],
"last": "Rajagopalan",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
},
{
"first": "Tadas",
"middle": [],
"last": "Baltrušaitis",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Goecke",
"suffix": ""
}
],
"year": 2016,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "338--353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shyam Sundar Rajagopalan, Louis Philippe Morency, Tadas Baltrušaitis, and Roland Goecke. 2016. Extending long short-term memory for multi-view structured learning. ECCV, pages 338-353.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Ensemble of svm trees for multimodal emotion recognition",
"authors": [
{
"first": "V",
"middle": [],
"last": "Rozgic",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ananthakrishnan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Saleem",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
}
],
"year": 2012,
"venue": "Signal and Information Processing Association Summit and Conference",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Rozgic, S. Ananthakrishnan, S. Saleem, R. Kumar, and R. Prasad. 2012. Ensemble of svm trees for multimodal emotion recognition. In Signal and Information Processing Association Summit and Conference, pages 1-4.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "End-to-end memory networks",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2440--2448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440-2448.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Advanced lstm: A study about better time dependency modeling in emotion recognition",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Tao and Gang Liu. 2018. Advanced lstm: A study about better time dependency modeling in emotion recognition. Proceedings of Acoustics, Speech and Signal Processing (ICASSP).",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Learning factorized multimodal representations",
"authors": [
{
"first": "Yao Hung Hubert",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Paul Pu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao Hung Hubert Tsai, Paul Pu Liang, Amir Zadeh, Louis Philippe Morency, and Ruslan Salakhutdinov. 2019. Learning factorized multimodal representations. ICLR.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Rra: Recurrent residual attention for sequence learning",
"authors": [
{
"first": "Cheng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng Wang. 2018. Rra: Recurrent residual attention for sequence learning. AAAI.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Select-additive learning: Improving generalization in multimodal sentiment analysis",
"authors": [
{
"first": "Haohan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Aaksha",
"middle": [],
"last": "Meghawat",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "949--954",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haohan Wang, Aaksha Meghawat, Louis Philippe Morency, and Eric P Xing. 2016. Select-additive learning: Improving generalization in multimodal sentiment analysis. pages 949-954.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Words can shift: Dynamically adjusting word representations using nonverbal behaviors",
"authors": [
{
"first": "Yansen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, and Amir Zadeh. 2019. Words can shift: Dynamically adjusting word representations using nonverbal behaviors. AAAI.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Recurrent residual learning for sequence classification",
"authors": [
{
"first": "Yiren",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "938--943",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiren Wang and Fei Tian. 2016. Recurrent residual learning for sequence classification. In EMNLP, pages 938-943.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Youtube movie reviews: Sentiment analysis in an audio-visual context",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Wollmer",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Weninger",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Knaup",
"suffix": ""
},
{
"first": "Bjorn",
"middle": [],
"last": "Schuller",
"suffix": ""
},
{
"first": "Congkai",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE Intelligent Systems",
"volume": "28",
"issue": "3",
"pages": "46--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Wollmer, Felix Weninger, Tobias Knaup, Bjorn Schuller, Congkai Sun, Kenji Sagae, and Louis Philippe Morency. 2013. Youtube movie reviews: Sentiment analysis in an audio-visual context. IEEE Intelligent Systems, 28(3):46-53.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Emotion recognition of affective speech based on multiple classifiers using acoustic-prosodic information and semantic labels",
"authors": [
{
"first": "Chung Hsien",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei Bin",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Transactions on Affective Computing",
"volume": "2",
"issue": "1",
"pages": "10--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chung Hsien Wu and Wei Bin Liang. 2010. Emotion recognition of affective speech based on multiple classifiers using acoustic-prosodic information and semantic labels. IEEE Transactions on Affective Computing, 2(1):10-21.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Where to focus: Deep attention-based spatially recurrent bilinear networks for fine-grained visual recognition",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin Wu and Yang Wang. 2017. Where to focus: Deep attention-based spatially recurrent bilinear networks for fine-grained visual recognition. arXiv Preprint: 1709.05769.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Speaker identification on the SCOTUS corpus",
"authors": [
{
"first": "Jiahong",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Liberman",
"suffix": ""
}
],
"year": 2008,
"venue": "Acoustical Society of America Journal",
"volume": "123",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1121/1.2935783"
]
},
"num": null,
"urls": [],
"raw_text": "Jiahong Yuan and Mark Liberman. 2008. Speaker identification on the SCOTUS corpus. Acoustical Society of America Journal, 123:3878.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Tensor fusion network for multimodal sentiment analysis",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Minghai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1114--1125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis. EMNLP, pages 1114-1125.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Memory fusion network for multi-view sequential learning",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Mazumder",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis Philippe Morency. 2018a. Memory fusion network for multi-view sequential learning. AAAI.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Multi-attention recurrent network for human communication comprehension",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Prateek",
"middle": [],
"last": "Vij",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, and Louis Philippe Morency. 2018b. Multi-attention recurrent network for human communication comprehension. AAAI.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Vanbriesen",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Edmund",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Minghai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "2236--2246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Paul Pu Liang, Jonathan Vanbriesen, Soujanya Poria, Edmund Tong, Erik Cambria, Minghai Chen, and Louis Philippe Morency. 2018c. Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph. ACL, pages 2236-2246.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Mosi: Multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Eli",
"middle": [],
"last": "Pincus",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Intelligent Systems",
"volume": "31",
"issue": "",
"pages": "82--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis Philippe Morency. 2016a. Mosi: Multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos. IEEE Intelligent Systems, 31(6):82-88.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Eli",
"middle": [],
"last": "Pincus",
"suffix": ""
},
{
"first": "Louis",
"middle": [
"Philippe"
],
"last": "Morency",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Intelligent Systems",
"volume": "31",
"issue": "6",
"pages": "82--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis Philippe Morency. 2016b. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems, 31(6):82-88.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Adadelta: An adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1212.5701v1"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. Adadelta: An adaptive learn- ing rate method. preprint, arXiv:1212.5701v1.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Schematic Diagram of our fusion strategy. Here the window size and stride are both set to 2.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "The Detailed Structure of HFFN. have n = k\u2212d s + 2 local portions. In practice, both d and s can be set freely (see Section 4.3.5).",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Influence of window size d and stride s.",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Models</td><td>Acc</td><td>F1 score</td></tr><tr><td>TFN</td><td>59.40</td><td>57.33</td></tr><tr><td>LMF</td><td>60.27</td><td>53.87</td></tr><tr><td>CHFusion</td><td>58.45</td><td>56.90</td></tr><tr><td>BC-LSTM</td><td>60.77</td><td>59.04</td></tr><tr><td>CAT-LSTM</td><td>60.72</td><td>58.83</td></tr><tr><td colspan=\"2\">HFFN(d, s = 2, 2) 60.37</td><td>59.07</td></tr></table>",
"num": null,
"text": "Performance on CMU-MOSI dataset."
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": ""
},
"TABREF4": {
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">CMU-MOSI</td><td colspan=\"2\">IEMOCAP</td></tr><tr><td>Methods</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td></tr><tr><td>L</td><td colspan=\"4\">78.59 78.52 81.46 81.54</td></tr><tr><td>A</td><td colspan=\"4\">48.14 48.30 38.08 38.17</td></tr><tr><td>V</td><td colspan=\"4\">56.97 57.48 34.52 29.15</td></tr><tr><td>L+A</td><td colspan=\"4\">78.06 78.29 80.38 80.60</td></tr><tr><td>L+V</td><td colspan=\"4\">79.39 79.38 80.05 80.26</td></tr><tr><td>A+V</td><td colspan=\"4\">55.17 55.76 55.17 55.79</td></tr><tr><td colspan=\"5\">L+A+V 80.19 80.34 82.37 82.42</td></tr></table>",
"num": null,
"text": "Performance of HFFN on IEMOCAP dataset. Here F1-means F1 score."
},
"TABREF5": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Unimodal, Bimodal and Trimodal Results of HFFN. Here, L, A and V denotes language, acoustic and visual modalities, respectively."
},
"TABREF7": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Comparison of Efficiency."
},
"TABREF9": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Discussion on LSTM Variants."
}
}
}
}