{
"paper_id": "K18-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:09:39.233554Z"
},
"title": "Multi-modal Sequence Fusion via Recursive Attention for Emotion Recognition",
"authors": [
{
"first": "Rory",
"middle": [],
"last": "Beard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Ritwik",
"middle": [],
"last": "Das",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Raymond",
"middle": [
"W M"
],
"last": "Ng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "P",
"middle": [
"G"
],
"last": "Keerthana Gopalakrishnan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Luka",
"middle": [],
"last": "Eerens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Pawel",
"middle": [],
"last": "Swietojanski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Miksik",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Emotech",
"middle": [],
"last": "Labs",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Natural human communication is nuanced and inherently multi-modal. Humans possess specialised sensoria for processing vocal, visual, and linguistic, and para-linguistic information, but form an intricately fused percept of the multi-modal data stream to provide a holistic representation. Analysis of emotional content in face-to-face communication is a cognitive task to which humans are particularly attuned, given its sociological importance, and poses a difficult challenge for machine emulation due to the subtlety and expressive variability of cross-modal cues. Inspired by the empirical success of recent so-called End-To-End Memory Networks (Sukhbaatar et al., 2015), we propose an approach based on recursive multi-attention with a shared external memory updated over multiple gated iterations of analysis. We evaluate our model across several large multimodal datasets and show that global contextualised memory with gated memory update can effectively achieve emotion recognition.",
"pdf_parse": {
"paper_id": "K18-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "Natural human communication is nuanced and inherently multi-modal. Humans possess specialised sensoria for processing vocal, visual, and linguistic, and para-linguistic information, but form an intricately fused percept of the multi-modal data stream to provide a holistic representation. Analysis of emotional content in face-to-face communication is a cognitive task to which humans are particularly attuned, given its sociological importance, and poses a difficult challenge for machine emulation due to the subtlety and expressive variability of cross-modal cues. Inspired by the empirical success of recent so-called End-To-End Memory Networks (Sukhbaatar et al., 2015), we propose an approach based on recursive multi-attention with a shared external memory updated over multiple gated iterations of analysis. We evaluate our model across several large multimodal datasets and show that global contextualised memory with gated memory update can effectively achieve emotion recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Multi-modal sequential data pose interesting challenges for learning machines that seek to derive representations. This constitutes an increasingly relevant sub-field of multi-view learning (Ngiam et al., 2011; Baltrusaitis et al., 2017) . Examples of such modalities include visual, audio and textual data. Uni-modal observations are typically complementary to each other and hence they can reveal a fuller and more context-rich picture with better generalisation ability when used together. Through its complementary perspective, each view can unburden sub-modules specific to another modality of some of its modelling onus, which might otherwise learn implicit hidden * Equal contribution.",
"cite_spans": [
{
"start": 190,
"end": 210,
"text": "(Ngiam et al., 2011;",
"ref_id": "BIBREF13"
},
{
"start": 211,
"end": 237,
"text": "Baltrusaitis et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "causes that are over-fitted to training data idiosyncrasies in order to explain the training labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, multi-modal data introduces many difficulties to model designing and training due to the distinct inherent dynamics of each modality. For instance, combining modalities with different temporal resolution is an open problem. Other challenges include deciding where and how modalities are combined, leveraging the weak discriminative power of training label and the presence of variability and noise or dealing with complex situations such as modelling the emotion of sarcasm, where cues among modalities contradict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we address multi-modal sequence fusion for automatic emotion recognition. We believe, that a strong model should enable: (i) Specialisation of modality-specific submodules exploiting the inherent properties of its data stream, tapping into the mode-specific dynamics and characteristic patterns. (ii) Weak (soft) data alignment dividing heterogeneous sequences into segments with co-occuring events across modalities without alignment to a common time axis. This overcomes limitations of hard alignments which often introduce spurious modelling assumptions and data inefficiencies (e.g. re-sampling) which must be performed again from scratch if views are added or removed. (iii) Information exchange for both view-specific information and statistical strength for learning shared representations. (iv) Scalability of the approach to many modalities using (a) parallelisable computation over modalities, and (b) a parameter set size growing (at most) linearly with the number of modalities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the present work, we detail a recursively attentive modelling approach. Our model fulfills the desiderata above and performs multiple sweeps of globally-contextualised analysis so that one modality-specific representation cues the at-tention of the next and vice-versa. We evaluate our approach on three large-scale multi-modal datasets to verify its suitability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most approaches to multi-modal analysis (Ngiam et al., 2011) focus on designing feature representations, co-learning mechanisms to transfer information between modalities, and fusion techniques to perform a prediction or classification. These models typically perform either \"early\" (input data are concatenated and pushed through a common model) or \"late\" (outputs of the last layer are combined together through linear or non-linear weighting) fusion. In contrast, our model does not fall into any of these categories directly as it is \"iterative\" in the sense that there are multiple fusions per decision, with an evolving belief state -the memory. In addition to that, our model is also \"active\" since feature extraction from one modality can influence the nature of the feature extraction from another modality in the next time step via the shared memory.",
"cite_spans": [
{
"start": 40,
"end": 60,
"text": "(Ngiam et al., 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 Multi-modal analysis",
"sec_num": "2"
},
{
"text": "For instance, Kim et al. (2013) used lowlevel hand crafted features such as pitch, energy and mel-frequency filter banks (MFBs) capturing prosodic and spectral acoustic information and Facial Animation Parameters (FAP) describing the movement of face using distances between facial landmarks. In contrast, our model allows for an end-to-end training of feature representation. Zhang et al. (2017) learnt motion cues in videos using 3D-CNNs from both spatial and temporal dimensions. They performed deep multi-modal fusion using a deep belief network that learnt non-linear relations across modalities and then used a linear SVM to classify emotions. Similarly, Vielzeuf et al. (2017) explored VGG-LSTM and 3DCNN-LSTM architectures and introduced a weighted score to prioritise the most relevant windows during learning. In our approach, exchange of information between different modalities is not limited to the last layer of the model, but due to memory component, each modality can influence every other in the following time steps.",
"cite_spans": [
{
"start": 14,
"end": 31,
"text": "Kim et al. (2013)",
"ref_id": "BIBREF10"
},
{
"start": 377,
"end": 396,
"text": "Zhang et al. (2017)",
"ref_id": "BIBREF26"
},
{
"start": 661,
"end": 683,
"text": "Vielzeuf et al. (2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 Multi-modal analysis",
"sec_num": "2"
},
{
"text": "Co-training and co-regularisation approaches of multi-view learning (Xu et al., 2013; Sindhwani and Niyogi, 2005) seek to leverage unlabelled data via a semi-supervised loss that encodes a consensus and complementarity principles. The for-mer encodes the assertion that predictions made be each view-specific learner should largely agree, and the latter encodes the assumption that each view contains useful information that is hidden from others, until exchange of information is allowed to occur.",
"cite_spans": [
{
"start": 68,
"end": 85,
"text": "(Xu et al., 2013;",
"ref_id": "BIBREF23"
},
{
"start": 86,
"end": 113,
"text": "Sindhwani and Niyogi, 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 Multi-modal analysis",
"sec_num": "2"
},
{
"text": "End-To-End Memory Networks (Sukhbaatar et al., 2015) represent a fully differentiable alternative to the strong supervision-dependent Memory Networks (Weston, 2017) . To bolster attention-based recurrent approaches to language modelling and question answering, they introduced a mechanism performing multiple hops of updates to a \"memory\" representation to provide context for next sweep of attention computation.",
"cite_spans": [
{
"start": 27,
"end": 52,
"text": "(Sukhbaatar et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 150,
"end": 164,
"text": "(Weston, 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Networks",
"sec_num": "2.2"
},
{
"text": "Dynamic Memory Networks (DMN) (Xiong et al., 2016) integrate an attention mechanism with a memory module and multi-modal bilinear pooling to combine features across views and predict attention over images for visual question answering task. Nam et al. (2017) iterated on this design to allow the memory update mechanism to reason over previous dual-attention outputs, instead of forgetting this information, in the subsequent sweep. The present work extends the multiattention framework to leverage neural-based information flow control by dynamically routing it with neural gating mechanisms.",
"cite_spans": [
{
"start": 30,
"end": 50,
"text": "(Xiong et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 241,
"end": 258,
"text": "Nam et al. (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Networks",
"sec_num": "2.2"
},
{
"text": "The very recent work (Zadeh et al., 2018a ) also approaches multi-view learning with recourse to a system of recurrent encoders and attention mediated by global memory fusion. However, fusion takes place at the encoder cell level, requires hard alignment, and is performed online in one sweep so it cannot be informed by upstream context. The analysis window of the global memory is limited to the current and previous cell memories of each LSTM encoder, whereas our approach abstracts the shared memory update dynamics away from the ties of the encoding dynamics. Therefore our approach enables post-fusion and retrospective reanalysis of the entire cell memory history of all encoders at each analysis iteration.",
"cite_spans": [
{
"start": 21,
"end": 41,
"text": "(Zadeh et al., 2018a",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Networks",
"sec_num": "2.2"
},
{
"text": "Our approach is tailored to videos of single speakers, each divided into segments that roughly span one uttered sentence. We treat each segment as an independent datum constituting an individual multi-modal event with its own annotation, such that there is no temporal dependence across any two segments. In the following exposition, each of the various mechanisms we describe (encoding, attention, fusion, and memory update) act on each segment in isolation of all others. We will use the terms \"view\" and \"modality\" interchangeably.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Recurrent Neural Networks",
"sec_num": "3"
},
{
"text": "We refer to our recursively attentive analysis model as a Recursive Recurrent Neural Network (RRNN) since it resembles an RNN, but the hidden state and the next cell input are coupled in a recursion. At each step of the cell update there is no new incoming information; rather the same original inputs are re-weighted by a new attention query to form the new cell inputs (see discussion in Section 3.5 for more details).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Recurrent Neural Networks",
"sec_num": "3"
},
{
"text": "The major modelling assumption herein, is that a single, independent recurrent encoding of each segment of each modality is sufficient to capture a range of semantic representations that can be tapped by several shared external memory queries. Each memory query is formed in a separate stage of an iterated analysis over the recurrent codes. Concretely, modality-specific attentionweighted summaries (a (\u03c4 ) , v (\u03c4 ) , t (\u03c4 ) ) at analysis iteration \u03c4 contribute to the update of a shared dense memory/context vector m (\u03c4 ) , which in turn serves as a differentiable attention query at iteration \u03c4 + 1 (cf. Fig. 1 ). This provides a recursive mechanism for sharing information within and across sequences, so the recurrent representations of one view can be revisited in light of cross-modal cues gleaned from previous sweeps of other views. This is an efficient alternative to reencoding each view on every sweep, and is more modular and generalisable than routing information across views at the recurrent cell level.",
"cite_spans": [
{
"start": 403,
"end": 407,
"text": "(\u03c4 )",
"ref_id": null
},
{
"start": 519,
"end": 523,
"text": "(\u03c4 )",
"ref_id": null
}
],
"ref_spans": [
{
"start": 607,
"end": 613,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Independent recurrent encoding",
"sec_num": "3.1"
},
{
"text": "For each multi-modal sequence segment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Independent recurrent encoding",
"sec_num": "3.1"
},
{
"text": "x n = {x n a , x n v , x n t },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Independent recurrent encoding",
"sec_num": "3.1"
},
{
"text": "a view-specific encoding is realised via a set of independent bi-directional LSTMs (Hochreiter and Schmidhuber, 1997) ",
"cite_spans": [
{
"start": 83,
"end": 117,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Independent recurrent encoding",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", run over segments n \u2208 [1, N ]: h f wd s [n, k s ] = LST M (x n s [k s ], h f wd s [n, k s \u2212 1]) (1) h bwd s [n, k s ] = LST M (x n s [k s ], h bwd s [n, k s + 1]) (2) h s [n, k s ] = [h f wd s [n, k s ]; h bwd s [n, k s ]]",
"eq_num": "(3)"
}
],
"section": "Independent recurrent encoding",
"sec_num": "3.1"
},
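Eqs. (1)-(3) describe an independent bi-directional LSTM run over each view of one segment. The following is a minimal numpy sketch of that encoding step, not the authors' implementation: the weight shapes, initialisation, and the fact that both views share one parameter set here are illustrative simplifications (the paper uses view-specific encoders).

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate pre-activations stacked as [input, forget, output, candidate]."""
    D = h.shape[0]
    z = W @ x + U @ h + b                       # (4D,) pre-activations
    i = 1 / (1 + np.exp(-z[:D]))                # input gate
    f = 1 / (1 + np.exp(-z[D:2*D]))             # forget gate
    o = 1 / (1 + np.exp(-z[2*D:3*D]))           # output gate
    g = np.tanh(z[3*D:])                        # candidate cell content
    c = f * c + i * g
    return o * np.tanh(c), c

def encode_bidirectional(x_seq, params):
    """Eqs (1)-(3): forward and backward passes over one segment of one view,
    concatenated per step: h_s[k] = [h_fwd[k]; h_bwd[k]]."""
    W, U, b = params
    D = U.shape[1]
    K = len(x_seq)
    h_fwd, h_bwd = np.zeros((K, D)), np.zeros((K, D))
    h = c = np.zeros(D)
    for k in range(K):                          # eq (1): left-to-right sweep
        h, c = lstm_step(x_seq[k], h, c, W, U, b)
        h_fwd[k] = h
    h = c = np.zeros(D)
    for k in reversed(range(K)):                # eq (2): right-to-left sweep
        h, c = lstm_step(x_seq[k], h, c, W, U, b)
        h_bwd[k] = h
    return np.concatenate([h_fwd, h_bwd], axis=1)  # eq (3): shape (K, 2D)

rng = np.random.default_rng(0)
D_in, D = 8, 5                                  # illustrative sizes only
params = (rng.standard_normal((4*D, D_in)) * 0.1,
          rng.standard_normal((4*D, D)) * 0.1,
          np.zeros(4*D))
# each view keeps its own number of steps K_s; no hard alignment is needed
H_audio = encode_bidirectional(rng.standard_normal((12, D_in)), params)
H_text  = encode_bidirectional(rng.standard_normal((7, D_in)), params)
print(H_audio.shape, H_text.shape)
```

Note that the two views produce encodings of different lengths (12 vs. 7 steps), matching the paper's point that each modality preserves its inherent time-scale.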
{
"text": "Here, s \u2208 {a, v, t} denotes respectively audio, vi- Figure 1 : Schematic overview of the proposed neural architecture. Shared memory m \u03c4 is updated with with the contextualised embeddings from a \u03c4 , v \u03c4 and t \u03c4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 60,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Independent recurrent encoding",
"sec_num": "3.1"
},
{
"text": "sual and textual modalities, and k s \u2208 {1, ..., K s } are view-specific state indices. The number of recurrent steps is view-specific (i.e. K a = K v = K t ) and is governed by the feature representation and sampling rate for the given view, e.g. number of word (embeddings) in a the text contained within a time-stamped segment. This is in contrast to Zadeh et al. (2018a) , where the information in different views was grounded to a common time axis or the number of steps in an early stage, either via up-sampling or downsampling. Thus the extracted representations in our approach preserve the inherent time-scales of each modality and avoid the need for hard alignment, satisfying desiderata (i) and (ii) outlined in Section 1. Note that the input sequences x (n) s may refer to either raw or pre-processed data (see Section 4 for details). In the remainder, we drop the segment id n to reduce notational clutter.",
"cite_spans": [
{
"start": 353,
"end": 373,
"text": "Zadeh et al. (2018a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Independent recurrent encoding",
"sec_num": "3.1"
},
{
"text": "We used a view-specific attention-based weighting mechanism to compute a contextualised embedding c s for a view s. Encoder output h s is stacked along time to form matrices H s \u2208 R (D\u00d7Ks) . A shared dense memory m (\u03c4 =0) is initialised by summing the time-average of the H s across three modalities. M (\u03c4 ) is then constructed by repeating the shared memory, m (\u03c4 ) , K s times such that it has the same size as the corresponding context H s , i.e. H s , M \u2208 R (D\u00d7Ks) . An alignment function then scores how well H s and M (\u03c4 ) are matched",
"cite_spans": [
{
"start": 182,
"end": 188,
"text": "(D\u00d7Ks)",
"ref_id": null
},
{
"start": 462,
"end": 468,
"text": "(D\u00d7Ks)",
"ref_id": null
},
{
"start": 524,
"end": 528,
"text": "(\u03c4 )",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 (\u03c4 ) s = align(H s , M (\u03c4 ) ).",
"eq_num": "(4)"
}
],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "The alignment mechanism entails a feedforward neural network with H s and M (\u03c4 ) as inputs. A softmax is applied on the network output to derive the attention strength \u03b1. This architecture resembles that in ; concretely",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R (\u03c4 ) = tanh W (\u03c4 ) s1 H s tanh W (\u03c4 ) s2 M (\u03c4 ) , (5) \u03b1 (\u03c4 ) s = w (\u03c4 ) s3 T R (\u03c4 ) ,",
"eq_num": "(6)"
}
],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 (\u03c4 ) s [k s ] =\u03b1 (\u03c4 ) s [k s ] l\u03b1 (\u03c4 ) s [l] .",
"eq_num": "(7)"
}
],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
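The alignment and attention steps of eqs. (4)-(7), followed by the convex combination of eq. (8), can be sketched in a few lines of numpy. This is a hypothetical illustration under stated assumptions, not the authors' code: the multiplicative (elementwise) combination of the two tanh terms follows the text's reference to Nam et al. (2017), and all dimensions and weights are invented for the example.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def contextualised_attention(H, m, W1, W2, w3):
    """Eqs (4)-(8): score each time step of one view's encoding H (D x K)
    against the shared memory m (D,), then take the convex combination."""
    K = H.shape[1]
    M = np.tile(m[:, None], (1, K))           # repeat memory K_s times
    R = np.tanh(W1 @ H) * np.tanh(W2 @ M)     # eq (5), multiplicative combine
    alpha_tilde = w3 @ R                      # eq (6), un-normalised scores
    alpha = softmax(alpha_tilde)              # eq (7), normalise over time
    c = H @ alpha                             # eq (8), contextualised embedding
    return c, alpha

rng = np.random.default_rng(1)
D, K = 6, 9                                   # illustrative sizes only
H = rng.standard_normal((D, K))
m = rng.standard_normal(D)
W1, W2 = rng.standard_normal((D, D)), rng.standard_normal((D, D))
w3 = rng.standard_normal(D)
c, alpha = contextualised_attention(H, m, W1, W2, w3)
print(c.shape, round(alpha.sum(), 6))
```

The attention weights form a proper convex combination: they are non-negative and sum to one across the K_s steps of the view, so c lies in the convex hull of the encoder outputs.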
{
"text": "In Eq. 5, W",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "s (where s \u2208 {s1, s2}) are square or fat matrices in the first layer of the alignment network, containing parameters governing the selfinfluence within view s and influence from the shared memory M. For the majority of our experiments, we used the multiplicative method of Nam et al. (2017) to combine the two activation terms, but similar results were also obtained with the concatenative approach of . In eq. 6",
"cite_spans": [
{
"start": 273,
"end": 290,
"text": "Nam et al. (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": ", w (\u03c4 ) s3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "T is a vector projecting an un-normalised attention weight R onto an alignment vector\u03b1, which has the same dimensions as K s . Finally, eq. (7) applies the softmax operation along the time step k s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "Parameters W s1 , W s2 , w s3 for deriving attention strength \u03b1 s are in general distinct parameters for each memory update step, \u03c4 . However, they could also be tied across steps. In the standard attention schemes, attention weight \u03b1 s is a vector spanning across K s . Note, that w (\u03c4 ) s3 in eq. (6) could be replaced by a matrix-form W (\u03c4 ) s3 to produce a multi-head attention weight (Vaswani et al., 2017) . Alternatively, the transposition of network inputs can be performed such that attention scales each dimension, D, instead of each time step k. This can be seen as a variant of key-value attention (Daniluk et al., 2017) , where the values differ from their keys by a linear transformation with weights governed by the alignment scores.",
"cite_spans": [
{
"start": 389,
"end": 411,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 610,
"end": 632,
"text": "(Daniluk et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "Each globally-contextualised view representation c s is defined as the convex combination of the view-specific encoder outputs weighted by attention strength",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c (\u03c4 +1) s = k \u03b1 (\u03c4 ) s [k s ]h s [k s ].",
"eq_num": "(8)"
}
],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "3.3 Shared memory update",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "The previous section described how the current shared memory state is used to modulate the attention-based re-analysis of the (encoded) inputs. Here we detail how the outcome of the reanalysis is used to update the shared memory state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "In contrast to the memory update employed in Nam et al. (2017) , our approach includes a set of coupled gating mechanisms outlined below, and depicted schematically in Fig. 2 :",
"cite_spans": [
{
"start": 45,
"end": 62,
"text": "Nam et al. (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 168,
"end": 174,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g (\u03c4 ) w = \u03c3 W wm m (\u03c4 \u22121) +W ww w (\u03c4 ) + b w (9) g (\u03c4 ) c = \u03c3 W cm m (\u03c4 \u22121) +W cw w (\u03c4 ) + b c (10) g (\u03c4 ) s = \u03c3 W sm m (\u03c4 \u22121) +W ss c (\u03c4 ) s + b s \u2200s \u2208 {a, v, t}",
"eq_num": "(11)"
}
],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u (\u03c4 ) = tanh W um m (\u03c4 \u22121) + W uw g (\u03c4 ) w w (\u03c4 ) + b u (12) m (\u03c4 ) = 1 \u2212 g (\u03c4 ) c m (\u03c4 \u22121) + g (\u03c4 ) c u (\u03c4 ) ,",
"eq_num": "(13)"
}
],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "w (\u03c4 ) = [a (\u03c4 ) ; v (\u03c4 ) ; t (\u03c4 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
{
"text": "], m (0) = 0 and \u03c3() denotes an element-wise sigmoid non-linearity. The function of the view context gate defined in eq. (9) and invoked in eq. (12), is to block corrupted or uninformative view segments from influencing the proposed shared memory update content, u (\u03c4 ) . The attention mechanism, outlined in eq. (5)-(7), cannot fulfill this task alone since the full attention divided over a view segment must sum to 1 even if no part of that segment is pertinent/salient. The utility of this gating will be empirically demonstrated in noise-injection experiments in Section 5. The new memory content u (\u03c4 ) is written to the memory state according to eq. (12), subject to the action of the memory update gate defined in eq. (10). This update gate determines how much of the past global information should be passed on to contextualise subsequent stages of re-analysis. If parameters W s1 , W s2 , w s3 are untied across each re-analysis step, this update gate additionally accommodates short-cut or \"highway\" routing (Srivastava et al., 2015) of regression error gradients from the end of the multi-hop procedure back through the parameters of the earlier attention sweeps. ",
"cite_spans": [
{
"start": 265,
"end": 269,
"text": "(\u03c4 )",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Globally-contextualised attention",
"sec_num": "3.2"
},
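The gated update of eqs. (9), (10), (12) and (13) can be sketched as below. This is a rough numpy illustration under assumptions, not the authors' code: weight names, sizes, and initialisation are invented, the gated combination of g_w with w is taken to be elementwise, and the per-view gates g_s of eq. (11) are omitted since the excerpted update path uses only g_w and g_c.

```python
import numpy as np

sigmoid = lambda z: 1 / (1 + np.exp(-z))

def memory_update(m_prev, contexts, P):
    """Eqs (9)-(13): gate the concatenated view contexts and blend the
    proposed content into the shared memory. P holds the weight matrices."""
    w = np.concatenate(contexts)                                      # w = [a; v; t]
    g_w = sigmoid(P['Wwm'] @ m_prev + P['Www'] @ w + P['bw'])         # eq (9), view context gate
    g_c = sigmoid(P['Wcm'] @ m_prev + P['Wcw'] @ w + P['bc'])         # eq (10), memory update gate
    u = np.tanh(P['Wum'] @ m_prev + P['Wuw'] @ (g_w * w) + P['bu'])   # eq (12), proposed content
    return (1 - g_c) * m_prev + g_c * u                               # eq (13), gated write

rng = np.random.default_rng(2)
D = 4                       # memory size; each view context is D-dim here
P = {'Wwm': rng.standard_normal((3*D, D)), 'Www': rng.standard_normal((3*D, 3*D)),
     'bw': np.zeros(3*D),
     'Wcm': rng.standard_normal((D, D)),   'Wcw': rng.standard_normal((D, 3*D)),
     'bc': np.zeros(D),
     'Wum': rng.standard_normal((D, D)),   'Wuw': rng.standard_normal((D, 3*D)),
     'bu': np.zeros(D)}
a, v, t = (rng.standard_normal(D) for _ in range(3))
m = memory_update(np.zeros(D), (a, v, t), P)    # first hop: m^(0) = 0
print(m.shape)
```

Because g_w acts on the inputs rather than on m, a near-zero gate for a corrupted view segment simply excludes its context from u without disturbing the carried-over memory.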
{
"text": "After \u03c4 iterations of fusion and re-analysis, the resulting memory state m (\u03c4 ) is passed through a final fully-connected layer to yield the output corresponding to a particular task (regression predictions or logits in case of classification). In our experiments we found that increasing \u03c4 yields meaningful performance gains (up to \u03c4 = 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Projection",
"sec_num": "3.4"
},
{
"text": "The proposed gated memory update corresponds to maintaining an external recurrent cell memory that is recurrent in the consecutive analysis hops, \u03c4 , rather than the actual time-steps of the given modality, k s . This allows the relevant memories of older hops to persist for use in the subsequent analysis hops.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive RNN: another perspective",
"sec_num": "3.5"
},
{
"text": "The memory update equations (9)-(13) strongly resemble the GRU cell update ; we treat concatenated view context vectors as the GRUs inputs, one at each analysis hop, \u03c4 . When viewed as a recurrent encoding of inputs {h s }, we refer to this architecture as a recursive recurrent neural net (RRNN), due to the recursive relationship between the cell's recurrent state and the attention-based re-weighting of the inputs. From this perspective, the attention mechanism forms a sub-component of the RRNN cell.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive RNN: another perspective",
"sec_num": "3.5"
},
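The recursion described above, in which the same encoded inputs are re-attended on every hop under a memory-driven query, has the following control-flow skeleton. It is a deliberately simplified sketch, not the paper's model: a dot-product score stands in for the learned align() network, and a fixed 0.5 blend replaces the learned gates, so only the hop structure is faithful.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rrnn_hops(views, n_hops=3):
    """Recursion skeleton: the same encoded inputs are re-attended on every
    hop, queried by the evolving shared memory (simplified internals)."""
    m = sum(H.mean(axis=1) for H in views)      # m^(0): summed time-averages
    for _ in range(n_hops):
        contexts = []
        for H in views:                         # modality-parallel re-analysis
            alpha = softmax(H.T @ m)            # dot-product stand-in for align()
            contexts.append(H @ alpha)          # convex combination, eq (8)
        u = np.tanh(sum(contexts))              # proposed update (gating omitted)
        m = 0.5 * m + 0.5 * u                   # fixed-gate stand-in for eq (13)
    return m                                    # fed to the final projection layer

rng = np.random.default_rng(3)
D = 5
views = [rng.standard_normal((D, K)) for K in (12, 9, 7)]  # K_a != K_v != K_t
m = rrnn_hops(views, n_hops=3)
print(m.shape)
```

The loop makes the "recursive" structure concrete: no new data enters after encoding; each hop only re-weights the fixed encoder outputs under the updated memory, matching the paper's reported sweet spot of about three hops.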
{
"text": "The key distinction from a typical GRU cell is that the reset or relevance gate g w in a GRU typically gates the recurrent state (m (\u03c4 ) in our case), whereas we use it to gate the input, allowing for uninformative view contexts to be excluded from the memory update. Gating the recurrent state is essential for avoiding vanishing gradients over long sequences, which is not such a concern for our recursion lengths of \u2248 3. One could of course reinstate the gating of the recurrent state, should recursions grow to more appreciable lengths. A further distinction is that here the GRU \"inputs\" (view contexts {a (\u03c4 ) , v (\u03c4 ) , t (\u03c4 ) } in our case) are computed online as the memory state recurs, unlike the standard case where they are data or preextracted features available before the RNN begins to operate. Figure 3 depicts 2 consecutive RRNN cells, illustrating the recycling of the same cell inputs. Figure 2 shows the details of a single cell, which subsumes the globally-contextualised attention mechanism detailed in Section 3.2.",
"cite_spans": [
{
"start": 132,
"end": 136,
"text": "(\u03c4 )",
"ref_id": null
},
{
"start": 611,
"end": 615,
"text": "(\u03c4 )",
"ref_id": null
},
{
"start": 620,
"end": 624,
"text": "(\u03c4 )",
"ref_id": null
}
],
"ref_spans": [
{
"start": 811,
"end": 819,
"text": "Figure 3",
"ref_id": null
},
{
"start": 906,
"end": 914,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Recursive RNN: another perspective",
"sec_num": "3.5"
},
{
"text": "Datasets. We evaluated our approach on CREMA-D (Cao et al., 2014) , RAVDESS (Livingstone and Russo, 2012) and CMU-MOSEI (Zadeh et al., 2018b) datasets for multimodal emotion analysis. The first two datasets provide audio and visual modalities while CMU-MOSEI adds also text transcriptions. The CREMA-D dataset contains \u223c7400 clips of 91 actors covering 6 emotions. The RAVDESS is a speech and song database comprising of \u223c7300 files of 24 actors covering 8 emotional classes (including two canonical classes for \"neutral\" and \"calm\"). The CMU-MOSEI dataset consists of \u223c3300 long clips segmented into \u223c23000 short clips. In addition to audio and visual data, it contains also text transcriptions allowing evaluation of tri-modal models.",
"cite_spans": [
{
"start": 47,
"end": 65,
"text": "(Cao et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 120,
"end": 141,
"text": "(Zadeh et al., 2018b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "These datasets are annotated by a continuousvalued vector corresponding to multi-class emotion labels. The ground-truth labels were generated by multiple human transcribers with score normalisation and agreement analysis. For further details, refer to respective references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "Test conditions and baselines. Since each dataset consists of different emotion classification schema, we trained and evaluated all models separately for each of them. The training was performed in an end-to-end manner with L2 loss defined over multi-class emotion labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "To establish a baseline, we evaluated a naive classifier predicting the test-set empirical mean intensities (with MSE loss function) for each output regression dimension. Similar baselines were obtained for other loss functions by training a model with just one parameter per output dimension on that loss, where the model has an access to the training labels but not the training inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "Evaluation. For CREMA-D and RAVDESS, we report the accuracy scores as these datasets contain labels for multiclass classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "For CMU-MOSEI, we report the result of the 6-way emotion recognition. Recursive models as described in Sec. 3 predicted the 6-dimensional emotion vectors. Their values represent the emotion intensity of the six emotion classes and are continuous-valued. Following Zadeh et al. (2018b) , these predictions were evaluated against the reference emotions using the criteria of mean square error (MSE) and mean absolute error (MAE), summing across 6 classes. In addition, an acceptance threshold 0.1 was set for each dimension/emotion, and weighted accuracy (Tong et al., 2017) was computed.",
"cite_spans": [
{
"start": 264,
"end": 284,
"text": "Zadeh et al. (2018b)",
"ref_id": "BIBREF25"
},
{
"start": 553,
"end": 572,
"text": "(Tong et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
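The evaluation described above can be sketched in a few lines. The exact form of weighted accuracy in Tong et al. (2017) is taken here as the mean of the true-positive and true-negative rates after binarising at the 0.1 acceptance threshold; treat that reading, and all names below, as assumptions rather than the authors' implementation.

```python
import numpy as np

def mosei_emotion_scores(pred, ref, threshold=0.1):
    """Sketch of the CMU-MOSEI evaluation: MSE and MAE summed across the
    6 emotion dimensions, plus a weighted accuracy after binarising each
    dimension at the acceptance threshold (0.1).

    pred, ref: (n_clips, 6) continuous emotion intensities.
    """
    err = pred - ref
    mse = np.mean(np.sum(err ** 2, axis=1))         # squared error, summed over classes
    mae = np.mean(np.sum(np.abs(err), axis=1))      # absolute error, summed over classes
    p, r = pred >= threshold, ref >= threshold      # binarise at the threshold
    pos, neg = r.sum(), (~r).sum()
    tpr = (p & r).sum() / max(pos, 1)               # true-positive rate
    tnr = (~p & ~r).sum() / max(neg, 1)             # true-negative rate
    return mse, mae, (tpr + tnr) / 2.0              # assumed weighted-accuracy form
```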
{
"text": "Complementary views across modality. All experiments in this paper use independent recurrent encoding (Sec. 3.1). The encoding scheme differs for every modality. COVAREP (Degottex et al., 2014) was used for the audio modality. OpenFace (Amos et al., 2016) and FACET (iMotion, 2017) were used for visual one and Glove (Pennington et al., 2014) was used for encoding the text features.",
"cite_spans": [
{
"start": 236,
"end": 255,
"text": "(Amos et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 260,
"end": 281,
"text": "FACET (iMotion, 2017)",
"ref_id": null
},
{
"start": 317,
"end": 342,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "Independent recurrent encoding used bidirectional view-specific encoders with 2\u00d7128 dimensional outputs on CREMA-D and RAVDESS and 2 \u00d7 512 on CMU-MOSEI. The complementary effects of multiple views from different modalities would be illustrated by controlling the available input views to different systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "Attention. Global contextualised attention (GCA) was implemented for the emotion recognition systems. Global and view-specific memory were projected to the alignment space (Eq. (5)). The attention weights were computed (Eq. (6)-Eq. (7)) and the contextual view representation was derived (Eq. (8)). For more details, refer to Sec. 3.2. The encoder-decoder used a 128 dimensional (or 512 for CMU-MOSEI) fully-connected layer. A final linear layer mapped the decoder output to multi-class targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
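The three steps of the attention mechanism (project to the alignment space, normalise the weights, form the contextual representation) can be sketched for a single view as follows. The projection parameters and the additive-then-tanh alignment form are illustrative assumptions, not the paper's exact Eq. (5)-(8).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_context_attention(H, m, Wh, Wm, w):
    """Globally-contextualised attention over one view's encoder states (a sketch).

    H  : (T, d_h) view-specific encoder state history
    m  : (d_m,)   shared global memory
    Wh, Wm, w : illustrative projection parameters to the alignment space
    Returns the attention weights over time and the contextual view representation.
    """
    align = np.tanh(H @ Wh + m @ Wm)   # project states and memory to alignment space
    scores = align @ w                 # (T,) unnormalised alignment scores
    alpha = softmax(scores)            # attention weights across time
    context = alpha @ H                # (d_h,) weighted contextual representation
    return alpha, context
```

Because the global memory m enters the alignment, each view's attention is conditioned on what has already been fused from all views in earlier recursions.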
{
"text": "GCA was compared to standard \"early\" and \"late\" fusion strategies. In early fusion, encoders Figure 4 : Visualisation of view-specific attention across time. Attention in the text modality focuses on the words \"very\" and \"delicate\" as cues for emotion recogntion. Also, the difference in oscillation rates between the audio and visual modalities is noted.",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 101,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "outputs across all views are resampled to their highest temporal resolution (i.e. audio, at 100Hz), and resulting (aligned) outputs are concatenated across views. We used similar encoder-decoder structure to one described in Sec 3.2 ( Fig. 1) , except that the three parallel blocks for modalities were reduced to one. In late fusion, the final-step encoder outputs from all modalities were independently processed by 1-layer feed-forward networks (Sec 3.4) and view-specific multi-class targets were combined using linear weighting.",
"cite_spans": [],
"ref_spans": [
{
"start": 235,
"end": 242,
"text": "Fig. 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
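The two baseline fusion strategies can be sketched directly. The nearest-neighbour resampling scheme and the function names below are illustrative assumptions; the paper specifies only resampling to the highest rate (audio, 100 Hz) for early fusion and linear weighting of per-view predictions for late fusion.

```python
import numpy as np

def early_fusion(views, rates):
    """Early fusion sketch: resample each view to the highest temporal rate
    by nearest-neighbour frame repetition, then concatenate feature-wise."""
    target = max(rates)
    resampled = []
    for X, r in zip(views, rates):
        idx = np.minimum(np.arange(int(len(X) * target / r)) * r // target,
                         len(X) - 1)
        resampled.append(X[idx.astype(int)])
    T = min(len(X) for X in resampled)               # trim to a common length
    return np.concatenate([X[:T] for X in resampled], axis=1)

def late_fusion(view_predictions, weights):
    """Late fusion sketch: linearly weight per-view multi-class predictions."""
    P = np.stack(view_predictions)                   # (n_views, n_classes)
    w = np.asarray(weights) / np.sum(weights)
    return w @ P
```

Neither baseline lets one view's relevance depend on the others at each time step, which is the gap the GCA mechanism addresses.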
{
"text": "Memory updates and ablation study. GCA was enhanced with the extra gating functions (cf. Eq. (9)-(13), Sec. 3.3) . The extended system was compared with the GCA system on CMU-MOSEI data. To this end, we perform an ablation study using the test data corrupted by additive Gaussian white noise added to the visual modality. Table 1 and 2 show the results of emotion recognition on the CREMA-D and RAVDESS dataset respectively. Audio, visual and the joint use of bi-modal information were compared using identification accuracy. Models trained on the visual modality consistently outperformed models that use solely audio data. Highest accuracy was achieved when the audio and visual modality were jointly modelled, giving 65% and 58.33% on the two datasets. Interestingly, the joint bimodal system outperformed human performance on CREMA-D (Cao et al., 2014 ) by 1.4%. On CMU-MOSEI, the errors between the reference and hypothesis six-dimensional emotion vectors were computed and the results were shown in Table 3 .",
"cite_spans": [
{
"start": 838,
"end": 855,
"text": "(Cao et al., 2014",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 322,
"end": 329,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1005,
"end": 1012,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "The use of visual modality resulted in the lowest mean square error (MSE). Meanwhile, when evaluated by mean absolute error (MAE) and weighted accuracy (WA), text modality gave the best performance. Basic techniques in combining information among modalities was not very effective, as indicated by the neglible gain in early and late fusion model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Globally contextualised attention (GCA) gave an MSE of 0.4696. Gating on global and viewspecific memory updates led to further improvements to 0.4691. The improvement in terms of MAE is even more significant (from 0.9412 to 0.8705). Figure 4 visualises the attention weights in different modalities on a CMU-MOSEI test sentence. The x-axis denotes time t and y-axis is the magnitude of attention \u03b1 s (t) in different views s \u2208 {a, v, t}. The transcribed text was added alongside the attention profile of the textual modality to align the attention weights with the recording. It can be seen that the GCA emotion recognition system was trained to attend dynamically to features of varying importance across the time, unlike systems performing early or late fusion. Attention weights of text modality show a clear jump for the words \"very\" and \"delicate\". The word \"very\", combined with an adjective, is often a strong cue to sentiment analysis, resulting in a spike in attention. The subject in this clip was speaking mostly in a neutral tone, with a nod and slight frowning towards the beginning of the sentence. This may correspond to the first peak in the attention trajectory of visual data. The weight of audio modality exhibited a higher oscillation rate compared to the counterpart on visual data. COVAREP features had 4\u00d7 higher temporal frequency than FACET. Finally, we verified contribution of the gating system to the GCA using the corrupted visual data. When the GCA system is used without the gating mechanism, corrupted data results in increased MSE (from 0.4696 to 0.5034) and MAE (from 0.9412 to 0.9920). This is in contrast to the full system with gating (GCA + Gating in Table 3 ). The system cancels the effects of additive visual noise, which is evidenced by the small gap in MSE (0.4691 vs 0.4742) and MAE (0.8705 vs 0.8857) between clean and noisy data.",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 241,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1688,
"end": 1695,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We have presented an approach for combining sequential, heterogeneous data. An external memory state is updated recursively, using globallycontextualised attention over a set of recurrent view-specific state histories. Our model was tested on the challenging tasks of emotion recognition from audio, visual, and textual data on three largescale datasets. The complementary effect of joint modelling of emotions using multi-modal data was consistently shown across experiments with multiple datasets. Importantly this approach eschews hard alignment of the data streams, allowing view-specific encoders to respect the inher-ent dynamics of its input sequence. Encoder state histories are fused into cross-modal features via an attention mechanism that is modulated by a shared, external memory. The control of information flow in this fusion is further enhanced by using a GRU-like gating mechanism, which can persist shared memory through multiple iterations while blocking corrupted or uninformative viewspecific features. In future study, it would be interesting to investigate more structured fusion operations such as sparse tensor multilinear maps (Benyounes et al., 2017) .",
"cite_spans": [
{
"start": 1153,
"end": 1177,
"text": "(Benyounes et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Openface: A generalpurpose face recognition library with mobile applications",
"authors": [
{
"first": "Brandon",
"middle": [],
"last": "Amos",
"suffix": ""
},
{
"first": "Bartosz",
"middle": [],
"last": "Ludwiczuk",
"suffix": ""
},
{
"first": "Mahadev",
"middle": [],
"last": "Satyanarayanan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brandon Amos, Bartosz Ludwiczuk, and Mahadev Satyanarayanan. 2016. Openface: A general- purpose face recognition library with mobile appli- cations. Technical report, CMU-CS-16-118, CMU School of Computer Science.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multimodal machine learning: A survey and taxonomy",
"authors": [
{
"first": "Tadas",
"middle": [],
"last": "Baltrusaitis",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Ahuja",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tadas Baltrusaitis, Chaitanya Ahuja, and Louis- Philippe Morency. 2017. Multimodal machine learning: A survey and taxonomy. CoRR, abs/1705.09406.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "MUTAN: multimodal tucker fusion for visual question answering",
"authors": [
{
"first": "Hedi",
"middle": [],
"last": "Ben-Younes",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Cad\u00e8ne",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Cord",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Thome",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hedi Ben-younes, R\u00e9mi Cad\u00e8ne, Matthieu Cord, and Nicolas Thome. 2017. MUTAN: multimodal tucker fusion for visual question answering. CoRR, abs/1705.06676.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "CREMA-D: Crowd-sourced emotional multimodal actors dataset",
"authors": [
{
"first": "Houwei",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "David",
"middle": [
"G"
],
"last": "Cooper",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"K"
],
"last": "Keutmann",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE Transactions on Affective Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Houwei Cao, David G. Cooper, and Michael K. Keut- mann. 2014. CREMA-D: Crowd-sourced emotional multimodal actors dataset. IEEE Transactions on Affective Computing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Alar G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bahdanau",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, \u00c7 alar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Frustratingly short attention spans in neural language modeling",
"authors": [
{
"first": "Michal",
"middle": [],
"last": "Daniluk",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michal Daniluk, Tim Rockt\u00e4schel, Johannes Welbl, and Sebastian Riedel. 2017. Frustratingly short at- tention spans in neural language modeling. CoRR, abs/1702.04521.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "COVAREP -a collaborative voice analysis repository for speech technologies",
"authors": [
{
"first": "Gilles",
"middle": [],
"last": "Degottex",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Kane",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Drugman",
"suffix": ""
},
{
"first": "Tuomo",
"middle": [],
"last": "Raitio",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Scherer",
"suffix": ""
}
],
"year": 2014,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gilles Degottex, John Kane, Thomas Drugman, Tuomo Raitio, and Stefan Scherer. 2014. COVAREP -a collaborative voice analysis repository for speech technologies. In ICASSP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Long shortterm memory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Hochreiter and J. Schmidhuber. 1997. Long short- term memory. In Neural Computation.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Facial expression analysis",
"authors": [],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "iMotion. 2017. Facial expression analysis.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep learning for robust feature generation in audiovisual emotion recognition",
"authors": [
{
"first": "Yelin",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"Mower"
],
"last": "Provost",
"suffix": ""
}
],
"year": 2013,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yelin Kim, Honglak Lee, and Emily Mower Provost. 2013. Deep learning for robust feature generation in audiovisual emotion recognition. In ICASSP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The ryerson audio-visual database of emotional speech and song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in north american english",
"authors": [
{
"first": "S",
"middle": [
"R"
],
"last": "Livingstone",
"suffix": ""
},
{
"first": "F",
"middle": [
"A"
],
"last": "Russo",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. R. Livingstone and F. A. Russo. 2012. The ryer- son audio-visual database of emotional speech and song (RAVDESS): A dynamic, multimodal set of fa- cial and vocal expressions in north american english. PloS one.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dual attention networks for multimodal reasoning and matching",
"authors": [
{
"first": "Hyeonseob",
"middle": [],
"last": "Nam",
"suffix": ""
},
{
"first": "Jung-Woo",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Jeonghee",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyeonseob Nam, Jung-Woo Ha, and Jeonghee Kim. 2017. Dual attention networks for multimodal rea- soning and matching. CVPR.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multimodal deep learning",
"authors": [
{
"first": "Jiquan",
"middle": [],
"last": "Ngiam",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Mingyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Juhan",
"middle": [],
"last": "Nam",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y. Ng. 2011. Mul- timodal deep learning. In ICML.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. EMNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A coregularized approach to semi-supervised learning with multiple views",
"authors": [
{
"first": "Vikas",
"middle": [],
"last": "Sindhwani",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Niyogi",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ICML Workshop on Learning with Multiple Views",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vikas Sindhwani and Partha Niyogi. 2005. A co- regularized approach to semi-supervised learning with multiple views. In Proceedings of the ICML Workshop on Learning with Multiple Views.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Training very deep networks",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Rupesh Kumar Srivastava",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Greff",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2015,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rupesh Kumar Srivastava, Klaus Greff, and J\u00fcrgen Schmidhuber. 2015. Training very deep networks. In NIPS.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "End-to-end memory networks",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, arthur szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In NIPS.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Combining human trafficking with multimodal deep models",
"authors": [
{
"first": "Edmund",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Cara",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edmund Tong, Amir Zadeh, Cara Jones, and Louis- Philippe Morency. 2017. Combining human traf- ficking with multimodal deep models. ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Temporal multimodal fusion for video emotion classification in the wild",
"authors": [
{
"first": "Valentin",
"middle": [],
"last": "Vielzeuf",
"suffix": ""
},
{
"first": "St\u00e9phane",
"middle": [],
"last": "Pateux",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Jurie",
"suffix": ""
}
],
"year": 2017,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin Vielzeuf, St\u00e9phane Pateux, and Fr\u00e9d\u00e9ric Jurie. 2017. Temporal multimodal fusion for video emotion classification in the wild. CoRR, abs/1709.07200.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Memory networks for recommendation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston. 2017. Memory networks for recommen- dation. In RecSys.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dynamic memory networks for visual and textual question answering",
"authors": [
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In ICML.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A survey on multi-view learning",
"authors": [
{
"first": "Chang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Dacheng",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang Xu, Dacheng Tao, and Chao Xu. 2013. A sur- vey on multi-view learning. CoRR, abs/1304.5634.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018a. Memory fusion network for multiview sequential learning",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Mazumder",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018a. Memory fusion network for multi- view sequential learning. CoRR, abs/1802.00927.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Vanbriesen",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Emdund",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Minghai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Paul Pu Liang, Jonathan Vanbriesen, Sou- janya Poria, Emdund Tong, Erik Cambria, Minghai Chen, and Louis-Philippe Morency. 2018b. Multi- modal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph. In ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning Affective Features with a Hybrid Deep Model for Audio-Visual Emotion Recognition",
"authors": [
{
"first": "Shiqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shiliang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Tian",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Transactions on Circuits and Systems for Video Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiqing Zhang, Shiliang Zhang, Tiejun Huang, Wen Gao, and Qi Tian. 2017. Learning Affective Fea- tures with a Hybrid Deep Model for Audio-Visual Emotion Recognition. IEEE Transactions on Cir- cuits and Systems for Video Technology.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "A detailed schematic of the proposed RRNN cell (left) and its legend (right). The routing above the dashed black line resembles that of a (non-recursive) GRU cell, where the concatenated attention output constitutes the cell's input. In this case, the cell's input at time \u03c4 is available only once the cell's state at time \u03c4 \u2212 1 has been computed. When the static representations {h a , h v , h t } are instead viewed as the cell's input, then the cell forms a recursive RNN, which subsumes the attention mechanism as a cell sub-component. Two consecutive cells of a Recursive Recurrent Neural Network. Note that the cells share a common input, in contrast with a typical RNN which has a separate input to each cell.",
"uris": null
},
"TABREF1": {
"num": null,
"text": "Results on the CREMA-D dataset across 8 emotions",
"content": "<table><tr><td>Modality</td><td>Feature</td><td colspan=\"3\">Encoder Attention Accuracy</td></tr><tr><td>Audio</td><td>COVAREP</td><td>LSTM</td><td>Nil</td><td>41.25</td></tr><tr><td>Vision</td><td>OpenFace</td><td>LSTM</td><td>Nil</td><td>52.08</td></tr><tr><td colspan=\"2\">Audio + Vision COVAREP, OpenFace</td><td>LSTM</td><td>GCA</td><td>58.33</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF2": {
"num": null,
"text": "Results on the RAVDESS dataset across 8 emotions for normal speech mode",
"content": "<table><tr><td/><td/><td/><td/><td>text</td></tr><tr><td/><td/><td/><td/><td>audio</td></tr><tr><td/><td/><td colspan=\"2\">very delicate</td><td>vision</td></tr><tr><td/><td/><td>a</td><td/></tr><tr><td/><td/><td>been</td><td/></tr><tr><td colspan=\"2\">finances</td><td>has</td><td>issue</td></tr><tr><td>I think</td><td colspan=\"2\">for some reasons</td><td>between</td><td>couples. Most</td></tr><tr><td>1</td><td colspan=\"3\">2 Time (seconds) 3</td><td>4</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF4": {
"num": null,
"text": "Results on CMU-MOSEI dataset",
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}