{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:04:51.579371Z"
},
"title": "A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis",
"authors": [
{
"first": "Jean-Benoit",
"middle": [],
"last": "Delbrouck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Lab University of Mons",
"location": {
"country": "Belgium"
}
},
"email": "jean-benoit.delbrouck@umons.ac.be"
},
{
"first": "No\u00e9",
"middle": [],
"last": "Tits",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Lab University of Mons",
"location": {
"country": "Belgium"
}
},
"email": "noe.tits@umons.ac.be"
},
{
"first": "Mathilde",
"middle": [],
"last": "Brousmiche",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Lab University of Mons",
"location": {
"country": "Belgium"
}
},
"email": "mathilde.brousmiche@umons.ac.be"
},
{
"first": "St\u00e9phane",
"middle": [],
"last": "Dupont",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Lab University of Mons",
"location": {
"country": "Belgium"
}
},
"email": "stephane.dupont@umons.ac.be"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Understanding expressed sentiment and emotions is crucial in human multimodal language. This paper describes a Transformer-based joint-encoding (TBJE) for the tasks of Emotion Recognition and Sentiment Analysis. In addition to using the Transformer architecture, our approach relies on a modular co-attention and a glimpse layer to jointly encode one or more modalities. The proposed solution has also been submitted to the ACL20: Second Grand-Challenge on Multimodal Language to be evaluated on the CMU-MOSEI dataset. The code to replicate the presented experiments is open-source. 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Understanding expressed sentiment and emotions is crucial in human multimodal language. This paper describes a Transformer-based joint-encoding (TBJE) for the tasks of Emotion Recognition and Sentiment Analysis. In addition to using the Transformer architecture, our approach relies on a modular co-attention and a glimpse layer to jointly encode one or more modalities. The proposed solution has also been submitted to the ACL20: Second Grand-Challenge on Multimodal Language to be evaluated on the CMU-MOSEI dataset. The code to replicate the presented experiments is open-source. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Predicting affective states from multimedia is a challenging task. The emotion recognition task has long been studied using different types of signals, typically audio, video and text. Deep Learning techniques allow the development of novel paradigms that use these different signals in one model to leverage joint information extraction from different sources. This paper aims to bring a solution based on ideas taken from Machine Translation (Transformers, Vaswani et al. (2017)) and Visual Question Answering (Modular co-attention, Yu et al. (2019)). Our contribution is not only very computationally efficient, it is also a viable solution for Sentiment Analysis and Emotion Recognition. Our results are comparable with, and sometimes surpass, the current state-of-the-art for both tasks on the CMU-MOSEI dataset (Zadeh et al., 2018b).",
"cite_spans": [
{
"start": 433,
"end": 469,
"text": "(Transformers, Vaswani et al. (2017)",
"ref_id": null
},
{
"start": 525,
"end": 541,
"text": "Yu et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 805,
"end": 826,
"text": "(Zadeh et al., 2018b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is structured as follows: in Section 2 we briefly review related work that has been evaluated on the MOSEI dataset; we then describe our model in Section 3; we explain how we extract modality features from raw videos in Section 4; and finally, we present the dataset used in our experiments and the corresponding results in Sections 5 and 6. 1 https://github.com/jbdel/MOSEI_UMONS",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Over the years, many creative solutions have been proposed by the research community in the fields of Sentiment Analysis and Emotion Recognition. In this section, we describe different models that have been evaluated on the CMU-MOSEI dataset. To the best of our knowledge, none of these approaches uses a Transformer-based solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The Memory Fusion Network (MFN, Zadeh et al. (2018a) ) synchronizes multimodal sequences using a multi-view gated memory that stores intraview and cross-view interactions through time.",
"cite_spans": [
{
"start": 26,
"end": 52,
"text": "(MFN, Zadeh et al. (2018a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Graph-MFN (Zadeh et al., 2018b) consists of a Dynamic Fusion Graph (DFG) built upon MFN. DFG is a fusion technique that tackles the nature of cross-modal dynamics in multimodal language. The fusion is a network that learns to model the n-modal interactions and can dynamically alter its structure to choose the proper fusion graph based on the importance of each n-modal dynamic during inference. Sahay et al. (2018) use a Tensor Fusion Network (TFN), i.e. an outer product of the modalities. This operation can be performed either on a whole sequence or frame by frame. The first option leads to an exponential increase of the feature space when modalities are added, which is computationally expensive. The second approach was thus preferred. They showed an improvement over an early fusion baseline.",
"cite_spans": [
{
"start": 10,
"end": 31,
"text": "(Zadeh et al., 2018b)",
"ref_id": "BIBREF14"
},
{
"start": 399,
"end": 418,
"text": "Sahay et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Recently, Shenoy and Sardana (2020) proposed Multilogue-Net, a solution based on a context-aware RNN for multi-modal Emotion Detection and Sentiment Analysis in conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "This section aims to describe the two model variants evaluated in our experiment: a monomodal variant and a multimodal variant. The monomodal variant is used to classify emotions and sentiments based solely on L (Linguistic), on V (Visual) or on A (Acoustic). The multimodal version is used for any combination of modalities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Our model is based on the Transformer model (Vaswani et al., 2017), an encoding architecture that fully eschews recurrence for sequence encoding and instead relies entirely on an attention mechanism and Feed-Forward Neural Networks (FFN) to draw global dependencies between input and output. The Transformer allows for significantly more parallelization than the Recurrent Neural Network (RNN), which generates a sequence of hidden states h_t as a function of the previous hidden state h_{t-1} and the input for position t.",
"cite_spans": [
{
"start": 44,
"end": 66,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The monomodal encoder is composed of a stack of B identical blocks, each with its own set of trainable parameters. Each block has two sub-layers. There is a residual connection around each of the two sub-layers, followed by layer normalization (Ba et al., 2016). The output of each sub-layer can be written as:",
"cite_spans": [
{
"start": 242,
"end": 259,
"text": "(Ba et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monomodal Transformer Encoding",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "LayerNorm(x + Sublayer(x))",
"eq_num": "(1)"
}
],
"section": "Monomodal Transformer Encoding",
"sec_num": "3.1"
},
{
"text": "where Sublayer(x) is the function implemented by the sub-layer itself. In traditional Transformers, the two sub-layers are respectively a multi-head self-attention mechanism and a simple Multi-Layer Perceptron (MLP). The attention mechanism consists of a Key K and a Query Q that interact to produce an attention map applied to a Context C:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monomodal Transformer Encoding",
"sec_num": "3.1"
},
{
"text": "Attention(Q, K, C) = softmax(QK^T / \u221ak) C (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monomodal Transformer Encoding",
"sec_num": "3.1"
},
{
"text": "In the case of self-attention, K, Q and C are the same input. If this input is of size N \u00d7 k, the operation QK^T results in a squared attention matrix containing the affinity between each pair of the N rows. The \u221ak term is a scaling factor. Multi-head attention (MHA) is the idea of stacking several self-attention heads attending to information from different representation sub-spaces at different positions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monomodal Transformer Encoding",
"sec_num": "3.1"
},
{
"text": "MHA(Q, K, C) = Concat(head_1, ..., head_h) W^O where head_i = Attention(Q W_i^Q, K W_i^K, C W_i^C) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monomodal Transformer Encoding",
"sec_num": "3.1"
},
{
"text": "A sub-space is defined as a slice of the feature dimension k. In the case of four heads, each slice is of size k/4. The idea is to produce different sets of attention weights for different feature sub-spaces. After encoding through the blocks, the output x can be used by a projection layer for classification. In Figure 1, x can be any modality feature as described in Section 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 320,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Monomodal Transformer Encoding",
"sec_num": "3.1"
},
{
"text": "The idea of a multimodal transformer consists of adding a dedicated transformer (Section 3.1) for each modality we work with. While our contribution follows this procedure, we also propose three ideas to enhance it: a joint-encoding, a modular co-attention (Yu et al., 2019) and a glimpse layer at the end of each block.",
"cite_spans": [
{
"start": 257,
"end": 273,
"text": "(Yu et al., 2019",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Transformer Encoding",
"sec_num": "3.2"
},
{
"text": "The modular co-attention consists of modulating the self-attention of a modality, let's call it y, by a primary modality x. To do so, we switch the key K and context C of the self-attention from y to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Transformer Encoding",
"sec_num": "3.2"
},
{
"text": "x. The operation QK^T results in an attention map that acts like an affinity matrix between the rows of the modality matrices x and y. This computed alignment is applied over the context C (now x) and finally we add the residual connection y. The following equation describes the new attention sub-layer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Transformer Encoding",
"sec_num": "3.2"
},
{
"text": "y = LayerNorm(y + MHA(y, x, x)) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Transformer Encoding",
"sec_num": "3.2"
},
{
"text": "In this scenario, for the operation QK^T to work, as well as the residual connection (the addition), the feature sizes of x and y must be equal. This can be adjusted with the different transformation matrices of the MHA module. Because the encoding is joint, each modality is encoded at the same time (i.e. we don't unroll the encoding blocks for one modality before moving on to another modality). This way, the MHA of modality y at block b attends to the representation of x at block b.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Transformer Encoding",
"sec_num": "3.2"
},
{
"text": "Finally, we add a last layer at the end of each modality block, called the glimpse layer, where the modality is projected into a new representation space. A glimpse layer consists of stacking G soft attention layers and stacking their outputs. Each soft attention is seen as a glimpse. Formally, we define the soft attention SoA_i with input matrix M \u2208 R^{N \u00d7 k} by an MLP and a weighted sum:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Transformer Encoding",
"sec_num": "3.2"
},
{
"text": "a_i = softmax(v_{a_i}(W_m M)); SoA_i(M) = m_i = \u2211_{j=0}^{N} a_{ij} M_j (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Transformer Encoding",
"sec_num": "3.2"
},
{
"text": "where W_m is a transformation matrix of size 2k \u00d7 k, v_{a_i} is of size 1 \u00d7 2k and m_i is a vector of size k. We can then define the glimpse mechanism for matrix M with glimpse size G_m as the stacking of all glimpses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Transformer Encoding",
"sec_num": "3.2"
},
{
"text": "G_M = Stacking(m_1, . . . , m_{G_m})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Transformer Encoding",
"sec_num": "3.2"
},
{
"text": "Note that the parameter W_m, whose role is to embed the matrix M in a higher dimension, is shared between all glimpses (this operation is therefore computed only once), while the set of vectors {v_{a_i}} computing the attention weights from this larger space is dedicated to each glimpse. In our contribution, we always choose G_m = N so that the sizes allow us to perform a final residual connection M = LayerNorm(M + G_M). Figure 2 depicts the encoding for two features where modality x modulates modality y. This encoding can be extended to any number of modalities by duplicating the architecture. In our case, it is always the linguistic modality that modulates the others.",
"cite_spans": [],
"ref_spans": [
{
"start": 432,
"end": 440,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Multimodal Transformer Encoding",
"sec_num": "3.2"
},
{
"text": "After all the Transformer blocks have been computed, each modality goes into a final glimpse layer of size 1. The result is therefore a single vector per modality. The vectors of each modality are summed element-wise; let's call the result of this sum s. It is then projected over the possible answers according to the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification layer",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y \u223c p = W_a(LayerNorm(s))",
"eq_num": "(6)"
}
],
"section": "Classification layer",
"sec_num": "3.3"
},
{
"text": "If there is only one modality, the sum operation is omitted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification layer",
"sec_num": "3.3"
},
{
"text": "This section explains how we pre-compute the features for each modality. These features are the inputs of the Transformer blocks. Note that feature extraction is performed independently for each example of the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature extractions",
"sec_num": "4"
},
{
"text": "Each utterance is tokenized and lowercased. We also remove special characters and punctuation. We build our vocabulary on the train-set and end up with 14,176 unique words. We embed each word in a vector of 300 dimensions using GloVe (Pennington et al., 2014). If a word from the validation or test-set is not present in our vocabulary, we replace it with the unknown token \"unk\".",
"cite_spans": [
{
"start": 252,
"end": 277,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic",
"sec_num": "4.1"
},
{
"text": "The acoustic part of the video signal mainly contains speech. Speech is used in conversations to communicate information with words, but it also carries a lot of non-linguistic information such as nonverbal expressions (laughs, breaths, sighs) and prosodic features (intonation, speaking rate). These are important cues for an emotion recognition task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic",
"sec_num": "4.2"
},
{
"text": "Acoustic features widely used in the speech processing field, such as F0, formants, MFCCs and spectral slopes, are handcrafted sets of high-level features that are useful when an interpretation is needed, but they generally discard a lot of information. Instead, we decided to use low-level features common in speech recognition and synthesis: mel-spectrograms. Since the breakthrough of deep learning systems, mel-spectrograms have become a suitable choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic",
"sec_num": "4.2"
},
{
"text": "The spectrum of a signal is obtained with Fourier analysis, which decomposes a signal into a sum of sinusoids. The amplitudes of the sinusoids constitute the amplitude spectrum. A spectrogram is the concatenation over time of the spectra of windows of the signal. A mel-spectrogram is a compressed version of the spectrogram that exploits the fact that the human ear is more sensitive to low frequencies than to high frequencies. This representation thus allocates more resolution to low frequencies than to high frequencies using mel filter banks. Mel-spectrograms are typically used as the audio representation in state-of-the-art text-to-speech systems (Tachibana et al., 2018), so we believe they are a good compromise between dimensionality and representational capacity.",
"cite_spans": [
{
"start": 608,
"end": 632,
"text": "(Tachibana et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic",
"sec_num": "4.2"
},
{
"text": "Our mel-spectrograms were extracted with the same procedure as in (Tachibana et al., 2018) using the librosa library (McFee et al., 2015) with 80 filter banks (the embedding size is therefore 80). A temporal reduction was then applied by selecting one frame out of every 16.",
"cite_spans": [
{
"start": 66,
"end": 90,
"text": "(Tachibana et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 104,
"end": 124,
"text": "(McFee et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic",
"sec_num": "4.2"
},
{
"text": "Inspired by the success of convolutional neural networks (CNNs) in different tasks, we chose to extract visual features with a pre-trained CNN. Current models for video classification use CNNs with 3D convolutional kernels to process the temporal information of the video together with the spatial information (Tran et al., 2015). 3D CNNs learn spatio-temporal features but are much more expensive than 2D CNNs and prone to overfitting. To reduce complexity, Tran et al. (2018) explicitly factorize the 3D convolution into two separate and successive operations, a 2D spatial convolution and a 1D temporal convolution. We chose this model, named R(2+1)D-152, to extract video features for the emotion recognition task. The model is pretrained on Sports-1M and Kinetics.",
"cite_spans": [
{
"start": 306,
"end": 325,
"text": "(Tran et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 460,
"end": 478,
"text": "Tran et al. (2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual",
"sec_num": "4.3"
},
{
"text": "The model takes as input a clip of 32 RGB frames of the video. Each frame is scaled to 128 \u00d7 171, and a window of size 112 \u00d7 112 is then cropped. The features are extracted by taking the output of the spatio-temporal pooling. The feature vector for the entire video is obtained by sliding a window of 32 RGB frames with a stride of 8 frames.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual",
"sec_num": "4.3"
},
{
"text": "We chose not to crop out the face region of the video and instead keep the entire image as input to the network. Indeed, the video is already centered on the person, and we expect that body movements, such as those of the hands, can be good indicators for the emotion recognition and sentiment analysis tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual",
"sec_num": "4.3"
},
{
"text": "We test our joint-encoding solution on a novel dataset for multimodal sentiment and emotion recognition called CMU-Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI, Zadeh et al. (2018b)). It consists of 23,453 annotated sentences from 1000 distinct speakers. Each sentence is annotated for sentiment on a [-3,3] scale from highly negative (-3) to highly positive (+3) and for emotion with 6 classes: happiness, sadness, anger, fear, disgust, surprise. In the scope of our experiment, the emotions are either present or not present (binary classification), but two emotions can be present at the same time, making it a multi-label problem. (Table 1: Results on the test-set. Note that the F1-scores for emotions are weighted to be consistent with the previous state-of-the-art. Also, we do not compare accuracies for emotions, as previous works use a weighted variant while we use standard accuracy. G-MFN is the Graph-MFN model and Mu-Net is the Multilogue-Net model.) Figure 3 shows the distribution of sentiment and emotions in the CMU-MOSEI dataset. The distribution shows a natural skew towards more frequently expressed emotions. The most common category is happiness, with more than 12,000 positive sample points. The least prevalent emotion is fear, with almost 1,900 positive samples. It also shows a slight shift in favor of positive sentiment.",
"cite_spans": [
{
"start": 178,
"end": 198,
"text": "Zadeh et al. (2018b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 514,
"end": 521,
"text": "Table 1",
"ref_id": null
},
{
"start": 985,
"end": 993,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5"
},
{
"text": "In this section, we report the results of our model variants described in Section 3. We first explain our experimental setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "We train our models using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e-4 and a mini-batch size of 32. If the accuracy score on the validation set does not increase for a given epoch, we apply a learning-rate decay of factor 0.2. We decay our learning rate up to 2 times; afterwards, we use early stopping with a patience of 3 epochs. Results presented in this paper are the averaged predictions of 5 models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental settings",
"sec_num": "6.1"
},
{
"text": "Unless stated otherwise, we use 6 Transformer blocks of hidden-size 512, regardless of the modality encoded. The self-attention has 4 heads and the MLP has one hidden layer of size 1024. We apply a dropout of 0.1 on the output of each block (equation 4) and of 0.5 on the input of the classification layer (s in equation 6). For the acoustic and visual features, we truncate the features for spatial dimensions above 40. We also use that number for the number of glimpses. This choice is made based on Figure 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 500,
"end": 508,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Experimental settings",
"sec_num": "6.1"
},
{
"text": "Table 1 shows the scores of our different modality combinations. We do not compare accuracies for emotions with previous works, as they used a weighted accuracy variant while we use standard accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 11,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "We notice that our L+A (linguistic + acoustic) model is the best. Unfortunately, adding the visual input did not improve the results, showing that it is still the most difficult modality to integrate into a multimodal pipeline. For the sentiment task, the improvement is more tangible for the 7-class setting, showing that our L+A model learns better representations for more complex classification problems than our monomodal model L using only the linguistic input. We also surpass the previous state-of-the-art for this task. For the emotions, we can see that Multilogue-Net gives better predictions for some classes, such as happy, sad, angry and disgust. We postulate that this is because Multilogue-Net is a context-aware method, while our model does not take the previous or next sentence into account to predict the current utterance. This might affect our accuracy and F1-score on the emotion task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "The following Table 2 depicts the results of our solution submitted to the Second Grand-Challenge on Multimodal Language. It has been evaluated on the private test-fold released for the challenge and can serve as a baseline for future research. Note that in this table, the F1-scores are unweighted, as future results should be for a fair comparison and interpretation of the results.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 21,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "We presented a computationally efficient and robust model for Sentiment Analysis and Emotion Recognition, evaluated on CMU-MOSEI. Though we showed strong accuracy results, there is still a lot of room for improvement on the F1-scores, especially for the emotion classes that are less present in the dataset. To the best of our knowledge, the results presented by our Transformer-based joint-encoding are the strongest scores for the sentiment task on this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "7"
},
{
"text": "The following list identifies other features we computed as input for our model that led to weaker performance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "7"
},
{
"text": "\u2022 We tried the OpenFace 2.0 features (Baltrusaitis et al., 2018). This strategy computes facial landmarks; the features are specialized for facial behavior analysis;",
"cite_spans": [
{
"start": 37,
"end": 64,
"text": "(Baltrusaitis et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "7"
},
{
"text": "\u2022 We tried a simple 2D CNN named DenseNet (Huang et al., 2017) . For each frame of the video, a feature vector is extracted by taking the output of the average pooling layer;",
"cite_spans": [
{
"start": 42,
"end": 62,
"text": "(Huang et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "7"
},
{
"text": "\u2022 We tried different values for the number of mel filter banks (512 and 1024) and for the temporal reduction (1, 2, 4 and 8 frames); we also tried to use the full spectrogram;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "7"
},
{
"text": "\u2022 We tried not using the GloVe embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "7"
},
{
"text": "No\u00e9 Tits is funded through a FRIA grant (Fonds pour la Formation \u00e0 la Recherche dans l'Industrie et l'Agriculture, Belgium).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "8"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Openface 2.0: Facial behavior analysis toolkit",
"authors": [
{
"first": "Tadas",
"middle": [],
"last": "Baltrusaitis",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Yao Chong",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "13th IEEE International Conference on Automatic Face & Gesture Recognition",
"volume": "",
"issue": "",
"pages": "59--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tadas Baltrusaitis, Amir Zadeh, Yao Chong Lim, and Louis-Philippe Morency. 2018. Openface 2.0: Facial behavior analysis toolkit. In 13th IEEE International Conference on Automatic Face & Gesture Recognition, pages 59-66. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Densely connected convolutional networks",
"authors": [
{
"first": "Gao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhuang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Kilian Q",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4700--4708",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. 2017. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700-4708.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "librosa: Audio and music signal analysis in python",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Mcfee",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Dawen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"P",
"W"
],
"last": "Ellis",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "McVicar",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Battenberg",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Nieto",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 14th python in science conference",
"volume": "",
"issue": "",
"pages": "18--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian McFee, Colin Raffel, Dawen Liang, Daniel PW Ellis, Matt McVicar, Eric Battenberg, and Oriol Nieto. 2015. librosa: Audio and music signal analysis in python. In Proceedings of the 14th python in science conference, pages 18-25.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532- 1543.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multimodal relational tensor network for sentiment and emotion classification",
"authors": [
{
"first": "Saurav",
"middle": [],
"last": "Sahay",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Shachi",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Lama",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nachman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)",
"volume": "",
"issue": "",
"pages": "20--27",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3303"
]
},
"num": null,
"urls": [],
"raw_text": "Saurav Sahay, Shachi H Kumar, Rui Xia, Jonathan Huang, and Lama Nachman. 2018. Multimodal relational tensor network for sentiment and emo- tion classification. In Proceedings of Grand Chal- lenge and Workshop on Human Multimodal Lan- guage (Challenge-HML), pages 20-27, Melbourne, Australia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multiloguenet: A context aware rnn for multi-modal emotion detection and sentiment analysis in conversation",
"authors": [
{
"first": "Aman",
"middle": [],
"last": "Shenoy",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sardana",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aman Shenoy and Ashish Sardana. 2020. Multilogue- net: A context aware rnn for multi-modal emotion detection and sentiment analysis in conversation.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Efficiently trainable text-tospeech system based on deep convolutional networks with guided attention",
"authors": [
{
"first": "Hideyuki",
"middle": [],
"last": "Tachibana",
"suffix": ""
},
{
"first": "Katsuya",
"middle": [],
"last": "Uenoyama",
"suffix": ""
},
{
"first": "Shunsuke",
"middle": [],
"last": "Aihara",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4784--4788",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideyuki Tachibana, Katsuya Uenoyama, and Shun- suke Aihara. 2018. Efficiently trainable text-to- speech system based on deep convolutional net- works with guided attention. In 2018 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4784-4788. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning spatiotemporal features with 3d convolutional networks",
"authors": [
{
"first": "Du",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Lubomir",
"middle": [],
"last": "Bourdev",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Torresani",
"suffix": ""
},
{
"first": "Manohar",
"middle": [],
"last": "Paluri",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "4489--4497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Tor- resani, and Manohar Paluri. 2015. Learning spa- tiotemporal features with 3d convolutional networks. In Proceedings of the IEEE International Confer- ence on Computer Vision, pages 4489-4497.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A closer look at spatiotemporal convolutions for action recognition",
"authors": [
{
"first": "Du",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Torresani",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Ray",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Manohar",
"middle": [],
"last": "Paluri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "6450--6459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri. 2018. A closer look at spatiotemporal convolutions for action recog- nition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6450-6459.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deep modular co-attention networks for visual question answering",
"authors": [
{
"first": "Zhou",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Dacheng",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Tian",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "6281--6290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. 2019. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6281-6290.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Soujanya Poria, Erik Cambria, and Louis-Philippe Morency",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Mazumder",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018a. Memory fusion network for multi- view sequential learning. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph",
"authors": [
{
"first": "Amirali",
"middle": [],
"last": "Zadeh",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2236--2246",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1208"
]
},
"num": null,
"urls": [],
"raw_text": "AmirAli Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018b. Mul- timodal language analysis in the wild: CMU- MOSEI dataset and interpretable dynamic fusion graph. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2236-2246, Melbourne, Australia. Association for Computational Linguis- tics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Monomodal Transformer encoder.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Multimodal Transformer Encoder for two modalities with joint-encoding.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "MOSEI statistics, taken from the author's paper.",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "Temporal dimension (i.e. rows in our feature matrices) for the acoustic and visual modality.",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "7-class sentiment accuracy according to the number of blocks per Transformer.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>Test set</td><td colspan=\"2\">Sentiment</td><td/><td/><td/><td/><td/><td colspan=\"2\">Emotions</td><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">2-class 7-class</td><td colspan=\"2\">Happy</td><td/><td>Sad</td><td colspan=\"2\">Angry</td><td colspan=\"2\">Fear</td><td colspan=\"2\">Disgust</td><td colspan=\"2\">Surprise</td></tr><tr><td/><td>A</td><td>A</td><td>A</td><td>F1</td><td>A</td><td>F1</td><td>A</td><td>F1</td><td>A</td><td>F1</td><td>A</td><td>F1</td><td>A</td><td>F1</td></tr><tr><td>L+ A + V</td><td>81.5</td><td>44.4</td><td colspan=\"12\">65.0 64.0 72.0 67.9 81.6 74.7 89.1 84.0 85.9 83.6 90.5 86.1</td></tr><tr><td colspan=\"15\">L + A 66.0 L 82.4 45.5 81.9 44.2 64.5 63.4 72.9 65.8 81.4 75.3 89.1 84.0 86.6 84.5 90.5 81.4</td></tr><tr><td>Mu-Net</td><td>82.1</td><td>-</td><td>-</td><td>68.4</td><td>-</td><td>74.5</td><td>-</td><td>80.9</td><td>-</td><td>87.0</td><td>-</td><td>87.3</td><td>-</td><td>80.9</td></tr><tr><td>G-MFN</td><td>76.9</td><td>45.0</td><td>-</td><td>66.3</td><td>-</td><td>66.9</td><td>-</td><td>72.8</td><td>-</td><td>89.9</td><td>-</td><td>76.6</td><td>-</td><td>85.5</td></tr></table>",
"num": null,
"text": "65.5 73.9 67.9 81.9 76.0 89.2 87.2 86.5 84.5 90.6 86.1",
"html": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table/>",
"num": null,
"text": "Results on the private test-fold for 7-class sentiment problem and for each emotion. Accuracy is denoted by A. In this table, the F1-scores are unweighted, unlikeTable 1.",
"html": null,
"type_str": "table"
}
}
}
}