{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:12:38.773489Z"
},
"title": "All-in-One: A Deep Attentive Multi-task Learning Framework for Humour, Sarcasm, Offensive, Motivation, and Sentiment on Memes",
"authors": [
{
"first": "Dushyant",
"middle": [
"Singh"
],
"last": "Chauhan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Patna Patna",
"location": {
"postCode": "801106",
"settlement": "Bihar",
"country": "India"
}
},
"email": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Patna Patna",
"location": {
"postCode": "801106",
"settlement": "Bihar",
"country": "India"
}
},
"email": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Patna Patna",
"location": {
"postCode": "801106",
"settlement": "Bihar",
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we aim at learning the relationships and similarities of a variety of tasks, such as humour detection, sarcasm detection, offensive content detection, motivational content detection and sentiment analysis on a somewhat complicated form of information, i.e., memes. We propose a multi-task, multi-modal deep learning framework to solve multiple tasks simultaneously. For multi-tasking, we propose two attention-like mechanisms viz., Inter-task Relationship Module (iTRM) and Inter-class Relationship Module (iCRM). The main motivation of iTRM is to learn the relationship between the tasks to realize how they help each other. In contrast, iCRM develops relations between the different classes of tasks. Finally, representations from both the attentions are concatenated and shared across the five tasks (i.e., humour, sarcasm, offensive, motivational, and sentiment) for multi-tasking. We use the recently released dataset in the Memotion Analysis task @ SemEval 2020, which consists of memes annotated for the classes as mentioned above. Empirical results on Memotion dataset show the efficacy of our proposed approach over the existing state-of-theart systems (Baseline and SemEval 2020 winner). The evaluation also indicates that the proposed multi-task framework yields better performance over the single-task learning.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we aim at learning the relationships and similarities of a variety of tasks, such as humour detection, sarcasm detection, offensive content detection, motivational content detection and sentiment analysis on a somewhat complicated form of information, i.e., memes. We propose a multi-task, multi-modal deep learning framework to solve multiple tasks simultaneously. For multi-tasking, we propose two attention-like mechanisms viz., Inter-task Relationship Module (iTRM) and Inter-class Relationship Module (iCRM). The main motivation of iTRM is to learn the relationship between the tasks to realize how they help each other. In contrast, iCRM develops relations between the different classes of tasks. Finally, representations from both the attentions are concatenated and shared across the five tasks (i.e., humour, sarcasm, offensive, motivational, and sentiment) for multi-tasking. We use the recently released dataset in the Memotion Analysis task @ SemEval 2020, which consists of memes annotated for the classes as mentioned above. Empirical results on Memotion dataset show the efficacy of our proposed approach over the existing state-of-theart systems (Baseline and SemEval 2020 winner). The evaluation also indicates that the proposed multi-task framework yields better performance over the single-task learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The content and form of content shared on online social media platforms have changed rapidly over time. Currently, one of the most popular forms of media shared on such platforms is 'Memes'. According to its definition from Oxford Dictionary, a meme is a piece of data, often in the form of images, text or videos that carry cultural information through an imitable phenomenon with a mimicked theme, that is shared (sometimes with slight modification) rapidly by internet users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Every meme can be associated with five affect values, namely humour (Hu), sarcastic (Sar), offensive (Off), motivational (Mo), and sentiment (Sent). Hence, in a broad sense, memes can be categorized into four intersecting sets viz. humorous memes, sarcastic memes, offensive memes, and motivational memes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Humour refers to the quality of being amusing or comic. Formally, humour is defined as the nature of experiences to induce laughter and provide amusement. Humourous memes are the most popular and widely used on social media platforms. An example for humourous memes is shown in Figure 1a .",
"cite_spans": [],
"ref_spans": [
{
"start": 278,
"end": 287,
"text": "Figure 1a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sarcasm is often used to convey thinly veiled disapproval humorously. A sarcastic meme is a meme where an incongruity exists between the intended meaning and the way it is expressed. These are generally used to express dissatisfaction or to veil insult through humour. As we can see in Figure 1a , the person on the right is made fun of, without explicitly expressing it, which is a typical example of a sarcastic meme.",
"cite_spans": [
{
"start": 286,
"end": 295,
"text": "Figure 1a",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Offensive content include a lot of insulting, derogatory terms. It is contrary to the moral sense or good. As social media expands, offensive language has become a huge headache to maintain sanity on social media. As memes are growing to become more and more popular, detecting offensive memes on such platforms is becoming an important and challenging task. Motivation is derived from the word 'motive' which means needs or desires within the individuals. It is the process of stimulating people to actions to achieve their goals. By its definition, motivational memes are those that benefit a certain group of people to achieve their plans or goals. Motivation can be both either positive or negative. However, we usually consider motivation in a positive sense. Figure 1b is an excellent example for the positive motivation. Sentiment analysis refers to the process of computationally identifying and categorizing opinions expressed in a piece of communication, especially to determine whether the writer's attitude towards a particular topic, product, etc. is positive, negative, or neutral. This has been a very prominent and important task in Natural Language Processing. Sentiment analysis on memes refers to the task of systematically extracting its emotional tone in understanding the opinion expressed by the meme. Generally, specific labels of one task have a strong relation to the other labels of sarcasm, offensive, humour or motivational tasks. Through proper representation, training, and evaluation, these relations can be modelled to help each other for better classification. For example, in Figure 1b , just by seeing text, the meme can be either sarcastic or motivational, but the image in the meme confirms that this has an overall positive sentiment and hence motivational. Similarly, in Figure 1c , knowing that the meme is sarcastic and has a negative sentiment makes it highly probable to being offensive.",
"cite_spans": [],
"ref_spans": [
{
"start": 765,
"end": 774,
"text": "Figure 1b",
"ref_id": "FIGREF2"
},
{
"start": 1611,
"end": 1620,
"text": "Figure 1b",
"ref_id": "FIGREF2"
},
{
"start": 1811,
"end": 1820,
"text": "Figure 1c",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As seen above, humorous, motivational, offensive, and sarcastic nature of the memes are closely related. Thus, a multi-task learning framework would be extremely beneficial in such scenarios. In this paper, we exploit these relationships and similarities in the tasks of humour detection, sarcasm detection, offensive content detection, motivational content detection, and sentiment in a multi-task manner. The main contributions and/or attributes are as follows: (a). We propose a multi-task multimodal deep learning framework to leverage the util-ity of each task to help each other in a multi-task framework; (b). We propose two attention mechanisms viz. iTRM and iCRM to better understand the relationship between the tasks and between the classes of tasks, respectively; and (c). We present the state-of-the-art results for meme prediction in the multi-modal scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sentiment analysis and its related tasks, such as humour detection, sarcasm detection, and offensive content detection, are the topics of interest due to their needs in recent times. There has been a phenomenal growth in multi-modal information sources in social media, such as audio, video, and text. Multi-modal information analysis has attracted the attention of researchers and developers due to their complexity, and multi-tasking has been of keen interest in the field of affect analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Humour: Early feature-based models attempt to solve humour include the models based on word overlap with jokes, presence of ambiguity, and word overlap with common idioms (Sj\u00f6bergh and Araki, 2007) , human-centeredness, and negative polarity (Mihalcea and Pulman, 2007) . Some of the recent multi-modal approaches include utilizing information from the various modalities, such as acoustic, visual, and text, using deep learning models (Bertero and Fung, 2016; Yang et al., 2019; Swamy et al., 2020) . Yang et al. (2020) employs a paragraph decomposition technique coupled with fine-tuning BERT (Devlin et al., 2018) model for humour detection on three languages (Chinese, Spanish and Russian).",
"cite_spans": [
{
"start": 171,
"end": 197,
"text": "(Sj\u00f6bergh and Araki, 2007)",
"ref_id": "BIBREF20"
},
{
"start": 242,
"end": 269,
"text": "(Mihalcea and Pulman, 2007)",
"ref_id": "BIBREF16"
},
{
"start": 436,
"end": 460,
"text": "(Bertero and Fung, 2016;",
"ref_id": "BIBREF3"
},
{
"start": 461,
"end": 479,
"text": "Yang et al., 2019;",
"ref_id": "BIBREF25"
},
{
"start": 480,
"end": 499,
"text": "Swamy et al., 2020)",
"ref_id": null
},
{
"start": 502,
"end": 520,
"text": "Yang et al. (2020)",
"ref_id": "BIBREF24"
},
{
"start": 595,
"end": 616,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Sarcasm: Starting from the traditional approaches, such as rule-based methods (Veale and Hao, 2010) , lexical features (Carvalho et al., 2009) , and incongruity (Joshi et al., 2015) to all the way up to multi-modal deep learning techniques (Schi-fanella et al., 2016) , sarcasm detection has been showing its presence. Castro et al. (2019) created a multi-modal conversational dataset, MUStARD from the famous TV shows, and provided baseline SVM approaches for sarcasm detection. Recently, Chauhan et al. (2020) proposed a multi-task learning framework for multi-modal sarcasm, sentiment and emotion analysis to explore how sentiment and emotion helps sarcasm. The author used the MUS-tARD dataset and extended the MUStARD dataset with sentiment (implicit and explicit) and emotion (implicit and explicit) labels.",
"cite_spans": [
{
"start": 78,
"end": 99,
"text": "(Veale and Hao, 2010)",
"ref_id": "BIBREF23"
},
{
"start": 119,
"end": 142,
"text": "(Carvalho et al., 2009)",
"ref_id": "BIBREF4"
},
{
"start": 161,
"end": 181,
"text": "(Joshi et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 240,
"end": 267,
"text": "(Schi-fanella et al., 2016)",
"ref_id": null
},
{
"start": 319,
"end": 339,
"text": "Castro et al. (2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Offensive: Razavi et al. 2010used a threelevel classification model taking advantage of various features from statistical models and rulebased patterns and various dictionary-based features. Chen et al. (2012) proposed a feature-based Lexical Syntactic Feature (LSF) architecture to detect the offensive contents. Gomez et al. 2020created a multi-modal hate-speech dataset from Twitter (MMHS150K) to introduce a deep-learningbased multi-modal Textual Kernels Model (TKM) and compare it with various existing deep learning architectures on the proposed MMHS150K dataset.",
"cite_spans": [
{
"start": 191,
"end": 209,
"text": "Chen et al. (2012)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Motivation: Swieczkowska et al. 2020proposes a novel chaining method of neural networks for identifying motivational texts where the output from one model is passed on to the second model. Sentiment: An important task to leverage multimodality information effectively is to combine them using various strategies. Mai et al. (2019) employs a hierarchical feature fusion strategy, Divide, Conquer, and Combine for affective computing. Chauhan et al. (2019) uses the Inter-modal Interaction Module (IIM) to combine information from a pair of modalities for multi-modal sentiment and emotion analysis. Some of the other techniques include a contextual inter-modal attention based framework for multi-modal sentiment classification (Ghosal et al., 2018; .",
"cite_spans": [
{
"start": 313,
"end": 330,
"text": "Mai et al. (2019)",
"ref_id": "BIBREF15"
},
{
"start": 727,
"end": 748,
"text": "(Ghosal et al., 2018;",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Multi-task: Some of the early attempts to correlate the tasks like sarcasm, humour, and offensive statements include a features based classification using various syntactic and semantic features, such as frequency of words, the intensity of adverbs and adjectives, the gap between positive and negative terms, the structure of the sentence, synonyms and others (Barbieri and Saggion, 2014) . More recently, Badlani et al. (2019) proposed a convolution-based model to extract the embedding by fine-tuning the same for the tasks of sentiment, sarcasm, humour, and hate-speech and then concatenating these representations to be used in a sentiment classifier.",
"cite_spans": [
{
"start": 361,
"end": 389,
"text": "(Barbieri and Saggion, 2014)",
"ref_id": "BIBREF2"
},
{
"start": 407,
"end": 428,
"text": "Badlani et al. (2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In our current work, we propose a multi-task multi-modal deep learning framework to simultaneously solve the tasks of sarcasm, humour, offensive, and motivational on memes. Further, to the best of our knowledge, this is the very first attempt at solving the multi-modal affect analysis on memes in a multi-task deep learning framework. We demonstrate through a detailed empirical evaluation that a multi-task learning framework can improve the performance of individual tasks over a single task learning framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We propose an attention-based deep learning model to solve the problem of multi-task affect analysis of memes. The inputs to the model are the meme itself and the manually corrected text extracted through OCR. The overall architecture is depicted in Figure 2. The source code is available at http://www.",
"cite_spans": [],
"ref_spans": [
{
"start": 250,
"end": 256,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Methodology",
"sec_num": "3"
},
{
"text": "iitp.ac.in/\u02dcai-nlp-ml/resources.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Methodology",
"sec_num": "3"
},
{
"text": "We now describe the input features for our proposed model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Layer:",
"sec_num": "3.1"
},
{
"text": "Given N number of samples, where each sample is associated with meme image and the corresponding text. Let us assume, in each sample, there are n T number of words w 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Input",
"sec_num": "3.1.1"
},
{
"text": "n T = w 1 , ..., w n T , where w j \u2208 R d T , d T = 768",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Input",
"sec_num": "3.1.1"
},
{
"text": ", and w j is obtained using BERT (Devlin et al., 2018) . The maximum number of words for i th sample across the dataset is 189.",
"cite_spans": [
{
"start": 33,
"end": 54,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Input",
"sec_num": "3.1.1"
},
{
"text": "Image is the prime component of any meme and contains the majority of the information. To leverage this information effectively, feature vectors from average pooling layer (avgpool) of the Im-ageNet pre-trained ResNet-152 (He et al., 2016) image classification model are extracted. Each image is first pre-processed by resizing to 224 \u00d7 224 and then normalized. The extracted feature vector for image of i th sample is represented by V i \u2208 R dv and d v = 2048.",
"cite_spans": [
{
"start": 222,
"end": 239,
"text": "(He et al., 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Image Input",
"sec_num": "3.1.2"
},
{
"text": "These vectors are concatenated and then passed through a set of four dense layers to obtain the vectors of equal length d represented by where t is a task \u2208 {humour, sarcasm, offensive, motivational}. These vectors are then passed through the Inter-class Relationship Module and Inter-task Relationship module. The output is then concatenated and passed through another set of four dense layers, and a layer of softmax is applied to obtain the final output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Modules",
"sec_num": "3.2"
},
{
"text": "T V t \u2208 R d ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Modules",
"sec_num": "3.2"
},
{
"text": "This module is used to learn the relationship between the classes of all the tasks. This is done by passing T V t through another dense layer and softmax (confidence score). For each task, we first group all the classes into two classes for the hierarchical classification of the sample. At this level, the sample is labelled with either positive or negative for all the tasks. For instance, a sample will be labelled as either sarcastic or not sarcastic for sarcasm tasks. A loss is back-propagated using these confidence scores for the corresponding tasks. This is done in order to control each dense layer so that it aligns with the respective tasks. Meanwhile, a dot-product of the softmax scores of each task is obtained and used to form the Score Matrix. This is then flattened and passed forward.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-class Relationship Module",
"sec_num": "3.2.1"
},
{
"text": "While the above module is used to find the correlation between the individual classes, this module is used to find the relationship between the different tasks in the model. This is done by initially finding the cosine-similarity between T V t vectors. And a pooling layer is used to collect information between the tasks and then normalized by the corresponding cosine-similarity score. The output from the pooling layer is then flattened and passed forward.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-task Relationship Module",
"sec_num": "3.2.2"
},
{
"text": "The flattened vectors from iT RM and iCRM are concatenated and then branched into four dense layers for each task. This is then forwarded through a softmax layer to obtain the final output for each task, and the loss is back-propagated to learn the parameters. In this layer, the information from both iCRM and iTRM modules will be leveraged and used to predict the final outcome.Please note that, there are two sets of loss used in the model, one in the iCRM module and second at the end the of Output Unit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output Unit",
"sec_num": "3.3"
},
{
"text": "We perform experiments using the dataset released in the Memotion Analysis 1.0 @SemEval 2020 Task (Sharma et al., 2020) 1 . This dataset consists of 6992 samples. Each sample consists of an image, corrected text extracted from the meme, and the five labels associated with the five tasks, viz., Humour, Sarcasm, Offensive, Motivational, and Overall Sentiment. The distribution of the classes associated with each of the five tasks with label is shown in Table 1 and Table 2 . We address 5 multi-modal affective analysis problems, namely humour classification, sarcasm classification, offensive classification, motivational classification, and sentiment classification.",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(Sharma et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 454,
"end": 473,
"text": "Table 1 and Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "A. Humour classification: There are four classes associated with the humour task, namely not funny, funny, very funny, and hilarious, which are labelled as 0, 1, 2, and 3, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "B. Sarcasm classification: There are four classes associated with the sarcasm task, namely not sarcastic, general, twisted meaning, and very twisted which are labelled as 0, 1, 2, and 3 respectively. C. Offensive classification: There are four classes associated with the offensive task, namely not offensive, slight, very offensive, and hateful offensive which are labelled as 0, 1, 2, and 3, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "There are two classes associated with the motivational task, namely not motivational and motivational, which are labelled as 0 and 1, respectively. E. Sentiment classification: There are five classes associated with the sentiment task, namely very negative, negative, neutral, positive, and very positive, which are labelled as 0, 1, 2, 3, and 4, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D. Motivational classification:",
"sec_num": null
},
{
"text": "In accordance with the SemEval 2020 (Sharma et al., 2020) , the project is organized into three sets of tasks 2 .",
"cite_spans": [
{
"start": 36,
"end": 57,
"text": "(Sharma et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "\u2022 Task A: Sentiment Classification: In this task, memes are classified into 3 classes viz., -1 (negative, very negative), 0 (neutral) and +1 (positive, very positive).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "\u2022 Task B: Binary-class Classification: In this set of tasks, the memes are classified as follows (c.f. T-B in Table 2 );",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "1. Humour ( funny, very funny, hilarious) and Non-humour (not funny). 2. Sarcasm (general, twisted meaning, very twisted) and Non-sarcasm (non sarcastic) 3. Offensive (slight, very offensive, hateful offensive) and Non-Offensive (not offensive), and 4. Motivational (motivational) and Nonmotivational (not motivational).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "\u2022 Task C: Multi-class Classification: In this set of task, the original labels are used as described in the dataset (c.f. T-C in Table 2 ) for the tasks of Humour, Sarcasm, Offensive and Motivational.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "Please note that, in Task A, as it is not a multitask scenario, iCRM and iTRM are not applicable. For all the other sets of tasks, the entire network is shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 170,
"text": "Figure 2",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "We evaluate our proposed model on the multimodal Memotion dataset. We perform grid search to find the optimal hyper-parameters (c.f. Table 3 ). Though we aim for a generic hyper-parameter configuration for all the experiments, in some cases, a different choice of the parameter has a significant effect. Therefore, we choose different parameters for a different set of experiments. We implement our proposed model on the open source machine learning library PyTorch 3 . Hugging Face 4 library is used for BERT implementation. As the evaluation metric, we employ precision (P), recall (R), macro-F1 (M a -F1), and micro-F1 (M i -F1) for all the tasks i.e., humour, sarcasm, offensive, motivational, and sentiment. We use Adam as an optimizer, Softmax as a classifier, and the categorical cross-entropy as a loss function for all the tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 140,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "We evaluate our proposed architecture with bimodal inputs (i.e., text and visual). We show the obtained results for Task-A (i.e., sentiment analysis) in Table 4 . Task-B has four different tasks, i.e., humour, sarcasm, offensive, and sentiment with binary-class labels (c.f. binary-class classification in Section 5). The results are shown in Table 5 . Task-C has also four different tasks, i.e., humour, sarcasm, offensive, and sentiment with multi-class labels (c.f. multi-class classification in Section 5). The results are shown in Table 6 . Table 6 : Memes: Single-task vs Multi-task (Task C)",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 160,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 343,
"end": 350,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 536,
"end": 543,
"text": "Table 6",
"ref_id": null
},
{
"start": 546,
"end": 553,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "6"
},
{
"text": "In both the tasks B and C, we outline the comparison between the multi-task (MTL) and single-task (STL) learning frameworks in Table 5 and Table 6 . We observe that MTL shows better performance over the STL setups.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 146,
"text": "Table 5 and Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "6"
},
{
"text": "For the offensive task, we find that STL performs better than MTL. We hypothesize that this is due to the model getting confused between the offensive and sarcastic (or humorous) memes. From Table 9 , under Sarcasm, we can see that for the class V t , MTL predicts a few samples as sarcastic, whereas in actuality it belongs to the other classes. However, we can see a decrease in performance for class H o under Offensive. This is due to the lack of a larger dataset for the complex model to disambiguate the same. In the example, BRB...GOT TO TAKE CARE OF SOME SH*T IN UKRAIN (c.f. Figure 1d) , the actual set of labels are F n , G n , S g , N m . The predicted labels in STL are",
"cite_spans": [],
"ref_spans": [
{
"start": 191,
"end": 198,
"text": "Table 9",
"ref_id": "TABREF12"
},
{
"start": 584,
"end": 594,
"text": "Figure 1d)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "6"
},
{
"text": "V f , G n , S g , M o and in MTL are V f , T m , V o , M o .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "6"
},
{
"text": "This is supposed to be slightly offensive but got it confused with the sarcastic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "6"
},
{
"text": "We compare the results obtained in our proposed model against the baseline model and SemEval 2020 winner, which also made use of the same dataset. The comparative analysis is shown in Table 7 . Our proposed multi-modal framework achieves the best macro-F1 of 35.8% (0.4% \u2191) and micro-F1 of 50.6% (1.9% \u2191) as compared to macro-F1 of 35.4% and micro-F1 of 48.7% of the state-of-the-art system (i.e., SemEval 2020 Winner) for Task-A. Similarly, for Task-B, we obtain the macro-F1 of 53.5% (1.7% \u2191) and micro-F1 of 63.4% (2.0% \u2191) as compared to the macro-F1 of 51.8% and micro-F1 of 61.4% of the state-ofthe-art system, whereas for Task-C, we obtain the macro-F1 of 33.3% (1.1% \u2191) and micro-F1 of 41.9% (4.1% \u2191) as compared to the macro-F1 of 32.2% and micro-F1 of 37.8% of the state-of-theart system.",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 191,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparative Analysis",
"sec_num": "7"
},
{
"text": "It is evident from Table 5 and Table 6 that multitask learning framework successfully leverages the S y s t e m s Table 7 : Comparative Analysis of the proposed approach with recent state-of-the-art systems. Here, SE'20 denotes the SemEval 2020 winner, and 'Proposed' refers to the models described in the paper for the respective tasks. inter-dependence between all the tasks in improving the overall performance in comparison to the single-task learning. We also show the confusion matrices corresponding to each set of tasks in Table 8a, Table 8b , and Table 9 , respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 38,
"text": "Table 5 and Table 6",
"ref_id": "TABREF6"
},
{
"start": 114,
"end": 121,
"text": "Table 7",
"ref_id": null
},
{
"start": 531,
"end": 549,
"text": "Table 8a, Table 8b",
"ref_id": "TABREF10"
},
{
"start": 556,
"end": 563,
"text": "Table 9",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Comparative Analysis",
"sec_num": "7"
},
{
"text": "We perform error analysis on the predictions of our proposed model for Task-C. We take some utterances (c.f. Table 10) with the corresponding images (c.f. Figure 3), where we show that MTL predicts the correct labels while STL does not. We also present the attention heatmaps for iCRM and iTRM of the multi-task learning framework in Figure 4 and Figure 5, respectively. We take the fifth utterance from Table 10 (c.f. Figure 3e) to illustrate the heatmaps. For iCRM (c.f. Figure 4), there are six matrices which show the inter-dependency between humour and sarcasm (Hu-Sar), humour and offensive (Hu-Off), humour and motivational (Hu-Mo), sarcasm and offensive (Sar-Off), sarcasm and motivational (Sar-Mo), and offensive and motivational (Off-Mo), respectively, where the shading from light to dark indicates the amount of contribution in ascending order.",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 124,
"text": "Table 10",
"ref_id": null
},
{
"start": 158,
"end": 167,
"text": "Figure 3)",
"ref_id": "FIGREF8"
},
{
"start": 363,
"end": 371,
"text": "Figure 4",
"ref_id": "FIGREF9"
},
{
"start": 376,
"end": 384,
"text": "Figure 5",
"ref_id": "FIGREF10"
},
{
"start": 449,
"end": 459,
"text": "Figure 3e)",
"ref_id": "FIGREF8"
},
{
"start": 502,
"end": 511,
"text": "Figure 4)",
"ref_id": "FIGREF9"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "8"
},
{
"text": "The main objective of iCRM is to develop relationships between the classes of the tasks; Figure 4 shows the relationships it establishes between these classes. For predicting the fifth utterance in Table 10 correctly, the contributing class pairs include humour and not sarcasm (Figure 4a) and humour and not offensive (Figure 4b). Similarly, the main objective of iTRM is to develop relationships between the tasks. Figure 5 shows the established relationships, where the attention puts more weight on the sarcasm and offensive pair and less weight on the humour and sarcasm pair. It is clear from the definitions of sarcasm and humour (c.f. Section 1) that the two have very different meanings when used in a sentence, even though the surface sentence may look similar. Hence, sarcasm and humour are found not to help each other.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 4",
"ref_id": "FIGREF9"
},
{
"start": 219,
"end": 227,
"text": "Figure 4",
"ref_id": "FIGREF9"
},
{
"start": 278,
"end": 286,
"text": "Table 10",
"ref_id": null
},
{
"start": 388,
"end": 396,
"text": "Figure 5",
"ref_id": "FIGREF10"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "8"
},
{
"text": "In this paper, we have established the concept of learning effective inter-task and inter-class relationships for multi-modal affect analysis. We have proposed a deep attentive multi-task learning framework which helps to obtain highly effective inter-task and inter-class relationships. To capture the inter-dependence, we have proposed two attention-like mechanisms, viz., the Inter-task Relationship Module (iTRM) and the Inter-class Relationship Module (iCRM). The main motivation of iTRM is to learn the relationships between the tasks, i.e., which task helps the others. In contrast, iCRM develops relationships between the classes of the tasks. We have evaluated our proposed approach on the recently published Memotion dataset. Experimental results suggest the efficacy of the proposed model over the existing state-of-the-art systems (baseline and SemEval 2020 winner). The evaluation also shows that the proposed multi-task framework yields better performance than single-task learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "The dataset used for the experiments is relatively small for training an effective deep learning model and is heavily biased. Therefore, assembling a large and more balanced dataset with quality annotations is an important task. Moreover, memes are a complicated form of data that include both text and images which repeat across numerous memes (meme templates). Hence, learning quality representations of memes for affect analysis is a challenging direction for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "https://competitions.codalab.org/competitions/20629",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://competitions.codalab.org/competitions/20629#learn_the_details-task-labels-format",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pytorch.org/ 4 https://github.com/huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research reported here is partially supported by SkyMap Global India Private Limited. Dushyant Singh Chauhan acknowledges the support of the Prime Minister Research Fellowship (PMRF), Govt. of India. Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by the Visvesvaraya PhD Scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
},
{
"text": " Table 10 : Comparison between multi-task learning and single-task learning frameworks. A few error cases where the MTL framework performs better than the STL framework. not sarcasm (Figure 4a ), humour and not offensive (Figure 4b ",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 9,
"text": "Table 10",
"ref_id": null
},
{
"start": 175,
"end": 185,
"text": "(Figure 4a",
"ref_id": null
},
{
"start": 214,
"end": 224,
"text": "(Figure 4b",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multi-task learning for multimodal emotion recognition and sentiment analysis",
"authors": [
{
"first": "Dushyant",
"middle": [],
"last": "Md Shad Akhtar",
"suffix": ""
},
{
"first": "Deepanway",
"middle": [],
"last": "Chauhan",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Ghosal",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "370--379",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1034"
]
},
"num": null,
"urls": [],
"raw_text": "Md Shad Akhtar, Dushyant Chauhan, Deepanway Ghosal, Soujanya Poria, Asif Ekbal, and Pushpak Bhattacharyya. 2019. Multi-task learning for multi- modal emotion recognition and sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 370-379, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Disambiguating sentiment: An ensemble of humour, sarcasm, and hate speech features for sentiment classification",
"authors": [
{
"first": "Rohan",
"middle": [],
"last": "Badlani",
"suffix": ""
},
{
"first": "Nishit",
"middle": [],
"last": "Asnani",
"suffix": ""
},
{
"first": "Manan",
"middle": [],
"last": "Rai",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohan Badlani, Nishit Asnani, and Manan Rai. 2019. Disambiguating sentiment: An ensemble of humour, sarcasm, and hate speech features for sentiment clas- sification. W-NUT 2019, page 337.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic detection of irony and humour in twitter",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2014,
"venue": "ICCC",
"volume": "",
"issue": "",
"pages": "155--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri and Horacio Saggion. 2014. Auto- matic detection of irony and humour in twitter. In ICCC, pages 155-162.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multimodal deep neural nets for detecting humor in tv sitcoms",
"authors": [
{
"first": "Dario",
"middle": [],
"last": "Bertero",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "383--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dario Bertero and Pascale Fung. 2016. Multimodal deep neural nets for detecting humor in tv sitcoms. In 2016 IEEE Spoken Language Technology Work- shop (SLT), pages 383-390. IEEE.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Clues for detecting irony in user-generated contents: oh",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Carvalho",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [],
"last": "Sarmento",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "M\u00e1rio",
"suffix": ""
},
{
"first": "Eug\u00e9nio De",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Oliveira",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Carvalho, Lu\u00eds Sarmento, M\u00e1rio J Silva, and Eug\u00e9nio De Oliveira. 2009. Clues for detecting irony in user-generated contents: oh...",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "!! it's\" so easy",
"authors": [],
"year": null,
"venue": "Proceedings of the 1st international CIKM workshop on Topic-sentiment analysis for mass opinion",
"volume": "",
"issue": "",
"pages": "53--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "!! it's\" so easy\";-. In Proceedings of the 1st international CIKM workshop on Topic-sentiment analysis for mass opinion, pages 53-56.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards multimodal sarcasm detection (an obviously perfect paper)",
"authors": [
{
"first": "Santiago",
"middle": [],
"last": "Castro",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Ver\u00f3nica",
"middle": [],
"last": "P\u00e9rez-Rosas",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Zimmermann",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01815"
]
},
"num": null,
"urls": [],
"raw_text": "Santiago Castro, Devamanyu Hazarika, Ver\u00f3nica P\u00e9rez- Rosas, Roger Zimmermann, Rada Mihalcea, and Soujanya Poria. 2019. Towards multimodal sar- casm detection (an obviously perfect paper). arXiv preprint arXiv:1906.01815.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Contextaware interactive attention for multi-modal sentiment and emotion analysis",
"authors": [
{
"first": "Dushyant",
"middle": [],
"last": "Singh Chauhan",
"suffix": ""
},
{
"first": "Md",
"middle": [
"Shad"
],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5651--5661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dushyant Singh Chauhan, Md Shad Akhtar, Asif Ek- bal, and Pushpak Bhattacharyya. 2019. Context- aware interactive attention for multi-modal senti- ment and emotion analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5651-5661, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sentiment and emotion help sarcasm? a multi-task learning framework for multi-modal sarcasm, sentiment and emotion analysis",
"authors": [
{
"first": "Dushyant",
"middle": [],
"last": "Singh Chauhan",
"suffix": ""
},
{
"first": "S R",
"middle": [],
"last": "Dhanush",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4351--4360",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.401"
]
},
"num": null,
"urls": [],
"raw_text": "Dushyant Singh Chauhan, Dhanush S R, Asif Ekbal, and Pushpak Bhattacharyya. 2020. Sentiment and emotion help sarcasm? a multi-task learning frame- work for multi-modal sarcasm, sentiment and emo- tion analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 4351-4360, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Detecting offensive language in social media to protect adolescent online safety",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yilu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Sencun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2012,
"venue": "International Conference on Social Computing",
"volume": "",
"issue": "",
"pages": "71--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Chen, Yilu Zhou, Sencun Zhu, and Heng Xu. 2012. Detecting offensive language in social media to protect adolescent online safety. In 2012 Inter- national Conference on Privacy, Security, Risk and Trust and 2012 International Conference on Social Computing, pages 71-80. IEEE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Contextual inter-modal attention for multi-modal sentiment analysis",
"authors": [
{
"first": "Deepanway",
"middle": [],
"last": "Ghosal",
"suffix": ""
},
{
"first": "Shad",
"middle": [],
"last": "Md",
"suffix": ""
},
{
"first": "Dushyant",
"middle": [],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Singh Chauhan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Poria",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3454--3466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deepanway Ghosal, Md Shad Akhtar, Dushyant Singh Chauhan, Soujanya Poria, Asif Ekbal, and Pushpak Bhattacharyya. 2018. Contextual inter-modal atten- tion for multi-modal sentiment analysis. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3454-3466, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Exploring hate speech detection in multimodal publications",
"authors": [
{
"first": "Raul",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "Jaume",
"middle": [],
"last": "Gibert",
"suffix": ""
},
{
"first": "Lluis",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "Dimosthenis",
"middle": [],
"last": "Karatzas",
"suffix": ""
}
],
"year": 2020,
"venue": "The IEEE Winter Conference on Applications of Computer Vision",
"volume": "",
"issue": "",
"pages": "1470--1478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raul Gomez, Jaume Gibert, Lluis Gomez, and Dimos- thenis Karatzas. 2020. Exploring hate speech detec- tion in multimodal publications. In The IEEE Win- ter Conference on Applications of Computer Vision, pages 1470-1478.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Harnessing context incongruity for sarcasm detection",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Vinita",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "757--762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Vinita Sharma, and Pushpak Bhat- tacharyya. 2015. Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 2: Short Papers), pages 757-762.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Divide, conquer and combine: Hierarchical feature fusion network with local and global perspectives for multimodal affective computing",
"authors": [
{
"first": "Sijie",
"middle": [],
"last": "Mai",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Songlong",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "481--492",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sijie Mai, Haifeng Hu, and Songlong Xing. 2019. Di- vide, conquer and combine: Hierarchical feature fu- sion network with local and global perspectives for multimodal affective computing. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 481-492.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Characterizing humour: An exploration of features in humorous texts",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Pulman",
"suffix": ""
}
],
"year": 2007,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "337--347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Stephen Pulman. 2007. Charac- terizing humour: An exploration of features in hu- morous texts. In International Conference on Intelli- gent Text Processing and Computational Linguistics, pages 337-347. Springer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Offensive language detection using multi-level classification",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Amir H Razavi",
"suffix": ""
},
{
"first": "Sasha",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Uritsky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Matwin",
"suffix": ""
}
],
"year": 2010,
"venue": "Canadian Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "16--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir H Razavi, Diana Inkpen, Sasha Uritsky, and Stan Matwin. 2010. Offensive language detection using multi-level classification. In Canadian Conference on Artificial Intelligence, pages 16-27. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Detecting sarcasm in multimodal social platforms",
"authors": [
{
"first": "Rossano",
"middle": [],
"last": "Schifanella",
"suffix": ""
},
{
"first": "Paloma",
"middle": [],
"last": "De Juan",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Liangliang",
"middle": [],
"last": "Cao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 24th ACM international conference on Multimedia",
"volume": "",
"issue": "",
"pages": "1136--1145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rossano Schifanella, Paloma de Juan, Joel Tetreault, and Liangliang Cao. 2016. Detecting sarcasm in multimodal social platforms. In Proceedings of the 24th ACM international conference on Multimedia, pages 1136-1145.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "SemEval-2020 Task 8: Memotion Analysis -- The Visuo-Lingual Metaphor!",
"authors": [
{
"first": "Chhavi",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Deepesh",
"middle": [],
"last": "Bhageria",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Paka",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "P Y K L",
"middle": [],
"last": "Srinivas",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chhavi Sharma, Deepesh Bhageria, William Paka, Scott, Srinivas P Y K L, Amitava Das, Tanmoy Chakraborty, and Bj\u00f6rn Gamb\u00e4ck. 2020. SemEval- 2020 Task 8: Memotion Analysis-The Visuo- Lingual Metaphor! In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recognizing humor without recognizing meaning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Sj\u00f6bergh",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Araki",
"suffix": ""
}
],
"year": 2007,
"venue": "International Workshop on Fuzzy Logic and Applications",
"volume": "",
"issue": "",
"pages": "469--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Sj\u00f6bergh and Kenji Araki. 2007. Recognizing humor without recognizing meaning. In Interna- tional Workshop on Fuzzy Logic and Applications, pages 469-476. Springer.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "NIT-Agartala-NLP-Team at SemEval-2020 Task 8: Building Multimodal Classifiers to Tackle Internet Humor",
"authors": [
{
"first": "Steve",
"middle": [
"Durairaj"
],
"last": "Swamy",
"suffix": ""
},
{
"first": "Shubham",
"middle": [],
"last": "Laddha",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.06943"
]
},
"num": null,
"urls": [],
"raw_text": "Steve Durairaj Swamy, Shubham Laddha, Basil Abdus- salam, Debayan Datta, and Anupam Jamatia. 2020. Nit-agartala-nlp-team at semeval-2020 task 8: Build- ing multimodal classifiers to tackle internet humor. arXiv preprint arXiv:2005.06943.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Stepwise noise elimination for better motivational and advisory texts classification",
"authors": [
{
"first": "Patrycja",
"middle": [],
"last": "Swieczkowska",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Rzepka",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Araki",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Advanced Computational Intelligence and Intelligent Informatics",
"volume": "24",
"issue": "",
"pages": "156--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrycja Swieczkowska, Rafal Rzepka, and Kenji Araki. 2020. Stepwise noise elimination for better motivational and advisory texts classification. Jour- nal of Advanced Computational Intelligence and In- telligent Informatics, 24(1):156-168.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Detecting ironic intent in creative comparisons",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
},
{
"first": "Yanfen",
"middle": [],
"last": "Hao",
"suffix": ""
}
],
"year": 2010,
"venue": "ECAI",
"volume": "215",
"issue": "",
"pages": "765--770",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tony Veale and Yanfen Hao. 2010. Detecting ironic in- tent in creative comparisons. In ECAI, volume 215, pages 765-770.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Humor detection based on paragraph decomposition and bert fine-tuning",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Minghan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Shiliang",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Yang, Yao Deng, Minghan Wang, Ying Qin, and Shiliang Sun. 2020. Humor detection based on para- graph decomposition and bert fine-tuning.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multimodal indicators of humor in videos",
"authors": [
{
"first": "Zixiaofan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Ai",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)",
"volume": "",
"issue": "",
"pages": "538--543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zixiaofan Yang, Lin Ai, and Julia Hirschberg. 2019. Multimodal indicators of humor in videos. In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), pages 538-543. IEEE.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Figure 1a, Figure 1c, and Figure 1d are instances of Offensive memes.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "(a) Humour, sarcasm, offensive. (b) Motivational, positive. (c) Sarcasm, offensive, Negative. (d) Sarcasm, offensive, Funny.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Few examples from the Memotion dataset to show the inter-dependency between different tasks.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Figure 1b is an example of positive sentiment towards the government and Figure 1c of negative sentiment towards Ph.D. in Electrical Engineering.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Overall architecture of the proposed multi-modal multi-task affect analysis framework for Memes. Here V refers to the Meme Image and T refers to the text extracted from the Meme.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF8": {
"text": "Few examples for Human Error Analysis corresponding to Table 10.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF9": {
"text": "iCRM attention for Figure 3e under Task C",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF10": {
"text": "iTRM attention for Figure 3e under Task C",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "",
"num": null,
"content": "<table/>"
},
"TABREF3": {
"html": null,
"type_str": "table",
"text": "Model configurations",
"num": null,
"content": "<table/>"
},
"TABREF4": {
"html": null,
"type_str": "table",
"text": "",
"num": null,
"content": "<table/>"
},
"TABREF6": {
"html": null,
"type_str": "table",
"text": "Memes: Single-task vs Multi-task (Task B)",
"num": null,
"content": "<table/>"
},
"TABREF10": {
"html": null,
"type_str": "table",
"text": "Confusion Matrix for Task-A and Task-B (ReferTable 1andTable 2for Label definitions).",
"num": null,
"content": "<table><tr><td>Setups</td><td>Humour</td><td>Sarcasm</td><td>Offensive</td><td>Motivational</td></tr></table>"
},
"TABREF12": {
"html": null,
"type_str": "table",
"text": "Confusion Matrix for Task C (ReferTable 2for Label definitions).",
"num": null,
"content": "<table/>"
}
}
}
}