{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:13:41.268485Z"
},
"title": "NLP UIOWA at SemEval-2020 Task 8: You're not the only one cursed with knowledge -Multi branch model memotion analysis",
"authors": [
{
"first": "Ingroj",
"middle": [],
"last": "Shrestha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Iowa Iowa City",
"location": {
"region": "IA",
"country": "USA"
}
},
"email": "ingroj-shrestha@uiowa.edu"
},
{
"first": "Jonathan",
"middle": [],
"last": "Rusert",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Iowa Iowa City",
"location": {
"region": "IA",
"country": "USA"
}
},
"email": "jonathan-rusert@uiowa.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose hybrid models (HybridE and HybridW) for meme analysis (SemEval 2020 Task 8), which involves sentiment classification (Subtask A), humor classification (Subtask B), and scale of semantic classes (Subtask C). The hybrid model consists of BLSTM and CNN for text and image processing respectively. HybridE provides equal weight to BLSTM and CNN performance, while HybridW provides weightage based on the performance of BLSTM and CNN on a validation set. The performances (macro F1) of our hybrid model on Subtask A are 0.329 (HybridE), 0.328 (HybridW), on Subtask B are 0.507 (HybridE), 0.512 (HybridW), and on Subtask C are 0.309 (HybridE), 0.311 (HybridW).",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose hybrid models (HybridE and HybridW) for meme analysis (SemEval 2020 Task 8), which involves sentiment classification (Subtask A), humor classification (Subtask B), and scale of semantic classes (Subtask C). The hybrid model consists of BLSTM and CNN for text and image processing respectively. HybridE provides equal weight to BLSTM and CNN performance, while HybridW provides weightage based on the performance of BLSTM and CNN on a validation set. The performances (macro F1) of our hybrid model on Subtask A are 0.329 (HybridE), 0.328 (HybridW), on Subtask B are 0.507 (HybridE), 0.512 (HybridW), and on Subtask C are 0.309 (HybridE), 0.311 (HybridW).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Background. With the increasing social media culture, the sharing of internet memes on social media platforms has grown immensely in the recent years. Meme is defined as the unit of cultural information that replicates and transmits with reliability and fecundity (Linxia and Ziran, 2006) . Memes are generally an image paired with text, and used to express an array of ideas (e.g. humor, sarcasm). Memes can be derived from pop cultures, previous experiences, or even more abstract ideas. Memes have become a large part of internet culture, and can preserve viewpoints specific to the community from where it originated. Memes can be used to express humor, embarrassment, hate, and even more emotions. The creativity of memes, however, carry a downside. Hateful or offensive memes can also be created and can lead to an increase in hate crimes (Heikkil\u00e4, 2017; Sabat et al., 2019) . As with hateful language, several social media platforms have been working on policies to control such hateful and offensive memes while being careful not to hinder the creativity of users' expressions through memes (Kastrenakes, 2019; Hutchinson, 2020; Heilweil, 2020) .",
"cite_spans": [
{
"start": 264,
"end": 288,
"text": "(Linxia and Ziran, 2006)",
"ref_id": "BIBREF12"
},
{
"start": 845,
"end": 861,
"text": "(Heikkil\u00e4, 2017;",
"ref_id": null
},
{
"start": 862,
"end": 881,
"text": "Sabat et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 1100,
"end": 1119,
"text": "(Kastrenakes, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 1120,
"end": 1137,
"text": "Hutchinson, 2020;",
"ref_id": "BIBREF9"
},
{
"start": 1138,
"end": 1153,
"text": "Heilweil, 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One of the major steps in controlling the sharing of hateful memes is being able to successfully detect them. Detection of offensive content on social media is an ongoing task. Current attempts at detecting offensive memes is limited. Furthermore, detecting offensive memes is more challenging than detecting offensive text as it involves both visual and language understanding while the latter only requires language understanding. Currently, many sites rely on human moderators to identify and remove memes that express emotions that violate the platform's policy. However, with the increasing use of memes across social media platforms, handpicking offensive memes would require larger human resource and can cause problems in scalability. Automated systems to identify the emotion of a meme could help in a first line defense/analysis of memes and could help reduce the load on human moderators. We already see this hybrid approach being employed for offensive and hateful text detection on several social media platforms (Yenala et al., 2018; Zhang et al., 2018) , so it is only natural to extend this approach to classifying memes as well.",
"cite_spans": [
{
"start": 1026,
"end": 1047,
"text": "(Yenala et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 1048,
"end": 1067,
"text": "Zhang et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to address the problem of detecting offensive memes as well as classifying types of memes in general, a group of organizers created a community driven task, SemEval 2020 Task 8 (Memotion Analysis). Sharma et al. (2020) brings attention of the research community towards automatic meme emotion analysis and allows for the examination of multiple approaches. We approach this problem with a hybrid architecture of Convolutional Neural Network (CNN) for image classification and a Bidirectional Long Short Term Memory (BLSTM) neural network for text classification.",
"cite_spans": [
{
"start": 207,
"end": 227,
"text": "Sharma et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal is to capture informative features from both images and text to help the system in its classification. To increase the usefulness of both image and text, we first fine tune a CNN on image classification and BLSTM on text classification separately, then use a validation set to score their respective performances. A CNN was chosen as CNNs have shown strong performance in image classification (Xin and Wang, 2019) . Likewise, BLSTMs have shown strong performance on text classification, therefore we chose this for our framework 1 . We finally combine the CNN and BLSTM models using a hybrid approach.",
"cite_spans": [
{
"start": 402,
"end": 422,
"text": "(Xin and Wang, 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "2"
},
{
"text": "To classify the text, we implement a Bidirectional Long Short Term Memory (BLSTMs) with pretrained word embeddings. Figure 1 represents the BLSTM architecture we used. Embedding layer. The embedding layer converts the input text (input layer) to a real valued vector using pre-trained word embeddings 2 of dimension 200. The pre-trained word embeddings are obtained from Glove (Pennington et al., 2014) word embeddings trained on English Gigaword 3 and Wikipedia data. For the words not in the vocabulary, we randomly initialed the word embedding. After preprocessing, we find the longest text size (V). The input text that is shorter than the longest text size is padded with zeros at the end. Next, the embedding layer output is fed into BLSTM layer.",
"cite_spans": [
{
"start": 377,
"end": 402,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 116,
"end": 124,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Text classification",
"sec_num": "2.1"
},
{
"text": "BLSTM Layer. Long Short-Term Memory (LSTMs) build on top of traditional RNNs, by adding 4 gates through which input travels: ignoring (i), memory(c), forgetting (f), and selection (o). These gates aim to help the system remember the important parts of input, while forgetting the non-relevant parts. Ignoring gates out the non relevant information from predictions. To add in longer term memory, a memory mechanism is applied. Tied with the memory gate, the forgetting mechanism is used to help to filter irrelevant previous prediction with old memory. Selection gate looks at possible predictions and gates them before allowing the system to make a final prediction. The gates are represented by the following equations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": "2.1"
},
{
"text": "LSTM : h l\u22121 t , h l t\u22121 , c l t\u22121 \u2192 h l t , c l t \uf8eb \uf8ec \uf8ec \uf8ed i l t f l t o l t g l t \uf8f6 \uf8f7 \uf8f7 \uf8f8 = \uf8eb \uf8ec \uf8ed sigm sigm sigm tanh \uf8f6 \uf8f7 \uf8f8 T2n,4n h l\u22121 t h l t\u22121 c l t = f l t c l t\u22121 + i l t g l t h l t = o l t tanh(c l t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": "2.1"
},
{
"text": "where sigm, and tanh are sigmoid and tanh activation functions,respectively. represents elementwise multiplication, h l t represents the hidden state at time step t for layer l, and h l\u22121 t is the output from embedding layer \u2208 R V * 200 for (l = 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": "2.1"
},
{
"text": "A BLSTM, a 2 directional LSTM which reads the sentence in normally (forward direction) i.e., \u2212\u2212\u2212\u2212\u2192 LST M , and reads the sentence in backward direction i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": "2.1"
},
{
"text": "\u2190\u2212\u2212\u2212\u2212 LST M . The final learned representation of text from BLSTM layer is \u2212\u2212\u2212\u2212\u2192 LST M \u2295 \u2190\u2212\u2212\u2212\u2212 LST M ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": "2.1"
},
{
"text": "where \u2295 refers to concatenation. Dense Layer. The output of BLSTM layer is flattened and fed to a dense layer of size 128 and then fed to an output layer of size L with softmax activation, where L is the number of classes c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification",
"sec_num": "2.1"
},
{
"text": "We implemented a Convolutional Neural Network (CNN) for the image classification task. Figure 2 represents CNN architecture we used. Input Layer. The first layer of CNN network is the input layer, which takes images, resizes them to a dimension of w * w, where w = 224. We then fed the image to the convolutional layer for feature extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Image classification",
"sec_num": "2.2"
},
{
"text": "Convolutional Layer. In the convolutional layer, we use a k * k filter with a stride of s=1 and zero padding p=0 to produce a feature map of size w\u2212k+2 * p s + 1 , where k=3. The convolutional layer uses n ch = 16 output channels. So, the final output of convolutional layer (conv out ) is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image classification",
"sec_num": "2.2"
},
{
"text": "n ch * w\u2212k+2 * p s + 1 * w\u2212k+2 * p s + 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image classification",
"sec_num": "2.2"
},
{
"text": "Max Pooling Layer. A max pooling of size j * j is applied to the output from convolutional layer, where j = 2. The resulting output is n ch * convout j * convout j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image classification",
"sec_num": "2.2"
},
{
"text": "Dense Layer. The output from max pooling is flattened and fed into a dense layer consisting 128 neurons with ReLU activation. Finally, the output is fed to the output layer of size L. The output layer uses softmax activation function to provide the probability distribution s p for each class prediction (y).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image classification",
"sec_num": "2.2"
},
{
"text": "In order to balance text and visual features for prediction, we use a hybrid approach. The hybrid approach is shown in Figure 3 . In the hybrid approach, we give each system, BLSTM and CNN, a weight for their predictions, \u03b1 and \u03b2 respectively. Hybrid Model Weighted (HybridE). In this approach, we set \u03b1 = \u03b2 = 1. We obtain probability distribution for each class using softmax activation. We then compute element-wise sum of probability distribution of each class obtained from two architectures (CNN and BLSTM). Finally, we take argmax of combined probability distribution to predict final class for a meme. Hybrid Weighted Average (HybridW) 4 . In this approach, the contribution (i.e., softmax distribution) of each class is weighted by the performance (macro F1) of models \u03b1 (BLSTM), and \u03b2 (CNN). The performance of models are evaluated on the validation set (described further in section 3.2). Finally, we take argmax of the weighted probability distribution to obtain a final class for a meme. We experimented with different epochs (10, 15, 20) and batch sizes (64, 100, 150). We found an epoch of 10 and batch size 64 (text) and 100 (image) are optimal. We use a dropout of 0.2 (CNN) and 0.5 (BLSTM) in penultimate layer to handle the issue of model overfitting. For BLSTM, we use a hidden size (n) of 64. The model learns optimal parameters minimizing cross-entropy loss shown in equation 1a (L = 2), equation 1b (L > 2). We use Adam optimizer with a learning rate of 0.001. We implemented the system using PyTorch 5 .",
"cite_spans": [
{
"start": 1038,
"end": 1042,
"text": "(10,",
"ref_id": null
},
{
"start": 1043,
"end": 1046,
"text": "15,",
"ref_id": null
},
{
"start": 1047,
"end": 1050,
"text": "20)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Hybrid approach",
"sec_num": "2.3"
},
{
"text": "\u2212ylogsp + (1 \u2212 y)log(1 \u2212 sp) (1a) L c=1 yo,clog(sp o,c ) (1b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid approach",
"sec_num": "2.3"
},
{
"text": "3 Subtasks and Dataset SemEval 2020 Task 8 involves an overall task of analysis of memes, which is divided into three subtasks -Sentiment analysis (Subtask A), Humor classification (Subtask B), and Scale of semantic classes (Subtask C).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid approach",
"sec_num": "2.3"
},
{
"text": "Subtask A. Subtask A requires a system to identify if a meme is positive, negative, or neutral (multi-class classification). Subtask B. Subtask B involves identification of humor expressed in meme (sarcastic, humorous, offensive, motivational). This involves four binary classifications, where each of the humor is classified as being present (e.g., sarcastic), or absent/not (e.g., not sarcastic). Overall, it is multi-label classification task. Subtask C. Subtasks C involves multi-class, multi-label classification. This is an extension to Subtask B, where a system requires to quantify the extent to which a particular effect is being expressed (scale of semantic) in a meme. With one exception (motivational), the type of humor expressed is scaled from 0 to 4 -not (0), slightly (1), mildly (3), and very (4). Motivational is categorized as motivational or non motivational. Architecture for subtasks. For each subtask, we use the same architecture (corresponding architecture for text and image analysis -Section 2), changing only the size of L (the number of classes). For Task A, we use L = 3. For Task B, we perform four binary classifications with L = 2 for each humor expressed. For Task C, we perform four multi-class classifications with L = 4 for each semantic class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtasks Description",
"sec_num": "3.1"
},
{
"text": "Training set. The organizers provided a training dataset for development of automatic meme analysis. The training sets consist of 6992 memes. Each meme consists of five classifications (semantic classes) -humor, sarcasm, offensive, motivational, and overall sentiment, with scale of semantic classes. These classifications corresponds to subtasks (Section 3.1). A distribution of these sets is found in Table 1 . Testing set. The testing set consists of 1878 memes. The text was missing for several memes in the testing set, so we added these in manually by transcribing from the provided image. ",
"cite_spans": [],
"ref_spans": [
{
"start": 403,
"end": 410,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3.2"
},
{
"text": "To test our approach, we leveraged the training set, and performed train-validation split (80%-20%) to find macro and micro F1 scores. We first describe the steps employed to work with the training data, then give the results on the set. Data Preprocessing. Though we presumed that the provided dataset would be set up to accommodate each subtask in SemEval, this was not the case. This caused us to employ some preprocessing steps to make the data more in line with the aforementioned subtasks. We remove six instances from the training set as the text was not available for those instances. Similarly, when working with the CNN, we found an image got GOT-Meme-9 failed to load, being corrupt. So, we remove the image from the training set. Assigning Labels. Recall that Subtask A is a multi-class classification problem requiring for the memes to be classified into positive, negative or neutral. The training dataset contained 5 labels: very positive, positive, neutral, negative, very negative. We reduced the number of labels by collapsing the very positive memes into the positive category, and followed the same with the very negative memes to meet classification requirements in Subtask A. Again as previously noted, in Subtask B, a given meme can have one of multiple binary classification labels. For example, a meme can be humorous or non humorous. The same meme can be sarcastic or non sarcastic, offensive or non-offensive and motivational or non motivational. Each of these binary classification problem in Subtask B has multiple labels except for the motivational classification, which is why for the first three classifications task we combined the labels to fit them for binary classification. We combined funny, very funny, and hilarious into humorous, general, twisted meaning, and very twisted into sarcastic and slight, very offensive, and hateful offensive into offensive. For Subtask C, the labels required no conversion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training set Evaluation",
"sec_num": "3.3"
},
{
"text": "We obtain results (Table 2 ) on gold standard training set using the aforementioned train-validation split. Subtask A. CNN outperforms BLSTM ( The result for our proposed approach's performance on the Testing set for three subtasks are shown in Table 3 . On the testing set, the proposed hybrid model (HybridE) achieves a macro F1 score of 0.3287, 0.5073, and 0.3087 on Subtask A, B, and C, respectively. The HybridE model outperforms baseline (in macro F1) in all of the subtasks (11.11% points (Subtask A), 0.71% points (Subtask B), 0.78% points (Subtask C)). On a similar line to HybridE, the weighted hybrid model (HybridW) outperforms baselines (provided by organizer) in both metrics.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "(Table 2",
"ref_id": "TABREF3"
},
{
"start": 245,
"end": 252,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Training set Results",
"sec_num": "3.4"
},
{
"text": "We also can see that the HybridE favors BLSTM in macro F1 (performance is similar to BLSTM) and CNN in micro F1 (performance is similar to CNN). The weighted average approach (HybridW) shows little or no improvement over HybridE approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training set Results",
"sec_num": "3.4"
},
{
"text": "Subtask A. In contrast with the performance trend in Training set, BLSTM outperforms CNN by 4% in macro F1, while CNN outperforms BLSTM by 5% in micro F1 in Testing set (Table 3a) . The HybridE favoring BLSTM, in terms of macro F1, shows an F1 score of 0.3287, which is similar to BLSTM. Likewise, HybridE achieves micro F1 of 0.5266 (similar performance to CNN). The HybridW shows no or little improvement in macro F1 and micro F1, respectively. HybridE performance (in macro F1) is 7.3% lower than the top system. Subtask B. As with the Training set, BLSTM and CNN perform similarly on this subtask. On overall, the hybrid model (HybridE) achieves macro F1 and micro F1 of 0.5073 and 0.6330 respectively (Table 3b ). The HybridW shows a slight improvement in macro F1, but no improvement on micro F1. HybridE performance (in macro F1) is similar to the top system. Subtask C. Similar to Training set, BLSTM outperforms CNN in macro F1 by 5%, while CNN outperforms BLSTM in micro F1 by 8.5% (Table 3c) . As mentioned earlier, HybridE favors BSLTM in macro F1, while it favors CNN in micro F1, achieving 0.3087 macro F1 (similar performance to BLSTM) and 0.4016 micro F1 (similar performance to CNN). The HybridW shows a slight improvement in both performance metrics. HybridE performance (in macro F1) is 4.3% lower than the top system. Table 4 and Table 5 , respectively (Note that since Subtask A only consists of one multiclass problem, the results are the same as shown in Table 2a ). BLSTM performs better in some classes, while CNN perform better in other classes. For example, BLSTM outperforms CNN in the class Sarcasm by 11% (Table 5a ). However, CNN outperforms BLSTM in the class Humor by 3.2% (Table 5a ). These results acted as a motivation for our weighted hybrid approach (HybridW).",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 179,
"text": "(Table 3a)",
"ref_id": "TABREF5"
},
{
"start": 706,
"end": 716,
"text": "(Table 3b",
"ref_id": "TABREF5"
},
{
"start": 993,
"end": 1003,
"text": "(Table 3c)",
"ref_id": "TABREF5"
},
{
"start": 1339,
"end": 1358,
"text": "Table 4 and Table 5",
"ref_id": null
},
{
"start": 1479,
"end": 1487,
"text": "Table 2a",
"ref_id": "TABREF2"
},
{
"start": 1636,
"end": 1645,
"text": "(Table 5a",
"ref_id": null
},
{
"start": 1707,
"end": 1716,
"text": "(Table 5a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training set Results",
"sec_num": "3.4"
},
{
"text": "adding extra weight really helps better to incorporate trade-off of BLSTM and CNN to capture more informative features. Class imbalance and effect on performance metric. Macro F1 average computes F1 for each class and take average by treating all class equally. However, micro F1 average aggregates the contribution of each class, and then computes the average F1. From Table 1 , we can see that the distribution of class is not balanced for each subtask. So, micro F1 scores are larger than macro F1 scores for each subtask (Table 3 ) since predictions favor the larger class. Failure of transfer learning. For text analysis, we tried pre-trained BERT (Devlin et al., 2018) . For image analysis, we tried VGG16 (Simonyan and Zisserman, 2014) , and ResNet18 (He et al., 2016) . We removed the last layer from each model and added a custom dense layer to fit the subtasks. We then finetune using the train set. However, each model overfitted. The overfitting issue might be due to complex architecture of pre-trained models, or due to failure to learn task specific features provided small train set.",
"cite_spans": [
{
"start": 654,
"end": 675,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 713,
"end": 743,
"text": "(Simonyan and Zisserman, 2014)",
"ref_id": "BIBREF18"
},
{
"start": 759,
"end": 776,
"text": "(He et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 370,
"end": 377,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 525,
"end": 534,
"text": "(Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Training set Results",
"sec_num": "3.4"
},
{
"text": "Since the multimodal social media content has seen a steady increase in the recent years, deriving the intended meaning from this content by establishing the connection between the image and the text has seen an increase in research. A limited research has been done in extracting meaning from social media images and texts, which includes identification of the humor, offensiveness or sentiment expressed in image, text or meme. Humor Classification. Detection of humor in image and text has been approached by several teams in recent years. Chandrasekaran et al. (2016) analyze the humor present in abstract scenes at the scene-level and the object level and detect different types of humor depicted in the scenes. Tsakona (2009) mentions that the meaning and humor in a cartoon is expressed through verbal and visual mode. In order to capture the humor expressed in the cartoon, one has to pay attention to all the verbal and visual details of the cartoon. Offense Classification. Recently, there has been a growing interest in identifying the offensive language of social media data. Chen et al. (2012) presents user-level offensive language detection on social media. This architecture uses features such as the user's writing style, structure, and specific cyberbullying for detecting offensiveness in the text. Wiegand et al. (2018) proposed a GermEval task for classifying offensive language as offensive, or other, and then further classify the offensive tagged language. More recently, Zampieri et al. (2019) ran a shared task, OffensEval, on detecting different classes of offensive text. Sentiment Classification. Sentiment detection in the image or text has also seen a greater focus on research. Wang and Li (2015) mention that the accurate sentiment detection from internet images requires connection between visual and textual feature. 
They presented the Unsupervised Sentiment Analysis (USEA) framework to perform sentiment analysis on social media images in an unsupervised approach using both features mentioned earlier. Borth et al. (2013) present a method built upon web mining to automatically construct a visual detector library to detect Adjective Noun Pair in an image, which they used to identify the sentiment from visual content. Sarcasm Classification. Though sarcasm is not always easy to identify online, researchers have attempted this with various approaches. Joshi et al. (2017) present a survey on various methods for automatic sarcasm detection. They link to many sarcasm papers which include the sarcasm datasets used (e.g. (Barbieri et al., 2014; Gonz\u00e1lez-Ib\u00e1nez et al., 2011) ) as well as sarcasm detection approachs leveraged (e.g. (Reyes and Rosso, 2014; Rajadesingan et al., 2015) ).",
"cite_spans": [
{
"start": 543,
"end": 571,
"text": "Chandrasekaran et al. (2016)",
"ref_id": "BIBREF2"
},
{
"start": 717,
"end": 731,
"text": "Tsakona (2009)",
"ref_id": "BIBREF19"
},
{
"start": 1088,
"end": 1106,
"text": "Chen et al. (2012)",
"ref_id": "BIBREF3"
},
{
"start": 1318,
"end": 1339,
"text": "Wiegand et al. (2018)",
"ref_id": "BIBREF21"
},
{
"start": 1710,
"end": 1728,
"text": "Wang and Li (2015)",
"ref_id": "BIBREF20"
},
{
"start": 2040,
"end": 2059,
"text": "Borth et al. (2013)",
"ref_id": "BIBREF1"
},
{
"start": 2561,
"end": 2584,
"text": "(Barbieri et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 2585,
"end": 2614,
"text": "Gonz\u00e1lez-Ib\u00e1nez et al., 2011)",
"ref_id": "BIBREF5"
},
{
"start": 2672,
"end": 2695,
"text": "(Reyes and Rosso, 2014;",
"ref_id": "BIBREF15"
},
{
"start": 2696,
"end": 2722,
"text": "Rajadesingan et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We analyzed texts and images from memes using BLSTM and CNN, respectively. We then propose two hybrid approaches HybridE (equal weightage to prediction probability from BLSTM and CNN) and HybridW (weighted average based on performance of BLSTM and CNN) to identify humor, offensiveness, and sentiment expressed in memes. HybridE performs better overall than the individual systems, however, HybridW shows a little or no improvement over HybridE. Limitations and Future direction. We trained models for text and image analysis separately. Perhaps, we can feed the text output and image output into another dense layer (in a neural net). This approach might catch some features the first missed. Also, since the deep learning model shows better performance on a large data set, we could explore the problem on a larger data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We also ran preliminary tests on Logistic Regression models for both image and text, but were outperformed by the CNN and BLSTM.2 https://nlp.stanford.edu/projects/glove/ 3 https://catalog.ldc.upenn.edu/LDC2011T07",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The hybrid weighted average was performed after the Evaluation period ended. 5 https://pytorch.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "System Humor Sarcasm Offense Motivation BLSTM 0.542 6 0.498 9 0.527 9 0.516 8 CNN 0.475 1 0.494 4 0.476 3 0.410 8 HybridE 0.536 6 0.492 5 0.485 8 0.386 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "System Humor Sarcasm Offense Motivation BLSTM 0.656 7 0.650 9 0.541 7 0.545 1 CNN 0.613 2 0.733 5 0.524 4 0.603 7 HybridE 0.683 8 0.648 1 0.517 2 0.629 5 (b) Micro F1 Table 6 and Table 7 , respectively. We can see a drop in macro F1 for some classes on combining the performances of BLSTM and CNN. For example, the macro F1 drops for the class Sarcasm in Subtask B (Table 6a ). However, we also can see that hybrid approaches help improve the overall class wise performance for some classes. For example, macro F1 on the class Offense is 0.4928 and 0.4898 for BLSTM and CNN, respectively (Table 6a) . When combining the BLSTM and the CNN results, there is an improvement in macro F1 score (HybridE: 2.3% over BLSTM and 3% over CNN, HybridW: 3.9% over BLSTM, and 4.6% over CNN). We can see similar observations for the class Motivation for Subtask B (Table 6a) , and Subtask C (Table 7a) . Overall, the effect of hybrid approach is somewhat mixed with respect to macro F1. We can see similar mixed performance with respect to micro F1 also (Table 6b and Table 7b ). ",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 174,
"text": "Table 6",
"ref_id": null
},
{
"start": 179,
"end": 186,
"text": "Table 7",
"ref_id": null
},
{
"start": 365,
"end": 374,
"text": "(Table 6a",
"ref_id": null
},
{
"start": 588,
"end": 598,
"text": "(Table 6a)",
"ref_id": null
},
{
"start": 849,
"end": 859,
"text": "(Table 6a)",
"ref_id": null
},
{
"start": 876,
"end": 886,
"text": "(Table 7a)",
"ref_id": null
},
{
"start": 1039,
"end": 1062,
"text": "(Table 6b and Table 7b",
"ref_id": null
}
],
"eq_spans": [],
"section": "(a) Macro F1",
"sec_num": null
},
{
"text": "Trade off in the performance of BLSTM and CNN. As seen in Table 3 , BLSTM shows better performance in macro F1, while CNN shows better performance in micro F1. Due to this, the hybrid model's performance is compromised. Comparison of HybridE and HybridW. Overall, HybridW performs slightly better than HybridE (Subtask B and Subtask C) in terms of macro F1. Since there is no significant improvement, it is unclear that",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
}
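The HybridE/HybridW distinction discussed above can be sketched as a weighted late fusion of the two branches' class probabilities. This is an illustrative reconstruction, not the authors' code; in particular, deriving HybridW's weights as normalized validation F1 scores is an assumption:

```python
import numpy as np

def hybrid_predict(p_text, p_image, w_text=0.5, w_image=0.5):
    """Combine per-class probabilities from the text (BLSTM) and image (CNN)
    branches by a weighted sum, then take the argmax class per example.
    The defaults give equal weight to both branches (HybridE-style)."""
    combined = w_text * np.asarray(p_text) + w_image * np.asarray(p_image)
    return combined.argmax(axis=-1)

def validation_weights(f1_text, f1_image):
    """HybridW-style weights proportional to each branch's validation F1
    (assumed formula; the paper does not state the exact weighting)."""
    total = f1_text + f1_image
    return f1_text / total, f1_image / total
```

With equal weights the image branch can flip a prediction that the text branch would have made on its own, which is exactly where the two hybrid variants can diverge.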
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Italian irony detection in twitter: a first approach",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Ronzano",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2014,
"venue": "The First Italian Conference on Computational Linguistics CLiC-it",
"volume": "28",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri, Francesco Ronzano, and Horacio Saggion. 2014. Italian irony detection in twitter: a first approach. In The First Italian Conference on Computational Linguistics CLiC-it, volume 28.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Large-scale visual sentiment ontology and detectors using adjective noun pairs",
"authors": [
{
"first": "Damian",
"middle": [],
"last": "Borth",
"suffix": ""
},
{
"first": "Rongrong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Breuel",
"suffix": ""
},
{
"first": "Shih-Fu",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 21st ACM international conference on Multimedia",
"volume": "",
"issue": "",
"pages": "223--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Damian Borth, Rongrong Ji, Tao Chen, Thomas Breuel, and Shih-Fu Chang. 2013. Large-scale visual sentiment ontology and detectors using adjective noun pairs. In Proceedings of the 21st ACM international conference on Multimedia, pages 223-232.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "We are humor beings: Understanding and predicting visual humor",
"authors": [
{
"first": "Arjun",
"middle": [],
"last": "Chandrasekaran",
"suffix": ""
},
{
"first": "Ashwin",
"middle": [
"K"
],
"last": "Vijayakumar",
"suffix": ""
},
{
"first": "Stanislaw",
"middle": [],
"last": "Antol",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4603--4612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arjun Chandrasekaran, Ashwin K Vijayakumar, Stanislaw Antol, Mohit Bansal, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2016. We are humor beings: Understanding and predicting visual humor. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4603-4612.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Detecting offensive language in social media to protect adolescent online safety",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yilu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Sencun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2012,
"venue": "2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Confernece on Social Computing",
"volume": "",
"issue": "",
"pages": "71--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Chen, Yilu Zhou, Sencun Zhu, and Heng Xu. 2012. Detecting offensive language in social media to protect adolescent online safety. In 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Confernece on Social Computing, pages 71-80. IEEE.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirec- tional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Identifying sarcasm in Twitter: a closer look",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Gonz\u00e1lez-Ib\u00e1nez",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Wacholder",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers",
"volume": "2",
"issue": "",
"pages": "581--586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Gonz\u00e1lez-Ib\u00e1nez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in Twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers-Volume 2, pages 581-586. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Online antagonism of the alt-right in the 2016 election",
"authors": [],
"year": 2017,
"venue": "European journal of American studies",
"volume": "12",
"issue": "",
"pages": "12--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niko Heikkil\u00e4. 2017. Online antagonism of the alt-right in the 2016 election. European journal of American studies, 12(12-2).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Facebook is flagging some coronavirus news posts as spam",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Heilweil",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Heilweil. 2020. Facebook is flagging some coronavirus news posts as spam. https://www.vox.com/recode/2020/3/17/21183557/ coronavirus-youtube-facebook-twitter-social-media.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Twitter Will Increase Its Use of Automation Tools as It Looks to Ensure Accuracy in COVID-19 Discussion",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Hutchinson",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Hutchinson. 2020. Twitter Will Increase Its Use of Automation Tools as It Looks to En- sure Accuracy in COVID-19 Discussion. https://www.socialmediatoday.com/news/ twitter-will-increase-its-use-of-automation-tools-as-it-looks-to-ensure-acc/",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic sarcasm detection: A survey",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark J",
"middle": [],
"last": "Carman",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "50",
"issue": "5",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Pushpak Bhattacharyya, and Mark J Carman. 2017. Automatic sarcasm detection: A survey. ACM Computing Surveys (CSUR), 50(5):1-22.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Twitter says it now removes half of all abusive tweets before users report them",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Kastrenakes",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Kastrenakes. 2019. Twitter says it now removes half of all abusive tweets be- fore users report them. https://www.theverge.com/2019/10/24/20929290/ twitter-abusive-tweets-automated-removal-earnings-q3-2019.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Analysis of memes in language",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Linxia",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "Ziran",
"suffix": ""
}
],
"year": 2006,
"venue": "Foreign Language Teaching and Research",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Linxia and He Ziran. 2006. Analysis of memes in language. Foreign Language Teaching and Research, 2.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word represen- tation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sarcasm detection on twitter: A behavioral modeling approach",
"authors": [
{
"first": "Ashwin",
"middle": [],
"last": "Rajadesingan",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Zafarani",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the eighth ACM international conference on web search and data mining",
"volume": "",
"issue": "",
"pages": "97--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. 2015. Sarcasm detection on twitter: A behavioral modeling approach. In Proceedings of the eighth ACM international conference on web search and data mining, pages 97-106.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On the difficulty of automatically detecting irony: beyond a simple case of negation",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Reyes",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2014,
"venue": "Knowledge and Information Systems",
"volume": "40",
"issue": "3",
"pages": "595--614",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Reyes and Paolo Rosso. 2014. On the difficulty of automatically detecting irony: beyond a simple case of negation. Knowledge and Information Systems, 40(3):595-614.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Hate Speech in Pixels: Detection of Offensive Memes towards Automatic Moderation",
"authors": [
{
"first": "Benet",
"middle": [
"Oriol"
],
"last": "Sabat",
"suffix": ""
},
{
"first": "Cristian",
"middle": [
"Canton"
],
"last": "Ferrer",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Giro-i Nieto",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.02334"
]
},
"num": null,
"urls": [],
"raw_text": "Benet Oriol Sabat, Cristian Canton Ferrer, and Xavier Giro-i Nieto. 2019. Hate Speech in Pixels: Detection of Offensive Memes towards Automatic Moderation. arXiv preprint arXiv:1910.02334.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Viswanath Pulabaigari, and Bj\u00f6rn Gamb\u00e4ck. 2020. SemEval-2020 Task 8: Memotion Analysis-The Visuo-Lingual Metaphor! In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"authors": [
{
"first": "Chhavi",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Deepesh",
"middle": [],
"last": "Bhageria",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Paka",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "P Y K L",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Viswanath",
"middle": [],
"last": "Pulabaigari",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chhavi Sharma, Deepesh Bhageria, William Paka, Scott, Srinivas P Y K L, Amitava Das, Tanmoy Chakraborty, Viswanath Pulabaigari, and Bj\u00f6rn Gamb\u00e4ck. 2020. SemEval-2020 Task 8: Memotion Analysis-The Visuo- Lingual Metaphor! In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval- 2020), Barcelona, Spain, Sep. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Very deep convolutional networks for large-scale image recognition",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.1556"
]
},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recogni- tion. arXiv preprint arXiv:1409.1556.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Language and image interaction in cartoons: Towards a multimodal theory of humor",
"authors": [
{
"first": "Villy",
"middle": [],
"last": "Tsakona",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Pragmatics",
"volume": "41",
"issue": "6",
"pages": "1171--1188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Villy Tsakona. 2009. Language and image interaction in cartoons: Towards a multimodal theory of humor. Journal of Pragmatics, 41(6):1171-1188.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sentiment analysis for social media images",
"authors": [
{
"first": "Yilin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Baoxin",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE International Conference on Data Mining Workshop (ICDMW)",
"volume": "",
"issue": "",
"pages": "1584--1591",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yilin Wang and Baoxin Li. 2015. Sentiment analysis for social media images. In 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pages 1584-1591. IEEE.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Overview of the germeval 2018 shared task on the identification of offensive language",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Melanie Siegel, and Josef Ruppenhofer. 2018. Overview of the germeval 2018 shared task on the identification of offensive language.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Research on image classification model based on deep convolution neural network",
"authors": [
{
"first": "Mingyuan",
"middle": [],
"last": "Xin",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "EURASIP Journal on Image and Video Processing",
"volume": "2019",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingyuan Xin and Yong Wang. 2019. Research on image classification model based on deep convolution neural network. EURASIP Journal on Image and Video Processing, 2019(1):40.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Deep learning for detecting inappropriate content in text",
"authors": [
{
"first": "Harish",
"middle": [],
"last": "Yenala",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Jhanwar",
"suffix": ""
},
{
"first": "Manoj",
"middle": [
"K"
],
"last": "Chinnakotla",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Goyal",
"suffix": ""
}
],
"year": 2018,
"venue": "International Journal of Data Science and Analytics",
"volume": "6",
"issue": "4",
"pages": "273--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harish Yenala, Ashish Jhanwar, Manoj K Chinnakotla, and Jay Goyal. 2018. Deep learning for detecting inappro- priate content in text. International Journal of Data Science and Analytics, 6(4):273-286.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval). In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Detecting hate speech on twitter using a convolution-gru based deep neural network",
"authors": [
{
"first": "Ziqi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Tepper",
"suffix": ""
}
],
"year": 2018,
"venue": "European semantic web conference",
"volume": "",
"issue": "",
"pages": "745--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqi Zhang, David Robinson, and Jonathan Tepper. 2018. Detecting hate speech on twitter using a convolution-gru based deep neural network. In European semantic web conference, pages 745-760. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Figure 1: BLSTM architecture",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Figure 2: CNN architecture",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Hybrid 2.4 Hyper-parameters/Tuning.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table/>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "HybridE is reduced, while micro F1 shows a slight improvement. The HybridE achieves a macro F1 and a micro F1 of 0.4753 and 0.6197, respectively. Subtask C. Similar to Subtask B, BLSTM and CNN show a trade-off in the two performance metrics. Overall, the hybrid equal-weighted model achieves a macro F1 of 0.2858 and micro F1 of 0.4288.",
"content": "<table><tr><td colspan=\"2\">System Macro F1 Micro F1</td><td colspan=\"2\">System Macro F1 Micro F1</td><td colspan=\"2\">System Macro F1 Micro F1</td></tr><tr><td colspan=\"2\">BLSTM 0.318 5 0.453 5</td><td colspan=\"2\">BLSTM 0.521 6 0.598 6</td><td colspan=\"2\">BLSTM 0.315 3 0.383 8</td></tr><tr><td>CNN</td><td>0.344 3 0.507 9</td><td>CNN</td><td>0.464 2 0.618 7</td><td>CNN</td><td>0.283 1 0.427 8</td></tr><tr><td colspan=\"2\">HybridE 0.316 8 0.485 0</td><td colspan=\"2\">HybridE 0.475 3 0.619 7</td><td colspan=\"2\">HybridE 0.285 8 0.428 8</td></tr><tr><td/><td>(a) Subtask A</td><td/><td>(b) Subtask B</td><td/><td>(c) Subtask C</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "Results on Training set 3.5 Testing set Results.",
"content": "<table/>",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "Results on Testing set ( * Percentage change with respect to top system)",
"content": "<table><tr><td>3.6 Class wise performance</td></tr><tr><td>Training set class wise results. The class wise performances for Subtask B and Subtask C on Training</td></tr><tr><td>set are shown in</td></tr></table>",
"html": null
}
}
}
}