{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:21:11.351523Z"
},
"title": "Codewithzichao@DravidianLangTech-EACL2021: Exploring Multimodal Transformers for Meme Classification in Tamil Language",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {
"country": "China"
}
},
"email": "lizichao@pku.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our submission to shared task on Meme Classification for Tamil Language. To address this task, we explore a multimodal transformer for meme classification in Tamil language. According to the characteristics of the image and text, we use different pretrained models to encode the image and text so as to get better representations of the image and text respectively. Besides, we design a multimodal attention layer to make the text and corresponding image interact fully with each other based on cross attention. Our model achieved 0.55 weighted average F1 score and ranked first in this task.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our submission to shared task on Meme Classification for Tamil Language. To address this task, we explore a multimodal transformer for meme classification in Tamil language. According to the characteristics of the image and text, we use different pretrained models to encode the image and text so as to get better representations of the image and text respectively. Besides, we design a multimodal attention layer to make the text and corresponding image interact fully with each other based on cross attention. Our model achieved 0.55 weighted average F1 score and ranked first in this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, with the prosperity of social media platforms, memes have gradually become a part of online communication. Therefore, it is essential to detect whether memes are offensive to individuals or organizations to ensure the diversity and sustainability of content on the Internet. It it a challenging task to classify whether memes are troll or not. In addition, there has been a lot of work currently focused on English (Truong and Lauw, 2019; Xu et al., 2019; Cai et al., 2019) , but little work has been done for Tamil language.",
"cite_spans": [
{
"start": 432,
"end": 455,
"text": "(Truong and Lauw, 2019;",
"ref_id": "BIBREF13"
},
{
"start": 456,
"end": 472,
"text": "Xu et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 473,
"end": 490,
"text": "Cai et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Shared task on Meme Classification for Tamil Language fills this gap. The goal of this shared task is to detect whether memes which are collected from social media platforms are troll or not. Each meme has been annotated with troll or not troll class. Furthermore, a transcription of captions in Latin script for both Tamil is embedded in each image. This is a multimodal classification task that given the image and text pair, systems have to classify this pair into troll or not troll class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we explore a multimodal transformer for meme classification on Tamil language. According to the characteristics of the image and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Train Test troll 1282 395 not troll 1018 272 text, we use different pre-trained models to encode the image and text so as to get better representations of the image and text respectively. Besides, due to the particularity of social media text, in many cases we can only understand the meaning of text through the corresponding image, so it is essential to make the text and corresponding image interact fully with each other. To tackle this issue, we design a multimodal attention layer based on cross attention. Our model took first place in this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class",
"sec_num": null
},
{
"text": "The data we used is provided by the organizers of shared task on Meme Classification in Tamil Language (Suryawanshi et al., 2020; Suryawanshi and Chakravarthi, 2021) . There are 2300 samples in the training data. The specific statistics of the data are shown in Table 1 .",
"cite_spans": [
{
"start": 103,
"end": 129,
"text": "(Suryawanshi et al., 2020;",
"ref_id": "BIBREF12"
},
{
"start": 130,
"end": 165,
"text": "Suryawanshi and Chakravarthi, 2021)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 262,
"end": 269,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "There are two methods used to preprocess social media text as follow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "2.1"
},
{
"text": "\u2022 Noise removal: Emojis and extra blanks in the training data are removed in advance. Experimental results show that removing these noise can improve the performance of our model. The maximum sequence size is 256.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "2.1"
},
{
"text": "\u2022 Tokenization: Texts are tokenized using the sentencepiece toolkit 1 and converted to the corresponding IDs through the vocabulary of XLM-RoBERTa (Conneau et al., 2020) .",
"cite_spans": [
{
"start": 147,
"end": 169,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "2.1"
},
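{
"text": "A minimal sketch of this tokenization step, assuming HuggingFace's transformers (which we use in Section 4.1); the caption string is a hypothetical example:\nfrom transformers import XLMRobertaTokenizer\n\ntokenizer = XLMRobertaTokenizer.from_pretrained(\"xlm-roberta-base\")\nencoded = tokenizer(\n    \"enna nada ...\",  # a transcribed meme caption (hypothetical example)\n    max_length=256,  # the maximum sequence length from Section 2.1\n    padding=\"max_length\",\n    truncation=True,\n    return_tensors=\"pt\",\n)\ninput_ids = encoded[\"input_ids\"]  # token IDs, shape (1, 256)\nattention_mask = encoded[\"attention_mask\"]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "2.1"
},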
{
"text": "XLM-RoBERTa E _\"# E #$%$ \u2022\u2022\u2022\u2022\u2022\u2022 E ['()] E [+,'] E -$./0 E -$./0 \u2022\u2022\u2022\u2022\u2022\u2022 E -$./0 E -$./0 E 1 E 2 \u2022\u2022\u2022\u2022\u2022\u2022 E 3 E 4 _en nada \u2022\u2022\u2022\u2022\u2022\u2022 [SEP] [CLS] + + + + + + + + + + t 1 t 2 \u2022\u2022\u2022\u2022\u2022\u2022 t 3 t 4 ResNet m 2 m 5 \u2022\u2022\u2022\u2022\u2022\u2022 m 67 m 1 224x224x3 Multimodal Multi-Head Cross Attention FFN Add & Norm Add & Norm Multimodal Multi-Head Cross Attention FFN Add & Norm Add & Norm 2 x 1 2 \u2022\u2022\u2022\u2022\u2022\u2022 3 4 p 1 p 2 \u2022\u2022\u2022\u2022\u2022\u2022 p # p 4 a 1 a 2 \u2022\u2022\u2022\u2022\u2022\u2022 a # a 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "2.1"
},
{
"text": "Figure 1: Our model architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Softmax",
"sec_num": null
},
{
"text": "Similar to the image preprocessing method in Ima-geNet (Deng et al., 2009) , each image is cropped and scaled so that the dimension size of each image is 224\u00d7224\u00d73.",
"cite_spans": [
{
"start": 55,
"end": 74,
"text": "(Deng et al., 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Image Preprocessing",
"sec_num": "2.2"
},
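{
"text": "A minimal sketch of this preprocessing, assuming torchvision; the resize-then-center-crop recipe and the ImageNet normalization statistics are common defaults, not details stated above:\nfrom PIL import Image\nfrom torchvision import transforms\n\npreprocess = transforms.Compose([\n    transforms.Resize(256),\n    transforms.CenterCrop(224),  # crop and scale to 224x224\n    transforms.ToTensor(),  # tensor of shape (3, 224, 224)\n    transforms.Normalize(mean=[0.485, 0.456, 0.406],\n                         std=[0.229, 0.224, 0.225]),\n])\nimage = preprocess(Image.open(\"meme.jpg\").convert(\"RGB\"))  # hypothetical file",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image Preprocessing",
"sec_num": "2.2"
},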
{
"text": "In this section, we will present our model for meme classification on Tamil language. Our model is mainly divided into three layers: encoding layer, multimodal attention layer and prediction layer. Encoding layer is used to obtain word representations and image representations. Multimodal attention layer is used to make the text and corresponding image interact fully with each other. Prediction layer is used to get the probabilities of all classes. Overall model architecture is shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 492,
"end": 500,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3"
},
{
"text": "Text Encoding: Compared with RNN such as LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Chung et al., 2014) , the pre-trained language model like XLM-RoBERTa can learn better contextual representations of text and also helps in fighting vanishing and exploding gradient descent. Therefore, we use XLM-RoBERTa as the encoder of the social media text. Given a sentence X = {x i } n i=0 , where n is equal to 256 and x i is the sum of token embedding, position embedding and language embedding of token at position i and the dimension size of x i is d, we can obtain contextual representation T = {t i } n i=0 for each word:",
"cite_spans": [
{
"start": 46,
"end": 80,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF7"
},
{
"start": 88,
"end": 108,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.1"
},
{
"text": "T = XLM \u2212RoBERT a(X).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.1"
},
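{
"text": "A minimal sketch of Eq. (1), assuming the xlm-roberta-base checkpoint from HuggingFace's transformers (the checkpoint choice is our assumption):\nimport torch\nfrom transformers import XLMRobertaModel, XLMRobertaTokenizer\n\ntokenizer = XLMRobertaTokenizer.from_pretrained(\"xlm-roberta-base\")\nencoder = XLMRobertaModel.from_pretrained(\"xlm-roberta-base\")\nbatch = tokenizer(\"enna nada ...\", max_length=256, padding=\"max_length\",\n                  truncation=True, return_tensors=\"pt\")\nwith torch.no_grad():\n    T = encoder(**batch).last_hidden_state  # (1, 256, d): contextual representations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.1"
},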
{
"text": "Image Encoding: Since ResNet (He et al., 2016) is currently the most widely used image feature extraction network, we use ResNet with 152 layers as the encoder of the image to extract the features for each image I \u2208 R 224\u00d7224\u00d73 . The output of the last layer which is denoted asM = {m i } 49 i=1 is used to represent an image:",
"cite_spans": [
{
"start": 29,
"end": 46,
"text": "(He et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M = ResN et(I).",
"eq_num": "(2)"
}
],
"section": "Encoding Layer",
"sec_num": "3.1"
},
{
"text": "Besides, we use a linear transformation to make the dimension size of the image representations consistent with the word representations: M = W T MM , where W M \u2208 R 2048 * d and the length of M is 49.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.1"
},
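{
"text": "A minimal sketch of Eq. (2) and the linear projection, assuming torchvision's pretrained ResNet-152: the 7x7x2048 feature map of the last convolutional stage is flattened into 49 vectors and projected to dimension d:\nimport torch\nimport torch.nn as nn\nfrom torchvision import models\n\nd = 768\nresnet = models.resnet152(pretrained=True)\n# keep everything up to (but excluding) the average-pooling and fc layers\nbackbone = nn.Sequential(*list(resnet.children())[:-2])\nproj = nn.Linear(2048, d)  # plays the role of W_M\n\nimage = torch.randn(1, 3, 224, 224)  # a preprocessed image (placeholder)\nfeat = backbone(image)  # (1, 2048, 7, 7)\nM_tilde = feat.flatten(2).transpose(1, 2)  # (1, 49, 2048)\nM = proj(M_tilde)  # (1, 49, d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.1"
},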
{
"text": "This layer is the core of our model. Due to the particularity of social media text, although we have obtained representations of the image and text respectively, in many cases we can only understand the meaning of text through the corresponding image, so it is essential to make the text and corresponding image interact fully with each other. To tackle this issue, we designed a multimodal attention layer based on cross attention (Tsai et al., 2019) . Multimodal Multi-Head Cross Attention: Similar to multi-head self attention used in Transformer (Vaswani et al., 2017) , multimodal multi-head cross attention (MMHCA) has three input vectors:",
"cite_spans": [
{
"start": 432,
"end": 451,
"text": "(Tsai et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 550,
"end": 572,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},
{
"text": "Q = \\{w_i\\}_{i=1}^{n_Q}, K = \\{w_i\\}_{i=1}^{n_K}, V = \\{w_i\\}_{i=1}^{n_V}, where each w_i is a d-dimensional vector and n_K is equal to n_V. The attention results are defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},
{
"text": "Attn i (Q i , K i ) = sof tmax( (W Q i Q i )(W K i K T i ) d/m ) (3) V attn i = Attn i (Q i , K i )(W V i V i ) (4) M M HCA(Q, K, V ) = [V attn 1 ; ...; V attn m ]. (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},
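{
"text": "A minimal sketch of Eqs. (3)-(5) as we read them (not the released implementation): m heads of scaled dot-product attention whose queries come from one modality and whose keys and values come from the other:\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass MMHCA(nn.Module):\n    def __init__(self, d, m):\n        super().__init__()\n        assert d % m == 0\n        self.m, self.dk = m, d // m  # per-head dimension d/m\n        self.wq, self.wk, self.wv = (nn.Linear(d, d) for _ in range(3))\n\n    def forward(self, Q, K, V):\n        # Q: (B, n_Q, d); K, V: (B, n_K, d) from the other modality\n        B, nq, _ = Q.shape\n        split = lambda x, w: w(x).view(B, -1, self.m, self.dk).transpose(1, 2)\n        q, k, v = split(Q, self.wq), split(K, self.wk), split(V, self.wv)\n        attn = F.softmax(q @ k.transpose(-2, -1) / self.dk ** 0.5, dim=-1)  # Eq. (3)\n        heads = attn @ v  # Eq. (4): one V_i^attn per head\n        return heads.transpose(1, 2).reshape(B, nq, -1)  # Eq. (5): concatenate heads",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},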
{
"text": "Attentive Word Representations: To obtain word representations for each image, we use MMHCA, which take M as queries and T as keys and values. The attentive word representations are defined as :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R = LN (M + M M HCA(M, T, T ));",
"eq_num": "(6)"
}
],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R = LN (R + F F N (R)),",
"eq_num": "(7)"
}
],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},
{
"text": "where LN is the layer normalization (Ba et al., 2016) and FFN is the feed-forward network. We use MMHCA again to obtain final attentive word representations.",
"cite_spans": [
{
"start": 36,
"end": 53,
"text": "(Ba et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},
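{
"text": "A minimal sketch of one such block, Eqs. (6)-(7), using PyTorch's nn.MultiheadAttention as a stand-in for MMHCA (a simplification; the FFN width is an assumption):\nimport torch\nimport torch.nn as nn\n\nclass CrossAttnBlock(nn.Module):\n    def __init__(self, d, m, d_ff=2048):\n        super().__init__()\n        self.attn = nn.MultiheadAttention(d, m, batch_first=True)\n        self.ffn = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(), nn.Linear(d_ff, d))\n        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)\n\n    def forward(self, query, ctx):\n        # Eq. (6): residual connection and layer norm around cross attention\n        r = self.ln1(query + self.attn(query, ctx, ctx, need_weights=False)[0])\n        # Eq. (7): residual connection and layer norm around the FFN\n        return self.ln2(r + self.ffn(r))\n\n# applied twice, as described above: R = block2(block1(M, T), T)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},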
{
"text": "Attentive Image Representations: To obtain image representations for each word, we use MMHCA, which take T as queries and M as keys and values. The attentive image representations are defined as : ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P = LN (M + M M HCA(T, M, M ));",
"eq_num": "(8)"
}
],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P = LN (P + F F N (P )).",
"eq_num": "(9"
}
],
"section": "Multimodal Attention Layer",
"sec_num": "3.2"
},
{
"text": "To classify each image and text pair, we feed A to a average-over-time pooling layer and then use softmax to get the probabilities of all classes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Layer",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z = AvgP ool(A); (10) P (y|X, I) = sof tmax(W Z + b),",
"eq_num": "(11)"
}
],
"section": "Prediction Layer",
"sec_num": "3.3"
},
{
"text": "where A is the output of multimodal attention layer. We use focal loss to train our model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Layer",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u2212 {X,I}\u2208S \u03b1(1 \u2212 P (y|X, I)) \u03b3 logP (y|X, I),",
"eq_num": "(12)"
}
],
"section": "Prediction Layer",
"sec_num": "3.3"
},
{
"text": "where S refers to the train dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Layer",
"sec_num": "3.3"
},
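{
"text": "A minimal sketch of the prediction layer and the loss, Eqs. (10)-(12). We read A = [R; P] as concatenation along the sequence axis, and the alpha and gamma values below are illustrative defaults rather than our tuned settings:\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nd = 768\nclassifier = nn.Linear(d, 2)  # W and b in Eq. (11)\n\ndef predict_logits(R, P):\n    A = torch.cat([R, P], dim=1)  # (B, 49 + n, d): our reading of A = [R; P]\n    Z = A.mean(dim=1)  # Eq. (10): average-over-time pooling\n    return classifier(Z)  # the softmax of Eq. (11) is folded into the loss below\n\ndef focal_loss(logits, targets, alpha=0.25, gamma=2.0):\n    # Eq. (12); targets holds the gold class indices (troll / not troll)\n    log_p = F.log_softmax(logits, dim=-1)\n    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log P(y|X, I)\n    pt = log_pt.exp()\n    return -(alpha * (1.0 - pt) ** gamma * log_pt).sum()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Layer",
"sec_num": "3.3"
},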
{
"text": "We use Pytorch (Paszke et al., 2017) and Hug-gingFace's transformers (Wolf et al., 2020) to implement our model. We use XLM-RoBERTa and ResNet-152 as encoders for the text and image, respectively. We use mixed precision training based on Apex library 2 . AdamW (Loshchilov and Hutter, 2019) optimizer is used to optimize our model with a learning rate at 2e-5. We use 5-fold cross validation to obtain better performance. We use adversarial training (i.e. FGM (Goodfellow et al., 2015) ) to further improve the robustness and generalization ability of our model. We list all hyper-parameters of our model in Table 4 . we conduct the experiments on NVIDIA Tesla T4 GPUs. Our code is available at Github 3 .",
"cite_spans": [
{
"start": 15,
"end": 36,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 69,
"end": 88,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 261,
"end": 290,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF9"
},
{
"start": 460,
"end": 485,
"text": "(Goodfellow et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 608,
"end": 615,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
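{
"text": "A minimal sketch of FGM as it is commonly implemented for text models (perturbing the word embeddings along the gradient; the epsilon value and embedding parameter name are assumptions):\nimport torch\n\nclass FGM:\n    def __init__(self, model, eps=1.0, emb_name=\"word_embeddings\"):\n        self.model, self.eps, self.emb_name = model, eps, emb_name\n        self.backup = {}\n\n    def attack(self):  # call after loss.backward()\n        for name, p in self.model.named_parameters():\n            if p.requires_grad and self.emb_name in name and p.grad is not None:\n                self.backup[name] = p.data.clone()\n                norm = torch.norm(p.grad)\n                if norm != 0:\n                    p.data.add_(self.eps * p.grad / norm)  # perturb along the gradient\n\n    def restore(self):  # call after the adversarial backward pass\n        for name, p in self.model.named_parameters():\n            if name in self.backup:\n                p.data = self.backup[name]\n        self.backup = {}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},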
{
"text": "The top five results in this task have been shown in Table 2 . Our model achieved 0.55 weighted average F1 score and ranked first. Besides, our result is 0.01 higher than the second. In addition, to prove the effectiveness of multimodal attention layer, we conduct the ablation experiment. The ablation result is shown in Table 3. When multimodal attention layer is removed, the final result drops by 0.01, which indicates that multimodal attention layer is considerably useful.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Ablations",
"sec_num": "4.2"
},
{
"text": "In this paper, we present a multimodal transformer for meme classification on Tamil language. Using ResNet and XLM-RoBERTa for the image and text, we obtain better representations of the image and text. Besides, we design a multimodal attention layer to make the text and corresponding image interact fully with each other. Finally, our model took first place in this task which demonstrates the effectiveness of our model. In future research, we will explore ways to better filter irrelevant information in the image and text to obtain better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/google/sentencepiece",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/NVIDIA/apex 3 https://github.com/codewithzichao/Multimodal-Transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Layer normalization. ArXiv",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimmy Ba, J. Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. ArXiv, abs/1607.06450.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multimodal sarcasm detection in twitter with hierarchical fusion model",
"authors": [
{
"first": "Yitao",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Huiyu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2506--2515",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1239"
]
},
"num": null,
"urls": [],
"raw_text": "Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019. Multi- modal sarcasm detection in twitter with hierarchical fusion model. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2506-2515, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "\u00c7aglar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, \u00c7 aglar G\u00fcl\u00e7ehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. CoRR, abs/1412.3555.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Imagenet: A large-scale hierarchical image database",
"authors": [
{
"first": "J",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2009,
"venue": "2009 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "248--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Com- puter Vision and Pattern Recognition, pages 248- 255.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Explaining and harnessing adversarial examples",
"authors": [
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversar- ial examples. CoRR, abs/1412.6572.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"http://arxiv.org/abs/https://doi.org/10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Focal loss for dense object detection",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Priya",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Dollar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. 2017. Focal loss for dense ob- ject detection. In Proceedings of the IEEE Interna- tional Conference on Computer Vision (ICCV).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "I",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Loshchilov and F. Hutter. 2019. Decoupled weight decay regularization. In ICLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Alban Desmaison, L. Antiga, and A. Lerer",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "DeVito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS-W",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, S. Gross, Soumith Chintala, G. Chanan, E. Yang, Zachary Devito, Zeming Lin, Alban Des- maison, L. Antiga, and A. Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Findings of the shared task on Troll Meme Classification in Tamil",
"authors": [
{
"first": "Shardul",
"middle": [],
"last": "Suryawanshi",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shardul Suryawanshi and Bharathi Raja Chakravarthi. 2021. Findings of the shared task on Troll Meme Classification in Tamil. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A dataset for troll classification of TamilMemes",
"authors": [
{
"first": "Shardul",
"middle": [],
"last": "Suryawanshi",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Verma",
"suffix": ""
},
{
"first": "Mihael",
"middle": [],
"last": "Arcan",
"suffix": ""
},
{
"first": "John Philip",
"middle": [],
"last": "McCrae",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Buitelaar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the WILDRE5-5th Workshop on Indian Language Data: Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "7--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shardul Suryawanshi, Bharathi Raja Chakravarthi, Pranav Verma, Mihael Arcan, John Philip McCrae, and Paul Buitelaar. 2020. A dataset for troll clas- sification of TamilMemes. In Proceedings of the WILDRE5-5th Workshop on Indian Language Data: Resources and Evaluation, pages 7-13, Marseille, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Vistanet: Visual aspect attention network for multimodal sentiment analysis",
"authors": [
{
"first": "Quoc-Tuan",
"middle": [],
"last": "Truong",
"suffix": ""
},
{
"first": "Hady",
"middle": [
"W"
],
"last": "Lauw",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "305--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc-Tuan Truong and Hady W. Lauw. 2019. Vistanet: Visual aspect attention network for multimodal senti- ment analysis. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):305-312.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multimodal transformer for unaligned multimodal language sequences",
"authors": [
{
"first": "Yao-Hung Hubert",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Shaojie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"Pu"
],
"last": "Liang",
"suffix": ""
},
{
"first": "J",
"middle": [
"Zico"
],
"last": "Kolter",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. In Pro- ceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), Florence, Italy.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30, pages 5998-6008.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multiinteractive memory network for aspect based multimodal sentiment analysis",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Wenji",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Guandan",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "371--378",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.3301371"
]
},
"num": null,
"urls": [],
"raw_text": "Nan Xu, Wenji Mao, and Guandan Chen. 2019. Multi- interactive memory network for aspect based multi- modal sentiment analysis. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):371- 378.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Statistics of the train and test datasets."
},
"TABREF2": {
"content": "<table><tr><td>Model</td><td>Metric</td><td colspan=\"3\">Precision Recall F1-score</td></tr><tr><td>Our submission</td><td>weighted avg</td><td>0.57</td><td>0.60</td><td>0.55</td></tr><tr><td colspan=\"2\">w/o Multimodal Attention Layer weighted avg</td><td>0.56</td><td>0.59</td><td>0.54</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Top-5 of the official leader-board in shared task for meme classification in Tamil language. Systems are ordered by weighted average F1 score."
},
"TABREF3": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Ablation study of our model in the test dataset. w/o means without."
},
"TABREF5": {
"content": "<table><tr><td>After obtaining attentive word representations and</td></tr><tr><td>attentive image representations, we concatenate</td></tr><tr><td>them as the output of this layer: A = [R; P ].</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Hyper-parameters of our model."
}
}
}
}