{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:12:38.984034Z"
},
"title": "Logically at the Constraint 2022: Multimodal role labelling",
"authors": [
{
"first": "Ludovic",
"middle": [],
"last": "Kun",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jayesh",
"middle": [],
"last": "Bankoti",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "David",
"middle": [],
"last": "Kiskovski",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our system for the Constraint 2022 challenge at ACL 2022, whose goal is to detect which entities are glorified, vilified or victimised, within a meme. The task should be done considering the perspective of the meme's author. In our work, the challenge is treated as a multi-class classification task. For a given pair of a meme and an entity, we need to classify whether the entity is being referenced as Hero, a Villain, a Victim or Other. Our solution combines (ensembling) different models based on Unimodal (Text only) model and Multimodal model (Text + Images). We conduct several experiments and benchmarks different competitive pre-trained transformers and vision models in this work. Our solution, based on an ensembling method, is ranked first on the leaderboard and obtains a macro F1-score of 0.58 on test set. The code for the experiments and results are available at here .",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our system for the Constraint 2022 challenge at ACL 2022, whose goal is to detect which entities are glorified, vilified or victimised, within a meme. The task should be done considering the perspective of the meme's author. In our work, the challenge is treated as a multi-class classification task. For a given pair of a meme and an entity, we need to classify whether the entity is being referenced as Hero, a Villain, a Victim or Other. Our solution combines (ensembling) different models based on Unimodal (Text only) model and Multimodal model (Text + Images). We conduct several experiments and benchmarks different competitive pre-trained transformers and vision models in this work. Our solution, based on an ensembling method, is ranked first on the leaderboard and obtains a macro F1-score of 0.58 on test set. The code for the experiments and results are available at here .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The rapid rise in the amount of harmful content being spread online is becoming a major societal challenge, with still unknown negative consequences. Large resources have been invested by many actors in the field of social media to shield users from harmful content. It is imperative to understand in a systematic way how information is spread, and be able to scalably monitor existing narratives and flag hateful ones circulating using technology. One way this is done is using entity recognition coupled with entity sentiment (Kiritchenko et al., 2021) . The former technique is to support OSINT(open source intelligence) analysts in understanding who or what are the subjects of discussion, and the latter automates the process of analysing if they are coupled with positive or negative feelings, in order to assist with understanding the stance of online users on specific topics. Efforts to tackle this challenge were mainly focused on English-language text-based data formats such as articles (Wankhade et al., 2022) . However, the complexity of content being posted online has drastically increased over time, and the challenge of harmful content detection now extends to multimedia, including memes (Alam et al., 2021) . The emergence and proliferation of memes on social media have made their analysis a crucial challenge to understand online interactions. A point can also be made about the study of entities sentiment online, as the polarising portrayal of famous (or infamous) personalities or institutions often give rise to inflammatory views and content.",
"cite_spans": [
{
"start": 528,
"end": 554,
"text": "(Kiritchenko et al., 2021)",
"ref_id": "BIBREF10"
},
{
"start": 999,
"end": 1022,
"text": "(Wankhade et al., 2022)",
"ref_id": "BIBREF26"
},
{
"start": 1207,
"end": 1226,
"text": "(Alam et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Extracting insights from memes is a novel field and still has a lot of opportunities for growth. The multimodality of text and image adds a layer of complexity which contains more information, but is also harder to extract. Indeed each modality needs to understand their intrinsic properties but also capture cross-modal semantic understanding (M\u00fcller-Budack et al., 2021) . This paper delves into the field of multimodal semantic role labelling, a new task with particular challenges.",
"cite_spans": [
{
"start": 344,
"end": 372,
"text": "(M\u00fcller-Budack et al., 2021)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Examples of the multimodal dataset (Sharma et al., 2022) used to tackle this problem and provided as part of the CONSTRAINT competition are presented in Figure 1 . The first sample shows a meme image displaying two politicians from opposite parties separated on two sides of the image, with text around them, as well as the associated JSON line input with the extracted text from the image (also known as Optical Character Recognition or OCR), as well as the entities' mentioned labelled roles. In this case, all entities are referenced in the text of the image. In the second sample, however, we notice that not all are mentioned in the text, and visual information is needed to classify all entities.",
"cite_spans": [
{
"start": 35,
"end": 56,
"text": "(Sharma et al., 2022)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 153,
"end": 161,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Depending on the textual information in the image, textual role classification is insufficient as some memes' underlying message requires under- standing of the visual information it contains, especially with the use of humour and sarcasm often associated with the format. The work done in this competition aims at finding unique and effective ways of tackling harmful meme classification as seen in the current social media space. An algorithm is designed for the task of role labelling for memes using a twin model (and ensemble) method. This Siamese network is constructed by combining the output of pre-trained State-of-the-Art (SoTA) models for both the visual components in the form of a CNN (Efficientnet-B7 (Tan and Le, 2019) ) and for textual components using a transformer (DeBERTa (He et al., 2020) ). The feature outputs obtained from both branches are then combined to obtain a final solution. Data analysis and investigation into potential bias in the dataset are also conducted to contextualise the task and present the difficulties of curating accurate multimodal datasets aimed at tackling the task for data in the wild (Gao et al., 2021) . In this paper, an overview of past work in the field is presented (section 2), followed by a deep dive into the problem statement as well as the method followed to respond to it (section 3), then data analysis (section 4). Experiments ran are presented in section 5, with results and discussion in section 6, and finally conclusion (section 7).",
"cite_spans": [
{
"start": 715,
"end": 733,
"text": "(Tan and Le, 2019)",
"ref_id": "BIBREF24"
},
{
"start": 792,
"end": 809,
"text": "(He et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 1137,
"end": 1155,
"text": "(Gao et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been some work done with respect to semantic role labelling in text. The idea of ABSA(Aspect Based Sentiment Analysis) works along the same line. Hence, utilisation of De-BERTa has provided the SoTA results (Silva and Marcacini) due to the disentangled attention improving the focus more on the positional embeddings rather than just based on the word embeddings. Hence, improved results were also obtained in various SNLI task for this algorithm (He et al., 2020) .They are nowadays very popular in Natural Language Processing (NLP) as they usually get SoTA for a variety of NLP tasks such as classification, sentiment analysis, Named Entity Recognition, Translation, Question Answering, etc.",
"cite_spans": [
{
"start": 458,
"end": 475,
"text": "(He et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Classifying memes into relevant classes is a field that has got much more interest over the past few years. The Facebook Hateful meme competition (Kiela et al., 2020 ) was a very publicised initiative to try and augment the field's capabilities. The task was a binary classification of hateful/not hateful meme based on a dataset curated by META. The winning solutions all comprised of ensembles of multimodal models. The Memotion competitions (Sharma et al., 2020) are another example of work done in the meme space. This time, the classification was based on sentiment (positive, negative, neutral), as well as the strength of the sentiment and the underlying aim of the meme (satirical, humour or harmful). Multimodal models here also obtained the top scores.",
"cite_spans": [
{
"start": 146,
"end": 165,
"text": "(Kiela et al., 2020",
"ref_id": null
},
{
"start": 444,
"end": 465,
"text": "(Sharma et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Multimodal models have seen a change over the past few years from twin networks like Siamese (Gu et al., 2018) to models pretrained on multiple multimodal tasks such as image captioning and visual question answering using transformers (Devlin et al., 2018) . Object detection is used in these models to extract image features thanks to pre-trained two-staged detectors Faster R-CNN model (Ren et al., 2015)), or single-stage detectors (YOLO V3 (Adarsh et al., 2020) ). Inspired by BERT (Devlin et al., 2018) , models such as Uniter and VisualBERT (Li et al., 2019b ) use a transformer architecture to jointly encode text and images, while LXMERT (Tan and Bansal, 2019) and ViLBERT (Lu et al., 2019) innovated by splitting their architectures in two, where a different transformer is applied to images and text individually before the features are combined by a third transformer. OSCAR (Object-Semantics Aligned Pre-training )( (Li et al., 2020) ) add in the text input the class objects detected from the images by a Faster R-CNN detector called object tags. The use of object tags in images as anchor points, significantly ease the learning of alignments during the pretraining. These models' effectiveness are demonstrated through their SoTA results on different multimodal dataset tasks such as NLVR2. This can be attributed to the models' increased capability to understand cross-modal correlations. However, these models are only as good as the data they've been pretrained on, which will present a challenge for the use case of the competition tackled in this paper. Another point is that the architectures of the textual streams of these models are a few years old (such as BERT) and inferior to the current SoTA (DeBERTa).",
"cite_spans": [
{
"start": 93,
"end": 110,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 235,
"end": 256,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 444,
"end": 465,
"text": "(Adarsh et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 486,
"end": 507,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 547,
"end": 564,
"text": "(Li et al., 2019b",
"ref_id": "BIBREF13"
},
{
"start": 646,
"end": 668,
"text": "(Tan and Bansal, 2019)",
"ref_id": "BIBREF23"
},
{
"start": 681,
"end": 698,
"text": "(Lu et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 928,
"end": 945,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The CONSTRAINT competition is a multimodal semantic role labelling multi-class classification problem. The aim is to classify the role of entities present in a meme using the image, its textual information and the entities it contains. The different classes are (\"Hero\", \"Villain\", \"Victim\", \"Other\"). The label applied for each entity depends on how the entity is presented in the meme:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "3.1"
},
{
"text": "Hero: The entity is glorified Villain: the entity is vilified Victim: the entity is victimised, Other: none of the above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "3.1"
},
{
"text": "Our final model is an ensemble of 5 classifiers based on existing pretrained Unimodal (text) and Multimodal (text + images) architectures. (see figure 3 ) An ensemble combine several models to obtain a better generalised one. It usually gives a boost of performance in exchange for a more time-consuming model compared to more shallow model. Different methods of ensembling exist such as bagging, boosting, stacking, etc. We consider that this strategy will be very helpful to reduce the overfitting given the small number of instances we have, and how imbalanced the dataset is. To combine our models, we average the predictions of our individual models. We experimented a few unimodal architectures based on transformers (Vaswani et al., 2017 ) such as DeBERTa and RoBerta using only texts (OCR) and entities provided. The idea here was to see how much performance could be obtained just by textual information. These models are based on self-attention layers and an improved version of the BERT method pretrained on millions of sentences (Devlin et al., 2018) for language modelling. We fine-tuned on these models and found DeBERTa to be performing the best among the pretrained BERT models. For the fine-tuning, the last FC layer added over pooler layer of DeBERTa. The last layer was a FC layer of size 4 to provide us with the respective role label. The architecture for this structure is given (see figure 2) .",
"cite_spans": [
{
"start": 723,
"end": 744,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF25"
},
{
"start": 1041,
"end": 1062,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Ensembling :",
"sec_num": "3.2"
},
{
"text": "We also experimented Multi Modal models which include as input data : images and texts (OCR + entity). We tried different approaches:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Modal :",
"sec_num": "3.2.2"
},
{
"text": "(1) The \"Naive\" approach consisted in extracting text features with a strong Language model -De-BERTa -and concatenating it with visual features with Convolutional Neural Network -EfficientNet-B7. We added on top of these concatenated features a Linear Layer to predict the class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Modal :",
"sec_num": "3.2.2"
},
{
"text": "(2) The second approach was based on fine-tuning the whole image-text multimodal model. We experimented with two models: MMBT transformers ( Multimodal Bitransformers ) (Kiela et al., 2019) and VisualBERT (Li et al., 2019b) which has been pre-trained on classifying multimodal experiments. (i) The MMBT transformer model utilise bert-baseuncased model as text encoder and the CLIP model (Radford et al., 2021) as image encoder. The main idea was to reuse the BERT text model we had finetuned for the task and freeze the 12 encoder layers. Further we fine-tuned the MMBT multimodal model by projecting the image embeddings to text token space. (ii) The VisualBERT was pretrained model (Li et al., 2019b) for image-and-language tasks like VQA, VCR, NLVR2, and Flickr30Ks. We used the detectron2 embeddings (Ren et al., 2015) as image encodings with bert-base-uncased as text encoder to finetune the model.",
"cite_spans": [
{
"start": 169,
"end": 189,
"text": "(Kiela et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 205,
"end": 223,
"text": "(Li et al., 2019b)",
"ref_id": "BIBREF13"
},
{
"start": 387,
"end": 409,
"text": "(Radford et al., 2021)",
"ref_id": "BIBREF18"
},
{
"start": 684,
"end": 702,
"text": "(Li et al., 2019b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Modal :",
"sec_num": "3.2.2"
},
{
"text": "(3) The last architecture used was ViLT ) (Vision and Language Transformers) which is one of the simplest architectures for a vision and language model. ViLT is composed of a transformer module which extracts and processes textual and visual features without using separate embedder as it can be the case for MMBT for instance. That method gave a significant runtime and parameter optimisation. (see figure 5) 3.3 Meta Data extractions :",
"cite_spans": [],
"ref_spans": [
{
"start": 400,
"end": 409,
"text": "figure 5)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-Modal :",
"sec_num": "3.2.2"
},
{
"text": "We attempted to extract meta data information from images in order to improve the insight from those. Indeed, using only the OCR was sometimes insufficient because the entities were not always present in the text. Multiple strategies were investigated for gathering insights from images.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Modal :",
"sec_num": "3.2.2"
},
{
"text": "The first observation made was in the image below (see figure 4) , the MEME is talking about Donald Trump (who is considered as a villain in the author's view). However he is not mentioned explicitly. His face is visible in the MEME though. That is why we decided to use a celebrities face detector which detects if a select famous face is visible in the MEME. The model is composed of two main steps : (i) a face detector based on the popular MTCNN face detector ((Zhang et al., 2016)) (ii) the face recognition part is based on a ResNet Architecture. We consider adding the face in the jsonl provided by the host when the confidence score of the face celebrities was above 0.95. The celebrity detector comes from Giphy's github.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 64,
"text": "figure 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Celebrity Detector :",
"sec_num": "3.3.1"
},
{
"text": "The second observation made was that a MEME can contain multiples \"sub images\". In fact, as in the figure 4, the MEME contains two images in it. A \"sub images\" detector was implemented based on YoloV5 (https://github.com/ultralytics/yolov5). We generated an artificial dataset, based on the Hateful MEME competition (Kiela et al., 2020) , where we filtered and kept only the MEMEs with one image. Different single images were then combined to create one artificial MEME, with associated bounding boxes of the multiple subimages it contained. For the evaluation, 100 manually labelled images were used. The YOLO checkpoint is shared in our github solution. Our original idea was to extract with our detector each sub images from the MEME and associate each sentence of the OCR to the correct sub image with the name of the famous face if it existed. However, the OCR provided did not contain the coordinate of the sentence. We attempted to make the OCRed text match an open source OCR framework containing word coordinates, which yielded poor results. Therefore, the final multimodal model used the sub image as well as the face name into the text processing. The input of the transformer for text data was then as follows : \" ",
"cite_spans": [
{
"start": 316,
"end": 336,
"text": "(Kiela et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sub Image Detector",
"sec_num": "3.3.2"
},
{
"text": "The competition dataset consists of 2 memes subsets, one about US politics, and the other about Covid-19, totalling 5552 images with associated OCR and entity annotation in the training set, and 650 in the validation set. This size is very small to expect to build any robust SoTA vision or multimodal capabilities, training from scratch. The distributions of the 4 labels are heavily imbalanced (see table 1). Over three quarters of the entities belong to the \"other\" class, and of the remaining classes, \"villain\" appears around twice as much as both the \"hero\" and \"victim\" class combined. An analysis of the entities in the dataset was undertaken and they were observed to be well balanced amongst the 4 classes. Indeed, as can be expected of using data from the political domain over the past few years, examples of common mentions were of \"Donald Trump\", \"Barrack Obama\", \"The Republicans\", \"The Democrats\". The fact that they were all amongst the most cited entities in each label indicates the sources used to curate the dataset was unbiased politically. The OCRed text was obtained by running the Google OCR API on the images, which in some examples leads to imperfect text detection or extraction. These two issues materialise in the form of either poorly clustered text paragraphs into the appropriate text boxes, meaning sentences from two separate paragraphs would be concatenated together midway through, but also through more basic spelling mistakes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "Another point relevant to meme analysis is the presence of sub images inside each image. An image might itself contain two separate images which tell a different story, often contrasting between sentiments of entities in each, such as in figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 246,
"text": "figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "A big challenge with this task of entity classification is detecting where the entity is mentioned whether in the OCR or in the image. Table 3 shows top-n entity other villain hero victim 1 donald trump donald trump donald trump donald trump 2 coronavirus joe biden barack obama america 3 joe biden democratic party green party people 4 barack obama republican party joe biden barack obama 5 mask barack obama libertarian party democratic party 4 , have one of the entities to classify not present in neither the OCR nor the image, and must be classified from understanding of context, which makes the task more difficult.",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 358,
"text": "Table 3 shows top-n entity other villain hero victim 1 donald trump donald trump donald trump donald trump 2 coronavirus joe biden barack obama america 3 joe biden democratic party green party people 4",
"ref_id": "TABREF1"
},
{
"start": 467,
"end": 468,
"text": "4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "To train and evaluate our different models, we used the Google Cloud Service with VM using the V100 GPU (16GB) and A100(40GB). We use the famous Pytorch framework with the Huggingface library in python. All our training used mixed precision and gradient accumulation in order to speed up some training time and allow larger model training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting :",
"sec_num": "5.1"
},
{
"text": "Data Analysis was performed in order to understand the underlying problem better and find potential imbalances that could be leveraged for higher performances. The distribution of the number of entities per class, as well as each individual entity for each class was computed. Based on an a given entity, the aim was to try and predict which class it would most likely belong. An issue we came across was that some entities were mentioned in different ways: \"americans\" vs \"american people\". A rulebased approach was incorporated in an attempt to group these similar terms together. Analysis was running on the OCR as well as the output of the celebrity detection model to determine if the entity was mentioned inside the text, in the image, both or neither. References to single entities in the textual format would vary, one example being for the entity \"Donald Trump\", which would be referenced as \"Trump\", \"donald\", \"Donald Trump\" to name a few. A rule based classifier was implemented to group these terms together for the entities that showed up most frequently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Analysis :",
"sec_num": "5.2"
},
{
"text": "A prediction was made based on the heuristics of the imbalances found to establish a baseline model, by classifying all the entities as \"other\", which is the class which contains over 75% of entities. Learning models would have to beat the accuracy of this rule based baseline to add value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Analysis :",
"sec_num": "5.2"
},
{
"text": "Only one augmentation was used during the training. The augmentation was applied to the entity which needed to be classified. In fact, the entities provided were all without any punctuation and in lowercase format. We created a simple script which found the entity in the original text. The original text could contain punctuation and/or uppercase letter. We used this augmentation for the training, not the inference of the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmentations :",
"sec_num": "5.3"
},
{
"text": "We trained a few competitive transformer architectures on text-only data, DeBERTa-v3 and RoBERTa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unimodal NLP :",
"sec_num": "5.4"
},
{
"text": "Two experiements were conducted for DeBERTa (1) The first was a direct approach where we found the role for the entity based on the OCR extracted by the google model. The input of the transformer was as follows : \"[CLS] Sentence OCR [SEP] entity to classify [SEP]\" (2) The second approach consisted of incorporating image signals in the unimodal training. We ran the celebrity face detection algorithm and further added these faces names text with the extracted OCR. The input of the transformer was as follow : \"[CLS] Sentence OCR \"\\n\" face name [SEP] entity to classify [SEP]\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeBERTa",
"sec_num": "5.4.1"
},
{
"text": "We utilized both DeBERTa-small and DeBERTalarge for these experiments. During the training, a batch size of 16 was used, with a sequence length of 128 and a linear scheduler where the learning rate was reduced linearly during the training. The initial learning rate was 1e \u2212 5, gradient accumulation is set at 3 epochs, and the optimizer used was AdamW. We trained these models for 6-7 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeBERTa",
"sec_num": "5.4.1"
},
{
"text": "A batch size of 8 was used, with a sequence length of 275 and a linear scheduler where the learning rate was reduced linearly during the training. The initial learning rate was 5e \u2212 6, and the optimizer used was AdamW. We trained these models for 6-7 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RoBERTa large",
"sec_num": "5.4.2"
},
{
"text": "We used a batch size of 4 (A100 GPU), with a sequence length of 275. As a unimodal model, we use the face name in the text input processing. We use 4 sub images when they exist and the MEME image. We use an attention system inspired by the Word Attention in (Li et al., 2019a) , before concatenating the image features with the text features. We use a linear scheduler where the learning rate is reduced linearly during the training. The initial learning rate is 5e \u2212 6, gradient accumulation is set at 3 epochs, and the optimizer used is AdamW. We trained these models for 7-8 epochs with early stopping of 2 epoch.",
"cite_spans": [
{
"start": 258,
"end": 276,
"text": "(Li et al., 2019a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Merging:",
"sec_num": "5.5.1"
},
{
"text": "We use a batch size of 4, with a sequence length of 275. As unimodal model, we use the face name in the text input processing. We don't use here a linear scheduler, but ReduceLROnPlateau where the learning rate is reduced by a factor of 0.5 when there is no improvement during 5 epochs. The initial learning rate is 2e \u2212 5, and the optimizer used is Adam. We trained these models for 7-8 epochs with early stopping of 2 epoch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ViLT:",
"sec_num": "5.5.2"
},
{
"text": "We use a batch size of 16, with a sequence length of 128. As for multimodal model, we use the image embeddings obtained from CLIP (Radford et al., 2021) and detectron2 (Ren et al., 2015) model individually for MMBT and VisualBERT. The text model used in both the architecture is bert. We use a linear scheduler where the learning rate is reduced linearly during the training. The initial learning rate is 1e \u2212 5, gradient accumulation is set at 3 epochs, and the optimizer used is AdamW. We trained these models for 7-8 epochs with early stopping of 2 epoch.",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "(Radford et al., 2021)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MultiModal : MMBT and VisualBERT",
"sec_num": "5.5.3"
},
{
"text": "To improve the robustness of our solution we decide to combine 5 of our models (table 4) . We chose the models to combine based on the results of the validation score and also the diversity they could bring. For instance, we did not select DeBERTa-v3-small because it is just a smaller version of DeBERTa-v3-large. We select only two multimodal models, as most of them perform quite badly compared to the unimodal. Otherwise they would just harm the ensemble.",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 88,
"text": "(table 4)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Ensembling :",
"sec_num": "5.6"
},
{
"text": "Just the simple experiment classifying all entities as \"other\" yielded 0.21 f1 score. We experimented with various models starting with just the textbased model, further adding image signals to using the image embeddings and finally a fully imageand-language based multimodal model to evaluate the model architecture efficiency in predicting a low resource multimodal problem. Here are some observations :-(1) Unimodal -We can see the difference in results moving from \"DeBERTa-v3-small\" to \"DeBERTa-v3-large\" in Table 4 . We can also see 2% improvement in the model when we tried to add image signal naively by adding the celebrity face name in text.",
"cite_spans": [],
"ref_spans": [
{
"start": 513,
"end": 520,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "6"
},
{
"text": "(2) Multi-Modal -We can see that multimodal model under performed a lot as seen in Table 4 . We tried to fine-tune the Visual-BERT model and the mmbt model i.e. pre-trained vision-and-language model but they seem to under perform due to the lack of pre-training data. As they had been pretrained on much less data and very different problem like VQA , it failed to capture the model understanding required for the transfer learning. So as to solve this issue we went ahead and utilised trained \"DeBERTa-v3-large\" model final output layer embeddings and concatenated them with pooled subimage embedding with EfficientNetB7. Thus we utilised the transfer learning from both the models to give us the optimum results.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "6"
},
{
"text": "(3) Ensemble - The ensemble was our final approach, in which we combined all the different ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "6"
},
{
"text": "We described our participation in the CONSTRAINT 2022 Shared Task on \"Detecting the Hero, the Villain, and the Victim in Memes\" and the implementation of various models. Our ensemble-based system outperforms all individual models on both the validation and test sets. A key challenge in this task is the small amount of data available for training, so transfer learning provides the best results. The best-performing model in this competition is a simple average of the ViLT, RoBERTa-large, DeBERTa-large, naive multimodal and DeBERTa-xlarge models. The ensemble performs best because the data size is small while the models we use are large: large models allow for better transfer learning but also overfit somewhat, and averaging their predictions reduces this overfit, much as in boosted-tree systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We found two major challenges in this problem: (i) the entities were sometimes present in neither the image nor the text; (ii) the amount of data available was insufficient for learning this implicit signal, which ultimately limits the performance of our deep learning architectures. Creating a dataset for real-world multimodal problems, particularly for the natural language inference problem of role labelling, is challenging (Le Bras et al., 2020) . We appreciate the work of the CONSTRAINT 2022 organizers; nevertheless, more elaborate and extensive data would make this dataset more suitable for benchmarking. As this is an emergent research field, we hope our extensive model analysis and proposed solutions can serve as baselines and inspire further work.",
"cite_spans": [
{
"start": 435,
"end": 457,
"text": "(Le Bras et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "A Appendix I ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Yolo v3-tiny: Object detection and recognition using one stage improved model",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Adarsh",
"suffix": ""
},
{
"first": "Pratibha",
"middle": [],
"last": "Rathi",
"suffix": ""
},
{
"first": "Manoj",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS)",
"volume": "",
"issue": "",
"pages": "687--694",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Adarsh, Pratibha Rathi, and Manoj Kumar. 2020. Yolo v3-tiny: Object detection and recognition using one stage improved model. In 2020 6th International Conference on Advanced Computing and Communi- cation Systems (ICACCS), pages 687-694. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A survey on multimodal disinformation detection",
"authors": [
{
"first": "Firoj",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Cresci",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Silvestri",
"suffix": ""
},
{
"first": "Dimiter",
"middle": [],
"last": "Dimitrov",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Da San Martino",
"suffix": ""
},
{
"first": "Shaden",
"middle": [],
"last": "Shaar",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Firooz",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.12541"
]
},
"num": null,
"urls": [],
"raw_text": "Firoj Alam, Stefano Cresci, Tanmoy Chakraborty, Fab- rizio Silvestri, Dimiter Dimitrov, Giovanni Da San Martino, Shaden Shaar, Hamed Firooz, and Preslav Nakov. 2021. A survey on multimodal disinforma- tion detection. arXiv preprint arXiv:2103.12541.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Uniter: Learning universal imagetext representations",
"authors": [
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Linjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Licheng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"El"
],
"last": "Kholy",
"suffix": ""
},
{
"first": "Faisal",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Uniter: Learning universal image- text representations.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Logically at the factify 2022: Multimodal fact verification",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Hella-Franziska",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Stylianos",
"middle": [],
"last": "Oikonomou",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kiskovski",
"suffix": ""
},
{
"first": "Anil",
"middle": [],
"last": "Bandhakavi",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2112.09253"
]
},
"num": null,
"urls": [],
"raw_text": "Jie Gao, Hella-Franziska Hoffmann, Stylianos Oikonomou, David Kiskovski, and Anil Bandhakavi. 2021. Logically at the factify 2022: Multimodal fact verification. arXiv preprint arXiv:2112.09253.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Look, imagine and match: Improving textual-visual cross-modal retrieval with generative models",
"authors": [
{
"first": "Jiuxiang",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Jianfei",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [
"R"
],
"last": "Joty",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "7181--7189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiuxiang Gu, Jianfei Cai, Shafiq R Joty, Li Niu, and Gang Wang. 2018. Look, imagine and match: Im- proving textual-visual cross-modal retrieval with gen- erative models. In Proceedings of the IEEE Confer- ence on Computer Vision and Pattern Recognition, pages 7181-7189.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deberta: Decoding-enhanced bert with disentangled attention",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.03654"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Supervised multimodal bitransformers for classifying images and text",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Suvrat",
"middle": [],
"last": "Bhooshan",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Firooz",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Testuggine",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.02950"
]
},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, and Davide Testuggine. 2019. Supervised multimodal bitransformers for classifying images and text. arXiv preprint arXiv:1909.02950.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The hateful memes challenge: Detecting hate speech in multimodal memes",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Firooz",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Mohan",
"suffix": ""
},
{
"first": "Vedanuj",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Pratik",
"middle": [],
"last": "Ringshia",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Testuggine",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "2611--2624",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. 2020. The hateful memes chal- lenge: Detecting hate speech in multimodal memes. Advances in Neural Information Processing Systems, 33:2611-2624.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Vilt: Vision-and-language transformer without convolution or region supervision",
"authors": [
{
"first": "Wonjae",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Bokyung",
"middle": [],
"last": "Son",
"suffix": ""
},
{
"first": "Ildoo",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "5583--5594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolu- tion or region supervision. In International Con- ference on Machine Learning, pages 5583-5594. PMLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Confronting abusive language online: A survey from the ethical and human rights perspective",
"authors": [
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Isar",
"middle": [],
"last": "Nejadgholi",
"suffix": ""
},
{
"first": "Kathleen C",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 2021,
"venue": "Journal of Artificial Intelligence Research",
"volume": "71",
"issue": "",
"pages": "431--478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svetlana Kiritchenko, Isar Nejadgholi, and Kathleen C Fraser. 2021. Confronting abusive language online: A survey from the ethical and human rights per- spective. Journal of Artificial Intelligence Research, 71:431-478.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adversarial filters of dataset biases",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1078--1088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Le Bras, Swabha Swayamdipta, Chandra Bha- gavatula, Rowan Zellers, Matthew Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial filters of dataset biases. In International Conference on Machine Learning, pages 1078-1088. PMLR.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bidirectional lstm with hierarchical attention for text classification",
"authors": [
{
"first": "Jianping",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yimou",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Huaye",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC)",
"volume": "1",
"issue": "",
"pages": "456--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianping Li, Yimou Xu, and Huaye Shi. 2019a. Bidirec- tional lstm with hierarchical attention for text clas- sification. In 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Con- ference (IAEAC), volume 1, pages 456-459. IEEE.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Visualbert: A simple and performant baseline for vision and language",
"authors": [
{
"first": "Liunian Harold",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Da",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.03557"
]
},
"num": null,
"urls": [],
"raw_text": "Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019b. Visualbert: A simple and performant baseline for vision and lan- guage. arXiv preprint arXiv:1908.03557.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Oscar: Object-semantics aligned pre-training for vision-language tasks",
"authors": [
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaowei",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Pengchuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lijuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Houdong",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks",
"authors": [
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.02265"
]
},
"num": null,
"urls": [],
"raw_text": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolin- guistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multimodal news analytics using measures of cross-modal entity and context consistency",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "M\u00fcller-Budack",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Theiner",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Diering",
"suffix": ""
},
{
"first": "Maximilian",
"middle": [],
"last": "Idahl",
"suffix": ""
},
{
"first": "Sherzod",
"middle": [],
"last": "Hakimov",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Ewerth",
"suffix": ""
}
],
"year": 2021,
"venue": "International Journal of Multimedia Information Retrieval",
"volume": "10",
"issue": "2",
"pages": "111--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric M\u00fcller-Budack, Jonas Theiner, Sebastian Diering, Maximilian Idahl, Sherzod Hakimov, and Ralph Ew- erth. 2021. Multimodal news analytics using mea- sures of cross-modal entity and context consistency. International Journal of Multimedia Information Re- trieval, 10(2):111-125.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning transferable visual models from natural language supervision",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jong",
"middle": [
"Wook"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Hallacy",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Ramesh",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Goh",
"suffix": ""
},
{
"first": "Sandhini",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Askell",
"suffix": ""
},
{
"first": "Pamela",
"middle": [],
"last": "Mishkin",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "8748--8763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sas- try, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Faster r-cnn: Towards real-time object detection with region proposal networks",
"authors": [
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "28",
"issue": "",
"pages": "91--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28:91-99.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Semeval-2020 task 8: Memotion analysis - the visuo-lingual metaphor!",
"authors": [
{
"first": "Chhavi",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Deepesh",
"middle": [],
"last": "Bhageria",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Pykl",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Viswanath",
"middle": [],
"last": "Pulabaigari",
"suffix": ""
},
{
"first": "Bjorn",
"middle": [],
"last": "Gamback",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.03781"
]
},
"num": null,
"urls": [],
"raw_text": "Chhavi Sharma, Deepesh Bhageria, William Scott, Srinivas Pykl, Amitava Das, Tanmoy Chakraborty, Viswanath Pulabaigari, and Bjorn Gamback. 2020. Semeval-2020 task 8: Memotion analysis-the visuo- lingual metaphor! arXiv preprint arXiv:2008.03781.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Findings of the constraint 2022 shared task on detecting the hero, the villain, and the victim in memes",
"authors": [
{
"first": "Shivam",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Tharun",
"middle": [],
"last": "Suresh",
"suffix": ""
},
{
"first": "Atharva",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Himanshi",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Md",
"middle": [
"Shad"
],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations -CONSTRAINT 2022",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shivam Sharma, Tharun Suresh, Atharva Kulkarni, Hi- manshi Mathur, Preslav Nakov, Md. Shad Akhtar, and Tanmoy Chakraborty. 2022. Findings of the con- straint 2022 shared task on detecting the hero, the villain, and the victim in memes. In Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations - CONSTRAINT 2022, Collocated with ACL 2022.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Aspectbased sentiment analysis using bert with disentangled attention",
"authors": [
{
"first": "Emanuel",
"middle": [
"H"
],
"last": "Silva",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [
"M"
],
"last": "Marcacini",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emanuel H Silva and Ricardo M Marcacini. Aspect- based sentiment analysis using bert with disentangled attention.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Lxmert: Learning cross-modality encoder representations from transformers",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.07490"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from trans- formers. arXiv preprint arXiv:1908.07490.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Efficientnet: Rethinking model scaling for convolutional neural networks",
"authors": [
{
"first": "Mingxing",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "6105--6114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingxing Tan and Quoc Le. 2019. Efficientnet: Re- thinking model scaling for convolutional neural net- works. In International conference on machine learn- ing, pages 6105-6114. PMLR.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A survey on sentiment analysis methods, applications, and challenges",
"authors": [
{
"first": "Mayur",
"middle": [],
"last": "Wankhade",
"suffix": ""
},
{
"first": "Annavarapu",
"middle": [
"Chandra",
"Sekhara"
],
"last": "Rao",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Kulkarni",
"suffix": ""
}
],
"year": 2022,
"venue": "Artificial Intelligence Review",
"volume": "",
"issue": "",
"pages": "1--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mayur Wankhade, Annavarapu Chandra Sekhara Rao, and Chaitanya Kulkarni. 2022. A survey on senti- ment analysis methods, applications, and challenges. Artificial Intelligence Review, pages 1-50.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Joint face detection and alignment using multitask cascaded convolutional networks",
"authors": [
{
"first": "Kaipeng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhanpeng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Qiao",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE signal processing letters",
"volume": "23",
"issue": "10",
"pages": "1499--1503",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. 2016. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE signal processing letters, 23(10):1499-1503.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "CONSTRAINT dataset example",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Figure 2: UniModal Model",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Final model used for the Constraint22 competition.",
"num": null
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>split</td><td colspan=\"4\">other villain hero victim</td></tr><tr><td>train</td><td>13702</td><td>2427</td><td>475</td><td>910</td></tr><tr><td colspan=\"2\">train (ratio) 0.782</td><td colspan=\"3\">0.139 0.027 0.052</td></tr><tr><td>val</td><td>1589</td><td>305</td><td>54</td><td>121</td></tr><tr><td>val (ratio)</td><td>0.768</td><td colspan=\"3\">0.147 0.026 0.058</td></tr><tr><td colspan=\"5\">Table 1: distribution class of Constraint22 dataset</td></tr><tr><td colspan=\"4\">the top 5 most common entity per class.</td><td/></tr></table>",
"text": "Figure 4: Constraint dataset example: the first meme contains two sub-images, whereas the second meme does not contain the entity we are looking for."
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>split</td><td>matching ratio</td></tr><tr><td>train</td><td>0.572</td></tr><tr><td>val</td><td>0.602</td></tr></table>",
"text": "Top 5 most common entities per class in training dataset"
},
"TABREF3": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Ratio of entities present in the OCR. This gives the percentage of entities found in the OCR of the image in the dataset. Some examples, such as in figure"
},
"TABREF5": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Rank 1</td><td>Team Logically</td><td>Final accuracy 58.671%</td></tr><tr><td>2</td><td>c1pher</td><td>55.240%</td></tr><tr><td>3</td><td>zhouziming</td><td>54.707%</td></tr><tr><td>4</td><td>smontariol</td><td>48.483%</td></tr><tr><td>5</td><td>zjl123001</td><td>46.177%</td></tr><tr><td>6</td><td>amanpriyanshu</td><td>31.943%</td></tr><tr><td>7</td><td>fharookshaik</td><td>23.855%</td></tr><tr><td>8</td><td>rabindra.nath</td><td>23.717%</td></tr></table>",
"text": "Experiment results"
},
"TABREF6": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Constraint22 leaderboard. Model outputs: we tried various ensembling and blending techniques, but obtained the best leaderboard score by averaging the ViLT, RoBERTa-large, DeBERTa-large, naive multimodal and DeBERTa-xlarge models. The final test-set results and the competition leaderboard are presented in Table 5. Our best model (\"Ensemble\") outperforms all competing systems and baseline models, achieving an average F1-score of 0.58 on the test set."
}
}
}
}