{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:20:22.998569Z"
},
"title": "DOSA: Dravidian Code-Mixed Offensive Span Identification Dataset",
"authors": [
{
"first": "Manikandan",
"middle": [],
"last": "Ravikiran",
"suffix": "",
"affiliation": {},
"email": "mravikiran3@gatech.edu"
},
{
"first": "Subbiah",
"middle": [],
"last": "Annamalai",
"suffix": "",
"affiliation": {},
"email": "sannamali@gatech.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the Dravidian Offensive Span Identification Dataset (DOSA) for under-resourced Tamil-English and Kannada-English code-mixed text. The dataset addresses the lack of code-mixed datasets with annotated offensive spans by extending the annotations of existing code-mixed offensive language identification datasets. It provides span annotations for Tamil-English and Kannada-English code-mixed comments posted by users on YouTube. Overall, the dataset consists of 4786 Tamil-English comments with 6202 annotated spans and 1097 Kannada-English comments with 1641 annotated spans, each annotated by two different annotators. We further present some of our baseline experimental results on the developed dataset, thereby eliciting research in under-resourced languages, an essential step towards semi-automated content moderation in Dravidian languages. The dataset is available at https://github.com/manikandan-ravikiran/DOSA.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the Dravidian Offensive Span Identification Dataset (DOSA) for under-resourced Tamil-English and Kannada-English code-mixed text. The dataset addresses the lack of code-mixed datasets with annotated offensive spans by extending the annotations of existing code-mixed offensive language identification datasets. It provides span annotations for Tamil-English and Kannada-English code-mixed comments posted by users on YouTube. Overall, the dataset consists of 4786 Tamil-English comments with 6202 annotated spans and 1097 Kannada-English comments with 1641 annotated spans, each annotated by two different annotators. We further present some of our baseline experimental results on the developed dataset, thereby eliciting research in under-resourced languages, an essential step towards semi-automated content moderation in Dravidian languages. The dataset is available at https://github.com/manikandan-ravikiran/DOSA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Fighting offensive content is imperative for social media companies and other entities involved in content moderation. Currently, much of the moderation on most community platforms is relatively limited (Jhaver et al., 2019), with most platforms relying on the detection of repeatedly used words 1 and block-lists (Jhaver et al., 2018). Additionally, most social media companies employ human content moderators, who are frequently swamped by offensive mentions and their volume (Arsht and Etcovitch, 2018). On the other hand, precise moderation delays content, leading to user attrition. Furthermore, smaller entities cannot utilize human moderators on a large scale due to their sheer cost; as a result, they shut down their comment sections entirely. Although content moderation has to some degree utilized semi-automated approaches (Jhaver et al., 2019), most of these are not yet available for non-English languages and code-mixed texts.",
"cite_spans": [
{
"start": 199,
"end": 220,
"text": "(Jhaver et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 303,
"end": 324,
"text": "(Jhaver et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 468,
"end": 495,
"text": "(Arsht and Etcovitch, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 834,
"end": 855,
"text": "(Jhaver et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Code-switching or code-mixing is the mixing of linguistic units from two or more languages in a single conversation, or sometimes even a single utterance, and is widely used across the world (Sitaram et al., 2019). In India, due to widely employed educational and cultural guidelines, English largely influences all the Indian spoken languages, including Dravidian languages like Kannada and Tamil (Chakravarthi et al., 2020). With the advent of social media, code-switching has permeated media with informal contexts, like forums and messaging platforms. As a result, code-switching is part and parcel of offensive conversations on social media.",
"cite_spans": [
{
"start": 187,
"end": 209,
"text": "(Sitaram et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 377,
"end": 422,
"text": "Kannada and Tamil (Chakravarthi et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite many recent NLP advancements, handling code-mixed offensive content is still a challenge in Dravidian languages (Sitaram et al., 2019). The primary reason is data scarcity, as such content appears relatively rarely in standard textual resources and is instead spread across the World Wide Web. Recently, however, research on offensive code-mixed texts in Dravidian languages has seen traction (Hande et al., 2020). Nevertheless, these works are restricted to classifying whole comments for offensiveness and do not identify the spans that make a text offensive. Emphasizing such offensive spans can assist human moderators, who often deal with lengthy comments and prefer attribution instead of just a system-generated, unexplained score per post. Accordingly, the contributions of this paper are as follows:",
"cite_spans": [
{
"start": 120,
"end": 142,
"text": "(Sitaram et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 390,
"end": 409,
"text": "Hande et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We first present DOSA, a code-mixed Tamil-English and Kannada-English dataset annotated for offensive spans. We describe our annotation scheme, examine the dataset's properties, and briefly summarize annotator-related information 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We also provide an experimental baseline over the state-of-the-art multilingual language models BERT (Devlin et al., 2019), DistilBERT, and XLM-RoBERTa (Conneau et al., 2020) on the developed offensive span identification dataset.",
"cite_spans": [
{
"start": 102,
"end": 123,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 156,
"end": 178,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. In section 2, we discuss the literature on offensive language and span identification. Following this, in section 3, we present the dataset collection and annotation process. Section 4 describes the experimental settings used for baseline creation. In section 5, we discuss our results and the errors identified. Finally, in section 6, we conclude with a summary and possible directions for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Identification:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive language & Span",
"sec_num": "2.1"
},
{
"text": "The offensive language identification (OLID) problem is widely investigated in the literature via multiple facets, ranging from a hierarchical OLID annotation scheme (Zampieri et al., 2019a,b), the release of a large-scale semi-supervised training dataset with over nine million English tweets, and extensions of OLID to Arabic , Danish (Sigurbergsson and Derczynski, 2020), Greek (Pitenis et al., 2020), and Turkish (\u00c7\u00f6ltekin, 2020), to the development of multiple systems (Ravikiran et al., 2020). Disclaimer: This paper contains examples that may be considered profane, vulgar, or offensive. These contents do not reflect the authors' views or those of the graduate schools/employing organizations with which they are associated and exclusively serve to explain linguistic research challenges.",
"cite_spans": [
{
"start": 168,
"end": 194,
"text": "(Zampieri et al., 2019a,b)",
"ref_id": null
},
{
"start": 387,
"end": 409,
"text": "(Pitenis et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 477,
"end": 499,
"text": "Ravikiran et al., 2020",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive language & Span",
"sec_num": "2.1"
},
{
"text": "3 The relationship between offensiveness, hate speech, aggressiveness, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive language & Span",
"sec_num": "2.1"
},
{
"text": "is presented in https://link.springer.com/article/10.1007/s10579-020-09502-8 shared task with 10k comments is the only work in this line. Our work extends span identification to YouTube comments in code-mixed Dravidian languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive language & Span",
"sec_num": "2.1"
},
{
"text": "Span Identification:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixing in Offensive Language and",
"sec_num": "2.2"
},
{
"text": "Offensive language identification with code-mixed texts has seen the most work in Hindi-English (Srivastava et al., 2020; Bohra et al., 2018; Santosh and Aravind, 2019; Rajput et al., 2020; Chopra et al., 2020). Recently, there have been works in Bangla (Jahan et al., 2019), Kannada (Hande et al., 2020), and Tamil (Chakravarthi et al., 2020). To the best of our knowledge, there are no works on span identification with Dravidian code-mixed datasets. Our work addresses this gap by emphasizing the creation of code-mixed offensive span identification in line with Pavlopoulos et al. 2021.",
"cite_spans": [
{
"start": 94,
"end": 119,
"text": "(Srivastava et al., 2020;",
"ref_id": "BIBREF25"
},
{
"start": 120,
"end": 139,
"text": "Bohra et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 140,
"end": 166,
"text": "Santosh and Aravind, 2019;",
"ref_id": "BIBREF22"
},
{
"start": 167,
"end": 187,
"text": "Rajput et al., 2020;",
"ref_id": "BIBREF18"
},
{
"start": 188,
"end": 208,
"text": "Chopra et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 246,
"end": 266,
"text": "(Jahan et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 269,
"end": 297,
"text": "Kannada (Hande et al., 2020)",
"ref_id": null
},
{
"start": 302,
"end": 335,
"text": "Tamil (Chakravarthi et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixing in Offensive Language and",
"sec_num": "2.2"
},
{
"text": "In this work, we reuse the TamilMixSentiment (Chakravarthi et al., 2020) and KanCMD (Hande et al., 2020) datasets, consisting of 15k and 7k code-mixed YouTube comments in Tamil-English and Kannada-English, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Collection and Annotation",
"sec_num": "3"
},
{
"text": "Reusing the existing datasets is beneficial: it encourages the development of multitask models with span identification as one of the tasks, the analysis of model interpretability during offensive language identification, and the development of a unified benchmark dataset for multiple NLP tasks in code-mixed Dravidian languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Collection and Annotation",
"sec_num": "3"
},
{
"text": "In this work, we considered only a subset of the comments that were already annotated as offensive 4 for our span annotation process. From this subset, we rechecked and removed non-code-mixed comments, resulting in 9049 and 1311 comments in Tamil-English and Kannada-English, respectively. For the final annotation process, we considered all of the code-mixed Kannada-English comments and 5000 Tamil-English comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Collection and Annotation",
"sec_num": "3"
},
{
"text": "For annotation, we follow earlier works on span identification (Pavlopoulos et al., 2021), in which two annotators annotate every comment according to the guidelines in section 3.2. Since the original comments are from the public domain, we anonymized all personal information and user-related tags to protect the actual users' privacy during our annotation process. Furthermore, no personal information about the annotators was collected except their educational background and their expertise in the language they volunteered to annotate.",
"cite_spans": [
{
"start": 63,
"end": 89,
"text": "(Pavlopoulos et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Setup",
"sec_num": "3.1"
},
{
"text": "Moreover, all the annotators were informed that the contents to be annotated could be profane, vulgar, or offensive, and that they could withdraw from the annotation process if necessary. For annotation, we use doccano (Nakayama et al., 2018), which was locally hosted by each individual annotator; the annotations were merged once all annotations were obtained. Within doccano, all the annotators were explicitly asked to create a single label called CAUSE with a label id of 1, thus maintaining consistency of annotation labels (see Figure 1).",
"cite_spans": [
{
"start": 201,
"end": 224,
"text": "(Nakayama et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 538,
"end": 546,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Annotation Setup",
"sec_num": "3.1"
},
{
"text": "The meaning of offensiveness was explained to the annotators with illustrative examples. Annotators who confirmed that they understood this were given the following instructions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Guidelines",
"sec_num": "3.2"
},
{
"text": "\u2022 Extract the offensive word sequences (spans) of the comment by highlighting each such span and labeling it as CAUSE, as shown in Figure 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 140,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Annotation Guidelines",
"sec_num": "3.2"
},
{
"text": "\u2022 If the comment is not offensive or if the offensiveness is context-dependent, do not highlight any span.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Guidelines",
"sec_num": "3.2"
},
{
"text": "\u2022 If the whole comment should be annotated, then annotate the whole comment and convey the annotation verifier about the same after completion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Guidelines",
"sec_num": "3.2"
},
{
"text": "To start with, we selected a total of 15 annotators, all of whom had a minimum education of a Bachelor's degree and either had English, Tamil, or Kannada as their medium of schooling or were proficient in both speaking and writing one or both of the Dravidian languages. The annotation was done iteratively in cycles of 500 sentences, where each annotator was asked to report back to verify the quality of annotations and receive their next batch of 500 sentences. Each batch was manually verified by an annotation verifier, which allowed us to control the quality of annotations. This, in turn, permitted us to remove annotators who did not annotate well or had significant delays in annotation. At the end of this process, we had six annotators, all of whom were native speakers or writers of Kannada, Tamil, or both. Two of these annotators also acted as annotation verifiers. Table 1 shows details of the annotators, including educational qualification, gender diversity, medium of instruction in schooling, and miscellaneous qualities such as knowledge of multiple accents of Kannada/Tamil. Each YouTube comment was initially sent to two annotators for span annotation without revealing that the comment was offensive. If there was a disagreement in annotation, the comment was sent to a third annotator. If all three disagreed, we skipped the annotation of that particular comment. Overall, this leads to each comment being annotated by two annotators.",
"cite_spans": [],
"ref_spans": [
{
"start": 935,
"end": 942,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Annotators",
"sec_num": "3.3"
},
{
"text": "For ground truth creation, we follow a strategy in line with the work of Pavlopoulos et al. 2021, where for each comment we obtain the character offsets of the identified span using doccano. We then retained only the overlapping annotations, i.e., both annotators must have included each character offset in their spans for the offset to be included in the ground truth. The annotation verifiers resolved any discrepancy concerning the non-overlapping parts of the annotations. Corpus statistics are given in Table 2. Compared to Tamil-English, Kannada-English has significantly fewer samples. This is because of the inherent nature of the KanCMD dataset (Hande et al., 2020), which consists of only 1472 comments annotated as offensive. While this dataset is small, we release it along with Tamil-English to enable further annotation and subsequently build better offensive span identification models for Kannada-English. Moreover, the maximum annotation lengths are 82 and 102, respectively, across the datasets, but as Figure 2 and 3 show, these have very few occurrences.",
"cite_spans": [],
"ref_spans": [
{
"start": 508,
"end": 515,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 1095,
"end": 1103,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Ground Truth Creation",
"sec_num": "3.4"
},
{
"text": "Since two annotators annotated each sentence, and the focus is only on the offensive contents, the annotation quality is validated using Cohen's Kappa on annotated tokens only. In our case, this value was 0.6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter annotator Agreement",
"sec_num": "3.6"
},
{
"text": "To establish a baseline performance, we applied multiple state-of-the-art multilingual language models to determine the span of offensive comments. In this section, we present various models, hyperparameters, and other experimental settings used as part of the baseline estimation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
{
"text": "Since the task focuses on identifying spans of offensive word sequences, we treat span identification as a sequence labeling task, where we tag the words that contribute to offensiveness. In this work, we use the following language models, available through HuggingFace's Transformers library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "\u2022 Multilingual BERT: Multilingual BERT (M-BERT) is a language model pre-trained on monolingual corpora in 104 languages, where task-specific annotations in one language are used to finetune the model for evaluation in another language. We use two variants of BERT, namely BERT-M1 5 , which is trained on a Wikipedia corpus, and BERT-M2 6 , which is the original BERT finetuned on the XQuAD and TyDi QA datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "\u2022 Multilingual DistilBERT: We also use a smaller general-purpose language representation model, DistilBERT, which upon finetuning offers good performance on downstream tasks. Again we use two variants of DistilBERT, namely DBERT-M1 7 , which is the original model, and DBERT-M2 8 , which, as with BERT above, is finetuned on the XQuAD and TyDi QA datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "\u2022 Multilingual XLM-RoBERTa: This is a masked language model trained with a multilingual language modeling objective using only monolingual data. Here again we use two variants, namely XBERT-M1 9 and XBERT-M2 10 , with the former being the base model released as part of (Conneau et al., 2020) and the latter being a larger model finetuned on multiple NLI datasets.",
"cite_spans": [
{
"start": 261,
"end": 283,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "For our experiments, we trained all of our models under a common setting. The various parameter settings are shown in Table 3. Considering the effect of the presence of specific offensive terms and the size of the overall dataset, rather than creating a random train-test split, we employed 3-fold cross-validation for all the experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": "4.2"
},
{
"text": "The experimental results for the various state-of-the-art multilingual language models are shown in Tables 4-9. Since the focus of these experiments is merely to establish baselines and provide starting pointers for further exploration, we refrain from in-depth error analysis and instead focus on unique errors we came across during the experiments. To start with, we compute results for each fold, where we identify span/entity-level Precision (P), Recall (R), and F1-Score (F1) in line with past works (Wang and Iwaihara, 2019; Yamada et al., 2020). Entity-level P, R, and F1 measures consider only those word sequences which precisely match the annotation, thus eliminating partially identified offensive spans. This measure is also in line with Pavlopoulos et al. 2021. 8 distilbert-multi-finetuned-for-xqua-on-tydiqa 9 xlm-roberta-base 10 xlm-roberta-large-xnli-anli For both Tamil-English and Kannada-English code-mixed text, all the models perform poorly, with the best average F1 of 0.403 obtained by BERT-M2 and XBERT-M2 for Kannada-English. Meanwhile, for Tamil-English comments, we found a maximum average F1 of 0.405 for DBERT-M1. Across all the folds of each language model, the results are in similar ranges. Such poor performance can be attributed to two reasons: the training process and the complexity of the code-mixed text.",
"cite_spans": [
{
"start": 528,
"end": 553,
"text": "(Wang and Iwaihara, 2019;",
"ref_id": "BIBREF26"
},
{
"start": 554,
"end": 574,
"text": "Yamada et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 784,
"end": 810,
"text": "Pavlopoulos et al. 2021. 8",
"ref_id": null
},
{
"start": 857,
"end": 858,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments, Results, and Discussion",
"sec_num": "5"
},
{
"text": "During our experiments for Kannada-English, we found that most models tend to overfit, and in some cases model training saturated at a training loss of 0.1. For the former case, we employed careful control of the learning rate to limit overfitting; however, the net effect was relatively limited due to the small dataset size. The latter case proved more challenging to handle even after significant learning rate changes. Additionally, since the experiments were baselines, we did not perform any hyperparameter tuning for the other parameters, leaving this for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments, Results, and Discussion",
"sec_num": "5"
},
{
"text": "Furthermore, unlike the Kannada-English experiments, we found overfitting for Tamil-English only in the case of the XLM-RoBERTa models. Moreover, for both Kannada-English and Tamil-English, we found that comments whose offensiveness was caused by one or more profane words had their spans correctly identified. However, for both Tamil-English and Kannada-English, the overall results are low, and we suspected one of the reasons to be the nature of the code-mixed texts themselves. To verify this, we cross-checked the errors and found the following issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments, Results, and Discussion",
"sec_num": "5"
},
{
"text": "Complete sentence annotations -For cases where the complete comment, or more than 70% of its characters, is covered by the offensive span, we found the errors to be highest: one or more words are not tagged as offensive, leading to a drop in span-level F1. An example sentence with ground truth and predicted spans is shown below. Ground Truth: Sir intha cinema madida mele dodda mattadalli prachara madbekittu. nNodi prem avru dabba film madudru hype build-up create madodralli etthida kai. Prediction: Sir intha cinema madida mele dodda mattadalli prachara madbekittu. nNodi prem avru dabba film madudru hype build-up create madodralli etthida kai. Translation: Sir, after making this kind of movie, there was a need for more publicity. Please see Mr. Prem, even after making a useless movie he is still the king of creating hype.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments, Results, and Discussion",
"sec_num": "5"
},
{
"text": "Word pronunciation -Another unique case of errors involves words that are pronounced the same but written differently. These are again not correctly identified as offensive. In the example below, both Devidya and Thevidya translate to whore, which is often used as an abusive word. Sarcastic sentences: Sarcastic comments where the complete sentence is annotated for spans were also cases where the model fails to work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments, Results, and Discussion",
"sec_num": "5"
},
{
"text": "One such example is shown below with ground truth and prediction. Ground Truth: puratchi thalavi kamika sona tun tun aunty kamikerenga. Prediction: puratchi thalavi kamika sona tun tun aunty kamikerenga Translation: Rather than showing puratchi thalavi (an honorable nickname given to a former Tamil Nadu CM), why are you showing Tun Tun Aunty (a cartoon character in Chota Bheem).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments, Results, and Discussion",
"sec_num": "5"
},
{
"text": "In this paper, we presented DOSA, a dataset for offensive span identification in Tamil-English and Kannada-English code-mixed texts. We achieved an inter-annotator agreement of 0.6 for the annotations. We also created baselines and reported P, R, and F1 for span identification using the developed dataset and state-of-the-art multilingual language models. In due course, we presented some of the challenges in training language model-based baselines, which is a possible direction for future work. Moreover, in this work we did not present results on simpler models like LSTM-CRF and its variants, which is also a possible exploration. Most importantly, we saw cases where the complete comment was annotated, either due to its sarcastic nature or because it contained only offensive terms. In this regard, possible questions to explore include evaluating the need for such annotations by considering larger datasets and improving models' performance under such conditions. Additionally, we think this resource is useful for multitask learning and interpretability of code-mixed offensive language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Released as part of https://competitions.codalab.org/competitions/27654",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "5 bert-base-multilingual-uncased 6 bert-multi-cased-finedtuned-xquad-tydiqa-goldp 7 distilbert-base-multilingual-cased",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank our anonymous reviewers for their valuable feedback. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors only and do not reflect the views of their employing organizations or graduate schools. The work is the result of continuous study group discussions during and after the CS7646-ML4T course (OMSCS Program, Georgia Tech) at MLSG -an online machine learning study group formed to discuss interesting papers and open research problems in areas of NLP, with a focus on South Indian languages. Further, MLSG is a random name generated for representation purposes of our reading-discussion group. We would also like to thank all of our annotators for their effort in annotation and review.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Aggression detection in social media: Using deep neural networks, data augmentation, and pseudo labeling",
"authors": [
{
"first": "Segun Taofeek",
"middle": [],
"last": "Aroyehun",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"F"
],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying, TRAC@COLING 2018",
"volume": "",
"issue": "",
"pages": "90--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Segun Taofeek Aroyehun and Alexander F. Gelbukh. 2018. Aggression detection in social media: Using deep neural networks, data augmentation, and pseudo labeling. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying, TRAC@COLING 2018, Santa Fe, New Mexico, USA, August 25, 2018, pages 90-97. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The human cost of online content moderation",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Arsht",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Etcovitch",
"suffix": ""
}
],
"year": 2018,
"venue": "Harvard Journal of Law & Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Arsht and Daniel Etcovitch. 2018. The human cost of online content moderation. Harvard Journal of Law & Technology.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A dataset of hindi-english code-mixed social media text for hate speech detection",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Bohra",
"suffix": ""
},
{
"first": "Deepanshu",
"middle": [],
"last": "Vijay",
"suffix": ""
},
{
"first": "Vinay",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Syed Sarfaraz Akhtar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shrivastava",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media, PEOPLES@NAACL-HTL 2018",
"volume": "",
"issue": "",
"pages": "36--41",
"other_ids": {
"DOI": [
"10.18653/v1/w18-1105"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Bohra, Deepanshu Vijay, Vinay Singh, Syed Sarfaraz Akhtar, and Manish Shrivastava. 2018. A dataset of hindi-english code-mixed social media text for hate speech detection. In Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media, PEOPLES@NAACL-HTL 2018, New Orleans, Louisiana, USA, June 6, 2018, pages 36-41. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Corpus creation for sentiment analysis in code-mixed tamil-english text",
"authors": [
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
},
{
"first": "Vigneshwaran",
"middle": [],
"last": "Muralidaran",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "John",
"middle": [
"Philip"
],
"last": "McCrae",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages and Collaboration and Computing for Under-Resourced Languages, SLTU/CCURL@LREC 2020",
"volume": "",
"issue": "",
"pages": "202--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Vigneshwaran Murali- daran, Ruba Priyadharshini, and John Philip Mc- Crae. 2020. Corpus creation for sentiment analy- sis in code-mixed tamil-english text. In Proceed- ings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages and Collaboration and Computing for Under-Resourced Languages, SLTU/CCURL@LREC 2020, Marseille, France, May 2020, pages 202-210. European Lan- guage Resources association.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hindi-english hate speech detection: Author profiling, debiasing, and practical perspectives",
"authors": [
{
"first": "Shivang",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Ramit",
"middle": [],
"last": "Sawhney",
"suffix": ""
},
{
"first": "Puneet",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Rajiv Ratn",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2020,
"venue": "The Thirty-Second Innovative Applications of Artificial Intelligence Conference",
"volume": "2020",
"issue": "",
"pages": "386--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shivang Chopra, Ramit Sawhney, Puneet Mathur, and Rajiv Ratn Shah. 2020. Hindi-english hate speech detection: Author profiling, debiasing, and practi- cal perspectives. In The Thirty-Fourth AAAI Con- ference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial In- telligence, EAAI 2020, New York, NY, USA, Febru- ary 7-12, 2020, pages 386-393. AAAI Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A corpus of turkish offensive language on social media",
"authors": [
{
"first": "",
"middle": [],
"last": "\u00c7 Agri \u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020",
"volume": "",
"issue": "",
"pages": "6174--6184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c7 agri \u00c7\u00f6ltekin. 2020. A corpus of turkish offensive language on social media. In Proceedings of The 12th Language Resources and Evaluation Confer- ence, LREC 2020, Marseille, France, May 11-16, 2020, pages 6174-6184. European Language Re- sources Association.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8440-8451. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Kancmd: Kannada codemixed dataset for sentiment analysis and offensive language detection",
"authors": [
{
"first": "Adeep",
"middle": [],
"last": "Hande",
"suffix": ""
},
{
"first": "R.",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": 2020,
"venue": "PEOPLES",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adeep Hande, R. Priyadharshini, and Bharathi Raja Chakravarthi. 2020. Kancmd: Kannada codemixed dataset for sentiment analysis and offensive lan- guage detection. In PEOPLES.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Abusive comments detection in bangla-english code-mixed and transliterated text",
"authors": [
{
"first": "Maliha",
"middle": [],
"last": "Jahan",
"suffix": ""
},
{
"first": "Istiak",
"middle": [],
"last": "Ahamed",
"suffix": ""
},
{
"first": "Md.",
"middle": [
"Rayanuzzaman"
],
"last": "Bishwas",
"suffix": ""
},
{
"first": "Swakkhar",
"middle": [],
"last": "Shatabda",
"suffix": ""
}
],
"year": 2019,
"venue": "2nd International Conference on Innovation in Engineering and Technology (ICIET)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maliha Jahan, Istiak Ahamed, Md. Rayanuzzaman Bishwas, and Swakkhar Shatabda. 2019. Abusive comments detection in bangla-english code-mixed and transliterated text. 2019 2nd International Con- ference on Innovation in Engineering and Technol- ogy (ICIET), pages 1-6.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Human-machine collaboration for content regulation: The case of reddit automoderator",
"authors": [
{
"first": "Shagun",
"middle": [],
"last": "Jhaver",
"suffix": ""
},
{
"first": "Iris",
"middle": [],
"last": "Birman",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Amy",
"middle": [
"S"
],
"last": "Bruckman",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM Trans. Comput. Hum. Interact",
"volume": "26",
"issue": "5",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3338243"
]
},
"num": null,
"urls": [],
"raw_text": "Shagun Jhaver, Iris Birman, Eric Gilbert, and Amy S. Bruckman. 2019. Human-machine collaboration for content regulation: The case of reddit auto- moderator. ACM Trans. Comput. Hum. Interact., 26(5):31:1-31:35.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Online harassment and content moderation: The case of blocklists",
"authors": [
{
"first": "Shagun",
"middle": [],
"last": "Jhaver",
"suffix": ""
},
{
"first": "Sucheta",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "Amy",
"middle": [
"S"
],
"last": "Bruckman",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Trans. Comput. Hum. Interact",
"volume": "25",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3185593"
]
},
"num": null,
"urls": [],
"raw_text": "Shagun Jhaver, Sucheta Ghoshal, Amy S. Bruckman, and Eric Gilbert. 2018. Online harassment and con- tent moderation: The case of blocklists. ACM Trans. Comput. Hum. Interact., 25(2):12:1-12:33.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Benchmarking aggression identification in social media",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul Kr.",
"middle": [],
"last": "Ojha",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2018. Benchmarking aggression identification in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cy- berbullying, TRAC@COLING 2018, Santa Fe, New",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Arabic offensive language on twitter: Analysis and experiments",
"authors": [
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Ammar",
"middle": [],
"last": "Rashed",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "Younes",
"middle": [],
"last": "Samih",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Abdelali",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamdy Mubarak, Ammar Rashed, Kareem Darwish, Younes Samih, and Ahmed Abdelali. 2020. Arabic offensive language on twitter: Analysis and experi- ments. CoRR, abs/2004.02192.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "doccano: Text annotation tool for human",
"authors": [
{
"first": "Hiroki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Takahiro",
"middle": [],
"last": "Kubo",
"suffix": ""
},
{
"first": "Junya",
"middle": [],
"last": "Kamura",
"suffix": ""
},
{
"first": "Yasufumi",
"middle": [],
"last": "Taniguchi",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroki Nakayama, Takahiro Kubo, Junya Kamura, Ya- sufumi Taniguchi, and Xu Liang. 2018. doccano: Text annotation tool for human. Software available from https://github.com/doccano/doccano.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "and Ion Androutsopoulos. 2021. Semeval-2021 task 5: Toxic spans detection",
"authors": [
{
"first": "John",
"middle": [],
"last": "Pavlopoulos",
"suffix": ""
},
{
"first": "Lo",
"middle": [],
"last": "Laugier",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Pavlopoulos, Lo Laugier, Jeffrey Sorensen, and Ion Androutsopoulos. 2021. Semeval-2021 task 5: Toxic spans detection (to appear). In Proceedings of the 15th International Workshop on Semantic Evalu- ation.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Offensive language identification in greek",
"authors": [
{
"first": "Zeses",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "2020",
"issue": "",
"pages": "5113--5119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeses Pitenis, Marcos Zampieri, and Tharindu Ranas- inghe. 2020. Offensive language identification in greek. In Proceedings of The 12th Language Re- sources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 5113- 5119. European Language Resources Association.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Transfer Learning for Detecting Hateful Sentiments in Code Switched Language",
"authors": [
{
"first": "Kshitij",
"middle": [],
"last": "Rajput",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Kapoor",
"suffix": ""
},
{
"first": "Puneet",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hitkul",
"suffix": ""
},
{
"first": "Ponnurangam",
"middle": [],
"last": "Kumaraguru",
"suffix": ""
},
{
"first": "Rajiv Ratn",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "159--192",
"other_ids": {
"DOI": [
"10.1007/978-981-15-1216-2_7"
]
},
"num": null,
"urls": [],
"raw_text": "Kshitij Rajput, Raghav Kapoor, Puneet Mathur, Hitkul, Ponnurangam Kumaraguru, and Rajiv Ratn Shah. 2020. Transfer Learning for Detecting Hateful Sen- timents in Code Switched Language, pages 159-192. Springer Singapore, Singapore.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Hitachi at semeval-2020 task 12: Offensive language identification with noisy labels using statistical sampling and post-processing",
"authors": [
{
"first": "Manikandan",
"middle": [],
"last": "Ravikiran",
"suffix": ""
},
{
"first": "Amin Ekant",
"middle": [],
"last": "Muljibhai",
"suffix": ""
},
{
"first": "Toshinori",
"middle": [],
"last": "Miyoshi",
"suffix": ""
},
{
"first": "Hiroaki",
"middle": [],
"last": "Ozaki",
"suffix": ""
},
{
"first": "Yuta",
"middle": [],
"last": "Koreeda",
"suffix": ""
},
{
"first": "Sakata",
"middle": [],
"last": "Masayuki",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "1961--1967",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manikandan Ravikiran, Amin Ekant Muljibhai, Toshi- nori Miyoshi, Hiroaki Ozaki, Yuta Koreeda, and Sakata Masayuki. 2020. Hitachi at semeval-2020 task 12: Offensive language identification with noisy labels using statistical sampling and post-processing. pages 1961-1967.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A largescale semi-supervised dataset for offensive language identification. CoRR, abs",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Marcos Zampieri, and Preslav Nakov. 2020. A large- scale semi-supervised dataset for offensive language identification. CoRR, abs/2004.14454.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Hate speech detection in hindi-english code-mixed social media text",
"authors": [
{
"first": "T",
"middle": [
"Y S S"
],
"last": "Santosh",
"suffix": ""
},
{
"first": "K",
"middle": [
"V S"
],
"last": "Aravind",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "310--313",
"other_ids": {
"DOI": [
"10.1145/3297001.3297048"
]
},
"num": null,
"urls": [],
"raw_text": "T. Y. S. S. Santosh and K. V. S. Aravind. 2019. Hate speech detection in hindi-english code-mixed social media text. pages 310-313.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Offensive language and hate speech detection for danish",
"authors": [
{
"first": "Gudbjartur Ingi",
"middle": [],
"last": "Sigurbergsson",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "2020",
"issue": "",
"pages": "3498--3508",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gudbjartur Ingi Sigurbergsson and Leon Derczynski. 2020. Offensive language and hate speech detection for danish. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 3498- 3508. European Language Resources Association.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A survey of code-switched speech and language processing",
"authors": [
{
"first": "Sunayana",
"middle": [],
"last": "Sitaram",
"suffix": ""
},
{
"first": "Khyathi Raghavi",
"middle": [],
"last": "Chandu",
"suffix": ""
},
{
"first": "Sai Krishna",
"middle": [],
"last": "Rallabandi",
"suffix": ""
},
{
"first": "A.",
"middle": [],
"last": "Black",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Kr- ishna Rallabandi, and A. Black. 2019. A survey of code-switched speech and language processing. ArXiv, abs/1904.00784.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Understanding script-mixing: A case study of hindi-english bilingual twitter users",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the The 4th Workshop on Computational Approaches to Code Switching, CodeSwitch@LREC 2020",
"volume": "",
"issue": "",
"pages": "36--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Srivastava, Kalika Bali, and Monojit Choud- hury. 2020. Understanding script-mixing: A case study of hindi-english bilingual twitter users. In Pro- ceedings of the The 4th Workshop on Computational Approaches to Code Switching, CodeSwitch@LREC 2020, Marseille, France, May, 2020, pages 36-44. European Language Resources Association.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Deep neural architectures for joint named entity recognition and disambiguation",
"authors": [
{
"first": "Qianwen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mizuho",
"middle": [],
"last": "Iwaihara",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE International Conference on Big Data and Smart Computing",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {
"DOI": [
"10.1109/BIGCOMP.2019.8679233"
]
},
"num": null,
"urls": [],
"raw_text": "Qianwen Wang and Mizuho Iwaihara. 2019. Deep neu- ral architectures for joint named entity recognition and disambiguation. In IEEE International Confer- ence on Big Data and Smart Computing, BigComp 2019, Kyoto, Japan, February 27 -March 2, 2019, pages 1-4. IEEE.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning from bullying traces in social media",
"authors": [
{
"first": "Jun-Ming",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kwang-Sung",
"middle": [],
"last": "Jun",
"suffix": ""
},
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Bellmore",
"suffix": ""
}
],
"year": 2012,
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings",
"volume": "",
"issue": "",
"pages": "656--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun-Ming Xu, Kwang-Sung Jun, Xiaojin Zhu, and Amy Bellmore. 2012. Learning from bullying traces in so- cial media. In Human Language Technologies: Con- ference of the North American Chapter of the Asso- ciation of Computational Linguistics, Proceedings, June 3-8, 2012, Montr\u00e9al, Canada, pages 656-666. The Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Luke: Deep contextualized entity representations with entity-aware self-attention",
"authors": [
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Takeda",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ikuya Yamada, Akari Asai, Hiroyuki Shindo, H. Takeda, and Yuji Matsumoto. 2020. Luke: Deep contextualized entity representations with entity-aware self-attention. ArXiv, abs/2010.01057.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Predicting the type and target of offensive posts in social media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis",
"volume": "1",
"issue": "",
"pages": "1415--1420",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1144"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019a. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Min- neapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 1415-1420. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Semeval-2019 task 6: Identifying and categorizing offensive language in social media (offenseval)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2019",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {
"DOI": [
"10.18653/v1/s19-2010"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019b. Semeval-2019 task 6: Identifying and cate- gorizing offensive language in social media (offense- val). In Proceedings of the 13th International Work- shop on Semantic Evaluation, SemEval@NAACL- HLT 2019, Minneapolis, MN, USA, June 6-7, 2019, pages 75-86. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Zeses Pitenis, and \u00c7 agri \u00c7\u00f6ltekin. 2020. Semeval-2020 task 12: Multilingual offensive language identification in social media (offenseval 2020)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Zeses",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "\u00c7agri",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation, SemEval@COLING 2020",
"volume": "",
"issue": "",
"pages": "1425--1447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7 agri \u00c7\u00f6ltekin. 2020. Semeval-2020 task 12: Multilingual offensive language identification in social media (offenseval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, SemEval@COLING 2020, Barcelona (online), December 12-13, 2020, pages 1425-1447. International Committee for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Annotation of offensive spans using Doccano."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Histogram of annotated Span size in Kannada-English dataset."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Histogram of annotated Span size in Tamil-English dataset."
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "Annotators and their characteristics. \u263c indicates annotation verifiers.",
"content": "<table/>",
"html": null
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "DOSA corpus statistics",
"content": "<table/>",
"html": null
},
"TABREF6": {
"num": null,
"type_str": "table",
"text": "Hyperparameters used across experiments.",
"content": "<table/>",
"html": null
},
"TABREF8": {
"num": null,
"type_str": "table",
"text": "Results of BERT-M1 for offensive span identification.",
"content": "<table><tr><td>Model</td><td/><td colspan=\"3\">Kannada-English</td><td/><td colspan=\"2\">Tamil-English</td></tr><tr><td/><td>Fold #</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td/><td>1</td><td colspan=\"3\">0.394 0.394 0.394</td><td colspan=\"3\">0.382 0.391 0.387</td></tr><tr><td>BERT-M2</td><td>2</td><td colspan=\"3\">0.397 0.441 0.418</td><td colspan=\"3\">0.349 0.397 0.372</td></tr><tr><td/><td>3</td><td colspan=\"3\">0.386 0.408 0.396</td><td colspan=\"3\">0.387 0.406 0.396</td></tr><tr><td/><td colspan=\"4\">Average 0.392 0.414 0.403</td><td colspan=\"3\">0.373 0.398 0.385</td></tr></table>",
"html": null
},
"TABREF9": {
"num": null,
"type_str": "table",
"text": "Results of BERT-M2 for offensive span identification.",
"content": "<table><tr><td>Model</td><td/><td colspan=\"3\">Kannada-English</td><td/><td colspan=\"2\">Tamil-English</td></tr><tr><td/><td>Fold #</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td/><td>1</td><td colspan=\"3\">0.380 0.412 0.395</td><td colspan=\"3\">0.408 0.420 0.414</td></tr><tr><td>DBERT-M1</td><td>2</td><td colspan=\"3\">0.349 0.364 0.356</td><td colspan=\"3\">0.363 0.417 0.389</td></tr><tr><td/><td>3</td><td colspan=\"3\">0.413 0.417 0.415</td><td colspan=\"3\">0.393 0.436 0.414</td></tr><tr><td/><td colspan=\"4\">Average 0.381 0.398 0.389</td><td colspan=\"3\">0.388 0.425 0.405</td></tr></table>",
"html": null
},
"TABREF10": {
"num": null,
"type_str": "table",
"text": "Results of DBERT-M1 for offensive span identification.",
"content": "<table><tr><td>Model</td><td/><td colspan=\"3\">Kannada-English</td><td/><td colspan=\"2\">Tamil-English</td></tr><tr><td/><td>Fold #</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td/><td>1</td><td colspan=\"3\">0.372 0.391 0.381</td><td colspan=\"3\">0.378 0.387 0.382</td></tr><tr><td>DBERT-M2</td><td>2</td><td colspan=\"3\">0.295 0.365 0.328</td><td colspan=\"3\">0.382 0.440 0.409</td></tr><tr><td/><td>3</td><td colspan=\"3\">0.370 0.378 0.374</td><td colspan=\"3\">0.396 0.434 0.414</td></tr><tr><td/><td colspan=\"4\">Average 0.346 0.378 0.361</td><td colspan=\"3\">0.385 0.420 0.402</td></tr></table>",
"html": null
},
"TABREF11": {
"num": null,
"type_str": "table",
"text": "Results of DBERT-M2 for offensive span identification.",
"content": "<table><tr><td>Model</td><td/><td colspan=\"3\">Kannada-English</td><td/><td colspan=\"2\">Tamil-English</td></tr><tr><td/><td>Fold #</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td/><td>1</td><td colspan=\"3\">0.405 0.432 0.418</td><td colspan=\"3\">0.379 0.395 0.387</td></tr><tr><td>XBERT-M1</td><td>2</td><td colspan=\"3\">0.364 0.397 0.380</td><td colspan=\"3\">0.395 0.420 0.407</td></tr><tr><td/><td>3</td><td colspan=\"3\">0.407 0.415 0.411</td><td colspan=\"3\">0.374 0.391 0.382</td></tr><tr><td/><td colspan=\"4\">Average 0.392 0.415 0.403</td><td colspan=\"3\">0.383 0.402 0.392</td></tr></table>",
"html": null
},
"TABREF12": {
"num": null,
"type_str": "table",
"text": "Results of XBERT-M1 for offensive span identification.",
"content": "<table><tr><td>Model</td><td/><td colspan=\"3\">Kannada-English</td><td/><td colspan=\"2\">Tamil-English</td></tr><tr><td/><td>Fold #</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td/><td>1</td><td colspan=\"3\">0.365 0.381 0.372</td><td colspan=\"3\">0.249 0.308 0.275</td></tr><tr><td>XBERT-M2</td><td>2</td><td colspan=\"3\">0.379 0.438 0.405</td><td colspan=\"3\">0.216 0.254 0.234</td></tr><tr><td/><td>3</td><td colspan=\"3\">0.336 0.408 0.369</td><td colspan=\"3\">0.263 0.317 0.289</td></tr><tr><td/><td colspan=\"4\">Average 0.360 0.409 0.382</td><td colspan=\"3\">0.243 0.293 0.266</td></tr></table>",
"html": null
},
"TABREF13": {
"num": null,
"type_str": "table",
"text": "Results of XBERT-M2 for offensive span identification.",
"content": "<table/>",
"html": null
},
"TABREF14": {
"num": null,
"type_str": "table",
"text": "Example 1: ...Dai unga ammayepdi da unna petha Devidiya intha comments ku. Example 2: Otha Thevidiya Pasangala Neenga Nalla Padam Edukkanumnu Yenda... Noisy texts -Occasionally, we also found span errors where the sentence is full of hashtags that were annotated as offensive, but the model only identifies part of them. Examples are shown below. Disaster.... disaster .... disaster.....Disaster.... disaster .... disaster.....Disaster.... disaster.",
"content": "<table><tr><td>Example 1: #BoycottComali #BoycottComali</td></tr><tr><td>#BoycottComali #BoycottComali #BoycottCo-</td></tr><tr><td>mali #BoycottComali #BoycottComali #Boy-</td></tr><tr><td>cottComali #BoycottComali #BoycottComali ....</td></tr><tr><td>Example 2:</td></tr></table>",
"html": null
}
}
}
}