{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:03:34.404489Z"
},
"title": "An Indian Language Social Media Collection for Hate and Offensive Speech",
"authors": [
{
"first": "Anita",
"middle": [],
"last": "Saroj",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology (BHU)",
"location": {
"postCode": "221005",
"settlement": "Varanasi",
"region": "UP"
}
},
"email": ""
},
{
"first": "Sukomal",
"middle": [],
"last": "Pal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology (BHU)",
"location": {
"postCode": "221005",
"settlement": "Varanasi",
"region": "UP"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In social media, people express themselves every day on issues that affect their lives. During parliamentary elections, people's interactions with candidates in social media posts reflect many social trends in a charged atmosphere. People's likes and dislikes of leaders, political parties, and their stands often become the subject of hateful and offensive posts. We collected social media posts in Hindi and English from Facebook and Twitter during the run-up to the 2019 parliamentary election of India (PEI data-2019). We created a dataset for sentiment analysis with three categories: hate speech, offensive, and not hate or offensive. We report here the initial results of sentiment classification for the dataset using different classifiers.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In social media, people express themselves every day on issues that affect their lives. During parliamentary elections, people's interactions with candidates in social media posts reflect many social trends in a charged atmosphere. People's likes and dislikes of leaders, political parties, and their stands often become the subject of hateful and offensive posts. We collected social media posts in Hindi and English from Facebook and Twitter during the run-up to the 2019 parliamentary election of India (PEI data-2019). We created a dataset for sentiment analysis with three categories: hate speech, offensive, and not hate or offensive. We report here the initial results of sentiment classification for the dataset using different classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent years have seen an indiscriminate spread of offensive language on social media platforms such as Facebook and Twitter. Hate speech and offensive posts are growing on social media day by day. People post messages or tweets, often targeting others with hateful and nasty words. Such messages hurt people, at times causing immense psychological distress and mental trauma. Instead of bringing people together, they cause digital divide and social alienation for many. Such practices should be minimized, if they cannot be stopped entirely, to maintain the civility and decorum of a forum so that everyone can feel at home to participate. But the absence of a moderator to flag objectionable posts often makes this difficult. Efforts are therefore on to automatically detect various forms of abusive language in social networks, micro-blogs, and blogs, so that prevention can also be considered. Since manual filtering takes a lot of time, and since it can cause symptoms such as post-traumatic stress disorder in human annotators, several research efforts have been made to automate this process (Zampieri et al., 2019a).",
"cite_spans": [
{
"start": 1138,
"end": 1162,
"text": "(Zampieri et al., 2019a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "A few efforts have already been directed at creating the necessary datasets for automatic identification of offensive language. The task is formulated as a supervised classification problem, where systems are trained to detect the presence of some form of abusive or offensive material. Hate speech is communication deemed harmful (individually or at a social level) based on defined 'protected attributes' such as race, disability, or sexuality, while offensive speech is simply any communication that upsets someone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Most such datasets come from the general domain and are in English. In this paper, we focus on a particular domain with respect to space and time. During any election, when political rivalry reaches its summit, the spread and use of obscene language also hit the ceiling. We consider the period of campaigning for the 2019 general election of India and the interactions of political candidates and people on social media. We present here the first domain-specific dataset for hate speech and offensive content identification on the Parliamentary Election of India 2019 (PEI2019) in two languages, English and Hindi. The dataset is created from Twitter and Facebook posts during the Indian election of 2019. It comprises three tasks: a binary classification task and two multi-class classifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The Parliamentary Election of India (PEI) data is especially inspired by two previous evaluation forums: HASOC FIRE 2019 (Mandl et al., 2019a) and SemEval 2019 (Zampieri et al., 2019a), and tries to leverage the synergies of these initiatives. There has been significant work in many languages, particularly English, where datasets are large. But there is no domain-specific dataset for hate speech and offensive content identification, which is the main motivation for creating the PEI data. The PEI data is small but, we believe, large enough to measure the performance of classification models on an Indian-language hate speech dataset.",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "(Mandl et al., 2019a)",
"ref_id": "BIBREF3"
},
{
"start": 161,
"end": 185,
"text": "(Zampieri et al., 2019a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The primary purpose of this paper is to establish a lexical baseline for discriminating between hate speech and offensive speech on domain-specific data. Although some data for hate speech and offensive content identification are available in English and other languages, no such dataset exists for Indian languages. Here we present an Indian-language dataset in Hindi and English. We compare the PEI 2019 data with two other datasets: SemEval-2019 Task 6 and the FIRE 2019 HASOC dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The rest of the paper is organised as follows. In Sec. 2. we survey related work. Next, we describe the dataset in Sec. 3. We discuss the results in Sec. 4. Finally, we conclude in Sec. 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Over the last few years, several studies on hate speech and offensive content identification have been published. The identification problems explored in the literature range from hate speech and offensive language to bullying and aggressive content. Below we briefly discuss some related work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Hate speech is a statement of intent to offend another person, using harsh or offensive language based on actual or perceived membership in a group (Britannica, 2015). Malmasi and Zampieri (2017) adopted a linear support vector classifier with three groups of extracted features: surface n-grams, word skip-grams, and Brown clusters. They reported accuracy scores and established a lexical baseline for discriminating between profane and hate speech on the standard dataset (Malmasi and Zampieri, 2017).",
"cite_spans": [
{
"start": 491,
"end": 519,
"text": "(Malmasi and Zampieri, 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hate speech identification",
"sec_num": "2.1."
},
{
"text": "While hate speech targets a group of people based on their religion, caste, race, ethnicity, or belief, offensive language, such as insulting, harmful, derogatory, or obscene material, is directed from one person to another and is visible to others. Offensive language may be targeted or untargeted. User-generated content on social media platforms such as Twitter often contains a high level of rough, harmful, or offensive language (Zampieri et al., 2019b). Increasing vulgarity in online conversations and user commentary has emerged as a relevant issue in society as well as in science (Ramakrishnan et al., 2019). They identified offensive tweets with an accuracy of 83.14% and an F1-score of 0.7565 on the real test data for the classification of offensive vs. non-offensive.",
"cite_spans": [
{
"start": 442,
"end": 466,
"text": "(Zampieri et al., 2019b)",
"ref_id": "BIBREF9"
},
{
"start": 599,
"end": 626,
"text": "(Ramakrishnan et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive language identification",
"sec_num": "2.2."
},
{
"text": "The above tasks are related to cyber-bullying and aggressive content, and the differences are often blurred. A post can contain one or many of the features above and can belong to several categories. However, we focus here on hate speech and offensive language identification. The datasets mentioned were mostly in English and drawn from the general domain rather than a specific one. As far as language-specific collections are concerned, probably the first such efforts were HaSpeeDe 2018 1 for Italian, PolEval 2019 and 2020 for Polish 2 , and SemEval 2019 Task 5, which was domain-specific yet multi-lingual 3 . Here we build a domain-specific collection (political posts during election campaigns) containing both English and Hindi posts. Vitriolic attacks become fierce as a campaign heats up, and the use of offensive language reaches its peak. We would like to see how the identification of hate and offensive language fares on such a collection, and to gauge the extent of abusiveness in a charged atmosphere.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive language identification",
"sec_num": "2.2."
},
{
"text": "In India, the last parliamentary election was held from 11 April to 19 May 2019. During this event, we collected tweets and Facebook messages in two languages, Hindi and English. The data is used for training and testing in both hate speech and offensive language identification tasks. The PEI data was annotated using a hierarchical three-level annotation model introduced in Zampieri et al. (2019a) and Mandl et al. (2019).",
"cite_spans": [
{
"start": 414,
"end": 433,
"text": "Mandl et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3."
},
{
"text": "We collected data from Facebook and Twitter during the 2019 parliamentary election of India. For Twitter, data collection was done through the Twitter API using the tweepy Python library. Tweets were collected from elected candidates' Twitter accounts, and also with keywords (Twitter account names) and hashtags such as #Loksabha election, #election 2019, and #loksabha election 2019 of India. For the hashtags, tweets were collected between 11 April and 23 May 2019. For Facebook, we used the Facepager tool (Dr. Jakob J\u00fcnger, 2019) to capture messages. The collected posts were in English, Hindi, and some other regional languages. For this study, we concentrated on tweets and messages in Hindi and English. We collected more than ten thousand posts from Facebook and Twitter, of which about 20% contained hate speech or offensive content. Tables 1 and 2 show some examples of hate speech and offensive content in English and Hindi, respectively.",
"cite_spans": [
{
"start": 489,
"end": 513,
"text": "(Dr. Jakob J\u00fcnger, 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 855,
"end": 878,
"text": "Table 3.1. and Table 3",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.1."
},
{
"text": "The dataset was created from Twitter and Facebook and is distributed in a tab-separated format. The corpus contains nearly 2000 posts each for English and Hindi. Figure 1 shows how posts are categorized into the different classes. The first-stage categorization is Task A, the second stage is Task B, and then Task C, as defined below.",
"cite_spans": [],
"ref_spans": [
{
"start": 180,
"end": 188,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3.2."
},
{
"text": "\u2022 Task A: We focus on hate speech and offensive language identification for Hindi and English during the 2019 parliamentary election in India. Task A is a coarse-grained binary classification in which posts are classified into two classes, namely: Hate ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3.2."
},
{
"text": "The annotation was done by three undergraduate Engineering students whose first language is Hindi and who can speak and write English as well. The average inter-annotator agreement (Cohen's Kappa) for Task A is 0.87 for English and 0.89 for Hindi. Similarly, the average Cohen's Kappa for Task B and Task ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.3."
},
{
"text": "C are 0.85 and 0.89, respectively. We also compute Krippendorff's alpha, which is 0.90 for English and 0.89 for Hindi. Annotation labels for English and Hindi are shown in Table 1 and Table 2 , and Figure 1 shows the hierarchy of annotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 206,
"text": "Table 1 and Table 2",
"ref_id": "TABREF0"
},
{
"start": 211,
"end": 219,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "HOF OFFN TIN",
"sec_num": null
},
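The inter-annotator agreement figures above can be reproduced with a short, self-contained sketch of Cohen's kappa. The labels below are toy stand-ins for Task A annotations, not the actual PEI data:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label distribution.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy Task A annotations (HOF vs NOT) from two hypothetical annotators.
a1 = ["HOF", "NOT", "NOT", "HOF", "NOT", "HOF", "NOT", "NOT"]
a2 = ["HOF", "NOT", "NOT", "HOF", "HOF", "HOF", "NOT", "NOT"]
kappa = cohens_kappa(a1, a2)  # 7/8 observed agreement, 0.5 expected
```

Averaging such pairwise kappas over the three annotator pairs gives the per-task scores reported above.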
{
"text": "We consider Hindi and English posts for hate speech and offensive content identification, along with some regional-language material. English and Hindi are the third and fourth most-spoken languages respectively, with Hindi having the largest number of native speakers in India 4 . Most of our collected posts are in Hindi, and some posts are code-mixed. The data can be used for multiple tasks in multi-way classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Summary",
"sec_num": "3.4."
},
{
"text": "Collected posts are first cleaned using the tweet-preprocessor library 5 , and symbols such as retweet markers (RT), hashtags, URLs, Twitter mentions, emojis, and smileys are removed. The pre-processed data also excludes English stopwords (available in NLTK 6 ) while tokenizing the sentences for frequency-based feature extraction. Stopword removal and stemming are done on the terms. For prediction, terms are represented by their tf-idf features, considering each post as a document. These features are language-independent and are used for both Hindi and English. We did not use lemmatization or any other lexical features that are language-dependent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "3.5."
},
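The cleaning step described above can be sketched with plain regular expressions. This is an approximation: the paper uses the tweet-preprocessor library and NLTK's full English stopword list, whereas the stopword set and example tweet below are tiny illustrative stand-ins:

```python
import re

# Small illustrative stopword list; the paper uses NLTK's English stopwords.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in"}

def clean_post(text):
    """Strip retweet markers, URLs, mentions, and hashtags, roughly
    mimicking the tweet-preprocessor cleaning, then tokenize and
    drop stopwords."""
    text = re.sub(r"\bRT\b", " ", text)        # retweet marker
    text = re.sub(r"https?://\S+", " ", text)  # URLs
    text = re.sub(r"[@#]\w+", " ", text)       # mentions and hashtags
    return [t for t in re.findall(r"[A-Za-z]+", text.lower())
            if t not in STOPWORDS]

tokens = clean_post(
    "RT @user The election is trending at #Loksabha2019 https://t.co/xyz")
```

Stemming and tf-idf weighting (e.g. via a vectorizer) would then be applied to the resulting tokens.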
{
"text": "We use four machine learning classifiers for the classification of hate speech and offensive content: Multinomial Naive-Bayes (MNB), Stochastic Gradient Descent (SGD), Linear Support Vector Machine (Linear SVM), and Logistic Regression (LR). The input to all classifiers is a tf-idf feature matrix, and the output is a categorical label. The classifiers give different scores, since each has different strengths.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier",
"sec_num": "3.6."
},
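The tf-idf-plus-classifier setup can be sketched with scikit-learn (an assumption; the paper does not name its toolkit). The posts and labels below are toy stand-ins for PEI Task A data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for PEI posts and Task A labels (HOF vs NOT).
posts = ["vote for progress and development",
         "you are a disgrace and a liar",
         "great rally in the city today",
         "shameless corrupt idiots everywhere"]
labels = ["NOT", "HOF", "NOT", "HOF"]

classifiers = {
    "MNB": MultinomialNB(),
    "SGD": SGDClassifier(random_state=0),
    "LR": LogisticRegression(),
    "LinearSVM": LinearSVC(),
}

predictions = {}
for name, clf in classifiers.items():
    # Each pipeline turns raw posts into tf-idf vectors, then classifies.
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(posts, labels)
    predictions[name] = model.predict(["corrupt liar"])[0]
```

In practice each trained pipeline would be evaluated with precision, recall, F1-score, and accuracy on a held-out test split, as reported in the Results section.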
{
"text": "For comparison, we also use similar data taken from other tasks. The first dataset of hate speech and offensive content was created by Davidson et al. (2017) , and the second by the HASOC track (FIRE 2019) (Mandl et al., 2019b) . The SemEval-2019 Task 6 dataset is based on three subtasks over the Offensive Language Identification Dataset (OLID), which contains over 14,000 English tweets (Zampieri et al., 2019a) . The HASOC track (FIRE 2019) is intended to encourage development in hate speech identification for Hindi, German, and English. For English, HASOC 2019 has 5852 training and 1153 test instances; for Hindi, it has 4665 training and 1318 test instances (Mandl et al., 2019a) .",
"cite_spans": [
{
"start": 134,
"end": 156,
"text": "Davidson et al. (2017)",
"ref_id": null
},
{
"start": 222,
"end": 243,
"text": "(Mandl et al., 2019b)",
"ref_id": "BIBREF4"
},
{
"start": 402,
"end": 426,
"text": "(Zampieri et al., 2019a)",
"ref_id": "BIBREF8"
},
{
"start": 742,
"end": 763,
"text": "(Mandl et al., 2019a)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing Data",
"sec_num": "3.7."
},
{
"text": "We begin by examining the accuracy of our tf-idf feature-based machine learning methods. We first train the classifiers using tf-idf features. We perform classification on the PEI 2019 data, SemEval 2019 Task 6 (Zampieri et al., 2019a) and the FIRE 2019 HASOC task (Mandl et al., 2019b) English datasets, and compare our results with other standard benchmarks. We report the classification performance of MNB, SGD, LR, and Linear SVM in terms of precision (Pre), recall (Rec), F 1 -score, and accuracy, whose definitions are given below.",
"cite_spans": [
{
"start": 206,
"end": 229,
"text": "(Zampieri et al., 2019a",
"ref_id": "BIBREF8"
},
{
"start": 258,
"end": 279,
"text": "(Mandl et al., 2019b)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
{
"text": "the sum of true-positives and false-positives (FP).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision: It is the ratio of true-positives (TP) to",
"sec_num": "1."
},
{
"text": "2. Recall: It is the ratio of true-positives (TP) to the sum of true-positives and false-negatives (FN).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision(P) = TP / (TP + FP)",
"sec_num": null
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recall(R) = TP / (TP + FN)",
"sec_num": null
},
{
"text": "3. F1-score: It is the harmonic mean of precision and recall, used to give a composite view of both. Table 4 shows the results on the PEI-2019 dataset for English. The machine learning models performed much better on the PEI data than on the SemEval dataset. The reason is domain specificity: while the PEI dataset is specific to the election domain, SemEval contains posts from diverse domains. This affects the learning accuracy of the models, and hence the PEI-2019 dataset yields better performance. We participated in FIRE 2019 (Saroj et al., 2019) , where the accuracy of XGBoost (81%) was better than that of SVM (73%) for Subtask A (similar to Task A). The accuracies for Sub-task B and Sub-task C are the same for XGBoost (80%). Table 6 and 9 show the FIRE HASOC English dataset results, with accuracies of 0.67, 0.64, and 0.67 for Subtask A, Subtask B, and Subtask C respectively, where Mac_f1 denotes macro F1 and W_f1 weighted F1. The results show that classification performance on the PEI 2019 dataset is much better than on the other datasets compared, for any of the techniques. For logistic regression (LR), the macro-averaged F1-score is 0.68 for the SemEval 2019 dataset and 0.58 for the PEI 2019 and FIRE 2019 datasets, listed in Table 4 , 5, and 6 respectively. The results of these experiments are listed in Table 7 , 8, and 9. Among the techniques, the accuracy of the SGD classifier is the best across the three tasks (Task A, B, and C). Table 10 and 11 show the classification results for Hindi. The highest accuracy for Task A is 0.78, by linear SVM. For Tasks B and C, the highest accuracies are 0.72 and 0.79 respectively, again by linear SVM.",
"cite_spans": [
{
"start": 662,
"end": 682,
"text": "(Saroj et al., 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 259,
"end": 266,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 872,
"end": 879,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 1377,
"end": 1384,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 1453,
"end": 1460,
"text": "Table 7",
"ref_id": "TABREF7"
},
{
"start": 1581,
"end": 1589,
"text": "Table 10",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Recall(R) = TP / (TP + FN)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F1 = 2 * P * R / (P + R)",
"eq_num": "(3)"
}
],
"section": "Recall(R) = TP / (TP + FN)",
"sec_num": null
},
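The three metric definitions (Eqs. 1-3) translate directly into a few lines of code. The TP/FP/FN counts below are illustrative numbers, not results from the paper:

```python
def precision(tp, fp):
    # Eq. 1: fraction of predicted positives that are correct.
    return tp / (tp + fp)

def recall(tp, fn):
    # Eq. 2: fraction of actual positives that are found.
    return tp / (tp + fn)

def f1_score(p, r):
    # Eq. 3: harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

# Example counts: 60 true positives, 20 false positives, 40 false negatives.
p = precision(60, 20)   # 0.75
r = recall(60, 40)      # 0.60
f1 = f1_score(p, r)
```

Macro-averaged F1 (Mac_f1 in the tables) is the unweighted mean of these per-class F1 scores; weighted F1 (W_f1) weights each class by its support.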
{
"text": "We found the highest accuracy with the SGD classifier for all three subtasks on the English data. For Hindi, Linear SVM gives the best accuracy for all classes. LR gives a better score on the SemEval 2019 dataset than on the PEI 2019 and HASOC datasets. Multinomial NB, SGD, and Linear SVM give better F1-scores and accuracy on the PEI 2019 dataset in all three subtasks than on the other datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "In this paper, we introduced a dataset for hate speech and offensive content detection in Indian languages and the Indian context. We tested a number of text classification techniques to recognize hate speech and offensive posts and validate our dataset: Multinomial Naive-Bayes, Stochastic Gradient Descent, Logistic Regression, and Linear Support Vector Machine. The best results were achieved by Stochastic Gradient Descent (SGD), reaching 83% accuracy across the three subtasks. We believe that tackling hate and offensive content on social media is a serious challenge, and our PEI dataset will be useful, specifically in the Indian context, as it is the first such dataset in any Indian language. In the future, we'd like",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "http://www.di.unito.it/\u223ctutreeb/haspeede-evalita18/index.html 2 http://poleval.pl/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.aclweb.org/anthology/S19-2007/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://en.wikipedia.org/wiki/Hindi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pypi.org/project/tweet-preprocessor/ 6 https://www.nltk.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Britannica academic. Encyclopaedia Britannica Inc",
"authors": [
{
"first": "E",
"middle": [],
"last": "Britannica",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Britannica, E. (2015). Britannica academic. Encyclopaedia Britannica Inc.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Facepager. An application for generic data retrieval through APIs",
"authors": [
{
"first": "",
"middle": [
"Jakob"
],
"last": "Dr",
"suffix": ""
},
{
"first": "T",
"middle": [
"K"
],
"last": "J\u00fcnger",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dr. Jakob J\u00fcnger, T. K. (2019). Facepager. An application for generic data retrieval through APIs.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Detecting hate speech in social media",
"authors": [
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.06427"
]
},
"num": null,
"urls": [],
"raw_text": "Malmasi, S. and Zampieri, M. (2017). Detecting hate speech in social media. arXiv preprint arXiv:1712.06427.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Overview of the hasoc track at fire 2019: Hate speech and offensive content identification in indo-european languages",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mandl",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Modha",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dave",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mandlia",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Patel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 11th Forum for Information Retrieval Evaluation",
"volume": "",
"issue": "",
"pages": "14--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandl, T., Modha, S., Majumder, P., Patel, D., Dave, M., Mandlia, C., and Patel, A. (2019a). Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in Indo-European languages. In Proceedings of the 11th Forum for Information Retrieval Evaluation, pages 14-17.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages)",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mandl",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Modha",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dave",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mandlia",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Patel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 11th annual meeting of the Forum for Information Retrieval Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandl, T., Modha, S., Patel, D., Dave, M., Mandlia, C., and Patel, A. (2019b). Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages. In Proceedings of the 11th annual meeting of the Forum for Information Retrieval Evaluation.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "UVA wahoos at SemEval-2019 task 6: Hate speech identification using ensemble machine learning",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zadrozny",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Tabari",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "806--811",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakrishnan, M., Zadrozny, W., and Tabari, N. (2019). UVA wahoos at SemEval-2019 task 6: Hate speech identification using ensemble machine learning. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 806-811, Minneapolis, Minnesota, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Irlab@ iitbhu at hasoc 2019: Traditional machine learning for hate speech and offensive content identification",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IRLab@IITBHU at HASOC 2019: Traditional machine learning for hate speech and offensive content identification.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semeval-2019 task 6: Identifying and categorizing offensive language in social media (offenseval)",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.08983"
]
},
"num": null,
"urls": [],
"raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019a). SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). arXiv preprint arXiv:1903.08983.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semeval-2019 task 6: Identifying and categorizing offensive language in social media (offenseval)",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019b). SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 75-86.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Process of the post or tweet annotation",
"uris": null,
"num": null
},
"TABREF0": {
"type_str": "table",
"text": "Tweets or Facebook messages from the PEI dataset, with their labels for each level of the annotation model for English. Task B is a fine-grained classification: hate-speech and offensive posts from Task A are further classified into three categories, where HATE contains hate speech, OFFN contains offensive material, and NONE is neither hate speech nor offensive. Task C checks the type of offensive content; only posts labeled as HOF in Task A are considered there.",
"num": null,
"html": null,
"content": "<table><tr><td>Post</td><td/><td/><td/><td>Label</td></tr><tr><td colspan=\"3\">The Prime Minister</td><td>NOT</td><td>-</td><td>-</td></tr><tr><td colspan=\"3\">talks about economic</td><td/></tr><tr><td colspan=\"3\">growth &amp;progress. At</td><td/></tr><tr><td colspan=\"3\">the same time his</td><td/></tr><tr><td colspan=\"3\">colleagues talk about</td><td/></tr><tr><td colspan=\"3\">sending Bollywood stars</td><td/></tr><tr><td colspan=\"2\">to Pakistan!</td><td/><td/></tr><tr><td colspan=\"3\">NDTV features the</td><td colspan=\"2\">HOF HATE UNT</td></tr><tr><td colspan=\"3\">Prime Minister's new</td><td/></tr><tr><td colspan=\"3\">improved BJP dream</td><td/></tr><tr><td colspan=\"3\">team for Karnataka.</td><td/></tr><tr><td colspan=\"3\">FRESH out of jail,</td><td/></tr><tr><td colspan=\"2\">MODI-FIED</td><td>and</td><td/></tr><tr><td>REDDY</td><td>to</td><td>steal.</td><td/></tr><tr><td colspan=\"3\">#ReddyStingBJPEx-</td><td/></tr><tr><td>posed</td><td/><td/><td/></tr><tr><td>West</td><td>Bengal</td><td>Chief</td><td colspan=\"2\">HOF OFFN TIN</td></tr><tr><td colspan=\"3\">Minister and Trinamool</td><td/></tr><tr><td colspan=\"2\">Congress</td><td>supremo</td><td/></tr><tr><td colspan=\"3\">Mamata Banerjee on</td><td/></tr><tr><td colspan=\"3\">Monday called Prime</td><td/></tr><tr><td colspan=\"3\">Minister Narendra Modi</td><td/></tr><tr><td colspan=\"3\">the greatest danger for</td><td/></tr><tr><td colspan=\"3\">the country and said</td><td/></tr><tr><td colspan=\"3\">she will give her life</td><td/></tr><tr><td colspan=\"3\">to ensure that no riot</td><td/></tr><tr><td colspan=\"3\">takes place in the state.</td><td/></tr><tr><td colspan=\"5\">and Offensive (HOF) and Non-Hate, or offensive</td></tr><tr><td colspan=\"2\">(NOT).</td><td/><td/></tr><tr><td colspan=\"5\">\u2022 Task B: Targeted Insult (TIN)</td></tr><tr><td colspan=\"5\">posts hold an abuse/threat to a person, group, or</td></tr><tr><td colspan=\"5\">others. 
Untargeted (UNT) posts contain un-</td></tr><tr><td colspan=\"5\">targeted hate speech and offensive. Posts with</td></tr><tr><td colspan=\"5\">general obscenity are considered not targeted, al-</td></tr><tr><td colspan=\"5\">though they contain non-acceptable language.</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"text": "Tweets or Facebook messages from the PEI dataset, with their labels for each level of the annotation model of Hindi.",
"num": null,
"html": null,
"content": "<table><tr><td>Post</td><td/><td/><td>Label</td></tr><tr><td colspan=\"2\">\u0915\u093e\u0930 \u0907\u0928\u0915\u093e \u0939\u0932 \u091c \u0926 \u0915\u0930\u0947 \u0917\u0940\u0964 \u0938\u092d\u093e \u092e\u0947 \u0909\u0920\u093e\u092f\u093e\u0964 \u0909 \u092e\u0940\u0926 \u0939\u0948 \u0938\u0930-\u093f\u0915\u0938\u093e\u0928 \u0915 \u0938\u092e \u092f\u093e \u0932\u094b\u0915-\u0906\u091c \u0915\u0947 \u0930\u0932 \u0914\u0930 \u0935\u093e\u092f\u0928\u093e\u0921 \u0915\u0947</td><td>NOT</td><td>-</td><td>-</td></tr><tr><td colspan=\"2\">Today the problem of</td><td/><td/></tr><tr><td colspan=\"2\">farmers of Kerala and</td><td/><td/></tr><tr><td colspan=\"2\">Wayanad was raised in</td><td/><td/></tr><tr><td colspan=\"2\">the Lok Sabha. Hope</td><td/><td/></tr><tr><td colspan=\"2\">the government solves</td><td/><td/></tr><tr><td>these.</td><td/><td/><td/></tr><tr><td colspan=\"2\">\u0938\u092b \u0938 \u093e \u0938\u0947 \u092f\u093e\u0930 \u0939\u0948 -\u0915\u093e\u0928\u092a\u0941 \u0930 \u0917\u093e\u092f \u0938\u0947 \u092f\u093e\u0930 \u0939\u0948 ,\u0928 \u0927\u092e \u0938\u0947 ,\u0907\u0928\u0915\u094b \u0915 \u0926\u0932\u093e\u0932\u0940 \u0915\u0930\u0924\u0947 \u0939 \u0964\u0907\u0928\u0915\u094b \u0928 BJP \u0914\u0930 RSS \u0915\u0947 \u0932\u094b\u0917 \u0927\u092e</td><td colspan=\"3\">HOF HATE TIN</td></tr><tr><td>\u0926\u0947 \u0939\u093e\u0924.</td><td>People of</td><td/><td/></tr><tr><td colspan=\"2\">BJP and RSS broke reli-</td><td/><td/></tr><tr><td colspan=\"2\">gion. 
They neither love</td><td/><td/></tr><tr><td colspan=\"2\">cow nor religion, they</td><td/><td/></tr><tr><td colspan=\"2\">only love power -Kan-</td><td/><td/></tr><tr><td colspan=\"2\">pur countryside.</td><td/><td/></tr><tr><td colspan=\"2\">\u092c\u0940\u091c\u0947 \u092a\u0940 \u0915 \u093f\u0935\u091a\u093e\u0930\u0927\u093e\u0930\u093e \u0926\u0947 \u0936 \u0915\u094b \u092c\u093e\u0902 \u091f\u0928\u0947 \u0915 \u0939\u0948 , \u0926 \u0932\u0924 \u0915\u094b \u0915\u0941 -</td><td/><td/></tr><tr><td colspan=\"2\">\u091a\u0932\u0928\u0947 \u0915 \u0939\u0948 , \u0906\u093f\u0926\u0935\u093e \u0938\u092f \u0915\u094b \u0915\u0941 \u091a\u0932\u0928\u0947 \u0915 \u0939\u0948 , \u0905 \u092a\u0938\u0902 \u092f\u0915</td><td/><td/></tr><tr><td colspan=\"2\">\u0915\u094b \u0915\u0941 \u091a\u0932\u0928\u0947 \u0915 \u0939\u0948 , \u092c\u0940\u091c\u0947 \u092a\u0940 \u0915</td><td/><td/></tr><tr><td colspan=\"2\">\u0909\u0938 \u093f\u0935\u091a\u093e\u0930\u0927\u093e\u0930\u093e \u0915\u0947 \u0916\u0932\u093e\u092b \u0939\u092e \u092f\u0939\u093e\u0901 \u0916\u095c\u0947 \u0939 The ideology</td><td/><td/></tr><tr><td colspan=\"2\">of the BJP is to divide</td><td/><td/></tr><tr><td colspan=\"2\">the country, crush the</td><td/><td/></tr><tr><td colspan=\"2\">Dalits, crush the tribals,</td><td/><td/></tr><tr><td colspan=\"2\">crush the minorities and</td><td/><td/></tr><tr><td colspan=\"2\">are against that ideol-</td><td/><td/></tr><tr><td colspan=\"2\">ogy of the BJP.</td><td/><td/></tr></table>"
},
"TABREF2": {
"type_str": "table",
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"6\">: Distribution of labels combinations in PEI</td></tr><tr><td>data. Tasks</td><td/><td>Labels</td><td/><td colspan=\"2\">Total-Post</td></tr><tr><td>Task A</td><td>HOF</td><td>NOT</td><td>-</td><td colspan=\"2\">Train Test</td></tr><tr><td colspan=\"5\">Task B HATE OFFN NONE 1519</td><td>488</td></tr><tr><td>Task C</td><td>UNT</td><td>TNT</td><td>NONE</td><td/></tr></table>"
},
"TABREF4": {
"type_str": "table",
"text": "Classifier performance on PEI-2019 for English data",
"num": null,
"html": null,
"content": "<table><tr><td>Tasks</td><td>Model</td><td/><td>MNB</td><td/><td>SGD</td><td/><td/><td>LR</td><td colspan=\"3\">Linear SVM</td></tr><tr><td/><td>Labels</td><td>Pre</td><td>Rec F_1</td><td>Pre</td><td>Rec</td><td>F_1</td><td>Pre</td><td>Rec F_1</td><td>Pre</td><td>Rec</td><td>F_1</td></tr><tr><td>Sub-task A</td><td>HOF</td><td colspan=\"2\">0.97 0.21 0.34</td><td>0.70</td><td colspan=\"5\">0.43 0.53 0.91 0.15 0.26 0.68</td><td>0.40</td><td>0.50</td></tr><tr><td>-</td><td>NOT</td><td>0.81</td><td>1.00 0.90</td><td>0.85</td><td>0.95</td><td>0.90</td><td colspan=\"3\">0.80 1.00 0.89 0.84</td><td>0.95</td><td>0.89</td></tr><tr><td>Sub-task B</td><td>HATE</td><td colspan=\"2\">0.50 0.03 0.05</td><td>0.32</td><td>0.10</td><td>0.15</td><td colspan=\"3\">1.00 0.03 0.05 0.35</td><td>0.10</td><td>0.16</td></tr><tr><td>-</td><td>NONE</td><td>0.78</td><td>1.00 0.88</td><td>0.84</td><td>0.96</td><td>0.89</td><td colspan=\"3\">0.79 1.00 0.88 0.83</td><td>0.97</td><td>0.89</td></tr><tr><td>-</td><td>OFFN</td><td>0.50</td><td colspan=\"7\">0.02 0.03 0.85 0.61 0.71 1.00 0.09 0.16 0.84</td><td>0.46</td><td>0.60</td></tr><tr><td>Sub-task C</td><td>NONE</td><td>0.80</td><td>0.99 0.88</td><td>0.84</td><td>0.93</td><td>0.88</td><td colspan=\"3\">0.79 0.98 0.88 0.84</td><td>0.93</td><td>0.88</td></tr><tr><td>-</td><td>TIN</td><td colspan=\"2\">0.67 0.15 0.24</td><td>0.55</td><td>0.39</td><td colspan=\"6\">0.45 0.64 0.13 0.22 0.55 0.37 0.44</td></tr><tr><td>-</td><td>UNT</td><td>0.00</td><td>0.00 0.00</td><td>0.80</td><td>0.29</td><td>0.42</td><td colspan=\"3\">0.00 0.00 0.00 0.75</td><td>0.21</td><td>0.33</td></tr></table>"
},
"TABREF5": {
"type_str": "table",
"text": "Classifier result of SemEval 2019 task 6 dataset at Precision, Recall, F-score and Accuracy.",
"num": null,
"html": null,
"content": "<table><tr><td>Tasks</td><td>Model</td><td/><td>MNB</td><td/><td>SGD</td><td/><td>LR</td><td colspan=\"3\">Linear SVM</td></tr><tr><td/><td>Labels</td><td>Pre</td><td>Rec F_1</td><td>Pre</td><td>Rec F_1</td><td>Pre</td><td>Rec F_1</td><td>Pre</td><td>Rec</td><td>F_1</td></tr><tr><td>Sub-task A</td><td>OFF</td><td>0.85</td><td colspan=\"3\">0.15 0.25 0.92 0.10 0.18</td><td>0.83</td><td colspan=\"4\">0.37 0.51 0.78 0.46 0.58</td></tr><tr><td>-</td><td>NOT</td><td>0.70</td><td>0.99 0.82</td><td>0.69</td><td>1.00 0.82</td><td>0.76</td><td colspan=\"2\">0.96 0.85 0.78</td><td>0.94</td><td>0.85</td></tr><tr><td>Sub-task B</td><td>GRP</td><td>0.00</td><td>0.00 0.00</td><td>0.00</td><td colspan=\"4\">0.00 0.00 0.50 0.03 0.06 0.48</td><td>0.05</td><td>0.10</td></tr><tr><td>-</td><td>IND</td><td>0.83</td><td>0.01 0.02</td><td>1.00</td><td>0.00 0.01</td><td>0.65</td><td colspan=\"2\">0.14 0.23 0.65</td><td>0.23</td><td>0.34</td></tr><tr><td>-</td><td>NULL</td><td>0.69</td><td>1.00 0.82</td><td>0.69</td><td>1.00 0.82</td><td>0.72</td><td colspan=\"2\">0.99 0.83 0.73</td><td>0.98</td><td>0.84</td></tr><tr><td>-</td><td>OTH</td><td>0.00</td><td>0.00 0.00</td><td>0.00</td><td>0.00 0.00</td><td>0.00</td><td colspan=\"2\">0.00 0.00 0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>Sub-task C</td><td>NULL</td><td>0.69</td><td>0.99 0.81</td><td>0.68</td><td>1.00 0.81</td><td>0.73</td><td colspan=\"2\">0.97 0.83 0.76</td><td>0.94</td><td>0.84</td></tr><tr><td>-</td><td>TIN</td><td colspan=\"2\">0.77 0.10 0.17</td><td>0.73</td><td>0.04 0.08</td><td>0.72</td><td colspan=\"3\">0.28 0.40 0.67 0.39</td><td>0.49</td></tr><tr><td>-</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF6": {
"type_str": "table",
"text": "Classifier result of FIRE 2019 task HASOC dataset at Precision, Recall, F-score and Accuracy.",
"num": null,
"html": null,
"content": "<table><tr><td>Tasks</td><td>Model</td><td/><td>MNB</td><td/><td>SGD</td><td/><td>LR</td><td>Linear SVM</td></tr><tr><td/><td>Labels</td><td>Pre</td><td>Rec F_1</td><td>Pre</td><td colspan=\"3\">Rec F_1 Precision Recall F_1</td><td>Pre</td><td>Rec F_1</td></tr><tr><td>Sub-task A</td><td>HOF</td><td colspan=\"4\">0.70 0.18 0.29 0.78 0.07 0.12</td><td>0.67</td><td>0.28</td><td>0.40 0.64 0.36 0.46</td></tr><tr><td>-</td><td>NOT</td><td colspan=\"4\">0.64 0.95 0.76 0.62 0.99 0.76</td><td>0.66</td><td>0.91</td><td>0.77 0.68 0.87 0.76</td></tr><tr><td>Sub-task B</td><td>HATE</td><td colspan=\"4\">0.00 0.00 0.00 0.00 0.00 0.00</td><td>0.29</td><td>0.03</td><td>0.05 0.29 0.06 0.10</td></tr><tr><td>-</td><td colspan=\"5\">NONE 0.62 1.00 0.77 0.63 1.00 0.77</td><td>0.64</td><td>0.98</td><td>0.78 0.65 0.95 0.77</td></tr><tr><td>-</td><td>OFFN</td><td colspan=\"4\">0.00 0.00 0.00 0.00 0.00 0.00</td><td>0.60</td><td>0.03</td><td>0.06 0.57 0.08 0.14</td></tr><tr><td>-</td><td>PRFN</td><td colspan=\"4\">0.86 0.04 0.07 0.78 0.12 0.20</td><td>0.78</td><td>0.12</td><td>0.20 0.79 0.18 0.29</td></tr><tr><td>Sub-task C</td><td colspan=\"5\">NONE 0.65 0.96 0.77 0.64 1.00 0.78</td><td>0.67</td><td>0.92</td><td>0.78 0.68 0.87 0.76</td></tr><tr><td>-</td><td>TIN</td><td colspan=\"4\">0.65 0.14 0.23 0.86 0.06 0.12</td><td>0.64</td><td>0.26</td><td>0.37 0.57 0.34 0.43</td></tr><tr><td>-</td><td/><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF7": {
"type_str": "table",
"text": "Classifier result on testing dataset of PEI data",
"num": null,
"html": null,
"content": "<table><tr><td>Task/Model</td><td/><td>Sub-task A</td><td/><td/><td>Sub-task B</td><td/><td/><td>Sub-task C</td><td/></tr><tr><td>Model</td><td colspan=\"9\">Mac_f1 W_f1 Accuracy Mac_f1 W_f1 Accuracy Mac_f1 W_f1 Accuracy</td></tr><tr><td>Multinomial_NB</td><td>0.62</td><td>0.77</td><td>0.82</td><td>0.32</td><td>0.69</td><td>0.78</td><td>0.37</td><td>0.73</td><td>0.79</td></tr><tr><td>SGD</td><td>0.71</td><td>0.81</td><td>0.83</td><td>0.59</td><td>0.78</td><td>0.82</td><td>0.59</td><td>0.79</td><td>0.80</td></tr><tr><td>LR</td><td>0.58</td><td>0.75</td><td>0.81</td><td>0.36</td><td>0.70</td><td>0.79</td><td>0.37</td><td>0.73</td><td>0.79</td></tr><tr><td>Linear SVM</td><td>0.70</td><td>0.81</td><td>0.82</td><td>0.55</td><td>0.77</td><td>0.81</td><td>0.55</td><td>0.78</td><td>0.80</td></tr></table>"
},
"TABREF8": {
"type_str": "table",
"text": "Classifier result on testing dataset of SemEval 2019 Task 6 dataset",
"num": null,
"html": null,
"content": "<table><tr><td>Task/Model</td><td/><td>Subtask A</td><td/><td/><td>Subtask B</td><td/><td/><td>Subtask C</td><td/></tr><tr><td>Model</td><td colspan=\"9\">Mac_f1 W_f1 Accuracy Mac_f1 W_f1 Accuracy Mac_f1 W_f1 Accuracy</td></tr><tr><td>Multinomial_NB</td><td>0.54</td><td>0.63</td><td>0.71</td><td>0.33</td><td>0.59</td><td>0.69</td><td>0.21</td><td>0.57</td><td>0.69</td></tr><tr><td>SGD</td><td>0.50</td><td>0.61</td><td>0.70</td><td>0.30</td><td>0.56</td><td>0.68</td><td>0.21</td><td>0.57</td><td>0.69</td></tr><tr><td>LR</td><td>0.68</td><td>0.74</td><td>0.77</td><td>0.41</td><td>0.67</td><td>0.73</td><td>0.28</td><td>0.62</td><td>0.71</td></tr><tr><td>Linear SVM</td><td>0.71</td><td>0.76</td><td>0.78</td><td>0.44</td><td>0.70</td><td>0.74</td><td>0.32</td><td>0.65</td><td>0.72</td></tr></table>"
},
"TABREF9": {
"type_str": "table",
"text": "Classifier result on testing dataset of FIRE 2019 HASOC task dataset",
"num": null,
"html": null,
"content": "<table><tr><td>Task/Model</td><td/><td>Sub-task A</td><td/><td/><td>Sub-task B</td><td/><td/><td>Sub-task C</td><td/></tr><tr><td>Model</td><td colspan=\"9\">Mac_f1 W_f1 Accuracy Mac_f1 W_f1 Accuracy Mac_f1 W_f1 Accuracy</td></tr><tr><td>Multinomial_NB</td><td>0.53</td><td>0.58</td><td>0.65</td><td>0.21</td><td>0.49</td><td>0.62</td><td>0.33</td><td>0.56</td><td>0.65</td></tr><tr><td>SGD</td><td>0.44</td><td>0.51</td><td>0.62</td><td>0.24</td><td>0.51</td><td>0.63</td><td>0.30</td><td>0.52</td><td>0.64</td></tr><tr><td>LR</td><td>0.58</td><td>0.62</td><td>0.66</td><td>0.29</td><td>0.53</td><td>0.64</td><td>0.38</td><td>0.61</td><td>0.67</td></tr><tr><td>Linear SVM</td><td>0.61</td><td>0.64</td><td>0.67</td><td>0.35</td><td>0.56</td><td>0.64</td><td>0.40</td><td>0.62</td><td>0.66</td></tr></table>"
},
"TABREF10": {
"type_str": "table",
"text": "Classifier result of PEI-2019 dataset at Precision, Recall, F-score and Accuracy for Hindi data",
"num": null,
"html": null,
"content": "<table><tr><td>Tasks</td><td>Model</td><td/><td>MNB</td><td/><td>SGD</td><td/><td/><td>LR</td><td>Linear SVM</td></tr><tr><td/><td>Labels</td><td>Pre</td><td>Rec F_1</td><td>Pre</td><td>Rec</td><td colspan=\"4\">F_1 Precision Recall F_1</td><td>Pre</td><td>Rec F_1</td></tr><tr><td>Sub-task A</td><td>HOF</td><td colspan=\"5\">0.85 0.38 0.52 0.73 0.64 0.68</td><td>0.78</td><td>0.39</td><td>0.52 0.75 0.61 0.67</td></tr><tr><td>-</td><td>NOT</td><td>0.72</td><td colspan=\"2\">0.96 0.83 0.80</td><td>0.87</td><td>0.83</td><td>0.72</td><td>0.94</td><td>0.82 0.79 0.88 0.83</td></tr><tr><td>Sub-task B</td><td>HATE</td><td>0.33</td><td colspan=\"2\">0.02 0.04 0.59</td><td>0.34</td><td>0.43</td><td>0.57</td><td>0.17</td><td>0.26 0.63 0.36 0.46</td></tr><tr><td>-</td><td>NONE</td><td>0.64</td><td colspan=\"2\">0.99 0.78 0.76</td><td>0.93</td><td>0.83</td><td>0.68</td><td>0.98</td><td>0.80 0.73 0.96 0.83</td></tr><tr><td>-</td><td>OFFN</td><td>0.00</td><td colspan=\"2\">0.00 0.00 0.41</td><td>0.28</td><td>0.33</td><td>0.00</td><td>0.00</td><td>0.00 0.71 0.20 0.31</td></tr><tr><td>Sub-task C</td><td>NONE</td><td>0.73</td><td colspan=\"2\">0.98 0.84 0.81</td><td>0.89</td><td>0.84</td><td>0.75</td><td>0.98</td><td>0.85 0.82 0.91 0.86</td></tr><tr><td>-</td><td>TIN</td><td>0.79</td><td colspan=\"2\">0.30 0.44 0.67</td><td>0.59</td><td>0.63</td><td>0.81</td><td>0.35</td><td>0.49 0.72 0.60 0.66</td></tr><tr><td>-</td><td>UNT</td><td>0.00</td><td colspan=\"2\">0.00 0.00 0.00</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.00 0.00 0.00 0.00</td></tr></table>"
},
"TABREF11": {
"type_str": "table",
"text": "Classifier result on testing dataset of PEI Hindi data",
"num": null,
"html": null,
"content": "<table><tr><td>Task/Model</td><td/><td>Sub-task A</td><td/><td/><td>Sub-task B</td><td/><td/><td>Sub-task C</td><td/></tr><tr><td>Model</td><td colspan=\"9\">Mac_f1 W_f1 Accuracy Mac_f1 W_f1 Accuracy Mac_f1 W_f1 Accuracy</td></tr><tr><td>Multinomial_NB</td><td>0.67</td><td>0.71</td><td>0.74</td><td>0.20</td><td>0.50</td><td>0.64</td><td>0.42</td><td>0.69</td><td>0.74</td></tr><tr><td>SGD</td><td>0.76</td><td>0.78</td><td>0.78</td><td>0.40</td><td>0.67</td><td>0.70</td><td>0.49</td><td>0.76</td><td>0.77</td></tr><tr><td>LR</td><td>0.67</td><td>0.71</td><td>0.735</td><td>0.27</td><td>0.57</td><td>0.67</td><td>0.44</td><td>0.71</td><td>0.76</td></tr><tr><td>Linear_SV</td><td>0.75</td><td>0.77</td><td>0.78</td><td>0.40</td><td>0.68</td><td>0.72</td><td>0.51</td><td>0.77</td><td>0.79</td></tr></table>"
},
"TABREF12": {
"type_str": "table",
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td>and 8 show results of SemEval 2019 Task 6</td></tr><tr><td>dataset for English. The highest accuracy scores are</td></tr><tr><td>0.78, 0.74 and 0.72 for Subtask A, Subtask B and</td></tr><tr><td>subtask C respectively.</td></tr></table>"
}
}
}
}