{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:47.894573Z"
},
"title": "Automated Detection of Cyberbullying Against Women and Immigrants and Cross-domain Adaptability",
"authors": [
{
"first": "Thushari",
"middle": [],
"last": "Atapattu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Adelaide",
"location": {
"settlement": "Adelaide",
"country": "Australia"
}
},
"email": "thushari.atapattu@adelaide.edu.au"
},
{
"first": "Mahen",
"middle": [],
"last": "Herath",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Moratuwa",
"location": {
"settlement": "Katubedda",
"country": "Sri Lanka"
}
},
"email": ""
},
{
"first": "Georgia",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Adelaide",
"location": {
"settlement": "Adelaide",
"country": "Australia"
}
},
"email": ""
},
{
"first": "Katrina",
"middle": [],
"last": "Falkner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Adelaide",
"location": {
"settlement": "Adelaide",
"country": "Australia"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Cyberbullying is a prevalent and growing social problem due to the surge of social media technology usage. Minorities, women, and adolescents are among the common victims of cyberbullying. Despite the advancement of NLP technologies, the automated cyberbullying detection remains challenging. This paper focuses on advancing the technology using state-of-the-art NLP techniques. We use a Twitter dataset from SemEval 2019-Task 5 (HatEval) on hate speech against women and immigrants. Our best performing ensemble model based on DistilBERT has achieved 0.73 and 0.74 of F1 score in the task of classifying hate speech (Task A) and aggressiveness and target (Task B) respectively. We adapt the ensemble model developed for Task A to classify offensive language in external datasets and achieved \u223c0.7 of F1 score using three benchmark datasets, enabling promising results for cross-domain adaptability. We conduct a qualitative analysis of misclassified tweets to provide insightful recommendations for future cyberbullying research.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Cyberbullying is a prevalent and growing social problem due to the surge of social media technology usage. Minorities, women, and adolescents are among the common victims of cyberbullying. Despite the advancement of NLP technologies, the automated cyberbullying detection remains challenging. This paper focuses on advancing the technology using state-of-the-art NLP techniques. We use a Twitter dataset from SemEval 2019-Task 5 (HatEval) on hate speech against women and immigrants. Our best performing ensemble model based on DistilBERT has achieved 0.73 and 0.74 of F1 score in the task of classifying hate speech (Task A) and aggressiveness and target (Task B) respectively. We adapt the ensemble model developed for Task A to classify offensive language in external datasets and achieved \u223c0.7 of F1 score using three benchmark datasets, enabling promising results for cross-domain adaptability. We conduct a qualitative analysis of misclassified tweets to provide insightful recommendations for future cyberbullying research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Cyberbullying is \"the repetitive use of aggressive language amongst peers with the intention to harm others through digital media\" (Rosa et al., 2019) . Due to the surge of social media technology use, cyberbullying has become a prevalent and growing social problem. Unlike in the physical environment, cyberspace, in particular, online social platforms are not yet evolved sufficiently to prevent people from communicating without disclosing identities, spreading rumours, and harassing others. The risk of and potential consequences caused by cyberbullying are critical including both physical and mental health risk to victims. The impact and consequences are common to all generations (e.g. young, elderly) including emotional and psychological dis-tress, decline in personal/academic development, anti-social behaviour, and, potentially, suicide.",
"cite_spans": [
{
"start": 131,
"end": 150,
"text": "(Rosa et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "The criticality of this societal problem is demonstrated from a study by Yale University, commenting \"cyberbullying victims are 2 to 9 times more likely to consider committing suicide\" across the globe. 1 Within Australia, the eSafety Commissioner comments \"one in every five Australian children aged eight to seventeen are victims of cyberbullying (2018)\". 2 Adolescents, minorities (e.g. refugees, LGBTQI) and women are among common targets of cyberbullying. According to Bullying Statistics 3 , over half of adolescents are victims of cyberbullying and about the same percentage are involved in bullying.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "Despite recent research advancement in hate speech detection (Fortuna and Nunes, 2018) , automated identification of cyberbullying attempts (i.e. repetitive hate speech against an individual or a group) remains as a challenging subtask of NLP. Due to diverse variants of language (e.g. hate, intimidation, sarcasm, metaphors) used by bullies and the evolution of language (e.g. slang), particularly among adolescents, the automated detection of cyberbullying is extremely challenging. The example below appears to be misogynistic as it includes the term 'b***h'; however, it is manually classified as not misogyny since the slang 'gay a*s b***h' is commonly used for a male or gay person.",
"cite_spans": [
{
"start": 61,
"end": 86,
"text": "(Fortuna and Nunes, 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "\"you a gay a*s b***h who seeks attention, STOP! I knew ever since you gonna switch up on me... I guess you did F***ING SNAKE A*S H*E!\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "To mitigate the research and social problem of cyberbullying, this paper focuses on advancing the technology to classify cyberbullying using stateof-the-art NLP techniques. As a case study, we focus on cyberbulling against women and immigrants. Accordingly, our first research question (RQ1) asks, Can we build machine learning models to outperform current cyberbullying classification systems on women and immigrants?. The findings of RQ1 will lead us to explore the limitations of our models and explanations for misclassification. Hence, our second research question (RQ2) investigates, What is the content of misclassified tweets and how can we categorise them?. Finally, to evaluate the validity of our models across external cyberbullying/hate speech datasets, our third research question (RQ3) investigates, Can we successfully validate machine learning models developed for cyberbullying detection within the context of women and immigrants for other benchmark datasets?.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "To answer our research questions, we utilise a Twitter dataset developed for SemEval 2019 -Task 5 (HatEval) (Basile et al., 2019) that includes labels for three sub tasks: 1) hate speech, 2) aggressiveness, and 3) target (individual or group). We adopt a mixed-method study, using a combination of the building of machine learning models (RQ1 & RQ3) and qualitative content analysis (RQ2) as our methodology. We make the following main contributions:",
"cite_spans": [
{
"start": 108,
"end": 129,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "\u2022 We developed and evaluated cyberbullying classification models using state-of-the-art NLP technology. Even though our model performance on Task A is either equal or slightly lower than baselines, we outperformed all previous best systems and baselines on Task B. Therefore, our ensemble models based on Dis-tilBERT (Sanh et al., 2019) serves as the best system as yet to classify aggressiveness and target (Task B).",
"cite_spans": [
{
"start": 317,
"end": 336,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "\u2022 We conducted a qualitative study to categorise misclassified tweets into meaningful codes. We distinguished six categories: lack of context (CNTX), gender-related issues (GEND), issue with resolving slangs (SLNG), issues in the original annotation (ERROR), misclassified by our model (MSCL), and issues not belong to any category (OTHER) emerged from our data, establishing a point of reference for future researchers in cyberbullying, particularly, within the context of minorities (e.g. women, LGBTQI, immigrants).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "\u2022 We adopted our best pre-trained model to evaluate other benchmark datasets, including OffensEval challenge (Zampieri et al., 2019 (Zampieri et al., , 2020 and Hate Offense task (Davidson et al., 2017) . Our model generalised reasonably well (\u223c0.7) with both tasks, contributing to developing a generalised model across different cyberbullying-related tasks.",
"cite_spans": [
{
"start": 109,
"end": 131,
"text": "(Zampieri et al., 2019",
"ref_id": "BIBREF22"
},
{
"start": 132,
"end": 156,
"text": "(Zampieri et al., , 2020",
"ref_id": "BIBREF14"
},
{
"start": 179,
"end": 202,
"text": "(Davidson et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Motivation",
"sec_num": "1"
},
{
"text": "Cyberbullying is a complex phenomenon that needs multiple psychological, linguistic, and social theories to understand its nature. The identification of cyberbullying is inherently more complex even for humans (except victims) as it involves repetitive behaviour, peer-oriented nature, and intentionality to harm. Therefore, we utilise a definition stated in a recent systematic literature review on cyberbullying (Rosa et al., 2019) as \"repetitive use of aggressive language amongst peers with the intention to harm others through digital media\". Some recent studies (Fortuna and Nunes, 2018) including WOAH 4 (previously known as ALW) workshop (Roberts et al., 2019) have focused on hate speech detection as a more general field. Despite recent advancement in hate speech detection, recognising cyberbullying in everyday problems is primarily manual based on victim reports or manual moderation. Recent studies rely on contextual features such as demography, social network, and sentiments/emotions as features to train cyberbullying classifiers (Chatzakou et al., 2019) .",
"cite_spans": [
{
"start": 414,
"end": 433,
"text": "(Rosa et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 646,
"end": 668,
"text": "(Roberts et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 1048,
"end": 1072,
"text": "(Chatzakou et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "Conversely, some related workshops such as TRAC (Kumar et al., 2018) and challenges such as HatEval (Basile et al., 2019) , OffensEval (Zampieri et al., 2019 (Zampieri et al., , 2020 contributed to advance the research field by developing systems using cuttingedge NLP techniques like Universal Encoder -Fermi (Indurthi et al., 2019) , LT3 (Bauwelinck et al., 2019) , ensemble of deep learning models like OpenAI's GPT and Transformer models (Team NLPR@SAPOL (Seganti et al., 2019) ), and BERT (NULI (Liu et al., 2019) ). Some of these systems have surpassed baselines and earned recognition as the best-performing systems in specific subtasks (e.g. NULI achieved 0.82 of F1 score and ranked 1st place in subtask A to classify offensive language while it ranked only in 18th place for subtask C to classify targets such as individuals, group).",
"cite_spans": [
{
"start": 48,
"end": 68,
"text": "(Kumar et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 100,
"end": 121,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 135,
"end": 157,
"text": "(Zampieri et al., 2019",
"ref_id": "BIBREF22"
},
{
"start": 158,
"end": 182,
"text": "(Zampieri et al., , 2020",
"ref_id": "BIBREF14"
},
{
"start": 310,
"end": 333,
"text": "(Indurthi et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 340,
"end": 365,
"text": "(Bauwelinck et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 459,
"end": 481,
"text": "(Seganti et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 500,
"end": 518,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "Despite the promise of current systems, these models are not consistent enough to perform reasonably well within all sub tasks of cyberbullying (i.e. hate speech, aggressiveness and target). Additionally, these models were not validated across other cyberbullying-related tasks to ensure generalisability. Related literature also lacks comprehensive contributions to draw implications on why machine learning models fail to improve further. Our work focuses on addressing these three drawbacks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "Research Questions. Our research is guided by three research questions,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Methodology",
"sec_num": "3"
},
{
"text": "\u2022 RQ1: Can we build machine learning models to outperform current cyberbullying classification systems?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Methodology",
"sec_num": "3"
},
{
"text": "\u2022 RQ2: What is the content of misclassified tweets, and how can we categorise them?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Methodology",
"sec_num": "3"
},
{
"text": "\u2022 RQ3: Can we successfully validate machine learning models developed for cyberbullying detection within the context of women and immigrants for other benchmark datasets?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Methodology",
"sec_num": "3"
},
{
"text": "Dataset. We utilise a dataset collected from Twitter during July to September 2018 for SemEval 2019 -Task 5 (HatEval) challenge (Basile et al., 2019) . This challenge was organised to advance the technology to classify cyberbullying against women and immigrants. Tweets were collected both from English and Spanish language. We utilise only the English dataset in this paper. The dataset contains a set of tweets and their labels; HS -Hate Speech (0 -No, 1 -Yes), TR -Target Range (0 -generic group, 1 -individual), AG -Aggressiveness (0 -No, 1 -Yes). The challenge was divided into two subtasks, Task A -classification of HS, and Task B -classification of AG and TR. The dataset was labelled via AllCloud crowdsourcing platform and added two more experienced annotators to determine the final labels. Inter-rater reliability for HS, TR, and AG is 0.83, 0.7, 0.73 respectively. The dataset consists of a total of 13,000 tweets with 10,000 for training set (5,000 each for women and immigrant) and 3,000 for test set (1,500 each for women and immigrant). Table 1 of the work by Basile et al. (2019) demonstrates more information about data distribution.",
"cite_spans": [
{
"start": 128,
"end": 149,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1077,
"end": 1097,
"text": "Basile et al. (2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1054,
"end": 1061,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Research Methodology",
"sec_num": "3"
},
{
"text": "We adopt a mixed-method study, using a combination of the building of machine learning models (RQ1 RQ3) and content analysis (RQ2) as our methodology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "Pre-processing. We conducted text preprocessing using standard techniques including tokenisation and removal of non-ASCII characters such as decoding emoticons 5 . Additionally, other preprocessing steps such as removal of punctuations and shortened URLs were performed while fine-tuning deep learning based models like DistilBERT (Sanh et al., 2019) . We retained hashtags as these were important features of our models.",
"cite_spans": [
{
"start": 331,
"end": 350,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "Building of machine learning models. We adopted state-of-the-art NLP and deep learning techniques for text classification to solve cyberbullying detection problem, and built our models using DistilBERT (Sanh et al., 2019) , a lighter and a faster pre-trained language model based on BERT (Devlin et al., 2018 ). To answer RQ1 through model comparisons, we utilised MFC and top-ranked systems in each task of HatEval challenge (Basile et al., 2019) as our baselines.To answer RQ3, we apply our pre-trained models on HatEval into other benchmark datasets related to cyberbullying. For this, we utilise three external datasets developed for SemEval Task 12 -OffensEval2020 (Zampieri et al., 2020) , SemEval Task 6 -OffensEval2019 (Zampieri et al., 2019) and Hate Offensive language detection by Davidson et al. (2017) .",
"cite_spans": [
{
"start": 202,
"end": 221,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 288,
"end": 308,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF8"
},
{
"start": 426,
"end": 447,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 670,
"end": 693,
"text": "(Zampieri et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 727,
"end": 750,
"text": "(Zampieri et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 792,
"end": 814,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "We adopted open coding (Corbin and Strauss, 1990 ), a qualitative content analysis technique as our method to answer RQ2 on exploring the content of misclassified tweets and categorisation them into a coding schema.",
"cite_spans": [
{
"start": 23,
"end": 48,
"text": "(Corbin and Strauss, 1990",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Content analysis.",
"sec_num": null
},
{
"text": "To address the Task A we created three classification models named A, B and C (see Figure 1 ) based on the DistilBERT model with a sequence classification head on top (Sanh et al., 2019 ). An imbalanced subset of training data where the majority class was positive was used to train model A, and an imbalanced subset of training data where the majority class was negative was used to train model B. Inspired by the approach described in Khoussainov et al. (2005) , model C was trained on a balanced subset of training data which were classified differently by the biased classifiers A and B. We fine tuned all three classifiers with a learning rate of 5e-05 for 3 epochs using a batch size of 32.Finally, we used simple voting to create an ensemble classifier combining the models A, B and C.",
"cite_spans": [
{
"start": 167,
"end": 185,
"text": "(Sanh et al., 2019",
"ref_id": "BIBREF20"
},
{
"start": 437,
"end": 462,
"text": "Khoussainov et al. (2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 83,
"end": 91,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ensemble model -Task A",
"sec_num": "4.1"
},
{
"text": "Task B can be modelled as a multi-class (i.e. 5 classes) classification problem with the individual classes being (HS=0,TR=0,AG=0), (HS=1, TR=1, AG=0), (HS=1, TR=1, AG=1), (HS=1, TR=0, AG=1), and (HS=1, TR=0, AG=0) (Gertner et al., 2019) . We developed 5 binary classifiers, one for each class, using the DistilBERT model with a sequence classification head on top. Each classifier was fine tuned with a learning rate of 5e-5 and a batch size of 32 for 3 epochs. We then combined the predictions from these classifiers using probabilities to derive the final class labels. If only one classifier predicted a given data instance as positive, we assigned the class label of that classifier to the data instance. Whenever several classifiers predicted the positive class label for a given instance, we selected the prediction with the highest probability. If all the classifiers predicted the negative class label for a given instance, we selected the prediction with the lowest probability.",
"cite_spans": [
{
"start": 215,
"end": 237,
"text": "(Gertner et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble model -Task B",
"sec_num": "4.2"
},
{
"text": "Evaluation Metric. To calculate the classification effectiveness, we use different metrics in each subtask. Task A uses the macro-averaged F1 score while Task B uses Exact Match Ratio (EMR) along with macro-averaged F1 score (Basile et al., 2019) .",
"cite_spans": [
{
"start": 225,
"end": 246,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "\u2022 F1 Score. The harmonic mean of precision and recall where precision is the proportion of predicted positive instances that are actually positive while recall is the proportion of actual positive instances that are predicted as positive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "\u2022 Exact Matching Ratio (EMR). Since Task B is a multi-label classification problem, EMR is calculated by combining all the dimensions (i.e. HS, TR, AG) to be predicted. The calculation of EMR is discussed in Basile et al. 3. MFC baseline. MFC is a trivial model that assigns the most frequent label in training set to all instances in the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "The performance of our ensemble model using the official HatEval test set is shown in Figure 2 . The results demonstrate that our ensemble model has achieved 0.49 F1 score for Task A. In Task A, even though we outperformed MFC baseline (F1 score = 0.37), our scores did not exceed the best Hat-Eval system -Fermi (F1 score = 0.65) (Indurthi et al., 2019) . Nevertheless, our Task A performance scores are not promising for real-world adoption.",
"cite_spans": [
{
"start": 331,
"end": 354,
"text": "(Indurthi et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 86,
"end": 94,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Answering RQ1 -Model Evaluation",
"sec_num": "5.1"
},
{
"text": "Conversely, our ensemble model has obtained 0.62 of F1 score for task B which exceeds the best systems of HatEval Task B -LT3 (Bauwelinck et al., 2019 ) (F1 score = 0.47) and MFC baseline (F1 score = 0.42) (Basile et al., 2019) . In Task B of HatEval, no system has been able to outperform the EMR score of MFC baseline, which achieved 0.58 of EMR (Note: Exact Matching Ratio was the metric used for HatEval Task B evaluation). LT3 system and our ensemble model both equally achieved 0.57 of EMR which ranked us in the top place for Task B followed by MFC baseline. Since our DistilBERT-based ensemble model achieved an F1 score over 0.9 in another cyberbullying-related task (SemEval Task 12 -OffenseEval 2020) (Zampieri et al., 2020) (Herath et al., 2020) , we further analysed the peculiar behaviour of model performance with HatEval challenge by unpacking the dataset.",
"cite_spans": [
{
"start": 126,
"end": 150,
"text": "(Bauwelinck et al., 2019",
"ref_id": "BIBREF1"
},
{
"start": 206,
"end": 227,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 712,
"end": 735,
"text": "(Zampieri et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 736,
"end": 757,
"text": "(Herath et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answering RQ1 -Model Evaluation",
"sec_num": "5.1"
},
{
"text": "We plotted the percentages of tweets annotated as having hate speech when some common hashtags or derogatory tokens (e.g. #buildthatwall, b***h) were found in tweets. Figure 3a) depicts the variation of data across training, dev and test sets. According to Figure 3a ), it appears that training and dev set are slightly similar yet drastically different from the test set. For example, it appears that the likelihood of tweets with the token '#buildthatwall (token 1)' being annotated as having hate speech is 100% in train and dev set, however, it is approximately 20% in the test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 177,
"text": "Figure 3a)",
"ref_id": null
},
{
"start": 257,
"end": 266,
"text": "Figure 3a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Answering RQ1 -Model Evaluation",
"sec_num": "5.1"
},
{
"text": "In order to examine whether discrepancies in the dataset had any impact on the poor performance, we merged development, training and test sets, shuffled the rows, and randomly split them again (referred to as 'adjusted' dataset) according to the proportions in the 'original' HatEval dataset (see Section 3 -'dataset'). Figure 3b) demonstrates that there was a disparity with data distribution in the 'original' dataset. For example, in the 'adjusted' dataset, the percentage of '#buildthatwall' being annotated as having hate speech is approximate (\u223c60%) across train, dev and test sets. This finding led us to train our models with 'adjusted' dataset and fine-tuned the parameters. Figure 3b) and Figure 4 depicts the new data distribution and model performance using 'adjusted' dataset respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 320,
"end": 330,
"text": "Figure 3b)",
"ref_id": null
},
{
"start": 684,
"end": 694,
"text": "Figure 3b)",
"ref_id": null
},
{
"start": 699,
"end": 707,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Answering RQ1 -Model Evaluation",
"sec_num": "5.1"
},
{
"text": "According to Figure 4 , our ensemble models have achieved 0.73 of F1 score for Task A and 0.75 of F1 score for Task B on 'adjusted' test set. We also achieved 0.62 EMR for Task B on test set. Due to the difficulty in replicating LT3 system (Bauwelinck et al., 2019) to train on 'adjusted' dataset, we obtained performance of 'SVM+USE' model (Indurthi et al., 2019) using our 'adjusted' dataset. As shown in Figure 4 , our model and baseline demonstrated equal performance in Task A. Conversely, our model outperforms 'SVM+USE' baseline by a margin of 0.06 in Task B. As mentioned in Section 4, we used 3 epochs, a batch size of 32 and a learning rate of 5e-5 to train our models.",
"cite_spans": [
{
"start": 240,
"end": 265,
"text": "(Bauwelinck et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 341,
"end": 364,
"text": "(Indurthi et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 13,
"end": 21,
"text": "Figure 4",
"ref_id": null
},
{
"start": 407,
"end": 415,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Answering RQ1 -Model Evaluation",
"sec_num": "5.1"
},
{
"text": "We can automatically classify cyberbullying against women and immigrant with an F1 score of 0.73 and 0.75 in Task A (hate speech) and Task B (aggressive and targeted) respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ1:",
"sec_num": null
},
{
"text": "The primary focus of our research is on improving the recall, i.e. to correctly identify tweets that are cyberbullying attempts against women and immigrants as it will eventually contribute to safe cyberspace for minorities. We have achieved 0.73 and 0.76 of recall for Task A and B respectively using 'adjusted' dataset compared to low recall of baseline systems. We are also interested in controlling true negatives, i.e. tweets that are not actually cyberbullying but are identified as positive. We exceed precision of 0.73 in both tasks using our DistilBERT-based ensemble models. Otherwise, incorrect classification of cyberbullying will have an impact on the reputation of social media platforms, particularly for freedom of speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ1:",
"sec_num": null
},
{
"text": "Misclassified Tweets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answering RQ2 -Content Analysis of",
"sec_num": "5.2"
},
{
"text": "To answer our RQ2, we extracted misclassified tweets (task A & B) from our ensemble model. A content analysis method ('open coding') (Corbin and Strauss, 1990 ) has been adopted. The second author manually categorised 10 random misclassified tweets into three meaningful codes: gender-related issues (GEND), context-related issues (CNTX), and slangs (SLNG). After defining initial codes, two annotators (first and third author who are experienced in cyberbullying context) trialed them on a random sample of 299 misclassified tweets (population is 626 tweets), resulting in a confidence interval of 4.1 at a confidence level of 95%.",
"cite_spans": [
{
"start": 133,
"end": 158,
"text": "(Corbin and Strauss, 1990",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answering RQ2 -Content Analysis of",
"sec_num": "5.2"
},
{
"text": "To measure the inter-annotator agreement we used the Kappa statistic.Due to the complex nature of cyberbullying phenomenon and availability of multiple codes to annotate, we failed to reach a reasonable inter-rater agreement. To overcome this, we refined our codes until we reach an agreement on a coding scheme that contained codes for all misclassified tweets in our sample. Finally, we added three additional codes: errors in original annotation (ER-ROR), misclassified by our model (MSCL), and not belong to any category (OTHER) when both annotators agree that original (HatEval) annotation is dubious, predicted label is incorrect, and when all other possibilities have been exhausted respectively. Table 1 shows the finalised set of codes along with their frequency distribution (%).",
"cite_spans": [],
"ref_spans": [
{
"start": 704,
"end": 711,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Answering RQ2 -Content Analysis of",
"sec_num": "5.2"
},
{
"text": "Our results demonstrate that the lack of contextual information to resolve pronouns or user names in tweets to determine 'gender' (i.e. whether the target is women) is one of the common reasons for misclassification. Based on the frequency distribution ('last column' in Table 1 ), the most frequent category of misclassification is 'CNTX'. Lack of contextual information is a widely raised constraint within the majority of previous works which aligns with our findings. The least frequent category of misclassification is 'SLNG'. One possible explanation for this behaviour could be due to the dataset is extracted from an 'adult' group, and they are less likely to introduce new slang words compared to adolescents. Also, our results suggest that 3% of misclassified tweets are due to 'errors' (\u223c10 tweets) in the original annotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 278,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Answering RQ2 -Content Analysis of",
"sec_num": "5.2"
},
{
"text": "Conversely, we admit that our model predicted inaccurate labels in 3% of cases (\u223c10 tweets). Our findings suggest that 30% of instances belong to 'OTHER' category. Through manual inspection, we observed that this might be due to reasons like sarcasm, swearing with friends, abbreviations, complaints, and negations. However, the analysis reported in this paper is not comprehensive to include adequate evidence to report subcategories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answering RQ2 -Content Analysis of",
"sec_num": "5.2"
},
{
"text": "RQ2: Misclassified tweets can be categorised into six types, with the contextrelated issues ('CNTX') being the most frequent reason for misclassification, followed by issues to resolve gender ('GEND') and slang ('SLNG'). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answering RQ2 -Content Analysis of",
"sec_num": "5.2"
},
{
"text": "To answer our RQ3 about the generalisability of our models on different cyberbullying-related tasks, we applied and tested our pre-trained ensemble model in other three tasks, 1) SemEval 2020 -Task 12 (Of-fensEval2020) (Zampieri et al., 2020) , 2) SemEval 2019 -Task 6 (OffensEval2019) (Zampieri et al., 2019) , and 3) Hate & Offensive language (refer as 'Hate & Offense') dataset (Davidson et al., 2017) .",
"cite_spans": [
{
"start": 219,
"end": 242,
"text": "(Zampieri et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 286,
"end": 309,
"text": "(Zampieri et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 381,
"end": 404,
"text": "(Davidson et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answering RQ3 -Cross-task Evaluation",
"sec_num": "5.3"
},
{
"text": "1. OffensEval datasets (Zampieri et al., 2019 (Zampieri et al., , 2020 . These datasets include three subtasks to determine whether a tweet expresses cyberbullying based on whether it is, 1) offensive or not, 2) targeted or not, and 3) if targeted, whether it is toward an individual, group, or other.",
"cite_spans": [
{
"start": 23,
"end": 45,
"text": "(Zampieri et al., 2019",
"ref_id": "BIBREF22"
},
{
"start": 46,
"end": 70,
"text": "(Zampieri et al., , 2020",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answering RQ3 -Cross-task Evaluation",
"sec_num": "5.3"
},
{
"text": "2. Hate & Offense dataset (Davidson et al., 2017) . This dataset also has three subtasks to determine whether a tweet include hate and offensive language based on, 1) hate speech or not, 2) offensive but not hate speech, and 3) neither offensive nor hate speech.",
"cite_spans": [
{
"start": 26,
"end": 49,
"text": "(Davidson et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answering RQ3 -Cross-task Evaluation",
"sec_num": "5.3"
},
{
"text": "The tasks of these two datasets were different from HatEval challenge except the first subtask to determine hate (or offensive) language. Therefore, we report the results of cross-domain validation using Task A (i.e. hate speech or not) only.We extracted a random sample of 2,971 tweets from each dataset to align with the test size of our orig- inal Task A when the official test set was unavailable publicly. 2 shows the outcome using our pretrained ensemble model (Task A).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answering RQ3 -Cross-task Evaluation",
"sec_num": "5.3"
},
{
"text": "Current state of the art models have reportedly achieved F1 scores of 0.82, 0.92 and 0.90 for Of-fensEval2019, OffensEval2020 and Hate & Offense datasets respectively (Zampieri et al., 2019 (Zampieri et al., , 2020 (Davidson et al., 2017) . According to Table 2 , we have achieved a satisfactory performance with approximately 0.7 of accuracy/F1 score for all task pairs (i.e. training on HatEval dataset and testing on another dataset).These results suggest that our pre-trained ensemble model on HatEval is generalised reasonably well (Accuracy/F1 score \u223c0.7) when classifying hate speech irrespective of the context (e.g. misogyny etc.). Due to the misalignment between datasets, we did not apply our models to other tasks of external datasets.",
"cite_spans": [
{
"start": 167,
"end": 189,
"text": "(Zampieri et al., 2019",
"ref_id": "BIBREF22"
},
{
"start": 190,
"end": 214,
"text": "(Zampieri et al., , 2020",
"ref_id": "BIBREF14"
},
{
"start": 215,
"end": 238,
"text": "(Davidson et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 254,
"end": 262,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Answering RQ3 -Cross-task Evaluation",
"sec_num": "5.3"
},
{
"text": "Our pre-trained models from HatEval dataset can automatically classify hate speech in other benchmarking datasets with a reasonable accuracy (\u223c0.7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RQ3:",
"sec_num": null
},
{
"text": "The ultimate goal of our work is to advance the technology to detect and classify cyberbullying using state-of-the-art NLP techniques, with the longterm aim of enabling social media as a safe space for all users. We developed DistilBERT-based ensemble model per task as a basis to answer our RQ1. With an initial poor performance using a test set of 'original' HatEval dataset, we suggest developing a novel version of the original dataset (i.e. 'adjusted') through merging, shuffling and splitting. The 'adjusted' dataset contributed to better performance of F1 score of 0.73 and 0.74 for Task A and B respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The six categories of misclassified tweets that emerged from our qualitative analysis (RQ2) build a point of reference for the content of such misclassifications. This initial categories can help researchers to understand the grounds to improve automated cyberbullying classification. Also, the categories identified through this research can serve as a guide which could extend as a conceptual framework for future qualitative and quantitative cyberbullying research. Additionally, the categories along with the frequencies that we report in this work provide implications for researchers to collect, annotate, and revise their datasets that could minimise the likelihood of misclassification produced by machine learning models including providing additional contextual information about data. Conversely, this raises new research questions on whether we could improve the performance of machine learning models further without relying on demographic data such as gender and data on language evolution such as out-of-vocabulary slang and abbreviations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The findings from our RQ3 on generalisability of pre-trained models on other cyberbullying-related tasks demonstrated reasonable accuracy (\u223c0.7). A possible explanation of not achieving more could be due to pre-trained models might biased within women and immigrant context (e.g. specific hashtags, misogyny) and not be the best option for classifying 'general' offense-related tasks. As a solution, future models could augment data from gen-eral as well as specific datasets (e.g., racial (Davidson et al., 2019) , gendered (Kumar et al., 2020) ).",
"cite_spans": [
{
"start": 490,
"end": 513,
"text": "(Davidson et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 525,
"end": 545,
"text": "(Kumar et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In addition to the lack of contextual information that limits our model improvement further, this research is subject to implicit bias of annotators when judging categories to answer RQ2. As a solution, our future work will incorporate a semiautomated approach for misclassification annotation by reusing readily available lexical resources like MRC psycholinguistic database (Coltheart and Wilson, 1987) , LIWC (Pennebaker et al., 2001) to obtain initial codes and employ at least three annotators to refine the codes. Furthermore, our 'adjusted' dataset may not provide a robust solution in terms of replicability. Therefore, we intend to create a couple of 'adjusted' datasets and report the average of performance in our future works. We also share our current 'adjusted' dataset to enable replication of experiments.",
"cite_spans": [
{
"start": 376,
"end": 404,
"text": "(Coltheart and Wilson, 1987)",
"ref_id": "BIBREF4"
},
{
"start": 412,
"end": 437,
"text": "(Pennebaker et al., 2001)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In summary, we propose that future cyberbullying classification models need to concentrate on incorporating state-of-the-art solution to common NLP problems like language evolution, sarcasm detection, and pronoun resolution. Additionally, future research should also focus on advancing the prediction of demographic information such as gender, age, and personality from data within an ethical framework without reidentifying Twitter profiles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Due to massive participation in social media, manual moderation of cyberbullying is an extremely labour-intensive task which leads to delay in taking action against bullies while protecting victims. Accordingly, automated classification of cyberbullying emerged and remains as a challenging NLP task. This research contributes to develop machine learning models for cyberbullying classification. Through a qualitative content analysis, we also contributed to develop a coding schema to deepen the understanding of misclassifications produced by models, enabling future researchers to minimise the impact of data for poor model performance. When social media platforms are equipped with effective cyberbullying detection models, victimised communities will be able to discuss their concerns openly, without harassment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "https://theorganicagency.com/blog/life-deathconsequences-cyber-bullying/ 2 https://www.theguardian.com/society/2018/oct/03/onein-five-australian-children-are-victims-ofcyberbullying-esafety-commissioner-says 3 http://www.bullyingstatistics.org/content/cyberbullying-statistics.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.workshopononlineabuse.com/home",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/carpedm20/emoji",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2007"
]
},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela San- guinetti. 2019. SemEval-2019 task 5: Multilin- gual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 54-63.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "LT3 at SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter (hatEval)",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Bauwelinck",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "436--440",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2077"
]
},
"num": null,
"urls": [],
"raw_text": "Nina Bauwelinck, Gilles Jacobs, V\u00e9ronique Hoste, and Els Lefever. 2019. LT3 at SemEval-2019 task 5: Multilingual detection of hate speech against immi- grants and women in Twitter (hatEval). In Proceed- ings of the 13th International Workshop on Semantic Evaluation, pages 436-440. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Universal sentence encoder",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Sheng Yi Kong",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "St",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Detecting cyberbullying and cyberaggression in social media",
"authors": [
{
"first": "Despoina",
"middle": [],
"last": "Chatzakou",
"suffix": ""
},
{
"first": "Ilias",
"middle": [],
"last": "Leontiadis",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Blackburn",
"suffix": ""
},
{
"first": "Emiliano",
"middle": [],
"last": "De Cristofaro",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Stringhini",
"suffix": ""
},
{
"first": "Athena",
"middle": [],
"last": "Vakali",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Kourtellis",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM Trans. Web",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3343484"
]
},
"num": null,
"urls": [],
"raw_text": "Despoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Emiliano De Cristofaro, Gianluca Stringhini, Athena Vakali, and Nicolas Kourtellis. 2019. De- tecting cyberbullying and cyberaggression in social media. ACM Trans. Web, 13(3).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "MRC psycholinguistic database machine usable dictionary : expanded shorter oxford english dictionary entries / max coltheart and michael wilson",
"authors": [
{
"first": "M",
"middle": [],
"last": "Coltheart",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Max",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "John",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Coltheart, M. (Max) and 1939 Wilson, Michael John. 1987. MRC psycholinguistic database machine usable dictionary : expanded shorter oxford english dictionary entries / max coltheart and michael wilson. Oxford Text Archive.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Grounded theory research: Procedures, canons, and evaluative criteria",
"authors": [
{
"first": "J",
"middle": [],
"last": "Corbin",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Strauss",
"suffix": ""
}
],
"year": 1990,
"venue": "Qualitative Sociology",
"volume": "13",
"issue": "",
"pages": "3--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Corbin and A. Strauss. 1990. Grounded theory re- search: Procedures, canons, and evaluative criteria. Qualitative Sociology, 13:3-21.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "T",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Davidson, Dana Warmsley, M. Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In ICWSM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Racial bias in hate speech and abusive language detection datasets",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Debasmita",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "25--35",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3504"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Debasmita Bhattacharya, and Ing- mar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceed- ings of the Third Workshop on Abusive Language Online, pages 25-35. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A survey on automatic detection of hate speech in text",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Comput. Surv",
"volume": "51",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3232676"
]
},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna and S\u00e9rgio Nunes. 2018. A survey on au- tomatic detection of hate speech in text. ACM Com- put. Surv., 51(4).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Mitre at semeval2019 task 5: Transfer learning for multilingual hate speech detection",
"authors": [
{
"first": "Abigail",
"middle": [
"S"
],
"last": "Gertner",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Henderson",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Marsh",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [
"M"
],
"last": "Merkhofer",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Wellner",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Zarrella",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "453--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail S. Gertner, John C. Henderson, Amy Marsh, Elizabeth M. Merkhofer, Ben Wellner, and Guido Zarrella. 2019. Mitre at semeval2019 task 5: Trans- fer learning for multilingual hate speech detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, page 453-459.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Ade-laideCyC at SemEval-2020 Task 12: Ensemble of Classifiers for Offensive Language Detection in Social Media",
"authors": [
{
"first": "Mahen",
"middle": [],
"last": "Herath",
"suffix": ""
},
{
"first": "Thushari",
"middle": [],
"last": "Atapattu",
"suffix": ""
},
{
"first": "Hoang",
"middle": [],
"last": "Dung",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Treude",
"suffix": ""
},
{
"first": "Katrina",
"middle": [],
"last": "Falkner",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of SemEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahen Herath, Thushari Atapattu, Hoang Dung, Christoph Treude, and Katrina Falkner. 2020. Ade- laideCyC at SemEval-2020 Task 12: Ensemble of Classifiers for Offensive Language Detection in So- cial Media. In Proceedings of SemEval.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "FERMI at SemEval-2019 task 5: Using sentence embeddings to identify hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Vijayasaradhi",
"middle": [],
"last": "Indurthi",
"suffix": ""
},
{
"first": "Bakhtiyar",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Chakravartula",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "70--74",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2009"
]
},
"num": null,
"urls": [],
"raw_text": "Vijayasaradhi Indurthi, Bakhtiyar Syed, Manish Shri- vastava, Nikhil Chakravartula, Manish Gupta, and Vasudeva Varma. 2019. FERMI at SemEval-2019 task 5: Using sentence embeddings to identify hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 70-74.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Ensembles of biased classifiers",
"authors": [
{
"first": "Rinat",
"middle": [],
"last": "Khoussainov",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "He\u00df",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Kushmerick",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 22nd International Conference on Machine Learning, ICML '05",
"volume": "",
"issue": "",
"pages": "425--432",
"other_ids": {
"DOI": [
"10.1145/1102351.1102405"
]
},
"num": null,
"urls": [],
"raw_text": "Rinat Khoussainov, Andreas He\u00df, and Nicholas Kush- merick. 2005. Ensembles of biased classifiers. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, page 425-432, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Evaluating aggression identification in social media",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul",
"middle": [],
"last": "Kr",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Ojha",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2020. Evaluating aggression iden- tification in social media. In Proceedings of the Sec- ond Workshop on Trolling, Aggression and Cyber- bullying, pages 1-5. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Ojha, Marcos Zampieri, and Shervin Malmasi, editors",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul",
"middle": [],
"last": "Kr",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr. Ojha, Marcos Zampieri, and Shervin Malmasi, editors. 2018. Proceedings of the First Workshop on Trolling, Aggression and Cy- berbullying (TRAC-2018). Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "NULI at SemEval-2019 task 6: Transfer learning for offensive language detection using bidirectional transformers",
"authors": [
{
"first": "Ping",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "87--91",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2011"
]
},
"num": null,
"urls": [],
"raw_text": "Ping Liu, Wen Li, and Liang Zou. 2019. NULI at SemEval-2019 task 6: Transfer learning for offen- sive language detection using bidirectional trans- formers. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 87-91. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Liwc: Linguistic inquiry and word count",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pennebaker",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Francis",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Booth",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Pennebaker, L. Francis, and R. Booth. 2001. Liwc: Linguistic inquiry and word count.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Vinodkumar Prabhakaran, and Zeerak Waseem",
"authors": [
{
"first": "Sarah",
"middle": [
"T"
],
"last": "Roberts",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah T. Roberts, Joel Tetreault, Vinodkumar Prab- hakaran, and Zeerak Waseem, editors. 2019. Pro- ceedings of the Third Workshop on Abusive Lan- guage Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic cyberbullying detection: A systematic review",
"authors": [
{
"first": "H",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "P",
"middle": [
"C"
],
"last": "Ferreira",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Carvalho",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Oliveira",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Coheur",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Paulino",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Veiga Sim\u00e3o",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2019,
"venue": "Computers in Human Behavior",
"volume": "93",
"issue": "",
"pages": "333--345",
"other_ids": {
"DOI": [
"10.1016/j.chb.2018.12.021"
]
},
"num": null,
"urls": [],
"raw_text": "H. Rosa, N. Pereira, R. Ribeiro, P.C. Ferreira, J.P. Car- valho, S. Oliveira, L. Coheur, P. Paulino, A.M. Veiga Sim\u00e3o, and I. Trancoso. 2019. Automatic cyberbul- lying detection: A systematic review. Computers in Human Behavior, 93:333 -345.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Nlpr@srpol at semeval-2019 task 6 and task 5: Linguistically enhanced deep learning offensive sentence classifier",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Seganti",
"suffix": ""
},
{
"first": "Helena",
"middle": [],
"last": "Sobol",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Orlova",
"suffix": ""
},
{
"first": "Hannam",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jakub",
"middle": [],
"last": "Staniszewski",
"suffix": ""
},
{
"first": "Tymoteusz",
"middle": [],
"last": "Krumholc",
"suffix": ""
},
{
"first": "Krystian",
"middle": [],
"last": "Koziel",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Seganti, Helena Sobol, Iryna Orlova, Hannam Kim, Jakub Staniszewski, Tymoteusz Krumholc, and Krystian Koziel. 2019. Nlpr@srpol at semeval-2019 task 6 and task 5: Linguistically enhanced deep learning offensive sentence classifier. CoRR, abs/1904.05152.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2010"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. SemEval-2019 task 6: Identifying and catego- rizing offensive language in social media (OffensE- val). In Proceedings of the 13th International Work- shop on Semantic Evaluation, pages 75-86.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Zeses Pitenis, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2020. Semeval-2020 task 12: Multilingual offensive language identification in social media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2020. Semeval-2020 task 12: Multilingual offensive language identification in social media (offenseval 2020).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Deriving the final labels of Task A",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Model performance of Task A & B using 'original' HatEval test dataset",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Variation of data across training, dev and test sets in (a) 'original' (b) 'adjusted' dataset; 1:#buildthatwall, 2:#buildthewall, 3:#nodaca, 4:#sendthemback, 5:#stoptheinvasion, 6:#womens**k, 7:b***h, 8:h*e Model performance of Task A & B using 'adjusted' HatEval test dataset",
"num": null
},
"TABREF2": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Coding reference of misclassified tweets.",
"num": null
},
"TABREF4": {
"content": "<table><tr><td>: Performance (weighted average) of our pre-</td></tr><tr><td>trained Task A model on other cyberbullying-related</td></tr><tr><td>tasks; Acc.:Accuracy, P:Precision, R:Recall.</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
}
}
}
}