{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:10:08.103816Z"
},
"title": "DALC: the Dutch Abusive Language Corpus",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {}
},
"email": "t.caselli@rug.nl"
},
{
"first": "Arjan",
"middle": [],
"last": "Schelhaas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {}
},
"email": "a.j.schelhaas@student.rug.nl"
},
{
"first": "Marieke",
"middle": [],
"last": "Weultjes",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {}
},
"email": "m.i.weultjes@student.rug.nl"
},
{
"first": "Folkert",
"middle": [],
"last": "Leistra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {}
},
"email": "f.a.leistra@student.rug.nl"
},
{
"first": "Hylke",
"middle": [],
"last": "Van Der Veen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {}
},
"email": "h.f.van.der.veen@student.rug.nl"
},
{
"first": "Gerben",
"middle": [],
"last": "Timmerman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {}
},
"email": "gerbentimmerman@protonmail.com"
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {}
},
"email": "m.nissim@rug.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "As socially unacceptable language become pervasive in social media platforms, the need for automatic content moderation become more pressing. This contribution introduces the Dutch Abusive Language Corpus (DALC v1.0), a new dataset with tweets manually annotated for abusive language. The resource address a gap in language resources for Dutch and adopts a multi-layer annotation scheme modeling the explicitness and the target of the abusive messages. Baselines experiments on all annotation layers have been conducted, achieving a macro F1 score of 0.748 for binary classification of the explicitness layer and .489 for target classification.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "As socially unacceptable language become pervasive in social media platforms, the need for automatic content moderation become more pressing. This contribution introduces the Dutch Abusive Language Corpus (DALC v1.0), a new dataset with tweets manually annotated for abusive language. The resource address a gap in language resources for Dutch and adopts a multi-layer annotation scheme modeling the explicitness and the target of the abusive messages. Baselines experiments on all annotation layers have been conducted, achieving a macro F1 score of 0.748 for binary classification of the explicitness layer and .489 for target classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The growth of online user generated content poses challenges to manual content moderation efforts (Nobata et al., 2016) . In a 2016 Eurobarometer survey, 75% of people who follow or participate in online discussions have witnessed or experienced abuse, threat, or hate speech. 1 The increasing polarization of online debates and conversations, together with the amount of associated toxic and abusive behaviors, call for some form of automatic content moderation. Currently, the mainstream approach in automatic content moderation uses reactive interventions, i.e., blocking or deleting 'bad' messages (Seering et al., 2019) . There is an open debate on its efficacy (Chandrasekharan et al., 2017) and on the risks of perpetrating bias and discrimination (Sap et al., 2019) . Alternative, less drastic, and more interactive methods have been proposed, such as the generation of counternarratives (Chung et al., 2019) . In either case, the first step towards full or semi-automatic moderation is the detection of potentially abusive lan-guage. Such step relies on language-specific resources to train tools to distinguish the \"good\" messages from the harmful ones. As a contribution in this direction, we have developed the Dutch Abusive Language Corpus, or DALC v1.0, a manually annotated corpus of tweets for abusive language detection in Dutch. 2 The resource is unique in the Dutch-speaking panorama because of the approach used to collect the data, the annotation guidelines, and the final data curation.",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(Nobata et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 277,
"end": 278,
"text": "1",
"ref_id": null
},
{
"start": 602,
"end": 624,
"text": "(Seering et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 667,
"end": 697,
"text": "(Chandrasekharan et al., 2017)",
"ref_id": null
},
{
"start": 755,
"end": 773,
"text": "(Sap et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 896,
"end": 916,
"text": "(Chung et al., 2019)",
"ref_id": null
},
{
"start": 1347,
"end": 1348,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "DALC is compatible with previous work on abusive language in other languages (Waseem and Hovy, 2016a; Papegnies et al., 2017; Founta et al., 2018; Mishra et al., 2018; Davidson et al., 2019; Poletto et al., 2020) but presents innovations both with respect to the application of the label \"abusive\" to messages and the adoption of a multi-layered annotation to distinguish the explicitness of the abusive message and its target (Waseem et al., 2017) .",
"cite_spans": [
{
"start": 77,
"end": 101,
"text": "(Waseem and Hovy, 2016a;",
"ref_id": "BIBREF34"
},
{
"start": 102,
"end": 125,
"text": "Papegnies et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 126,
"end": 146,
"text": "Founta et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 147,
"end": 167,
"text": "Mishra et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 168,
"end": 190,
"text": "Davidson et al., 2019;",
"ref_id": null
},
{
"start": 191,
"end": 212,
"text": "Poletto et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 427,
"end": 448,
"text": "(Waseem et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 the promotion of a bottom-up approach to collect potentially abusive messages combining multiple strategies in an attempt to minimize biases that may be introduced by developers;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 the release of a manually annotated corpus for abusive language detection in Dutch, DALC v1.0;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 a series of baseline experiments using different architectures (i.e., a dictionary based approach, a Linear SVM, a Dutch transformerbased language model) showing the complexity of the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work on abusive language phenomena and behaviors is extensive and varied. However, limitations exist and they mainly concentrate along three dimensions: (i) definitions; (ii) data sources and collection methods; and (iii) language diversity. The development of automatic methods for detecting forms of abusive language has been rapid and has seen a boom of definitions, labels, and phenomena being investigated, including racism (Waseem and Hovy, 2016a; Davidson et al., , 2019 , hate speech (Alfina et al., 2017; Founta et al., 2018; Mishra et al., 2018; Basile et al., 2019) , toxicity 3 and verbal aggression (Kumar et al., 2018) , misogyny (Frenda et al., 2018; Pamungkas et al., 2020; Guest et al., 2021) , and offensive language (Wiegand et al., 2018; Zampieri et al., 2019a; Rosenthal et al., 2020) . Variations in definitions and in annotation guidelines have given rise to isolated datasets, limiting the portability of trained systems and reuse of resources (Swamy et al., 2019; Fortuna et al., 2021) . Comprehensive frameworks that integrate and harmonize the variety of definitions and investigate the interactions across the annotated phenomena are still at early stages (Poletto et al., 2020) . DALC v1.0 is compatible with existing definitions of abusive language and promotes a multi-layered annotation scheme compatible with previous initiatives, with a special attention to the reusability of datasets.",
"cite_spans": [
{
"start": 438,
"end": 462,
"text": "(Waseem and Hovy, 2016a;",
"ref_id": "BIBREF34"
},
{
"start": 463,
"end": 486,
"text": "Davidson et al., , 2019",
"ref_id": null
},
{
"start": 501,
"end": 522,
"text": "(Alfina et al., 2017;",
"ref_id": null
},
{
"start": 523,
"end": 543,
"text": "Founta et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 544,
"end": 564,
"text": "Mishra et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 565,
"end": 585,
"text": "Basile et al., 2019)",
"ref_id": null
},
{
"start": 621,
"end": 641,
"text": "(Kumar et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 653,
"end": 674,
"text": "(Frenda et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 675,
"end": 698,
"text": "Pamungkas et al., 2020;",
"ref_id": "BIBREF18"
},
{
"start": 699,
"end": 718,
"text": "Guest et al., 2021)",
"ref_id": "BIBREF9"
},
{
"start": 744,
"end": 766,
"text": "(Wiegand et al., 2018;",
"ref_id": "BIBREF37"
},
{
"start": 767,
"end": 790,
"text": "Zampieri et al., 2019a;",
"ref_id": "BIBREF38"
},
{
"start": 791,
"end": 814,
"text": "Rosenthal et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 977,
"end": 997,
"text": "(Swamy et al., 2019;",
"ref_id": "BIBREF28"
},
{
"start": 998,
"end": 1019,
"text": "Fortuna et al., 2021)",
"ref_id": "BIBREF4"
},
{
"start": 1193,
"end": 1215,
"text": "(Poletto et al., 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Collecting good representative data for abusive language is a challenging task. The majority of existing datasets focuses on messages from social media platforms, with Twitter being the most used Vidgen and Derczynski (2021). Unlike other language phenomena, e.g., named entities, abusive language is less widespread and cannot be easily captured by means of random sampling. Schematically, we identify three major methods to collect data: namely: (i) use of communities (Tulkens et al., 2016; Del Vigna et al., 2017; Merenda et al., 2018; Kennedy et al., 2018) which targets online communities known to be more likely to have abusive behaviors; (ii) use of keywords (Waseem and Hovy, 2016b; Alfina et al., 2017; Sanguinetti et al., 2018; ElSherief et al., 2018; Founta et al., 2018) , where manually compiled lists of words corresponding either to potential targets (e.g, \"women\", \"migrants\", a.o.) or profanities are employed; (iii) use of seed users (Wiegand et al., 2018; Ribeiro et al., 2018) , which collects messages from users that have been identified to post abusive texts via some heuristics. Each of these methods has advantages and disadvantages. For instance, the use of keywords may create denser datasets, but at the same time risks of developing biased data are very high (Wiegand et al., 2019) . Furthermore, according to the specific platform used, some of the methods cannot be reliably applied. For instance, in a platform like Twitter targeting online communities is not trivial. Recently, refinements have been proposed to address limitations of each approach. In some cases controversial posts, videos or keywords are used as proxies for communities (Hammer, 2016; Graumans et al., 2019) , in other cases hybrid approaches are proposed by combining keywords and seed users (Basile et al., 2019), others exploit platform pre-filtering functionalities (Zampieri et al., 2019a) . 
DALC v1.0 integrates different bottom-up approaches to collect data providing a first crossfertlization attempt across two social media platforms and paying attention to minimize the introduction of biases.",
"cite_spans": [
{
"start": 471,
"end": 493,
"text": "(Tulkens et al., 2016;",
"ref_id": "BIBREF30"
},
{
"start": 494,
"end": 517,
"text": "Del Vigna et al., 2017;",
"ref_id": "BIBREF1"
},
{
"start": 518,
"end": 539,
"text": "Merenda et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 540,
"end": 561,
"text": "Kennedy et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 667,
"end": 691,
"text": "(Waseem and Hovy, 2016b;",
"ref_id": "BIBREF35"
},
{
"start": 692,
"end": 712,
"text": "Alfina et al., 2017;",
"ref_id": null
},
{
"start": 713,
"end": 738,
"text": "Sanguinetti et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 739,
"end": 762,
"text": "ElSherief et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 763,
"end": 783,
"text": "Founta et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 953,
"end": 975,
"text": "(Wiegand et al., 2018;",
"ref_id": "BIBREF37"
},
{
"start": 976,
"end": 997,
"text": "Ribeiro et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 1289,
"end": 1311,
"text": "(Wiegand et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 1674,
"end": 1688,
"text": "(Hammer, 2016;",
"ref_id": "BIBREF10"
},
{
"start": 1689,
"end": 1711,
"text": "Graumans et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 1874,
"end": 1898,
"text": "(Zampieri et al., 2019a)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Vidgen and Derczynski (2021) provides a comprehensive survey covering 63 datasets all targeting a specific abusive phenomenon/behavior. The majority of them (25 datasets) is for English, with a long tail of other languages mostly belonging to the Indo-European family, although limited in their diversity. The lack of publicly available datasets for any Sino-Tibetan, Niger-Congo, or Afro-Asiatic languages is striking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "When it comes to abusive language datasets, Dutch is less-resourced. Notable previous work has been conducted by Tulkens et al. (2016) who developed a dataset and systems for detecting racist discourse in Dutch social media. DALC v1.0 differentiates because it is a \"generic\" resource for abusive language where all possible types of abusive phenomena are valid. This leaves room to refinement in the proposed corpus to investigate potential sub-types of abusive phenomena and their associated linguistic devices.",
"cite_spans": [
{
"start": 113,
"end": 134,
"text": "Tulkens et al. (2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Data Collection DALC v1.0 is based on a sample of a large ongoing collection of Twitter messages in Dutch at the University of Groningen (Tjong Kim Sang, 2011) . For its construction, rather than focusing individually on any of the mentioned approaches, we propose a combination of three methods that only partially overlap with previous work.",
"cite_spans": [
{
"start": 150,
"end": 161,
"text": "Sang, 2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The first method is based on van Rosendaal et al. 2020, where keyword collection is refined via cross-fertilization between two social media platforms, namely Reddit and Twitter. Controversial posts from the subreddit r/thenetherlands, the biggest Reddit community in Dutch, at specific time periods are scraped, and a list of unigram keywords is extracted using TF-IDF. The top 50 unigrams are used as search terms in the corresponding time period in Twitter. This approach avoids the introduction of bias from the developers in the compilation of lists of search term. Obtaining them from controversial posts in Reddit may lead to denser samples of data in Twitter for abusive language phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword extraction",
"sec_num": null
},
{
"text": "We identified 8 different time periods between 2015 and 2020. We include both periods of time that may contain \"historically significant events\" (e.g., the Paris Attack in November 2015; the Dutch General Election in March 2017; the Sinterklaas intocht in December 20218; the Black Lives Matter protests after the killing of George Floyd in August 2020) and random time periods where no major events occurred, at least to our knowledge (e.g., April 2015; June 2018; May and September 2019). This results in a total of 12,884,560 retrieved tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword extraction",
"sec_num": null
},
{
"text": "To ease the annotation process, we have sampled the retrieved data in smaller annotations batches. From each time period, we have generated samples of 10k messages composed as follows: 5k messages are randomly sampled, while the remaining 5k (non-overlapping) messages are extracted using two Dutch lexicon of potentially offensive/hateful terms, namely HADES (Tulkens et al., 2016) and HurtLex v1.2 (Bassignana et al., 2018). The actual manual annotation is performed on randomly extracted batches of 500 messages each. Table 1 provides an overview of the number of messages extracted per time period and the amount that has been manually annotated.",
"cite_spans": [
{
"start": 360,
"end": 382,
"text": "(Tulkens et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 521,
"end": 528,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Keyword extraction",
"sec_num": null
},
{
"text": "Geolocation The second method is inspired by previous work showing that in the Western areas of the (north hemisphere of the) world hatred messages tend to be more frequent in geographical areas that are economically depressed and where disenfranchised communities live (Medina et al., 2018; Gerstenfeld, 2017) . 4 We use data from the Dutch Centraal Bureau voor de Statistiek (CBS) about unemployment to proxy such communities in the Netherlands, identifying two provinces: Zuid-Holland and Groningen. 5 We develop a set of heuristics, including the use of city names in these two provinces, to randomly collect messages from these areas. This is needed since the geolocation of the users is optional and does not have a fixed format. We managed to successfully extract 356,401 messages that can be reliably assigned to one of the two provinces. Similar to the keywords method, a sample of 5k messages is extracted using the lexicons and an additional 5k randomly. Four batches of 500 instances each have been manually annotated.",
"cite_spans": [
{
"start": 270,
"end": 291,
"text": "(Medina et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 292,
"end": 310,
"text": "Gerstenfeld, 2017)",
"ref_id": "BIBREF7"
},
{
"start": 313,
"end": 314,
"text": "4",
"ref_id": null
},
{
"start": 503,
"end": 504,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword extraction",
"sec_num": null
},
{
"text": "Seed users The last method uses seed users. We manually compile an ad-hoc list of 67 profanities, swearwords, and slurs by extending our lexicons. We then search for messages containing any of these elements in a ten-day window in December 2018 (namely 2018-11-12 -2018-11-22) . This results in a total of 3,105,833 messages. We rank each users according to the number of messages containing at least one of the target words. We select the top 50 users as seed users. We then extract for each of the selected user a maximum of 100 messages in a different time period, namely between May and June 2020, for a total of 5k tweets. Contrary to the other two methods, we directly created batches of 500 messages each for the manual annotation.",
"cite_spans": [
{
"start": 245,
"end": 276,
"text": "(namely 2018-11-12 -2018-11-22)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword extraction",
"sec_num": null
},
{
"text": "Since we are interested in original content, all messages sampled for the manual annotation do not contain retweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword extraction",
"sec_num": null
},
{
"text": "DALC v1.0 has been manually annotated using internally developed guidelines. The guidelines provides the annotators with a definition of abusive language that refines proposals in previous work (Papegnies et al., 2017; Founta et al., 2018; Caselli et al., 2020) . In particular, abusive language is defined as: impolite, harsh, or hurtful language (that may contain profanities or vulgar language) that result in a debasement, harassment, threat, or Notably, this definition requires that an identifiable target must be present in the message to qualify as potentially abusive. This is a necessary requirement in our definition and it also helps us to discriminate abusive language from more generic phenomena like offensive language, forms of harsh criticism, and other socially unacceptable language phenomena. We have specifically introduced harsh criticism to restrict the application of the abusive language label. Indeed expressing heavy criticisms against an institution (e.g., the E.U. Commission, or a government) may result in inappropriate and offensive language but it does not entail being abusive. Exceptions, however, hold: cases of synecdoches where an institution, an entity, or a concept are used to attack the members of a social group are considered instances of abusive language.",
"cite_spans": [
{
"start": 194,
"end": 218,
"text": "(Papegnies et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 219,
"end": 239,
"text": "Founta et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 240,
"end": 261,
"text": "Caselli et al., 2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "Following Waseem et al. (2017) and Zampieri et al. (2019a) we perform a multi-layered annotation distinguishing the levels of explicitness of the abusive messages and the targets. Explicitness combines three factors: (i) the surface evidence of the message; (ii) the assumed intentions of the user (i.e., is the message debasing someone?); and (iii) its effects on the receiver(s) (i.e., can the message be perceived as debasing by a targeted individual or a community?). While the last two factors (intentions and effects) help to identify the abusiveness nature of the message, the surface forms is essential to distinguish overtly abusive messages from more subtle forms. A distinguishing criterion, in fact, is the presence of profanities, slurs, and offensive terms. We define three values:",
"cite_spans": [
{
"start": 10,
"end": 30,
"text": "Waseem et al. (2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "\u2022 Explicit (EXP): A message is marked as explicit if it is interpreted as potentially abusive and if it contains a profanity or a slur;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "\u2022 Implicit (IMP): A message is marked as implicit if it is interpreted as potentially abusive but it DOES NOT contain any identifiable profanity or slur;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "\u2022 Not abusive (NOT): A message is marked as a not abusive if it is interpreted as lacking an intention of the user to debase/harass/threat a target and there is no debasing effect on the receiver. The mere presence of a profanity does not provide sufficient ground for annotating the message as abusive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "A further differentiating criteria is that all messages where the author debases or offends him-/herself (e.g., messages that contain the first person singular or plural pronoun) are considered as not abusive",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "The target layer makes explicit to whom the message is directed. We reuse the values and definitions from Zampieri et al. (2019a) . In particular, we have:",
"cite_spans": [
{
"start": 106,
"end": 129,
"text": "Zampieri et al. (2019a)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "\u2022 Individual (IND): any message that targets a person, being it named or unnamed, or a famous person;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "\u2022 Group (GRP): any message that targets a group of people considered as a unity because of ethnicity, gender, political affiliation, religion, disabilities, or other common properties; and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "\u2022 Other (OTH): any abusive message that addresses an organisation, an institution, or a concept. Instances of synecdoches are marked with this value rather than with group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "The annotation has been conducted in two phases. Phase 1 (March-May 2020) has seen five annotators, all bachelor students in Information Science. The students conducted the annotation of the data as part of their bachelor thesis project. Phase 2 (November-December 2020) has been conducted by one master student in Information Science with previous experience in this task. All annotators are native speakers of Dutch. More details are reported in the Data Statement A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "During Phase 1, we validate the annotation guidelines by means of a pairwise inter-annotator agreement (IAA) study on two independent subsets of 100 messages each. The first sample is obtained using the keyword method and the second using the geolocation. For the keywords sample, Cohen's kappa is 0.572 for the explicitness and 0.670 for the target. For the geolocation sample, the kappa for explicitness is comparable (0.522) although that for target is lower (0.466). The results are comparable previous work (Caselli et al., 2020) indicating substantial agreement. Cases of disagreement have been discussed between the annotators and resolved. The data used for the IAA has been integrated in DALC v1.0. No IAA has been computed for the messages collected using seed authors. In phase 2 we further expanded the initial data annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "The final corpus has been manually curated by one of the authors of this paper. The data curation phase focuses on the creation of the Train, Dev, Test splits in such a way that there is no overlap for time periods and, most importantly, users. Table 2 reports an overview of the data of each split and the number of annotated messages included. Overall, DALC v1.0 contains 8,156 tweets. In each split, the abusive messages correspond roughly to 1/3 of the messages. Maintaining this balance is not a trivial task. As it appears from Ta-ble 2, the different methods we used to collect the data results in different proportions of messages. Concerning the use of keywords, the combination of controversial keywords and historically relevant events works best, i.e., returns more densely annotated batches for the positive class, than the use of controversial keywords in random time periods. The geolocation method has been excluded due to the extremely low number of messages belonging to the positive class. Furthermore, a closer inspection revealed that these messages could be easily aggregated by their authors. We thus merge them with the seed users. Indeed, seed users results as the most successful method. Out of 5,000 messages collected, we managed to annotate and keep 2,520 of them. Excluding the merged users from the geolocation data, the Train/Dev split contains 38 unique users with an average of 54 messages each. On the other hand, the Test set contains 11 unique users and 23 messages each on average. To avoid any possibility of data overlap, we check that no message retrieved using the keyword method in one data split (e.g. Train) belongs to a seed users in a different data split (e.g., Test). For instance, we have found that 8 messages from the Paris Attack source have the same seed users of the test split. Only 118 messages were involved in these adjustments. 
In Table 2 we have marked these changes by showing the additional messages in parenthesis next to the seed users rows. Table 3 shows DALC v1.0's label distribution per split. Overall, 1,879 messages have been annotated as containing forms of abusive language. The majority of them, 65.40%, has been classified as explicit. When focusing on the Train and Test splits, the most remarkable difference concerns the number of abusive messages labeled as implicit: 38.25% vs. 28.10%, respectively. As for the targets, the majority is realized by IND (55.18%) followed by GRP (34.64%) and OTH (10.69%). Interestingly, the distributions of the target is comparable to that of other datasets in other languages such as OLID (Zampieri et al., 2019a) .",
"cite_spans": [
{
"start": 2603,
"end": 2627,
"text": "(Zampieri et al., 2019a)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 245,
"end": 252,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1891,
"end": 1898,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 2007,
"end": 2014,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
{
"text": "The average length of a message in DALC v1.0 is 25.94 words. Tokenization has been done by using the Dutch tokenizer available in SpaCy (Honnibal et al., 2020) . In general, abusive messages are significantly 6 longer than the non abusive ones, with an average of 27.58 words compared to 22.77. While the differences between explicit and implicit We further investigate the composition of the DALC v1.0 by analysing the top 50 keywords per class between the Train and Test distributions by applying a TF-IDF approach. Table 4 illustrates a sample of the extracted keywords. As expected, clear instances of profanities and slurs appear in the EXP class. The IMP class does not present surface cues linked to specific lexical items. Actually, without knowing the class label and simply comparing the keywords, it is impossible to distinguish the IMP messages from those labeled as NOT. A further take-away of the keyword analysis is the lack of prevalence of any topic specific items (Wiegand et al., 2019) . This, however, does not necessarily means that DALC v1.0 does not contain biases: indeed, the messages are not equally distributed across the time periods and seed users. On the other hand, our inspection of keywords has shown the lack of topic-specific keywords across the three classes.",
"cite_spans": [
{
"start": 136,
"end": 159,
"text": "(Honnibal et al., 2020)",
"ref_id": null
},
{
"start": 982,
"end": 1004,
"text": "(Wiegand et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 518,
"end": 525,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
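The per-class keyword analysis described above can be sketched as follows. This is a minimal, hand-rolled illustration of the TF-IDF approach (treating the concatenated messages of one class as a single "document"), not the authors' actual pipeline; the function name and toy messages are our own.

```python
import math
from collections import Counter

def top_keywords_per_class(messages_by_class, k=50):
    """Rank terms per class by TF-IDF, treating each class's
    concatenated messages as one document."""
    labels = sorted(messages_by_class)
    docs = {l: Counter(" ".join(messages_by_class[l]).lower().split())
            for l in labels}
    n_docs = len(labels)
    # document frequency: in how many class-documents each term appears
    df = Counter()
    for counts in docs.values():
        df.update(counts.keys())
    keywords = {}
    for label in labels:
        counts = docs[label]
        total = sum(counts.values())
        scores = {w: (c / total) * math.log(n_docs / df[w])
                  for w, c in counts.items()}
        top = sorted(scores, key=scores.get, reverse=True)[:k]
        # terms shared by every class get idf = 0 and are dropped
        keywords[label] = [w for w in top if scores[w] > 0]
    return keywords
```

Terms occurring in every class receive a zero score, which mirrors the observation above that IMP keywords are indistinguishable from NOT keywords at the surface level.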
{
"text": "We complete our analysis by exploring the similarities and differences between Train and Test splits. We investigate these aspects by means of two metrics: the Jensen-Shannon (J-S) divergence and the Out-of-Vocabulary rate (OOV). The J-S divergence assesses the similarity between two probability distributions, q and r. On the other hand, the OOV rate helps in assessing the differences between the Train and Test splits as it highlights the percentage of unknown tokens. We obtain a J-S score of 73% and an OOV rate of 64.6%. This 7 Statistical test: Mann-Whitney Test; p < 0.05 means that while the Train and Test distributions are quite similar to each other, the gap in terms of lexical items between the two is quite large. This supports the validity of our data curation approach where overlap between Training and Test split is not allowed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation and Data Curation",
"sec_num": "4"
},
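The two metrics above can be computed as in the sketch below, over unigram distributions. Whether the reported 73% J-S "score" is a similarity or a divergence, and whether the OOV rate is type- or token-based, is not specified in the text, so both choices here are assumptions.

```python
import math
from collections import Counter

def unigram_dist(tokens):
    """Relative-frequency unigram distribution over a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(q, r):
    """Jensen-Shannon divergence between two distributions (base 2, in [0, 1])."""
    def kl(p, m):
        return sum(p[w] * math.log2(p[w] / m[w]) for w in p if p[w] > 0)
    m = {w: 0.5 * (q.get(w, 0.0) + r.get(w, 0.0)) for w in set(q) | set(r)}
    return 0.5 * kl(q, m) + 0.5 * kl(r, m)

def oov_rate(train_tokens, test_tokens):
    """Percentage of test vocabulary types unseen in the training split."""
    train_vocab, test_vocab = set(train_tokens), set(test_tokens)
    return 100 * len(test_vocab - train_vocab) / len(test_vocab)
```

Identical distributions yield a divergence of 0, fully disjoint vocabularies yield 1, so a high similarity can coexist with a high OOV rate, as observed for DALC.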
{
"text": "We present a set of baseline experiments that accompany the release of DALC v1.0 for the two annotation layers. For the explicitness layer, we first experiment a simplified setting by framing the problem as a binary classification task. In this setting the distinction between EXP and IMP labels is collapsed into a new unique value for all abusive messages (i.e., ABU). The follow-up experiment, on the other hand, maintains the fine-grained distinction in the three classes (i.e., EXP vs. IMP vs. NOT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "For the target layer no simplification of the labels is possible since each oh them identified a specific referent. Thus, target experiments preserve the original three labels (i.e., IND vs. GRP vs. OTH).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "In all experiments we adopt a common preprocessing of the data. All user mentions and links to external web pages are replaced with dedicated placeholders symbols, respectively USER and URL. Emojis are replaced with their corresponding text using the emoji package. Hashtags symbols have been removed but we have not split hashtags composed by multiple words in separate tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
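The preprocessing steps above might look as follows. The regular expressions and function name are our own illustrative assumptions; only the placeholder names (USER, URL) and the use of the emoji package's demojize are taken from the text.

```python
import re

try:
    import emoji  # the emoji package mentioned above
except ImportError:
    emoji = None  # fall back gracefully if the package is unavailable

def preprocess(text):
    """Sketch of the common preprocessing: placeholders, emoji-to-text,
    hashtag symbol removal (hashtags are kept unsplit)."""
    text = re.sub(r"https?://\S+", "URL", text)   # links -> URL placeholder
    text = re.sub(r"@\w+", "USER", text)          # mentions -> USER placeholder
    if emoji is not None:
        text = emoji.demojize(text)               # emojis -> textual aliases
    text = re.sub(r"#(\w+)", r"\1", text)         # drop '#', keep the word whole
    return text
```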
{
"text": "The models are trained on the Train split and evaluated on the held out Test set. The Dev split is used for parameter tuning. As illustrated in Table 3, the distributions of the labels in the classes for both annotation layers is unbalanced. We thus evaluate and compare our models using the macroaverage F1. Furthermore, we report Precision and Recall for each class. In each annotation layer, we compare the models to a majority class baseline (MFC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
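For concreteness, the macro-average F1 and the MFC baseline can be computed as below; this is a hand-rolled sketch (in practice one would likely use scikit-learn's f1_score with average='macro'), and the labels are illustrative.

```python
from collections import Counter

def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 scores."""
    labels = sorted(set(gold) | set(pred))
    f1s = []
    for l in labels:
        tp = sum(g == l and p == l for g, p in zip(gold, pred))
        fp = sum(g != l and p == l for g, p in zip(gold, pred))
        fn = sum(g == l and p != l for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def mfc_baseline(train_labels, n_test):
    """Predict the most frequent training class for every test item."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return [majority] * n_test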
{
"text": "Abusive vs. Not Abusive This binary setting allows to test the classification abilities of different architectures in a simplified setting. It also provides evidence of the complexity of the task given the lack of overlap across time periods and seed users between Train and Test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "We experimented with three models. The first is a dictionary-based approach. The approach is very simple: given a reference dictionary of profanities, abusive terms, slurs in Dutch, if any message contains one or more of the terms in the dictionary, then it is labeled as abusive (i.e., ABU). We have created a new lexicon of 847 potentially abusive term by refining the original Dutch entries in HurtLex v1.2 (Bassignana et al., 2018) and integrating the list with 256 culturally specific terms. In particular, most of the new entries concerned names of diseases (e.g., kanker [cancer]) that in Dutch are commonly used to debase or harass people. Each term has also been classified as belonging to one of two macro-categories, namely \"negative stereotypes\" (representing 45.1% of the entries) and \"hate words and slurs beyond stereotypes\" (including the remaining 54.9% of the entries). The list has not been extended with additional terms from the EXP messages in the Train split of DALC v1.0. The second model is a Linear Support Vector Machine (SVM) model. We used the available implementation in scikit-learn (Pedregosa et al., 2011) . Each message is represented by a TF-IDF vector combining word and character ngrams. We run a grid search to find the best ngram combination and parameter tuning. The final configuration uses bigrams and character ngrams in the range 3-5, a C values of 1.0, and removal of stopwords.",
"cite_spans": [
{
"start": 410,
"end": 435,
"text": "(Bassignana et al., 2018)",
"ref_id": null
},
{
"start": 1114,
"end": 1138,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
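The dictionary-based baseline amounts to a simple lexicon lookup. A sketch, assuming whitespace tokenization and basic punctuation stripping; the example entries are illustrative, not the actual 847-term HurtLex-derived list.

```python
def load_lexicon(terms):
    """Build a lower-cased lookup set (the real lexicon has 847 entries)."""
    return {t.lower() for t in terms}

def dictionary_classify(message, lexicon):
    """Label a message ABU if any token matches the lexicon, NOT otherwise."""
    tokens = (t.strip('.,!?;:') for t in message.lower().split())
    return "ABU" if any(t in lexicon for t in tokens) else "NOT"
```

Such a baseline has no vocabulary-overlap requirement between Train and Test, which is one reason it holds up better than the SVM under a high OOV rate (see below).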
{
"text": "The last model is based on a monolingual Dutch pre-trained language model, BERTje (de Vries et al., 2019), available through the Hugging Face transformers library. 8 The model is fine tuned for five epochs, with a standard learning rate of 2e-5, AdamW optimizer (with eps equals to 1e-8), and batch size of 32.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
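Gathered as a config fragment, the fine-tuning hyperparameters reported above are the following; the Hugging Face hub identifier for BERTje is our assumption, not stated in the paper.

```python
# Hyperparameters as reported in the paper; the hub id is an assumption.
BERTJE_FINETUNE = {
    "model_name": "GroNLP/bert-base-dutch-cased",  # assumed BERTje hub id
    "epochs": 5,
    "learning_rate": 2e-5,
    "optimizer": "AdamW",
    "adam_epsilon": 1e-8,
    "batch_size": 32,
}
```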
{
"text": "The results of the experiments are reported in however, the task proves to be challenging. BERTje obtains by far the best results with a macro F1 of 0.748. Quite surprisingly, the Dictionary model has more competitive results than the SVM. The gap in scores can be explained by the large OOV rate between Train and Test split. SVMs usually are very competitive models but one of their shortcoming is the heavy dependence on a shared vocabulary between training and test distributions. A further element of attention is the low Recall that all models have for the positive class. While this behavior is expected due to the unbalanced distributions of the classes, we claim that this is an additional cue with respect to the data distribution of DALC v1.0. To further confirm this intuition, we ran an additional set of experiments on a different data split. We maintained exactly the same amount of messages and distribution in the classes. On the other hand, we did allow for overlap across time periods and seed users. The OOV rate between Train and Test splits drops to 55.21%. At the same time, by re-running the experiments with the same settings for all models, the Dictionary model is the weakest, with a macro F1 of 0.680. On the other hand, the Linear SVM achieves competitive results when compared to BERTje (macro F1 of 0.749 vs. 0.786, respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "Explicit vs. Implicit For the fine-grained classification, we compare only two architectures, the linear SVM and BERTje. As already stated, this is a more challenging setting namely due to a combination of factors such as the number of classes, the data distributions, and the class imbalance. The grid search for the SVM confirmed the same settings as for the binary experiment. We re-used the same settings for BERTje. BERTje is again the model achieving the best results, with a macro F1 of 0.561. Both models, however, struggle to correctly classify the IMP messages correctly. Observing the distribution of the errors for this class, both models tend to be misclassify the IMP messages as NOT, further confirming the observations from the keyword analysis. The increased granularity of the classes has a negative impact on the performance of the SVM also for the EXP messages. While Precision is comparable to the binary setting, the system largely suffers in Recall. This is not the case for BERTje, where Precision and Recall for the EXP and NOT classes are in line with the results of the binary setting. On the other hand, the results for the IMP classes are encouraging, although far from being satisfying.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "Target Classification Models for this task are trained to distinguish among the three target classes: individuals (IND), group(s) (GRP), and other (OTH). For this experiment the amount of training data is smaller since only abusive messages have been used. We experimented with two models' architectures only: a Linear SVM and BERTje. The grid search for the SVM results in the same settings of for the explicitness layer. When it comes to BERTje, we apply the same settings: fine tuninig for five epochs, standard learning rate of 2e-5, AdamW optimizer (with eps equals to 1e-8), and batch size of 32. Results are reported in Table 7 . Both models clearly outperform the MFC baseline. However, the gap between the two is very small differently than for the explicitness layer. Both models struggle with the OTH class. The lower amount of training examples for this class (only 109) is a factor the impact the performance. However, this class is also less homogeneous than the others. It contains different types of targets such as institutions, events, and entities that do not fit in the other two classes. When focusing on the results for the IND and OTH classes, it seems that models suffer less when compared to the explicitness layer. This suggest that there may be a reduced variation in the expressions of the targets. Finally, the results are in line with previous work on target detection in English (Zampieri et al., 2019b) .",
"cite_spans": [
{
"start": 1410,
"end": 1434,
"text": "(Zampieri et al., 2019b)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 627,
"end": 634,
"text": "Table 7",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5"
},
{
"text": "This paper introduces DALC v1.0, the first \"generic\" resource for abusive language detection in Dutch. DALC v1.0 contains more thn 8k Twitter messages manually labeled using a multi-layer annotation scheme targeting the explicitness of the message and the targets. A further peculiarity of the dataset is the complete lack of overlap for time periods and users between Train and Test splits, making the task more challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "The combination of multiple data collection strategies aims at promoting new bottom-up approaches less prone to additional biases in the data other than those from the manual labeling. DALC v1.0 adopts a definition of abusive language and an annotation philosophy compatible with previous work, paying attention to promote interoperability across language resources, languages, and abusive language phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "The baseline experiments and systems that have been developed further indicate the challenges of this dataset. The best results are obtained with a fine tuned transformer-based pre-trained language model, BERTje. Fine-grained distinction for the explicitness layer is particularly difficult for implicitly abusive messages. Furthermore, target classification is a challenging task, with overall macro-F1 below 0.50. Future work will focus on an in-depth investigation of the errors to identify easy and complex cases. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Data set name: Dutch Abusive Language Corpus (DALC) v1.0 Data will be released to the public in compliance with GDPR and Twitter's Terms of Service.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Data Statement",
"sec_num": null
},
{
"text": "A. CURATION RATIONALE The corpus is composed by tweets in Dutch extracted using different strategies and covering different time windows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Data Statement",
"sec_num": null
},
{
"text": "\u2022 Keywords: we have used a cross-platform approach to identify relevant keywords and reduce bias that may be introduced in manual selection of the data. We first identified a time window in Reddit, extracted all posts that received a controversial label. We then identified keywords (unigram) and retained the top 50 keywords per time window. We then used the keywords to extract tweets in corresponding periods. For each time period, we selected a sample 5,000 messages using two dictionaries containing know profanities in Dutch. An additional 5,000 messages are randomly selected. The messages are then re-shuffled and annotated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Data Statement",
"sec_num": null
},
{
"text": "\u2022 Geolocation: following Denti and Faggian (2019) that show the existence of a correlation between hateful messages and disenfranchised and economic poor areas, we selected two geo-graphical areas (Zuid-Holland and Groningen) that according to a 2015 study by the Ducth Buraeu of Statistics (CBS) have the highest unemployement rates of the country. We collected 706,044 tweets posted by users whose location was set to the two target areas. The amount of messages was further filtered by removing noise (i.e., messages containing URLs), dropping to 356,401 tweets. Similarly to the keywords approach, we further filtered 2,500 messages using one profanity dictionary and collected an additional 2,500 randomly.",
"cite_spans": [
{
"start": 25,
"end": 49,
"text": "Denti and Faggian (2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Data Statement",
"sec_num": null
},
{
"text": "\u2022 Authors: we looked for seed users, i.e., users that are likely to post/use abusive language in their tweets. We created an ad-hoc list of 67 profanities, swearwords, and slurs and then searched for messages containing any of these elements in a ten-day window in December 2018 (namely 2018-11-12 -2018-11-22) , corresponding to a moment of heated debate in the country about Zwarte Piet. We collected an initial amount of 3,105,833 tweets. We terms of potentially available messages, thus making replicability of experiments and comparison with future work impossible. To obviate to this limitation, we make available another version of the corpus, DALC Full Text. This version of the corpus allows users to access to the full text message of all 8,156 tweets. The DALC Full Text dataset is released with a BY-NC 4.0 licence. In this case, we make available only the text, removing any information related to the time periods or seed users. We have also anonymized all users' mentions and external URLs. The CC licence is extended with further restrictions explicitly preventing users to actively search for the text of the messages in any form. We deem these sufficient steps to protect users' privacy and rights to do research using internet material.",
"cite_spans": [
{
"start": 279,
"end": 310,
"text": "(namely 2018-11-12 -2018-11-22)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Data Statement",
"sec_num": null
},
{
"text": "https://what-europe-does-for-me.eu/ en/portal/2/H19",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The corpus, the annotation guidelines, and the baselines models are publicly available at https://github.com/ tommasoc80/DALC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Toxic Comment Clas-sification Challenge https: //bit.ly/2QuHKD6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See also https://bit.ly/3aDqoLd. 5 https://bit.ly/2RPGSt5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Statistical test: Mann-Whitney Test; p < 0.05",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "then selected as seed users the top 15, i.e., the top 15 users who most frequently use in their messages any of the 67 keywords. For each of them we further collected a maximum of 100 tweets randomly, summing up to a total of 1390 tweets \u2022 Dictionaries used: HADES (Tulkens et al., 2016); HurtLex v1.2 (Bassignana et al., 2018) Time periods (DD-MM-YYYY): -11-2015/22-11-2015 ",
"cite_spans": [
{
"start": 265,
"end": 288,
"text": "(Tulkens et al., 2016);",
"ref_id": "BIBREF30"
},
{
"start": 289,
"end": 327,
"text": "HurtLex v1.2 (Bassignana et al., 2018)",
"ref_id": null
},
{
"start": 355,
"end": 374,
"text": "-11-2015/22-11-2015",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "Dual use DALC v1.0 and the accompanying models are exposed to risks of dual use from malevolent agents. However, we think that by making publicly available the resource, documenting the process behind its creation and the models, we may mitigate such risks.Privacy Collection of data from Twitter's users has been conducted in compliance with Twitter's Terms of Service. Given the large amount of users that may be involved, we could not collect informed consent from each of them. To comply with this limitations, we have made publicly available only the tweet IDs. This will protect the users' rights to delete their messages or accounts. However, releasing only IDs exposes DALC to fluctuations in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Ethical considerations",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hate me, hate me not: Hate speech detection on facebook",
"authors": [
{
"first": "Fabio",
"middle": [
"Del"
],
"last": "Vigna",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Cimino",
"suffix": ""
},
{
"first": "Felice",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "Marinella",
"middle": [],
"last": "Petrocchi",
"suffix": ""
},
{
"first": "Maurizio",
"middle": [],
"last": "Tesconi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Italian Conference on Cybersecurity (ITASEC17)",
"volume": "",
"issue": "",
"pages": "86--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Del Vigna, Andrea Cimino, Felice Dell'Orletta, Marinella Petrocchi, and Maurizio Tesconi. 2017. Hate me, hate me not: Hate speech detection on face- book. In Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), pages 86-95.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "5th International Conference on Hate Studies",
"authors": [
{
"first": "Daria",
"middle": [],
"last": "Denti",
"suffix": ""
},
{
"first": "Alessandra",
"middle": [],
"last": "Faggian",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daria Denti and Alessandra Faggian. 2019. In 5th In- ternational Conference on Hate Studies, Spokane, USA. non-archival. [link].",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Peer to peer hate: Hate speech instigators and their targets",
"authors": [
{
"first": "Mai",
"middle": [],
"last": "Elsherief",
"suffix": ""
},
{
"first": "Shirin",
"middle": [],
"last": "Nilizadeh",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Vigna",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Belding",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mai ElSherief, Shirin Nilizadeh, Dana Nguyen, Gio- vanni Vigna, and Elizabeth Belding. 2018. Peer to peer hate: Hate speech instigators and their targets. In Proceedings of the International AAAI Confer- ence on Web and Social Media, volume 12.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "How well do hate speech, toxicity, abusive and offensive language classification models generalize across datasets?",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Soler-Company",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2021,
"venue": "Information Processing & Management",
"volume": "58",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna, Juan Soler-Company, and Leo Wanner. 2021. How well do hate speech, toxicity, abusive and offensive language classification models gener- alize across datasets? Information Processing & Management, 58(3):102524.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Large scale crowdsourcing and characterization of twitter abusive behavior",
"authors": [
{
"first": "Constantinos",
"middle": [],
"last": "Antigoni Maria Founta",
"suffix": ""
},
{
"first": "Despoina",
"middle": [],
"last": "Djouvas",
"suffix": ""
},
{
"first": "Ilias",
"middle": [],
"last": "Chatzakou",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Leontiadis",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Blackburn",
"suffix": ""
},
{
"first": "Athena",
"middle": [],
"last": "Stringhini",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Vakali",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Sirivianos",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kourtellis",
"suffix": ""
}
],
"year": 2018,
"venue": "Twelfth International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antigoni Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In Twelfth International AAAI Conference on Web and Social Media.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Exploration of misogyny in spanish and english tweets",
"authors": [
{
"first": "Simona",
"middle": [],
"last": "Frenda",
"suffix": ""
},
{
"first": "Ghanem",
"middle": [],
"last": "Bilal",
"suffix": ""
}
],
"year": 2018,
"venue": "Third Workshop on Evaluation of Human Language Technologies for Iberian Languages",
"volume": "2150",
"issue": "",
"pages": "260--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simona Frenda, Ghanem Bilal, et al. 2018. Exploration of misogyny in spanish and english tweets. In Third Workshop on Evaluation of Human Language Tech- nologies for Iberian Languages (IberEval 2018), volume 2150, pages 260-267. Ceur Workshop Pro- ceedings.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hate crimes: Causes, controls, and controversies",
"authors": [
{
"first": "B",
"middle": [],
"last": "Phyllis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gerstenfeld",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phyllis B Gerstenfeld. 2017. Hate crimes: Causes, controls, and controversies. Sage Publications.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Twitter-based polarised embeddings for abusive language detection",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Graumans",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ACIIW.2019.8925049"
]
},
"num": null,
"urls": [],
"raw_text": "Leon Graumans, Roy David, and Tommaso Caselli. 2019. Twitter-based polarised embeddings for abu- sive language detection. In 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An expert annotated dataset for the detection of online misogyny",
"authors": [
{
"first": "Ella",
"middle": [],
"last": "Guest",
"suffix": ""
},
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Mittos",
"suffix": ""
},
{
"first": "Nishanth",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Gareth",
"middle": [],
"last": "Tyson",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1336--1350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ella Guest, Bertie Vidgen, Alexandros Mittos, Nis- hanth Sastry, Gareth Tyson, and Helen Margetts. 2021. An expert annotated dataset for the detection of online misogyny. In Proceedings of the 16th Con- ference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1336-1350.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic detection of hateful comments in online discussion",
"authors": [
{
"first": "Hugo Lewi",
"middle": [],
"last": "Hammer",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Industrial Networks and Intelligent Systems",
"volume": "",
"issue": "",
"pages": "164--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Lewi Hammer. 2016. Automatic detection of hateful comments in online discussion. In Interna- tional Conference on Industrial Networks and Intel- ligent Systems, pages 164-173. Springer.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "spaCy: Industrial-strength Natural Language Processing in Python",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.1212303"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The gab hate corpus: A collection of 27k posts annotated for hate speech",
"authors": [
{
"first": "Brendan",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Atari",
"suffix": ""
},
{
"first": "Aida",
"middle": [
"M"
],
"last": "Davani",
"suffix": ""
},
{
"first": "Leigh",
"middle": [],
"last": "Yeh",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Omrani",
"suffix": ""
},
{
"first": "Yehsong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Kris",
"middle": [],
"last": "Coombs",
"suffix": ""
},
{
"first": "Shreya",
"middle": [],
"last": "Havaldar",
"suffix": ""
},
{
"first": "Gwenyth",
"middle": [],
"last": "Portillo-Wightman",
"suffix": ""
},
{
"first": "Elaine",
"middle": [],
"last": "Gonzalez",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.31234/osf.io/hqjxn"
]
},
"num": null,
"urls": [],
"raw_text": "Brendan Kennedy, Mohammad Atari, Aida M Da- vani, Leigh Yeh, Ali Omrani, Yehsong Kim, Kris Coombs, Shreya Havaldar, Gwenyth Portillo- Wightman, Elaine Gonzalez, and et al. 2018. The gab hate corpus: A collection of 27k posts annotated for hate speech.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Benchmarking aggression identification in social media",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul",
"middle": [],
"last": "Kr",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Ojha",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2018. Benchmarking aggression identification in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cyber- bullying (TRAC-2018), pages 1-11, Santa Fe, New Mexico, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Geographies of organized hate in america: a regional analysis. Annals of the American Association of Geographers",
"authors": [
{
"first": "M",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Medina",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Nicolosi",
"suffix": ""
},
{
"first": "Andrew M",
"middle": [],
"last": "Brewer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Linke",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "108",
"issue": "",
"pages": "1006--1021",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard M Medina, Emily Nicolosi, Simon Brewer, and Andrew M Linke. 2018. Geographies of or- ganized hate in america: a regional analysis. An- nals of the American Association of Geographers, 108(4):1006-1021.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Source-driven representations for hate speech detection",
"authors": [
{
"first": "Flavio",
"middle": [],
"last": "Merenda",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Zaghi",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2018,
"venue": "CLiC-it",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Flavio Merenda, Claudia Zaghi, Tommaso Caselli, and Malvina Nissim. 2018. Source-driven representa- tions for hate speech detection. In CLiC-it.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Author profiling for abuse detection",
"authors": [
{
"first": "Pushkar",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Marco",
"middle": [
"Del"
],
"last": "Tredici",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1088--1098",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pushkar Mishra, Marco Del Tredici, Helen Yan- nakoudakis, and Ekaterina Shutova. 2018. Author profiling for abuse detection. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1088-1098.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Abusive language detection in online user content",
"authors": [
{
"first": "Chikashi",
"middle": [],
"last": "Nobata",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Achint",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference on World Wide Web, WWW '16",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {
"DOI": [
"10.1145/2872427.2883062"
]
},
"num": null,
"urls": [],
"raw_text": "Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive lan- guage detection in online user content. In Proceed- ings of the 25th International Conference on World Wide Web, WWW '16, pages 145-153, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Misogyny detection in twitter: a multilingual and cross-domain study",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Endang Wahyu Pamungkas",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2020,
"venue": "Information Processing & Management",
"volume": "57",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Endang Wahyu Pamungkas, Valerio Basile, and Vi- viana Patti. 2020. Misogyny detection in twitter: a multilingual and cross-domain study. Information Processing & Management, 57(6):102360.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Detection of abusive messages in an on-line community",
"authors": [
{
"first": "Etienne",
"middle": [],
"last": "Papegnies",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Labatut",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Dufour",
"suffix": ""
},
{
"first": "Georges",
"middle": [],
"last": "Linar\u00e8s",
"suffix": ""
}
],
"year": 2017,
"venue": "CORIA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Etienne Papegnies, Vincent Labatut, Richard Dufour, and Georges Linar\u00e8s. 2017. Detection of abusive messages in an on-line community. In CORIA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Scikit-learn: Machine learning in python. the",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of machine Learning research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825-2830.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Resources and benchmark corpora for hate speech detection: a systematic review",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2020,
"venue": "Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Poletto, Valerio Basile, Manuela Sanguinetti, Cristina Bosco, and Viviana Patti. 2020. Resources and benchmark corpora for hate speech detection: a systematic review. Language Resources and Evalu- ation, pages 1-47.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Characterizing and detecting hateful users on twitter",
"authors": [
{
"first": "Pedro",
"middle": [
"H"
],
"last": "Manoel Horta Ribeiro",
"suffix": ""
},
{
"first": "Yuri",
"middle": [
"A"
],
"last": "Calais",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "A",
"middle": [
"F"
],
"last": "Virg\u00edlio",
"suffix": ""
},
{
"first": "Wagner",
"middle": [],
"last": "Almeida",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Meira",
"suffix": ""
}
],
"year": 2018,
"venue": "Twelfth International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manoel Horta Ribeiro, Pedro H Calais, Yuri A Santos, Virg\u00edlio AF Almeida, and Wagner Meira Jr. 2018. Characterizing and detecting hateful users on twit- ter. In Twelfth International AAAI Conference on Web and Social Media.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Lower bias, higher density abusive language datasets: A recipe",
"authors": [
{
"first": "Juliet",
"middle": [],
"last": "Van Rosendaal",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Workshop on Resources and Techniques for User and Author Profiling in Abusive Language",
"volume": "",
"issue": "",
"pages": "14--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juliet van Rosendaal, Tommaso Caselli, and Malvina Nissim. 2020. Lower bias, higher density abusive language datasets: A recipe. In Proceedings of the Workshop on Resources and Techniques for User and Author Profiling in Abusive Language, pages 14-19, Marseille, France. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A largescale semi-supervised dataset for offensive language identification",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.14454"
]
},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Marcos Zampieri, and Preslav Nakov. 2020. A large- scale semi-supervised dataset for offensive language identification. arXiv preprint arXiv:2004.14454.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "An Italian Twitter Corpus of Hate Speech against Immigrants",
"authors": [
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Stranisci",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuela Sanguinetti, Fabio Poletto, Cristina Bosco, Vi- viana Patti, and Marco Stranisci. 2018. An Ital- ian Twitter Corpus of Hate Speech against Immi- grants. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The risk of racial bias in hate speech detection",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Dallas",
"middle": [],
"last": "Card",
"suffix": ""
},
{
"first": "Saadia",
"middle": [],
"last": "Gabriel",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1668--1678",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1163"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1668-1678, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Moderator engagement and community development in the age of algorithms",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Seering",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jina",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Kaufman",
"suffix": ""
}
],
"year": 2019,
"venue": "New Media & Society",
"volume": "21",
"issue": "7",
"pages": "1417--1443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Seering, Tony Wang, Jina Yoon, and Geoff Kaufman. 2019. Moderator engagement and com- munity development in the age of algorithms. New Media & Society, 21(7):1417-1443.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Studying generalisability across abusive language detection datasets",
"authors": [
{
"first": "Steve",
"middle": [
"Durairaj"
],
"last": "Swamy",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Jamatia",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "940--950",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steve Durairaj Swamy, Anupam Jamatia, and Bj\u00f6rn Gamb\u00e4ck. 2019. Studying generalisability across abusive language detection datasets. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 940-950.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Het gebruik van twitter voor taalkundig onderzoek",
"authors": [
{
"first": "Erik Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
}
],
"year": 2011,
"venue": "TABU: Bulletin voor Taalwetenschap",
"volume": "39",
"issue": "1/2",
"pages": "62--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Tjong Kim Sang. 2011. Het gebruik van twitter voor taalkundig onderzoek. TABU: Bulletin voor Taalwetenschap, 39(1/2):62-72.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The automated detection of racist discourse in dutch social media",
"authors": [
{
"first": "St\u00e9phan",
"middle": [],
"last": "Tulkens",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Hilte",
"suffix": ""
},
{
"first": "Elise",
"middle": [],
"last": "Lodewyckx",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Verhoeven",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics in the Netherlands Journal",
"volume": "6",
"issue": "",
"pages": "3--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phan Tulkens, Lisa Hilte, Elise Lodewyckx, Ben Verhoeven, and Walter Daelemans. 2016. The au- tomated detection of racist discourse in dutch so- cial media. Computational Linguistics in the Nether- lands Journal, 6:3-20.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Directions in abusive language training data, a systematic review: Garbage in, garbage out",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
}
],
"year": 2021,
"venue": "PLOS ONE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0243300"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen and Leon Derczynski. 2021. Directions in abusive language training data, a systematic re- view: Garbage in, garbage out. PLOS ONE, 15.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Bertje: A dutch bert model",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Wietse De Vries",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Van Cranenburgh",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Gertjan Van Noord",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.09582"
]
},
"num": null,
"urls": [],
"raw_text": "Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. Bertje: A dutch bert model. arXiv preprint arXiv:1912.09582.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Understanding abuse: A typology of abusive language detection subtasks",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "78--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Lan- guage Online, pages 78-84.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016a. Hateful sym- bols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL student research workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016b. Hateful sym- bols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of the NAACL student research workshop, pages 88-93.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Detection of Abusive Language: the Problem of Biased Datasets",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kleinbauer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "602--608",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1060"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of Abusive Language: the Problem of Biased Datasets. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 602-608, Minneapolis, Minnesota. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Inducing a lexicon of abusive words-a feature-based approach",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Clayton",
"middle": [],
"last": "Greenberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1046--1056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words-a feature-based approach. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers), pages 1046-1056.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Predicting the Type and Target of Offensive Posts in Social Media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019a. Predicting the Type and Target of Offensive Posts in Social Media. In Proceedings of NAACL.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "SemEval-2019 task 6: Identifying and categorizing offensive language in social media (Of-fensEval)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2010"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019b. SemEval-2019 task 6: Identifying and cat- egorizing offensive language in social media (Of- fensEval). In Proceedings of the 13th Interna- tional Workshop on Semantic Evaluation, pages 75- 86, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"type_str": "table",
"html": null,
"text": "DALC v1.0 -Keywords: overview of the data collected and annotated",
"content": "<table><tr><td>aggression of an individual or a (social) group,</td></tr><tr><td>but not necessarily of an entity, an institution,</td></tr><tr><td>an organization, or a concept.</td></tr></table>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"html": null,
"text": "DALC v10: distribution of the sources across Train, Dev, Test. Numbers in parentheses indicate adjustments to prevent data overlap.",
"content": "<table/>"
},
"TABREF5": {
"num": null,
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>: DALC v1.0: Distribution of Train, Dev, and</td></tr><tr><td>Test splits for explicitness and target.</td></tr><tr><td>messages are basically non existent in the Train</td></tr><tr><td>split, we observe significantly 7 longer implicit mes-</td></tr><tr><td>sages in the test data, with an average of 27.99</td></tr><tr><td>words against the 24.16 of the explicit ones. Stan-</td></tr><tr><td>dard deviations suggest that the length of the mes-</td></tr><tr><td>sages is skewed both in training and test for the</td></tr><tr><td>three classes, with values ranging between 16.23</td></tr><tr><td>(EXPLICIT) and 13.71 (NOT) in Train, and 15.57</td></tr><tr><td>(IMPLICIT) and 14.03 (NOT) in Test.</td></tr></table>"
},
"TABREF7": {
"num": null,
"type_str": "table",
"html": null,
"text": "DALC v1.0: Top 10 keywords per class in Train and Test. Explicitly offensive/abusive content have been masked with *",
"content": "<table/>"
},
"TABREF8": {
"num": null,
"type_str": "table",
"html": null,
"text": "All models outperform the MFC baseline,",
"content": "<table><tr><td>8 https://huggingface.co/GroNLP/</td></tr><tr><td>bert-base-dutch-cased</td></tr></table>"
},
"TABREF9": {
"num": null,
"type_str": "table",
"html": null,
"text": "",
"content": "<table/>"
},
"TABREF10": {
"num": null,
"type_str": "table",
"html": null,
"text": "summarizes the results.",
"content": "<table><tr><td colspan=\"5\">System Class Precision Recall Macro-F1</td></tr><tr><td/><td>EXP</td><td>0.805</td><td>0.270</td><td/></tr><tr><td>SVM</td><td>IMP</td><td>0.461</td><td>0.033</td><td>0.433</td></tr><tr><td/><td>NOT</td><td>0.719</td><td>0.986</td><td/></tr><tr><td/><td>EXP</td><td>0.759</td><td>0.447</td><td/></tr><tr><td>BERTje</td><td>IMP</td><td>0.373</td><td>0.189</td><td>0.561</td></tr><tr><td/><td>NOT</td><td>0.790</td><td>0.962</td><td/></tr></table>"
},
"TABREF11": {
"num": null,
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>: DALC v1.0: Explicitness classification. Best</td></tr><tr><td>scores in bold.</td></tr></table>"
},
"TABREF13": {
"num": null,
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>: DALC v1.0: Target classification. Best scores</td></tr><tr><td>in bold.</td></tr></table>"
},
"TABREF14": {
"num": null,
"type_str": "table",
"html": null,
"text": "Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25-35, Florence, Italy. Association for Computational Linguistics.",
"content": "<table><tr><td>Thomas Davidson,</td></tr><tr><td>Ika Alfina, Rio Mulia, Mohamad Ivan Fanany, and</td></tr><tr><td>Yudo Ekanata. 2017. Hate speech detection in the</td></tr><tr><td>indonesian language: A dataset and preliminary</td></tr><tr><td>study. In 2017 International Conference on Ad-</td></tr><tr><td>vanced Computer Science and Information Systems</td></tr><tr><td>(ICACSIS), pages 233-238. IEEE.</td></tr><tr><td>Valerio Basile, Cristina Bosco, Elisabetta Fersini,</td></tr><tr><td>Debora Nozza, Viviana Patti, Francisco Manuel</td></tr><tr><td>Rangel Pardo, Paolo Rosso, and Manuela San-</td></tr><tr><td>guinetti. 2019. SemEval-2019 task 5: Multilin-</td></tr><tr><td>gual detection of hate speech against immigrants and</td></tr><tr><td>women in twitter. In Proceedings of the 13th Inter-</td></tr><tr><td>national Workshop on Semantic Evaluation, pages</td></tr><tr><td>54-63, Minneapolis, Minnesota, USA. Association</td></tr><tr><td>for Computational Linguistics.</td></tr><tr><td>Elisa Bassignana, Valerio Basile, and Viviana Patti.</td></tr><tr><td>2018. Hurtlex: A multilingual lexicon of words to</td></tr><tr><td>hurt. In 5th Italian Conference on Computational</td></tr><tr><td>Linguistics, CLiC-it 2018, volume 2253, pages 1-6.</td></tr><tr><td>CEUR-WS.</td></tr><tr><td>Tommaso Caselli, Valerio Basile, Jelena Mitrovi\u0107, Inga</td></tr><tr><td>Kartoziya, and Michael Granitzer. 2020. I feel of-</td></tr><tr><td>fended, don't be abusive! implicit/explicit messages</td></tr><tr><td>in offensive and abusive language. In Proceedings of</td></tr><tr><td>the 12th Language Resources and Evaluation Con-</td></tr><tr><td>ference, pages 6193-6202, Marseille, France. Euro-</td></tr><tr><td>pean Language Resources Association.</td></tr><tr><td>Eshwar Chandrasekharan, Umashanthi Pavalanathan,</td></tr><tr><td>Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein,</td></tr><tr><td>and Eric Gilbert. 2017. 
You can't stay here: The</td></tr><tr><td>efficacy of reddit's 2015 ban examined through</td></tr><tr><td>hate speech. Proceedings of the ACM on Human-</td></tr><tr><td>Computer Interaction, 1(CSCW):1-22.</td></tr><tr><td>Yi-Ling Chung, Elizaveta Kuzmenko, Serra Sinem</td></tr><tr><td>Tekiroglu, and Marco Guerini. 2019. CONAN -</td></tr><tr><td>COunter NArratives through nichesourcing: a mul-</td></tr><tr><td>tilingual dataset of responses to fight online hate</td></tr><tr><td>speech. In Proceedings of the 57th Annual Meet-</td></tr><tr><td>ing of the Association for Computational Linguis-</td></tr><tr><td>tics, pages 2819-2829, Florence, Italy. Association</td></tr><tr><td>for Computational Linguistics.</td></tr></table>"
}
}
}
}