| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T06:07:50.889061Z" |
| }, |
| "title": "Exploring Stylometric and Emotion-Based Features for Multilingual Cross-Domain Hate Speech Detection", |
| "authors": [ |
| { |
| "first": "Ilia", |
| "middle": [], |
| "last": "Markov", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "CLIPS Research Center University of Antwerp", |
| "location": { |
| "country": "Belgium" |
| } |
| }, |
| "email": "ilia.markov@uantwerpen.be" |
| }, |
| { |
| "first": "Nikola", |
| "middle": [], |
| "last": "Ljube\u0161i\u0107", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "nikola.ljubesic@ijs.si" |
| }, |
| { |
| "first": "Darja", |
| "middle": [], |
| "last": "Fi\u0161er", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Ljubljana", |
| "location": { |
| "country": "Slovenia" |
| } |
| }, |
| "email": "darja.fiser@ff.uni-lj.si" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Antwerp", |
| "location": { |
| "country": "Belgium" |
| } |
| }, |
| "email": "walter.daelemans@uantwerpen.be" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper, we describe experiments designed to evaluate the impact of stylometric and emotion-based features on hate speech detection: the task of classifying textual content into hate or non-hate speech classes. Our experiments are conducted for three languages-English, Slovene, and Dutch-both in indomain and cross-domain setups, and aim to investigate hate speech using features that model two linguistic phenomena: the writing style of hateful social media content operationalized as function word usage on the one hand, and emotion expression in hateful messages on the other hand. The results of experiments with features that model different combinations of these phenomena support our hypothesis that stylometric and emotionbased features are robust indicators of hate speech. Their contribution remains persistent with respect to domain and language variation. We show that the combination of features that model the targeted phenomena outperforms words and character n-gram features under cross-domain conditions, and provides a significant boost to deep learning models, which currently obtain the best results, when combined with them in an ensemble.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper, we describe experiments designed to evaluate the impact of stylometric and emotion-based features on hate speech detection: the task of classifying textual content into hate or non-hate speech classes. Our experiments are conducted for three languages-English, Slovene, and Dutch-both in indomain and cross-domain setups, and aim to investigate hate speech using features that model two linguistic phenomena: the writing style of hateful social media content operationalized as function word usage on the one hand, and emotion expression in hateful messages on the other hand. The results of experiments with features that model different combinations of these phenomena support our hypothesis that stylometric and emotionbased features are robust indicators of hate speech. Their contribution remains persistent with respect to domain and language variation. We show that the combination of features that model the targeted phenomena outperforms words and character n-gram features under cross-domain conditions, and provides a significant boost to deep learning models, which currently obtain the best results, when combined with them in an ensemble.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Hate speech is commonly defined as communication that disparages a person or a group on the basis of some characteristic such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics (Nockleby, 2000) . The exact definition of hate speech, however, remains a disputed topic, as it is a subjective and multi-interpretable concept (Waseem et al., 2017; Poletto et al., 2020) .", |
| "cite_spans": [ |
| { |
| "start": 229, |
| "end": 245, |
| "text": "(Nockleby, 2000)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 374, |
| "end": 395, |
| "text": "(Waseem et al., 2017;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 396, |
| "end": 417, |
| "text": "Poletto et al., 2020)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The lack of a consensus on its definition poses a challenge to hate speech annotation. Annotating hateful content remains prone to personal bias and is culture-dependent, which often results in low inter-annotator agreement and therefore scarcity of high quality training data for developing supervised hate speech detection systems (Ross et al., 2016; Waseem, 2016; Sap et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 333, |
| "end": 352, |
| "text": "(Ross et al., 2016;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 353, |
| "end": 366, |
| "text": "Waseem, 2016;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 367, |
| "end": 384, |
| "text": "Sap et al., 2019)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Hate speech online presents additional challenges for natural language processing (NLP): offensive vocabulary and keywords evolve fast due to their relatedness with the hate speech triggering events (Florio et al., 2020) , moreover, users may adapt their lexical choices as a countermeasure against identification or introduce minor misspellings to bypass filtering systems (Berger and Perez, 2006; Vidgen et al., 2019) . Therefore, we intend to investigate more abstract features, less susceptible to specific vocabulary, topic or corpus bias, which we examine in in-domain and crossdomain settings: training and testing on social media datasets belonging to same/different domains, for three languages: English, Slovene, and Dutch.", |
| "cite_spans": [ |
| { |
| "start": 199, |
| "end": 220, |
| "text": "(Florio et al., 2020)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 374, |
| "end": 398, |
| "text": "(Berger and Perez, 2006;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 399, |
| "end": 419, |
| "text": "Vidgen et al., 2019)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our hypothesis is that the style and emotional dimension of hateful textual content may provide useful cues for its detection. We investigate this through a binary hate speech classification task using features that model such information, i.e., function words and emotion-based features. The latter are operationalized in terms of the types of emotions expressed and the frequency of emotionconveying words in the data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Function word usage is one of the most important and revealing aspects of style in written language, as shown by numerous studies in stylometric analysis for authorship attribution (Grieve, 2007; Kestemont, 2014; Markov et al., 2018) . While stylometric characteristics have been implicitly included in some hate speech detection studies (e.g., in bag-of-words or character-level models), their impact on the task has not been studied. We propose the hypothesis that stylometric characteristics of hateful writing are distinctive enough to contribute to the hate speech detection task. In other words, hate speech acts as a specific text type with an associated writing style.", |
| "cite_spans": [ |
| { |
| "start": 181, |
| "end": 195, |
| "text": "(Grieve, 2007;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 196, |
| "end": 212, |
| "text": "Kestemont, 2014;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 213, |
| "end": 233, |
| "text": "Markov et al., 2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "On the other hand, we are motivated by psychological and sociological studies, which correlate toxic behaviour online with the emotional profile of the user (Kokkinos and Kipritsi, 2012) . However, unlike previous research that used sentiment information for detecting unacceptable content Dani et al., 2017; Van Hee et al., 2018; Brassard-Gourdeau and Khoury, 2019) , we test whether we are able to capture some of these phenomena by going beyond the sentiment level (positive / negative / neutral) to a more fine-grained emotion level.", |
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 186, |
| "text": "(Kokkinos and Kipritsi, 2012)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 290, |
| "end": 308, |
| "text": "Dani et al., 2017;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 309, |
| "end": 330, |
| "text": "Van Hee et al., 2018;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 331, |
| "end": 366, |
| "text": "Brassard-Gourdeau and Khoury, 2019)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We compare the performance of stylometric and emotion-based features with commonly used features for hate speech detection: words, character n-grams, and their combination, and with more recent deep learning models that currently provide the state-of-the-art results for the hate speech detection task (Mandl et al., 2019; Basile et al., 2019) : convolutional neural networks (CNN), long shortterm memory networks (LSTM), and bidirectional encoder representations from transformers (BERT). The results of these experiments indicate that the combination of stylometric and emotion-based features performs better than words and character ngrams under cross-domain conditions, and allows to further improve the results of deep learning models when combined with them in an ensemble.", |
| "cite_spans": [ |
| { |
| "start": 302, |
| "end": 322, |
| "text": "(Mandl et al., 2019;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 323, |
| "end": 343, |
| "text": "Basile et al., 2019)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In summary, the contributions of the research presented here are the following: (i) evaluating the contribution of stylometric and emotion-based features to hate speech detection, (ii) examining how robust and persistent their contribution is with respect to domain and language variation, (iii) comparing their performance with commonly used features for the hate speech detection task: words and character n-grams, and with the state-of-the-art deep learning models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To investigate the role of stylometric and emotion information in the hate speech detection task, we conducted experiments on several recent social media datasets in hate speech detection research.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology 2.1 Datasets", |
| "sec_num": "2" |
| }, |
| { |
| "text": "2.1.1 In-domain datasets FRENK (Ljube\u0161i\u0107 et al., 2019) The FRENK dataset consists of Facebook comments in English and Slovene covering LGBT and migrant topics. The dataset was manually annotated for finegrained types of socially unacceptable discourse (e.g., violence, offensiveness, threat). We used the coarse-grained (binary) hate speech classes: hate speech or non-hate speech messages, selecting the messages for which more than four out of eight annotators agreed upon the class. The detailed description of the dataset collection and annotation procedures can be found in (Ljube\u0161i\u0107 et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 54, |
| "text": "(Ljube\u0161i\u0107 et al., 2019)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 579, |
| "end": 602, |
| "text": "(Ljube\u0161i\u0107 et al., 2019)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology 2.1 Datasets", |
| "sec_num": "2" |
| }, |
| { |
| "text": "LiLaH The LiLaH dataset consists of Facebook comments on LGBT and migrant topics in Dutch. The dataset was collected using the same procedure and annotated following the same annotation guidelines as the FRENK dataset by two trained annotators and one expert annotator. For the binary classes used in this paper, the Percent Agreement for two annotators equals 78.7% and Cohen's Kappa 0.56, which corresponds to an interannotator agreement halfway between \"fair\" and \"good\" (Fleiss, 1981) .", |
| "cite_spans": [ |
| { |
| "start": 474, |
| "end": 488, |
| "text": "(Fleiss, 1981)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology 2.1 Datasets", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The statistics of the datasets used are shown in Table 1 . We used training and test partitions splitting the datasets by post boundaries in order to avoid comments from the same discussion thread to appear in both training and test sets, that is, to avoid within-post bias. The partitions were performed in such a way that the distribution of hate ('1') and non-hate speech ('0') classes is as balanced as possible, while the proportion of 80% training and 20% test messages for the addressed languages is preserved.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 49, |
| "end": 56, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Methodology 2.1 Datasets", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The balanced subsets of the datasets, in terms of the number of messages for each of the languages, were used for 10-fold cross-validation experiments in order to provide a fair comparison across the targeted languages (the maximum number of available hate and non-hate speech examples across the three languages was selected; marked with '*' in Table 1), while the merged training and test partitions were used as training sets for the cross-domain experiments in order to provide more examples for training the supervised models described further in the paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology 2.1 Datasets", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For cross-domain experiments, we merged the training and test splits of the FRENK and LiLaH datasets (see Table 1 ) and used it as training data, while the following social media datasets belonging to other domains were used as test sets.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 106, |
| "end": 113, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Cross-domain datasets", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "HASOC (Mandl et al., 2019) We used the training subset in English of the HASOC-2019 (Hate Speech and Offensive Content Identification in Indo-European Languages) shared task dataset. It contains Twitter and Facebook messages covering various topics (e.g., Brexit, cricket).", |
| "cite_spans": [ |
| { |
| "start": 6, |
| "end": 26, |
| "text": "(Mandl et al., 2019)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-domain datasets", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "Ask.fm (Van Hee et al., 2015) . We used the Dutch cyberbullying dataset, which contains 85,485 posts from the social networking website Ask.fm annotated with fine-grained cyberbullying categories (e.g., general insult, sexual harassment, sexism, racism). We selected the same number of positive and negative messages as for English in order to provide a fair comparison between the two languages.", |
| "cite_spans": [ |
| { |
| "start": 7, |
| "end": 29, |
| "text": "(Van Hee et al., 2015)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-domain datasets", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "The statistics of the datasets used as test sets for the cross-domain experiments is shown in ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-domain datasets", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "We performed tokenization, lemmatization, and POS tagging using the StanfordNLP library (Qi et al., 2018) for all the languages addressed in this work, removing metadata and URL mentions in preprocessing. We used the sets of features described below, a term frequency (tf) weighting scheme and the liblinear scikit-learn (Pedregosa et al., 2011) implementation of Support Vector Machines (SVM) with optimized parameters for classification (we selected the optimal liblinear classifier parameters: penalty parameter (C), loss function (loss), and tolerance for stopping criteria (tol) based on grid search). The effectiveness of SVM has been shown by numerous experiments on hate speech detection (Fortuna and Nunes, 2018; Basile et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 105, |
| "text": "(Qi et al., 2018)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 321, |
| "end": 345, |
| "text": "(Pedregosa et al., 2011)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 696, |
| "end": 721, |
| "text": "(Fortuna and Nunes, 2018;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 722, |
| "end": 742, |
| "text": "Basile et al., 2019)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment setup", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We used a CNN model (Kim, 2014) to learn discriminative word-level features with the following architecture: first, an embedding layer transforms sparse vectors into dense vector representations. To process the word embeddings, we used a convolutional layer (kernel size: 3) followed by a global max pooling layer. Then, a dense layer with a ReLU activation is applied, followed by a dropout of 0.6, and finally, a dense layer with a sigmoid activation to make the prediction for the binary classification. We used an LSTM model (Hochreiter and Schmidhuber, 1997) , which takes a sequence of words as input and aims at capturing long-term dependencies. We processed the sequence of word embeddings with a unidirectional LSTM layer with 300 units, followed by a dropout of 0.4, and a dense layer with a sigmoid activation for predictions. The multilingual BERT model (BERT-base, multilingual cased (Devlin et al., 2019) ) was used for all the languages addressed in this work. The implementation was done in PyTorch (Paszke et al., 2019) using the simple transformers library. 2 Deep learning models currently achieve the state-of-the-art results for the hate speech detection task, which are in 80%-90% F1-score range in in-domain set-tings, depending on the languages being considered, amount of data, etc. (Mandl et al., 2019; Basile et al., 2019; Zampieri et al., 2020) .", |
| "cite_spans": [ |
| { |
| "start": 20, |
| "end": 31, |
| "text": "(Kim, 2014)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 529, |
| "end": 563, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 897, |
| "end": 918, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1015, |
| "end": 1036, |
| "text": "(Paszke et al., 2019)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 1308, |
| "end": 1328, |
| "text": "(Mandl et al., 2019;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1329, |
| "end": 1349, |
| "text": "Basile et al., 2019;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1350, |
| "end": 1372, |
| "text": "Zampieri et al., 2020)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment setup", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We report the results in terms of precision, recall, and F1-score (macro-averaged). Note that we used similar settings, tools, and models, i.e., size of the training and test data, StanfordNLP for tokenization, lemmatization, and POS tagging, multilingual BERT models, in order to provide a fair comparison across the different languages covered in the paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment setup", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The experiments we report were designed to investigate and quantify the impact of stylometric information, modeled through function word usage, and emotion-based features on hate speech detection. While representations of documents through word/character n-grams provide good results for detecting abusive language (Nobata et al., 2016; Van Hee et al., 2018) , these features cover -and at the same time obscure -a wide range of phenomena, and therefore, it is not clear what the impact is of subsets of these features representing specific linguistic information. Moreover, these features include content words and are susceptible to topic, genre, and domain bias, which often results in overfitting the data. Because of this we abstract away from domain-dependent content word patterns and use more abstract POS n-gram features, to which we add stylometric features (function words) and emotion-based features, to evaluate their impact on the hate speech detection task.", |
| "cite_spans": [ |
| { |
| "start": 315, |
| "end": 336, |
| "text": "(Nobata et al., 2016;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 337, |
| "end": 358, |
| "text": "Van Hee et al., 2018)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Part-of-speech (POS) POS features capture the morpho-syntactic patterns in a text, and are indicative of hate speech, especially when used in combination with other types of features (Warner and Hirschberg, 2012; Robinson et al., 2018) . POS tags were obtained with the Stanford POS Tagger (Toutanova et al., 2003) . We used the same 17 universal POS tags for the three languages and built n-grams from this representation with n = 1-3.", |
| "cite_spans": [ |
| { |
| "start": 183, |
| "end": 212, |
| "text": "(Warner and Hirschberg, 2012;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 213, |
| "end": 235, |
| "text": "Robinson et al., 2018)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 290, |
| "end": 314, |
| "text": "(Toutanova et al., 2003)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Stylometric features Function words (FW) are considered one of the most important stylometric feature types (Kestemont, 2014) . They clarify the relationships between the content-carrying elements of a sentence, and introduce syntactic structures like verbal complements, relative clauses, and questions (Smith and Witten, 1993) . With respect to emotion features, FW can appear as quantifiers, intensifiers (e.g., very good) or modify the emotion expressed in other ways. We used linguisticallydefined FW, that is, words belonging to the closed syntactic classes. 3 FW are incorporated into the POS representation, as shown in Table 3 .", |
| "cite_spans": [ |
| { |
| "start": 108, |
| "end": 125, |
| "text": "(Kestemont, 2014)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 304, |
| "end": 328, |
| "text": "(Smith and Witten, 1993)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 628, |
| "end": 635, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Emotion-based features To encode emotion information in our data, we used the 14,182 emotion words and their associations with eight emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and two sentiments (negative and positive) from the NRC emotion lexicon (Mohammad and Turney, 2013). We used the LiLaH emotion lexicon for Slovene and Dutch (Ljube\u0161i\u0107 et al., 2020; , which contains manual translations of the NRC emotion lexicon entries. The emotion information was modeled through (i) incorporating emotion-conveying words into the POS & FW representation, as shown in Table 3 , (ii) counting the number of such words in a message (count), and (iii) using the emotion associations of the emotion words in a message. These features were used to encode the types of emotions in a message and to capture how high-emotional or low-emotional a message is.", |
| "cite_spans": [ |
| { |
| "start": 371, |
| "end": 394, |
| "text": "(Ljube\u0161i\u0107 et al., 2020;", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 600, |
| "end": 607, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Consider the following English comment from our data belonging to the hate speech class: Mental illness on parade. Table 3 shows an example of the representation of this message through the features described above. From the POS & FW & emotion word representations, n-grams (n = 1-3) are built. 4 The count of emotionally-charged words and the emotion associations were added as additional feature vectors. ", |
| "cite_spans": [ |
| { |
| "start": 295, |
| "end": 296, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 115, |
| "end": 122, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The main goal of this paper is to identify and analyze specific linguistic phenomena with respect to their role in hate speech detection. In particular, we focused on function word usage (as an expression of style of the hateful content) and emotion (as personality and psychological state indicators with respect to the usage of emotion terms in written messages). First, we perform a separate analysis of the contribution of the phenomena we target, and then compare the performance of combining features that encode these phenomena with the commonly used features for the hate speech detection task: words, character n-grams, and their combination (tf weighting scheme and the liblinear SVM classifier with optimized parameters), and with more recent deep learning models that achieve state-ofthe-art results for hate speech detection (Mandl et al., 2019; Basile et al., 2019) : CNN, LSTM, and BERT (see Section 2.2).", |
| "cite_spans": [ |
| { |
| "start": 838, |
| "end": 858, |
| "text": "(Mandl et al., 2019;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 859, |
| "end": 879, |
| "text": "Basile et al., 2019)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Sections 3.1 and 3.2 show the results of these experiments in in-domain and cross-domain settings, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "3" |
| }, |
| { |
| "text": "10-fold cross-validation First, we analyze the features that capture the phenomena we target in isolation and in combination on the balanced subsets of the datasets (see Table 1 ) for 10-fold crossvalidation results. The results of these experiments, presented in Table 4 , are compared with words (BoW), character 1-3-grams (char), and their combination. We also present the results when all the feature sets are combined. As a reference point we provide the random baseline (stratified strategy).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 170, |
| "end": 177, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 264, |
| "end": 271, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "In-domain experiments", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The results of the experiments presented in Table 4 indicate that stylometric features indeed contribute to the hate speech detection task, as evidenced by their positive impact to the POS representation for all the considered languages (POS & FW representation). Likewise, emotion-conveying words (POS & FW & emotion words), the count of such words in a message (POS & FW & emo words & count) and ten features that correspond to the type of emotions being conveyed (POS & FW & emo words & emo feats) further contribute to the results. While their performance in isolation is moderate, higher results are achieved when features representing each of these phenomena are combined, indicating that they are complementary for the hate speech detection task (this representation is marked in bold and in the remainder of this paper referred to as 'our' approach). Feature importance analysis revealed that negative emotion words, such as 'disgusting', 'sick', 'invasion', are the most indicative features in our model. We also note that words (BoW) and character n-gram features perform well in in-domain conditions and achieve higher results than our approach. While words are the best unique features for English, the combination of words and character ngrams shows the highest results for Slovene and Dutch. This may be related to the fact that Slovene and Dutch are morphologically richer languages than English, as character n-grams are able to capture morphological affixes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "In-domain experiments", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "When stylometric and emotion features are combined with words and character n-grams, the best results are obtained for English and Slovene. For the Dutch language, the combination of words and character n-grams performs very well in in-domain settings, as confirmed by the additional experiments we present further in the paper. It is also interesting to note that for the English language adding the combination of BoW and character ngrams to our approach provides a higher boost than adding BoW features only, the best unique feature type for this language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "In-domain experiments", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Train-test partitions Next, we present the results when splitting the training and test sets by post boundaries in order to avoid within-post bias, as described in Section 2.1. In this scenario, we additionally compare the performance of stylometric, emotion, and BoW and character n-gram features with more recent deep learning models: CNN, LSTM, and BERT. The results for these experiments are shown in Table 5 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 405, |
| "end": 412, |
| "text": "Table 5", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "In-domain experiments", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The results presented in Table 5 indicate that splitting the data by post boundaries is a more challenging scenario, as evidenced by the drop in performance for the BoW approach for all the languages. Nonetheless, the trends observed in the 10-fold cross-validation experiments remain consistent: stylometric and emotion-based features provide substantial improvements when added to the POS representation, while the gap in performance when compared to BoW and character n-grams is smaller for all the languages but Dutch. For the Dutch language, character n-grams provide higher results than in the 10-fold cross-validation experiments, and higher boost when combined with BoW. The combination of character n-grams and BoW for this language shows even higher results than BERT: the best deep learning model across the three languages. For all the targeted languages, adding BoW and character n-grams to our approach further improves the results, outperforming BoW, character ngrams, and their combination, and achieving competitive results with the deep learning models. Having confirmed that due to stylometric choices and emotion expression we can distinguish the hateful messages in in-domain settings, we proceed with a cross-domain analysis of the targeted phenomena.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 25, |
| "end": 32, |
| "text": "Table 5", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "In-domain experiments", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In this section, we evaluate the robustness of stylometric and emotion-based features under cross-domain conditions: training and testing on out-of-domain social media datasets described in Section 2.1. Cross-domain scalability is essential to identify features of online hate speech that generalize well across domains. Table 6 presents the results for the cross-domain experiments.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 320, |
| "end": 327, |
| "text": "Table 6", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Cross-domain experiments", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The results in Table 6 show that using out-of-domain data for testing leads to a drop in performance for all the models. The drop for English is much higher than for Dutch, even though the English test set is annotated for the same task (hate speech detection), while the Dutch one is annotated for a different task, cyberbullying detection. 5 A descriptive analysis showed that the Jaccard similarity coefficient (Jaccard, 1901) between the cross-domain training and test vocabularies is 20.6% for English and 12.4% for Dutch, which implies that a large part of the training and test vocabularies do not overlap in either language. The asymmetric drop in performance across the two languages therefore cannot be explained by lexical overlap. The lower drop for Dutch may be related to the relative simplicity of the cyberbullying content. An analogous effect was observed in (Emmery et al., 2020), which reports similar behaviour when training on toxic messages and using Ask.fm as an out-of-domain test set; it is also evidenced by the high precision scores obtained by the models used in this paper when testing on the Dutch cyberbullying data.", |
| "cite_spans": [ |
| { |
| "start": 342, |
| "end": 343, |
| "text": "5", |
| "ref_id": null |
| }, |
| { |
| "start": 416, |
| "end": 431, |
| "text": "(Jaccard, 1901)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 852, |
| "end": 873, |
| "text": "(Emmery et al., 2020)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 15, |
| "end": 22, |
| "text": "Table 6", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Cross-domain experiments", |
| "sec_num": "3.2" |
| }, |
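The vocabulary overlap reported above is the Jaccard similarity between the training and test vocabularies. A minimal sketch, assuming simple whitespace tokenization and hypothetical posts:

```python
def jaccard_similarity(vocab_a: set, vocab_b: set) -> float:
    """Jaccard coefficient: |A intersect B| / |A union B|."""
    if not vocab_a and not vocab_b:
        return 0.0
    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)

def vocabulary(texts):
    """Collect the set of lowercased whitespace tokens."""
    return {tok.lower() for text in texts for tok in text.split()}

# Hypothetical in-domain training posts vs. out-of-domain test posts.
train_vocab = vocabulary(["they should all leave", "this is fine"])
test_vocab = vocabulary(["they will never leave us alone"])

overlap = jaccard_similarity(train_vocab, test_vocab)
print(f"Jaccard similarity: {overlap:.3f}")  # → Jaccard similarity: 0.182
```

A coefficient around 0.2, as reported for English, means only about a fifth of the combined vocabulary is shared between training and test data.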
| { |
| "text": "We note that, similarly to the in-domain experiments, stylometric and emotion-based features provide substantial improvements when combined with the POS representation. Being more abstract features, they cope well with domain variation and show a lower drop in cross-domain conditions when compared to the baseline models. This indicates that the features that capture the targeted phenomena are robust and portable across social media domains.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-domain experiments", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Word and character n-gram features, on the contrary, show a sharp drop in cross-domain settings and provide only marginal improvement for English and no improvement for Dutch when combined with our approach. For Dutch, stylometric and emotion-based features partly compensate for the loss in performance caused by word and character n-grams under cross-corpus conditions. We can also observe that our approach outperforms the features commonly used for the hate speech detection task: words, character n-grams, and their combination, for both English and Dutch.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-domain experiments", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The deep learning models provide high results in cross-domain settings. Combining our approach with the best-performing deep learning models, CNN and BERT, in a hard majority-voting ensemble significantly improves the results over CNN and BERT in isolation (according to McNemar's statistical significance test (McNemar, 1947) with \u03b1 < 0.05) and achieves the highest cross-domain results for both languages. We conclude that this significant improvement stems from the ability of our approach to capture the style of hateful content and the emotional peculiarities of hateful messages, which are hard for deep learning models to encode on relatively small datasets. A detailed analysis presented below provides deeper insights into the nature of these improvements.", |
| "cite_spans": [ |
| { |
| "start": 346, |
| "end": 361, |
| "text": "(McNemar, 1947)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-domain experiments", |
| "sec_num": "3.2" |
| }, |
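The ensembling and significance-testing steps above can be sketched as hard majority voting over binary predictions, with an exact McNemar test on the two models' disagreements. The predictions below are hypothetical and this is a sketch of the general technique, not the authors' exact implementation:

```python
from collections import Counter
from math import comb

def majority_vote(predictions_per_model):
    """Hard majority voting: each model casts one vote per instance."""
    n_instances = len(predictions_per_model[0])
    voted = []
    for i in range(n_instances):
        votes = Counter(preds[i] for preds in predictions_per_model)
        voted.append(votes.most_common(1)[0][0])
    return voted

def mcnemar_exact_p(gold, preds_a, preds_b):
    """Exact (binomial) McNemar test on the disagreement counts:
    b = A correct / B wrong, c = A wrong / B correct."""
    b = sum(1 for g, a, x in zip(gold, preds_a, preds_b) if a == g and x != g)
    c = sum(1 for g, a, x in zip(gold, preds_a, preds_b) if a != g and x == g)
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # Two-sided exact p-value under Binomial(n, 0.5).
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical predictions from three models (1 = hate, 0 = non-hate).
ours = [1, 0, 1, 1, 0]
cnn = [1, 1, 1, 0, 0]
bert = [0, 0, 1, 1, 1]
ensemble = majority_vote([ours, cnn, bert])
print(ensemble)  # → [1, 0, 1, 1, 0]
```

With three voters and binary labels there are no ties, so the hard vote is always well defined; the McNemar test then compares the ensemble against a single model on the instances where the two disagree.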
| { |
| "text": "In this section, we report on a manual error analysis of the differences in output between the BERT model (the best-performing deep learning model in our experiments) and the stylometry-emotion-based approach (POS & FW & emo, 'our' model), performed by inspecting hate speech instances that were correctly identified by one model but not by the other. We perform the analysis on the Slovene in-domain dataset and the English cross-domain dataset. We inspect 50 random misclassified hate speech instances per dataset (in-domain vs. cross-domain) and the model correctly identifying hate", |
| "cite_spans": [ |
| { |
| "start": 216, |
| "end": 245, |
| "text": "(POS & FW & emo, 'our' model)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Type | Element | In-domain Style-emo (%) | In-domain BERT (%) | Cross-domain Style-emo (%) | Cross-domain BERT (%)\nexplicit | violence | 4 | 22 | 0 | 4\nexplicit | insult | 26 | 40 | 40 | 34\nexplicit | swearword | 2 | 2 | 4 | 50\nimplicit | violence | 4 | 4 | 4 | 0\nimplicit | argument | 10 | 6 | 0 | 0\nimplicit | accusation | 0 | 0 | 24 | 2\nimplicit | othering | 20 | 10 | 0 | 0\nother | quotation | 2 | 2 | 0 | 0\nother | multilingual | 0 | 0 | 4 | 0\nother | unclear | 32 | 16 | 24 | 10\n\nTable 7: Distribution of hate speech instances correctly classified by one model but not by the other, with respect to the type and element of hate speech, in in-domain and cross-domain settings.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 78, |
| "end": 355, |
| "text": "(%) explicit violence 4 22 0 4 insult 26 40 40 34 swearword 2 2 4 50 implicit violence 4 4 4 0 argument 10 6 0 0 accusation 0 0 24 2 othering 20 10 0 0 other quotation 2 2 0 0 multilingual 0 0 4 0 unclear 32 16 24 10 Table 7", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "speech (BERT vs. our). Our error analysis is based on annotating each instance with the type of hate speech (explicit vs. implicit), the element of hate speech (call to violence, insult, swearword, argument, accusation, and othering) and, where relevant, the reason why it went undetected (informal expression, unconventional spelling, foreign language, creative language, metaphorical language, unconventional tokenization). In Table 7, we present the quantitative results of this manual analysis, reporting for each model and dataset the distribution of hate speech instances that it classified correctly while the other model failed, given the type and element of hate speech.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 428, |
| "end": 435, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "At the level of hate speech type, where we discriminate between explicit and implicit hate speech, we observed in both the in-domain and the cross-domain settings that BERT is better at identifying explicit cases of hate speech (overall, 72% of the instances correctly identified by BERT but not by the stylometry-emotion-based approach are explicit, vs. 32% of the instances correctly identified by the stylometry-emotion-based approach but not by BERT on the in-domain dataset; on the cross-domain dataset this relation is 84% vs. 44%), while the stylometry-emotion-based approach deals better with implicit instances of hate speech (34% vs. 20% in-domain; 28% vs. 2% cross-domain).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The results for the hate speech elements reflect the differences between the in-domain and cross-domain datasets. For the explicit cases of hate speech, insults tend to be the dominant element caught by one model but not by the other in both datasets, while the in-domain dataset contains many more calls to violence and swearing prevails in the cross-domain dataset. For the implicit cases of hate speech, arguments and othering strategies were most frequently misclassified in the in-domain dataset, while accusations were the most frequent issue in the cross-domain dataset. These differences can be traced back to differences in the medium: the in-domain dataset mostly contains Facebook discussions, which are more discursive and implicit, while the cross-domain dataset contains Twitter messages, which are much shorter and more direct.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We also performed a closer inspection of insults and swearwords, which are lexical categories and should in principle be simple to identify by means of supervised machine learning, but which were missed because they were highly informal, non-canonically spelled or tokenized, taken from foreign languages, incomplete, or idiosyncratic. Another type of undetected insult consisted of general-vocabulary words from topics such as animals, hygiene, and intelligence that were used metaphorically or with a distinctly negative connotation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "For the category of misclassifications by one model but not by the other where the reason for not detecting the element of hate speech was unclear, we observed that our approach prevails both in the in-domain and the cross-domain settings. Furthermore, these instances were rather long, which led us to ask whether instance length consistently differs depending on which model correctly classified the hate speech instance. An analysis of the median length in characters of the hate speech instances correctly classified by one model but not by the other, for all the languages in both the in-domain and cross-domain settings, revealed a strong tendency for longer instances to be correctly identified by our approach, while BERT performs better on shorter instances. The only deviation from this trend is the cross-domain Dutch dataset, where the instances are overall very short.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "3.3" |
| }, |
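The length analysis above amounts to comparing the median character length of the instances each model uniquely got right. A small sketch with hypothetical instance strings:

```python
from statistics import median

def median_length(instances):
    """Median length, in characters, of a list of text instances."""
    return median(len(text) for text in instances)

# Hypothetical sets: instances only one of the two models classified correctly.
only_ours_correct = [
    "a long, discursive comment that never names its target explicitly " * 2,
    "another lengthy, implicitly hateful post with no overt slurs",
]
only_bert_correct = ["short explicit slur", "direct insult"]

print(median_length(only_ours_correct), median_length(only_bert_correct))
```

A consistently higher median for the first set, as found here, indicates that the stylometry-emotion-based approach handles longer instances better.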
| { |
| "text": "We conclude that the stylometry-emotion-based approach performs better on less explicit and longer instances of hate speech, while it lags behind BERT on capturing the more explicit cases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The goal of this work was to evaluate and quantify the role of stylometric and emotion-based features in the hate speech detection task. We showed that the stylometric and emotional dimensions of hateful content provide useful cues for its detection, as evidenced by the positive impact of stylometric and emotion-based features in various in-domain experiments for all the considered languages. Their contribution remains persistent with respect to domain variation. Under cross-domain conditions, our approach, which combines features that capture the targeted phenomena, performs better than features commonly used for hate speech detection such as words, character n-grams, and their combination. Finally, we showed that in cross-domain settings our approach significantly improves over the recent deep learning models when combined with them through a majority-voting ensemble, which achieves the highest results for the languages addressed in this work. A manual error analysis showed that this improvement is due to the ability of stylometric and emotion-based features to capture implicit and longer instances of hate speech. The consistent and substantial improvement in hate speech detection brought by including stylometric and emotion-based features in the different setups and for the different languages explored indicates that they are robust indicators of hateful content.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The importance of stylometric features points in the direction of the existence of a linguistic register for hate speech messages with specific stylistic properties and a negative emotional load. Focusing on these features in text representation leads to more cross-domain robustness.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "For Slovene, we did not find an annotated dataset belonging to a different hate speech domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://simpletransformers.ai/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://universaldependencies.org/u/pos/ 4 Representing the messages in the following way provided higher 10-fold cross-validation results than combining separate feature vectors (0.8%-1.2% depending on the language).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We also tested the robustness of cross-domain settings by experimenting with other subsets for Dutch, e.g., balanced class distribution, more examples, as well as with other English datasets, achieving similar results with the same trends.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work has been supported by the Slovenian Research Agency and the Flemish Research Foundation through the bilateral research project ARRS N6-0099 and FWO G070619N \"The linguistic landscape of hate speech on social media\", the Slovenian Research Agency research core funding No. P6-0411 \"Language resources and tech-nologies for Slovene language\", and the European Union's Rights, Equality and Citizenship Programme (2014-2020) project IMSyPP (grant no. 875263).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter", |
| "authors": [ |
| { |
| "first": "Valerio", |
| "middle": [], |
| "last": "Basile", |
| "suffix": "" |
| }, |
| { |
| "first": "Cristina", |
| "middle": [], |
| "last": "Bosco", |
| "suffix": "" |
| }, |
| { |
| "first": "Elisabetta", |
| "middle": [], |
| "last": "Fersini", |
| "suffix": "" |
| }, |
| { |
| "first": "Debora", |
| "middle": [], |
| "last": "Nozza", |
| "suffix": "" |
| }, |
| { |
| "first": "Viviana", |
| "middle": [], |
| "last": "Patti", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco Manuel Rangel", |
| "middle": [], |
| "last": "Pardo", |
| "suffix": "" |
| }, |
| { |
| "first": "Paolo", |
| "middle": [], |
| "last": "Rosso", |
| "suffix": "" |
| }, |
| { |
| "first": "Manuela", |
| "middle": [], |
| "last": "Sanguinetti", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "54--63", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S19-2007" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela San- guinetti. 2019. SemEval-2019 task 5: Multilin- gual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 54-63, Minneapolis, Minnesota, USA. ACL.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The Islamic State's diminishing returns on Twitter: How suspensions are limiting the social networks of Englishspeaking ISIS supporters", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "M" |
| ], |
| "last": "Berger", |
| "suffix": "" |
| }, |
| { |
| "first": "Heather", |
| "middle": [], |
| "last": "Perez", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "JM Berger and Heather Perez. 2006. The Islamic State's diminishing returns on Twitter: How suspen- sions are limiting the social networks of English- speaking ISIS supporters. Technical report, George Washington University.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Subversive toxicity detection using sentiment information", |
| "authors": [ |
| { |
| "first": "Eloi", |
| "middle": [], |
| "last": "Brassard", |
| "suffix": "" |
| }, |
| { |
| "first": "-", |
| "middle": [], |
| "last": "Gourdeau", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Khoury", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Third Workshop on Abusive Language Online", |
| "volume": "", |
| "issue": "", |
| "pages": "1--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eloi Brassard-Gourdeau and Richard Khoury. 2019. Subversive toxicity detection using sentiment infor- mation. In Proceedings of the Third Workshop on Abusive Language Online, pages 1-10, Florence, Italy. ACL.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Nikola Ljube\u0161i\u0107, Ilia Markov, and Damjan Popi\u010d. 2020. The LiLaH Emotion Lexicon of Croatian, Dutch and Slovene", |
| "authors": [ |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| }, |
| { |
| "first": "Darja", |
| "middle": [], |
| "last": "Fi\u0161er", |
| "suffix": "" |
| }, |
| { |
| "first": "Jasmin", |
| "middle": [], |
| "last": "Franza", |
| "suffix": "" |
| }, |
| { |
| "first": "Denis", |
| "middle": [], |
| "last": "Kranj\u010di\u0107", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "Slovenian language resource repository CLARIN.SI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Walter Daelemans, Darja Fi\u0161er, Jasmin Franza, De- nis Kranj\u010di\u0107, Jens Lemmens, Nikola Ljube\u0161i\u0107, Ilia Markov, and Damjan Popi\u010d. 2020. The LiLaH Emo- tion Lexicon of Croatian, Dutch and Slovene. Slove- nian language resource repository CLARIN.SI.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Sentiment informed cyberbullying detection in social media", |
| "authors": [ |
| { |
| "first": "Harsh", |
| "middle": [], |
| "last": "Dani", |
| "suffix": "" |
| }, |
| { |
| "first": "Jundong", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Huan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases", |
| "volume": "", |
| "issue": "", |
| "pages": "52--67", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/978-3-319-71249-9_4" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Harsh Dani, Jundong Li, and Huan Liu. 2017. Sen- timent informed cyberbullying detection in social media. In Proceedings of the European Confer- ence on Machine Learning and Knowledge Discov- ery in Databases, pages 52-67, Skopje, Macedonia. Springer.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Automated hate speech detection and the problem of offensive language", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Davidson", |
| "suffix": "" |
| }, |
| { |
| "first": "Dana", |
| "middle": [], |
| "last": "Warmsley", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Macy", |
| "suffix": "" |
| }, |
| { |
| "first": "Ingmar", |
| "middle": [], |
| "last": "Weber", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. CoRR, abs/1703.04009.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies)", |
| "volume": "", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N19-1423" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies), pages 4171-4186, Minneapolis, MN, USA. ACL.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Current limitations in cyberbullying detection: On evaluation criteria, reproducibility, and data scarcity. Language Resources and Evaluation", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Emmery", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Verhoeven", |
| "suffix": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [ |
| "De" |
| ], |
| "last": "Pauw", |
| "suffix": "" |
| }, |
| { |
| "first": "Gilles", |
| "middle": [], |
| "last": "Jacobs", |
| "suffix": "" |
| }, |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Van Hee", |
| "suffix": "" |
| }, |
| { |
| "first": "Els", |
| "middle": [], |
| "last": "Lefever", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Desmet", |
| "suffix": "" |
| }, |
| { |
| "first": "V\u00e9ronique", |
| "middle": [], |
| "last": "Hoste", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/s10579-020-09509-1" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Emmery, Ben Verhoeven, Guy De Pauw, Gilles Jacobs, Cynthia Van Hee, Els Lefever, Bart Desmet, V\u00e9ronique Hoste, and Walter Daelemans. 2020. Cur- rent limitations in cyberbullying detection: On evalu- ation criteria, reproducibility, and data scarcity. Lan- guage Resources and Evaluation.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Statistical methods for rates and proportions", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Fleiss", |
| "suffix": "" |
| } |
| ], |
| "year": 1981, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph Fleiss. 1981. Statistical methods for rates and proportions, 2nd edition. New York: John Wiley, Heidelberg, Germany.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Time of your hate: The challenge of time in hate speech detection on social media", |
| "authors": [ |
| { |
| "first": "Komal", |
| "middle": [], |
| "last": "Florio", |
| "suffix": "" |
| }, |
| { |
| "first": "Valerio", |
| "middle": [], |
| "last": "Basile", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Polignano", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierpaolo", |
| "middle": [], |
| "last": "Basile", |
| "suffix": "" |
| }, |
| { |
| "first": "Viviana", |
| "middle": [], |
| "last": "Patti", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Applied Sciences", |
| "volume": "10", |
| "issue": "12", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.3390/app10124180" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Komal Florio, Valerio Basile, Marco Polignano, Pier- paolo Basile, and Viviana Patti. 2020. Time of your hate: The challenge of time in hate speech detection on social media. Applied Sciences, 10(12):4180.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A survey on automatic detection of hate speech in text", |
| "authors": [ |
| { |
| "first": "Paula", |
| "middle": [], |
| "last": "Fortuna", |
| "suffix": "" |
| }, |
| { |
| "first": "S\u00e9rgio", |
| "middle": [], |
| "last": "Nunes", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ACM Computing Surveys", |
| "volume": "51", |
| "issue": "4", |
| "pages": "1--30", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3232676" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paula Fortuna and S\u00e9rgio Nunes. 2018. A survey on au- tomatic detection of hate speech in text. ACM Com- puting Surveys, 51(4):1-30.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Quantitative Authorship Attribution: An Evaluation of Techniques", |
| "authors": [ |
| { |
| "first": "Jack", |
| "middle": [], |
| "last": "Grieve", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Digital Scholarship in the Humanities", |
| "volume": "22", |
| "issue": "3", |
| "pages": "251--270", |
| "other_ids": { |
| "DOI": [ |
| "10.1093/llc/fqm020" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jack Grieve. 2007. Quantitative Authorship Attribu- tion: An Evaluation of Techniques. Digital Schol- arship in the Humanities, 22(3):251-270.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural Computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/neco.1997.9.8.1735" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "\u00c9tude comparative de la distribution florale dans une portion des Alpes et des Jura", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Jaccard", |
| "suffix": "" |
| } |
| ], |
| "year": 1901, |
| "venue": "Bulletin de la Soci\u00e9t\u00e9 vaudoise des sciences naturelles", |
| "volume": "37", |
| "issue": "", |
| "pages": "547--579", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Jaccard. 1901.\u00c9tude comparative de la distri- bution florale dans une portion des Alpes et des Jura. Bulletin de la Soci\u00e9t\u00e9 vaudoise des sciences naturelles, 37:547-579.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Function words in authorship attribution. From black magic to theory?", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Kestemont", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 3rd Workshop on Computational Linguistics for Literature", |
| "volume": "", |
| "issue": "", |
| "pages": "59--66", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Kestemont. 2014. Function words in authorship attribution. From black magic to theory? In Pro- ceedings of the 3rd Workshop on Computational Lin- guistics for Literature, pages 59-66, Gothenburg, Sweden. ACL.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Convolutional neural networks for sentence classification", |
| "authors": [ |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1746--1751", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1746-1751, Doha, Qatar. ACL.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "The relationship between bullying, victimization, trait emotional intelligence, self-efficacy and empathy among preadolescents", |
| "authors": [ |
| { |
| "first": "Constantinos", |
| "middle": [], |
| "last": "Kokkinos", |
| "suffix": "" |
| }, |
| { |
| "first": "Eirini", |
| "middle": [], |
| "last": "Kipritsi", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Social Psychology of Education", |
| "volume": "15", |
| "issue": "1", |
| "pages": "41--58", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/s11218-011-9168-9" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Constantinos Kokkinos and Eirini Kipritsi. 2012. The relationship between bullying, victimization, trait emotional intelligence, self-efficacy and empathy among preadolescents. Social Psychology of Edu- cation, 15(1):41-58.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "The FRENK datasets of socially unacceptable discourse in Slovene and English", |
| "authors": [ |
| { |
| "first": "Nikola", |
| "middle": [], |
| "last": "Ljube\u0161i\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Darja", |
| "middle": [], |
| "last": "Fi\u0161er", |
| "suffix": "" |
| }, |
| { |
| "first": "Toma\u017e", |
| "middle": [], |
| "last": "Erjavec", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 22nd International Conference on Text, Speech, and Dialogue", |
| "volume": "", |
| "issue": "", |
| "pages": "103--114", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/978-3-030-27947-9_9" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nikola Ljube\u0161i\u0107, Darja Fi\u0161er, and Toma\u017e Erjavec. 2019. The FRENK datasets of socially unacceptable dis- course in Slovene and English. In Proceedings of the 22nd International Conference on Text, Speech, and Dialogue, pages 103-114, Ljubljana, Slovenia. Springer.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "The LiLaH emotion lexicon of Croatian, Dutch and Slovene", |
| "authors": [ |
| { |
| "first": "Nikola", |
| "middle": [], |
| "last": "Ljube\u0161i\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilia", |
| "middle": [], |
| "last": "Markov", |
| "suffix": "" |
| }, |
| { |
| "first": "Darja", |
| "middle": [], |
| "last": "Fi\u0161er", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media", |
| "volume": "", |
| "issue": "", |
| "pages": "153--157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nikola Ljube\u0161i\u0107, Ilia Markov, Darja Fi\u0161er, and Walter Daelemans. 2020. The LiLaH emotion lexicon of Croatian, Dutch and Slovene. In Proceedings of the Third Workshop on Computational Modeling of Peo- ple's Opinions, Personality, and Emotions in Social Media, pages 153-157, Barcelona, Spain (Online). ACL.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in Indo-European languages", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Mandl", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandip", |
| "middle": [], |
| "last": "Modha", |
| "suffix": "" |
| }, |
| { |
| "first": "Prasenjit", |
| "middle": [], |
| "last": "Majumder", |
| "suffix": "" |
| }, |
| { |
| "first": "Daksh", |
| "middle": [], |
| "last": "Patel", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohana", |
| "middle": [], |
| "last": "Dave", |
| "suffix": "" |
| }, |
| { |
| "first": "Chintak", |
| "middle": [], |
| "last": "Mandlia", |
| "suffix": "" |
| }, |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Patel", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 11th Forum for Information Retrieval Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "14--17", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3368567.3368584" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Mandl, Sandip Modha, Prasenjit Majumder, Daksh Patel, Mohana Dave, Chintak Mandlia, and Aditya Patel. 2019. Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in Indo-European languages. In Pro- ceedings of the 11th Forum for Information Re- trieval Evaluation, page 14-17, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Improving cross-topic authorship attribution: The role of pre-processing", |
| "authors": [ |
| { |
| "first": "Ilia", |
| "middle": [], |
| "last": "Markov", |
| "suffix": "" |
| }, |
| { |
| "first": "Efstathios", |
| "middle": [], |
| "last": "Stamatatos", |
| "suffix": "" |
| }, |
| { |
| "first": "Grigori", |
| "middle": [], |
| "last": "Sidorov", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 18th International Conference on Computational Linguistics and Intelligent Text Processing", |
| "volume": "10762", |
| "issue": "", |
| "pages": "289--302", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/978-3-319-77116-8_21" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilia Markov, Efstathios Stamatatos, and Grigori Sidorov. 2018. Improving cross-topic authorship at- tribution: The role of pre-processing. In Proceed- ings of the 18th International Conference on Compu- tational Linguistics and Intelligent Text Processing, volume 10762, pages 289-302, Budapest, Hungary. Springer.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Note on the sampling error of the difference between correlated proportions or percentages", |
| "authors": [ |
| { |
| "first": "Quinn", |
| "middle": [], |
| "last": "Mcnemar", |
| "suffix": "" |
| } |
| ], |
| "year": 1947, |
| "venue": "Psychometrika", |
| "volume": "12", |
| "issue": "2", |
| "pages": "153--157", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/BF02295996" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Crowdsourcing a word-emotion association lexicon", |
| "authors": [ |
| { |
| "first": "Saif", |
| "middle": [], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Computational Intelligence", |
| "volume": "29", |
| "issue": "", |
| "pages": "436--465", |
| "other_ids": { |
| "DOI": [ |
| "10.1111/j.1467-8640.2012.00460.x" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Saif Mohammad and Peter Turney. 2013. Crowdsourc- ing a word-emotion association lexicon. Computa- tional Intelligence, 29:436-465.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Abusive language detection in online user content", |
| "authors": [ |
| { |
| "first": "Chikashi", |
| "middle": [], |
| "last": "Nobata", |
| "suffix": "" |
| }, |
| { |
| "first": "Joel", |
| "middle": [], |
| "last": "Tetreault", |
| "suffix": "" |
| }, |
| { |
| "first": "Achint", |
| "middle": [], |
| "last": "Thomas", |
| "suffix": "" |
| }, |
| { |
| "first": "Yashar", |
| "middle": [], |
| "last": "Mehdad", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 25th International Conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "145--153", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/2872427.2883062" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive lan- guage detection in online user content. In Proceed- ings of the 25th International Conference on World Wide Web, pages 145-153, Switzerland. IW3C2.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Hate speech. Encyclopedia of the American Constitution", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Nockleby", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1277--1279", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Nockleby. 2000. Hate speech. Encyclopedia of the American Constitution, pages 1277-1279.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Pytorch: An imperative style, high-performance deep learning library", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Paszke", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Gross", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Massa", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Lerer", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Bradbury", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Chanan", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Killeen", |
| "suffix": "" |
| }, |
| { |
| "first": "Zeming", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Natalia", |
| "middle": [], |
| "last": "Gimelshein", |
| "suffix": "" |
| }, |
| { |
| "first": "Luca", |
| "middle": [], |
| "last": "Antiga", |
| "suffix": "" |
| }, |
| { |
| "first": "Alban", |
| "middle": [], |
| "last": "Desmaison", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Kopf", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zachary", |
| "middle": [], |
| "last": "Devito", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Raison", |
| "suffix": "" |
| }, |
| { |
| "first": "Alykhan", |
| "middle": [], |
| "last": "Tejani", |
| "suffix": "" |
| }, |
| { |
| "first": "Sasank", |
| "middle": [], |
| "last": "Chilamkurthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Benoit", |
| "middle": [], |
| "last": "Steiner", |
| "suffix": "" |
| }, |
| { |
| "first": "Lu", |
| "middle": [], |
| "last": "Fang", |
| "suffix": "" |
| }, |
| { |
| "first": "Junjie", |
| "middle": [], |
| "last": "Bai", |
| "suffix": "" |
| }, |
| { |
| "first": "Soumith", |
| "middle": [], |
| "last": "Chintala", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "32", |
| "issue": "", |
| "pages": "8026--8037", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learn- ing library. In Advances in Neural Information Pro- cessing Systems 32, pages 8026-8037. Curran Asso- ciates, Inc.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Scikit-learn: Machine learning in Python", |
| "authors": [ |
| { |
| "first": "Fabian", |
| "middle": [], |
| "last": "Pedregosa", |
| "suffix": "" |
| }, |
| { |
| "first": "Ga\u00ebl", |
| "middle": [], |
| "last": "Varoquaux", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Gramfort", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Michel", |
| "suffix": "" |
| }, |
| { |
| "first": "Bertrand", |
| "middle": [], |
| "last": "Thirion", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Grisel", |
| "suffix": "" |
| }, |
| { |
| "first": "Mathieu", |
| "middle": [], |
| "last": "Blondel", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Prettenhofer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Dubourg", |
| "suffix": "" |
| }, |
| { |
| "first": "Jake", |
| "middle": [], |
| "last": "Vanderplas", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Passos", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Cournapeau", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthieu", |
| "middle": [], |
| "last": "Brucher", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthieu", |
| "middle": [], |
| "last": "Perrot", |
| "suffix": "" |
| }, |
| { |
| "first": "Duchesnay", |
| "middle": [], |
| "last": "And\u00e9douard", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2825--2830", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexan- dre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and\u00c9douard Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Resources and benchmark corpora for hate speech detection: a systematic review", |
| "authors": [ |
| { |
| "first": "Fabio", |
| "middle": [], |
| "last": "Poletto", |
| "suffix": "" |
| }, |
| { |
| "first": "Valerio", |
| "middle": [], |
| "last": "Basile", |
| "suffix": "" |
| }, |
| { |
| "first": "Manuela", |
| "middle": [], |
| "last": "Sanguinetti", |
| "suffix": "" |
| }, |
| { |
| "first": "Cristina", |
| "middle": [], |
| "last": "Bosco", |
| "suffix": "" |
| }, |
| { |
| "first": "Viviana", |
| "middle": [], |
| "last": "Patti", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/s10579-020-09502-8" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fabio Poletto, Valerio Basile, Manuela Sanguinetti, Cristina Bosco, and Viviana Patti. 2020. Resources and benchmark corpora for hate speech detection: a systematic review. Language Resources and Evalu- ation.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Universal dependency parsing from scratch", |
| "authors": [ |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Qi", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Dozat", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuhao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
| "volume": "", |
| "issue": "", |
| "pages": "160--170", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peng Qi, Timothy Dozat, Yuhao Zhang, and Christo- pher Manning. 2018. Universal dependency parsing from scratch. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 160-170, Brussels, Belgium. ACL.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Hate speech detection on Twitter: Feature engineering v.s. feature selection", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Robinson", |
| "suffix": "" |
| }, |
| { |
| "first": "Ziqi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Tepper", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Satellite Events of the 15th Extended Semantic Web Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "46--49", |
| "other_ids": { |
| "DOI": [ |
| "https://link.springer.com/chapter/10.1007/978-3-319-98192-5_9" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Robinson, Ziqi Zhang, and Jonathan Tepper. 2018. Hate speech detection on Twitter: Feature en- gineering v.s. feature selection. In Proceedings of the Satellite Events of the 15th Extended Semantic Web Conference, pages 46-49, Heraklion, Greece. Springer.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Measuring the reliability of hate speech annotations: The case of the European refugee crisis", |
| "authors": [ |
| { |
| "first": "Bj\u00f6rn", |
| "middle": [], |
| "last": "Ross", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Rist", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillermo", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Cabrera", |
| "suffix": "" |
| }, |
| { |
| "first": "Nils", |
| "middle": [], |
| "last": "Kurowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Wojatzki", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 3rd Workshop on Natural Language Processing for Computer-Mediated Communication", |
| "volume": "", |
| "issue": "", |
| "pages": "6--9", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bj\u00f6rn Ross, Michael Rist, Guillermo Carbonell, Ben Cabrera, Nils Kurowsky, and Michael Wojatzki. 2016. Measuring the reliability of hate speech an- notations: The case of the European refugee cri- sis. In Proceedings of the 3rd Workshop on Natural Language Processing for Computer-Mediated Com- munication, pages 6-9, Bochum, Germany. Ruhr- Universit\u00e4t Bochum.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "The risk of racial bias in hate speech detection", |
| "authors": [ |
| { |
| "first": "Maarten", |
| "middle": [], |
| "last": "Sap", |
| "suffix": "" |
| }, |
| { |
| "first": "Dallas", |
| "middle": [], |
| "last": "Card", |
| "suffix": "" |
| }, |
| { |
| "first": "Saadia", |
| "middle": [], |
| "last": "Gabriel", |
| "suffix": "" |
| }, |
| { |
| "first": "Yejin", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1668--1678", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P19-1163" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1668-1678, Florence, Italy. ACL.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Language inference from function words", |
| "authors": [ |
| { |
| "first": "Tony", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Witten", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tony Smith and Ian Witten. 1993. Language inference from function words. Technical Report 93/3, Depart- ment of Computer Science, University of Waikato. Computer Science Working Papers.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Feature-rich part-ofspeech tagging with a cyclic dependency network", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "252--259", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In Proceedings of the 2003 Human Language Tech- nology Conference of the North American Chapter of the Association for Computational Linguistics, pages 252-259, Edmonton, Canada. ACL.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Automatic detection of cyberbullying in social media text", |
| "authors": [ |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Van Hee", |
| "suffix": "" |
| }, |
| { |
| "first": "Gilles", |
| "middle": [], |
| "last": "Jacobs", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Emmery", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Desmet", |
| "suffix": "" |
| }, |
| { |
| "first": "Els", |
| "middle": [], |
| "last": "Lefever", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Verhoeven", |
| "suffix": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "De Pauw", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| }, |
| { |
| "first": "V\u00e9ronique", |
| "middle": [], |
| "last": "Hoste", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "PLoS ONE", |
| "volume": "", |
| "issue": "10", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1371/journal.pone.0203794" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cynthia Van Hee, Gilles Jacobs, Chris Emmery, Bart Desmet, Els Lefever, Ben Verhoeven, Guy De Pauw, Walter Daelemans, and V\u00e9ronique Hoste1. 2018. Automatic detection of cyberbullying in social me- dia text. PLoS ONE, 13(10).", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Detection and fine-grained classification of cyberbullying events", |
| "authors": [ |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Van Hee", |
| "suffix": "" |
| }, |
| { |
| "first": "Els", |
| "middle": [], |
| "last": "Lefever", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Verhoeven", |
| "suffix": "" |
| }, |
| { |
| "first": "Julie", |
| "middle": [], |
| "last": "Mennes", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Desmet", |
| "suffix": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "De Pauw", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| }, |
| { |
| "first": "Veronique", |
| "middle": [], |
| "last": "Hoste", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "672--680", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cynthia Van Hee, Els Lefever, Ben Verhoeven, Julie Mennes, Bart Desmet, Guy De Pauw, Walter Daele- mans, and Veronique Hoste. 2015. Detection and fine-grained classification of cyberbullying events. In Proceedings of the International Conference Re- cent Advances in Natural Language Processing, pages 672-680, Hissar, Bulgaria. INCOMA.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Challenges and frontiers in abusive content detection", |
| "authors": [ |
| { |
| "first": "Bertie", |
| "middle": [], |
| "last": "Vidgen", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Harris", |
| "suffix": "" |
| }, |
| { |
| "first": "Dong", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebekah", |
| "middle": [], |
| "last": "Tromble", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Hale", |
| "suffix": "" |
| }, |
| { |
| "first": "Helen", |
| "middle": [], |
| "last": "Margetts", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Third Workshop on Abusive Language Online", |
| "volume": "", |
| "issue": "", |
| "pages": "80--93", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W19-3509" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detec- tion. In Proceedings of the Third Workshop on Abu- sive Language Online, pages 80-93, Florence, Italy. ACL.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Detecting hate speech on the world wide web", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Warner", |
| "suffix": "" |
| }, |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hirschberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Second Workshop on Language in Social Media", |
| "volume": "", |
| "issue": "", |
| "pages": "19--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Warner and Julia Hirschberg. 2012. Detecting hate speech on the world wide web. In Proceedings of the Second Workshop on Language in Social Me- dia, pages 19-26, Montr\u00e9al, Canada. ACL.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Are you a racist or am I seeing things? Annotator influence on hate speech detection on Twitter", |
| "authors": [ |
| { |
| "first": "Zeerak", |
| "middle": [], |
| "last": "Waseem", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the First Workshop on NLP and Computational Social Science", |
| "volume": "", |
| "issue": "", |
| "pages": "138--142", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W16-5618" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeerak Waseem. 2016. Are you a racist or am I seeing things? Annotator influence on hate speech detec- tion on Twitter. In Proceedings of the First Work- shop on NLP and Computational Social Science, pages 138-142, Austin, TX, USA. ACL.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Understanding abuse: A typology of abusive language detection subtasks", |
| "authors": [ |
| { |
| "first": "Zeerak", |
| "middle": [], |
| "last": "Waseem", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Davidson", |
| "suffix": "" |
| }, |
| { |
| "first": "Dana", |
| "middle": [], |
| "last": "Warmsley", |
| "suffix": "" |
| }, |
| { |
| "first": "Ingmar", |
| "middle": [], |
| "last": "Weber", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the First Workshop on Abusive Language Online", |
| "volume": "", |
| "issue": "", |
| "pages": "78--84", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W17-3012" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online, pages 78-84, Vancouver, Canada. ACL.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Zeses Pitenis, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2020. Semeval-2020 task 12: Multilingual offensive language identification in social media", |
| "authors": [ |
| { |
| "first": "Marcos", |
| "middle": [], |
| "last": "Zampieri", |
| "suffix": "" |
| }, |
| { |
| "first": "Preslav", |
| "middle": [], |
| "last": "Nakov", |
| "suffix": "" |
| }, |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Rosenthal", |
| "suffix": "" |
| }, |
| { |
| "first": "Pepa", |
| "middle": [], |
| "last": "Atanasova", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgi", |
| "middle": [], |
| "last": "Karadzhov", |
| "suffix": "" |
| }, |
| { |
| "first": "Hamdy", |
| "middle": [], |
| "last": "Mubarak", |
| "suffix": "" |
| }, |
| { |
| "first": "Leon", |
| "middle": [], |
| "last": "Derczynski", |
| "suffix": "" |
| }, |
| { |
| "first": "Zeses", |
| "middle": [], |
| "last": "Pitenis", |
| "suffix": "" |
| }, |
| { |
| "first": "\u00c7a\u011fr\u0131", |
| "middle": [], |
| "last": "\u00c7\u00f6ltekin", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2020. Semeval-2020 task 12: Multilingual offensive language identification in social media (offenseval 2020). CoRR, abs/2006.07235.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "text": "Statistics of the datasets used for in-domain experiments.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF2": { |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>. 1</td></tr></table>" |
| }, |
| "TABREF3": { |
| "text": "Statistics of the datasets used for the crossdomain experiments as test sets.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF5": { |
| "text": "Example of features used.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF7": { |
| "text": "Performance of the features explored in isolation and in combination on the balanced subsets of the datasets under 10-fold cross-validation. The results for bag-of-words (BoW), character 1-3-grams (char), and their combination with each other and with the stylometric and emotion-based features (emo) as well as the number of features for each experiment (# feats) are also provided.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td/><td/><td>English</td><td/><td/><td>Slovene</td><td/><td/><td>Dutch</td><td/></tr><tr><td>Model</td><td colspan=\"9\">Precision Recall F1-score Precision Recall F1-score Precision Recall F1-score</td></tr><tr><td>Random baseline</td><td>50.7</td><td>50.7</td><td>50.7</td><td>50.9</td><td>50.9</td><td>50.9</td><td>48.3</td><td>48.3</td><td>48.3</td></tr><tr><td>(1) BoW</td><td>71.0</td><td>70.8</td><td>70.9</td><td>68.5</td><td>68.5</td><td>68.5</td><td>72.0</td><td>70.9</td><td>71.1</td></tr><tr><td>(2) Char 1-3-grams</td><td>69.0</td><td>69.2</td><td>69.1</td><td>72.1</td><td>72.1</td><td>72.1</td><td>74.5</td><td>73.4</td><td>73.7</td></tr><tr><td>(3) BoW & char</td><td>70.6</td><td>70.6</td><td>70.6</td><td>72.4</td><td>72.4</td><td>72.4</td><td>75.0</td><td>74.4</td><td>74.6</td></tr><tr><td>(4) CNN</td><td>73.4</td><td>73.6</td><td>73.5</td><td>67.7</td><td>67.7</td><td>67.7</td><td>72.6</td><td>72.9</td><td>72.5</td></tr><tr><td>(5) LSTM</td><td>71.0</td><td>69.9</td><td>70.4</td><td>68.5</td><td>67.3</td><td>67.1</td><td>70.5</td><td>70.5</td><td>70.5</td></tr><tr><td>(6) BERT</td><td>74.9</td><td>74.6</td><td>74.8</td><td>73.0</td><td>72.9</td><td>72.9</td><td>74.3</td><td>74.1</td><td>74.2</td></tr><tr><td>(7) POS</td><td>57.3</td><td>57.0</td><td>57.1</td><td>63.2</td><td>63.1</td><td>62.8</td><td>63.9</td><td>62.9</td><td>62.9</td></tr><tr><td>(8) POS & FW</td><td>64.3</td><td>63.6</td><td>63.8</td><td>63.5</td><td>63.4</td><td>63.1</td><td>70.2</td><td>67.7</td><td>67.8</td></tr><tr><td>(9) POS & FW & emo</td><td>70.9</td><td>69.9</td><td>70.3</td><td>68.0</td><td>68.0</td><td>67.8</td><td>73.1</td><td>70.6</td><td>70.8</td></tr><tr><td>(10) POS & FW & emo & BoW & char</td><td>74.4</td><td>73.7</td><td>74.0</td><td>74.3</td><td>74.3</td><td>74.3</td><td>75.1</td><td>74.5</td><td>74.7</td></tr></table>" |
| }, |
| "TABREF8": { |
| "text": "In-domain results (training-test splits by post boundaries).", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF10": { |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>: Cross-domain results (testing on out-of-domain datasets). The F1 drop column reports the drop in F1</td></tr><tr><td>points for each model when compared to the in-domain experiments. Statistically significant gains of the ensem-</td></tr><tr><td>ble model over the best deep learning models (BERT or CNN) according to McNemar's statistical significance</td></tr><tr><td>test (McNemar, 1947) with \u03b1 < 0.05 are marked with '*'.</td></tr></table>" |
| } |
| } |
| } |
| } |