{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:18.880440Z"
},
"title": "Scmhl5 at TRAC-2 Shared Task on Aggression Identification: Bert Based Ensemble Learning Approach",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"settlement": "Cardiff",
"country": "United Kingdom"
}
},
"email": ""
},
{
"first": "Pete",
"middle": [],
"last": "Burnap",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"settlement": "Cardiff",
"country": "United Kingdom"
}
},
"email": "burnapp@cardiff.ac.uk"
},
{
"first": "Wafa",
"middle": [],
"last": "Alorainy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"settlement": "Cardiff",
"country": "United Kingdom"
}
},
"email": "alorainyws@cardiff.ac.uk"
},
{
"first": "Matthew",
"middle": [
"L"
],
"last": "Williams",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"settlement": "Cardiff",
"country": "United Kingdom"
}
},
"email": "williamsm7@cardiff.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a system developed during our participation (team name: scmhl5) in the TRAC-2 Shared Task on aggression identification. In particular, we participated in English Sub-task A on three-class classification ('Overtly Aggressive', 'Covertly Aggressive' and 'Non-aggressive') and English Sub-task B on binary classification for Misogynistic Aggression ('gendered' or 'non-gendered'). For both sub-tasks, our method involves using the pre-trained Bert model for extracting the text of each instance into a 768-dimensional vector of embeddings, and then training an ensemble of classifiers on the embedding features. Our method obtained accuracy of 0.703 and weighted F-measure of 0.664 for Sub-task A, whereas for Sub-task B the accuracy was 0.869 and weighted F-measure was 0.851. In terms of the rankings, the weighted F-measure obtained using our method for Sub-task A is ranked in the 10th out of 16 teams, whereas for Sub-task B the weighted F-measure is ranked in the 8th out of 15 teams.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a system developed during our participation (team name: scmhl5) in the TRAC-2 Shared Task on aggression identification. In particular, we participated in English Sub-task A on three-class classification ('Overtly Aggressive', 'Covertly Aggressive' and 'Non-aggressive') and English Sub-task B on binary classification for Misogynistic Aggression ('gendered' or 'non-gendered'). For both sub-tasks, our method involves using the pre-trained Bert model for extracting the text of each instance into a 768-dimensional vector of embeddings, and then training an ensemble of classifiers on the embedding features. Our method obtained accuracy of 0.703 and weighted F-measure of 0.664 for Sub-task A, whereas for Sub-task B the accuracy was 0.869 and weighted F-measure was 0.851. In terms of the rankings, the weighted F-measure obtained using our method for Sub-task A is ranked in the 10th out of 16 teams, whereas for Sub-task B the weighted F-measure is ranked in the 8th out of 15 teams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the era of social networks, we have witnessed an increase in people misusing the platforms for propagating messages that are offensive and/or aggressive. Therefore, it has been a priority research topic for people to develop tools for automatic detection of offensive language (Burnap and Williams, 2015; Burnap and Williams, 2016) . Due to the rapid growth of data relating to online social interactions, machine learning approaches have been increasingly popular for natural language processing in social media analysis, such as word embedding through neural network based learning approaches. In this paper, we describe a system based on Bert embedding and ensemble learning, for participating in a shared task on aggression identification (Kumar et al., 2020) in the Second Workshop on Trolling, Aggression and Cyberbullying. In particular, we entered two sub-tasks (A and B) of the above-mentioned shared task, where one is about a three-class classification task for identifying that a text message is 'Overtly Aggressive' (OAG), 'Covertly Aggressive' (CAG) or 'Nonaggressive' (NAG), whereas the other one is about a binary classification task for identifying that a message is 'gendered' (GEN) or 'non-gendered' (NEGN). We obtained accuracy of 0.703 and weighted F-measure of 0.664 for Subtask A, whereas for Sub-task B the accuracy and weighted F-measure were 0.869 and 0.851, respectively. Moreover, the weighted F-measure obtained using our method for Sub-task A is ranked in the 10th out of 16 teams, where the weighted F-measure ranked in the first place is 0.803. For Sub-task B, the weighted F-measure obtained using our method is ranked in the 8th out of 15 teams, where the weighted F-measure ranked in the first place 0.872. The rest of this paper is organized as follows: Section 2 provides a review of recently published works on identification of aggressive languages. 
In Section 3, we describe the shared task dataset in detail and present the method that we adopted for developing our system for aggression identification. In Section 4, we report the results obtained on both the validation data and the test data. In Section 5, the conclusion of this paper is drawn and some further directions are suggested towards advancing the effectiveness of aggression identification.",
"cite_spans": [
{
"start": 280,
"end": 307,
"text": "(Burnap and Williams, 2015;",
"ref_id": "BIBREF6"
},
{
"start": 308,
"end": 334,
"text": "Burnap and Williams, 2016)",
"ref_id": "BIBREF7"
},
{
"start": 746,
"end": 766,
"text": "(Kumar et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Since the spread of online offensive and/or aggressive language could lead to disruptive anti-social outcomes, it has become critical in many countries to consider the posting of such language as a legal issue (Banks, 2010) and to take actions against the propagation of aggression, cyberbullying and hate speech (Banks, 2011) . In the context of machine learning based identification of offensive and/or aggressive language, traditional approaches of feature extraction from text include Bag-of-Words (BOW) (Kwok and Wang, 2013; Liu et al., 2019a) , N-grams (NG) in word level (Perez and Luque, 2019; Liu and Forss, 2014; Watanabe et al., 2018) , NG in character level (Gamb\u00e4ck and Sikdar, 2017; Perez and Luque, 2019) , typed dependencies (Burnap and Williams, 2016) , part-ofspeech tags (Davidson et al., 2017) , dictionary based approaches (Tulkens et al., 2016) and othering lexicons (Burnap and Williams, 2016; Alorainy et al., 2019) . Some traditional learning approaches used for training classifiers include Support Vector Machine (SVM) (Burnap and Williams, 2016; Indurthi et al., 2019; Perez and Luque, 2019; Orasan, 2018) , Naive Bayes (NB) (Kwok and Wang, 2013; Liu et al., 2019a) , Decision Trees (DT) (Watanabe et al., 2018; Liu et al., 2019a) , Logistic Regression (LR) (Xiang et al., 2012; Waseem and Hovy, 2016) , decision tree ensembles such as Random Forest (RF) (Burnap and Williams, 2015; Orasan, 2018) and Gradient Boosted Trees (Badjatiya et al., 2017) , ensembles based on SVM (Malmasi and Zampieri, 2018) and fuzzy approaches (Liu et al., 2019a; Liu et al., 2019b) . Moreover, some challenges in terms of discriminating hate speech from profanity have been highlighted in (Malmasi and Zampieri, 2018) for justifying the necessity of extracting deeper features instead of superficial ones (e.g., BOW and NG). From this perspective, embedding learning approaches have recently become the state of the art for automatic extraction of semantic features, e.g. 
Word2Vec (Nobata et al., 2016) , Glove (Zhang et al., 2018; Badjatiya et al., 2017; Kshirsagar et al., 2018; Orasan, 2018) , Fast-Text (Pratiwi et al., 2018; Herwanto et al., 2019; Galery et al., 2018) . There are also some end-to-end learning approaches of Deep Neural Networks (DNN) (Nina-Alcocer, 2019; Yuan et al., 2016; Ribeiro and Silva, 2019) , e.g. Convolutional Neural Networks (CNN) (Gamb\u00e4ck and Sikdar, 2017; Park and Fung, 2017; Roy et al., 2018; Huang et al., 2018) , Long-Short Term Memory (LSTM) (Badjatiya et al., 2017; Pitsilis et al., 2018; Nikhil et al., 2018; Kumar et al., 2018) and Gated Recurrent Unit (GRU) (Zhang et al., 2018; Galery et al., 2018) or combination of different DNN architectures in an ensemble setting (Madisetty and Desarkar, 2018), which are adopted for enhancement of feature representation and classification, based on word embeddings produced by Word2Vec, Glove or Fast-Text. However, embedding approaches such as Word2Vec can not achieve contextualized representation of words, i.e. the same word used in different contexts is represented in the same numeric vector using the above-mentioned approaches, which could affect the classification performance due to the lack of contextual information from the features. In order to achieve effectively contextualized representation of features, some more advanced embedding approaches including ELMo (Bojkovsky and Pikuliak, 2019) and Bert (Mozafari et al., 2019; Nikolov and Radivchev, 2019) have recently been developed showing the state of the art performance for offensive and/or aggressive language identification and other similar tasks of natural language processing. There are also applications of Bert in the setting of ensemble learning, e.g. an ensemble of Bert models has been applied to an offensive language identification shared task (Risch et al., 2019) .",
"cite_spans": [
{
"start": 210,
"end": 223,
"text": "(Banks, 2010)",
"ref_id": "BIBREF2"
},
{
"start": 313,
"end": 326,
"text": "(Banks, 2011)",
"ref_id": "BIBREF3"
},
{
"start": 508,
"end": 529,
"text": "(Kwok and Wang, 2013;",
"ref_id": "BIBREF21"
},
{
"start": 530,
"end": 548,
"text": "Liu et al., 2019a)",
"ref_id": "BIBREF23"
},
{
"start": 578,
"end": 601,
"text": "(Perez and Luque, 2019;",
"ref_id": "BIBREF34"
},
{
"start": 602,
"end": 622,
"text": "Liu and Forss, 2014;",
"ref_id": "BIBREF22"
},
{
"start": 623,
"end": 645,
"text": "Watanabe et al., 2018)",
"ref_id": "BIBREF44"
},
{
"start": 670,
"end": 696,
"text": "(Gamb\u00e4ck and Sikdar, 2017;",
"ref_id": "BIBREF11"
},
{
"start": 697,
"end": 719,
"text": "Perez and Luque, 2019)",
"ref_id": "BIBREF34"
},
{
"start": 741,
"end": 768,
"text": "(Burnap and Williams, 2016)",
"ref_id": "BIBREF7"
},
{
"start": 790,
"end": 813,
"text": "(Davidson et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 844,
"end": 866,
"text": "(Tulkens et al., 2016)",
"ref_id": "BIBREF42"
},
{
"start": 889,
"end": 916,
"text": "(Burnap and Williams, 2016;",
"ref_id": "BIBREF7"
},
{
"start": 917,
"end": 939,
"text": "Alorainy et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1046,
"end": 1073,
"text": "(Burnap and Williams, 2016;",
"ref_id": "BIBREF7"
},
{
"start": 1074,
"end": 1096,
"text": "Indurthi et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 1097,
"end": 1119,
"text": "Perez and Luque, 2019;",
"ref_id": "BIBREF34"
},
{
"start": 1120,
"end": 1133,
"text": "Orasan, 2018)",
"ref_id": "BIBREF32"
},
{
"start": 1153,
"end": 1174,
"text": "(Kwok and Wang, 2013;",
"ref_id": "BIBREF21"
},
{
"start": 1175,
"end": 1193,
"text": "Liu et al., 2019a)",
"ref_id": "BIBREF23"
},
{
"start": 1216,
"end": 1239,
"text": "(Watanabe et al., 2018;",
"ref_id": "BIBREF44"
},
{
"start": 1240,
"end": 1258,
"text": "Liu et al., 2019a)",
"ref_id": "BIBREF23"
},
{
"start": 1286,
"end": 1306,
"text": "(Xiang et al., 2012;",
"ref_id": "BIBREF45"
},
{
"start": 1307,
"end": 1329,
"text": "Waseem and Hovy, 2016)",
"ref_id": "BIBREF43"
},
{
"start": 1383,
"end": 1410,
"text": "(Burnap and Williams, 2015;",
"ref_id": "BIBREF6"
},
{
"start": 1411,
"end": 1424,
"text": "Orasan, 2018)",
"ref_id": "BIBREF32"
},
{
"start": 1452,
"end": 1476,
"text": "(Badjatiya et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 1502,
"end": 1530,
"text": "(Malmasi and Zampieri, 2018)",
"ref_id": "BIBREF26"
},
{
"start": 1552,
"end": 1571,
"text": "(Liu et al., 2019a;",
"ref_id": "BIBREF23"
},
{
"start": 1572,
"end": 1590,
"text": "Liu et al., 2019b)",
"ref_id": "BIBREF24"
},
{
"start": 1698,
"end": 1726,
"text": "(Malmasi and Zampieri, 2018)",
"ref_id": "BIBREF26"
},
{
"start": 1990,
"end": 2011,
"text": "(Nobata et al., 2016)",
"ref_id": "BIBREF31"
},
{
"start": 2020,
"end": 2040,
"text": "(Zhang et al., 2018;",
"ref_id": "BIBREF47"
},
{
"start": 2041,
"end": 2064,
"text": "Badjatiya et al., 2017;",
"ref_id": "BIBREF1"
},
{
"start": 2065,
"end": 2089,
"text": "Kshirsagar et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 2090,
"end": 2103,
"text": "Orasan, 2018)",
"ref_id": "BIBREF32"
},
{
"start": 2116,
"end": 2138,
"text": "(Pratiwi et al., 2018;",
"ref_id": "BIBREF37"
},
{
"start": 2139,
"end": 2161,
"text": "Herwanto et al., 2019;",
"ref_id": "BIBREF13"
},
{
"start": 2162,
"end": 2182,
"text": "Galery et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 2287,
"end": 2305,
"text": "Yuan et al., 2016;",
"ref_id": "BIBREF46"
},
{
"start": 2306,
"end": 2330,
"text": "Ribeiro and Silva, 2019)",
"ref_id": "BIBREF38"
},
{
"start": 2338,
"end": 2373,
"text": "Convolutional Neural Networks (CNN)",
"ref_id": null
},
{
"start": 2374,
"end": 2400,
"text": "(Gamb\u00e4ck and Sikdar, 2017;",
"ref_id": "BIBREF11"
},
{
"start": 2401,
"end": 2421,
"text": "Park and Fung, 2017;",
"ref_id": "BIBREF33"
},
{
"start": 2422,
"end": 2439,
"text": "Roy et al., 2018;",
"ref_id": "BIBREF41"
},
{
"start": 2440,
"end": 2459,
"text": "Huang et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 2492,
"end": 2516,
"text": "(Badjatiya et al., 2017;",
"ref_id": "BIBREF1"
},
{
"start": 2517,
"end": 2539,
"text": "Pitsilis et al., 2018;",
"ref_id": "BIBREF35"
},
{
"start": 2540,
"end": 2560,
"text": "Nikhil et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 2561,
"end": 2580,
"text": "Kumar et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 2612,
"end": 2632,
"text": "(Zhang et al., 2018;",
"ref_id": "BIBREF47"
},
{
"start": 2633,
"end": 2653,
"text": "Galery et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 3372,
"end": 3402,
"text": "(Bojkovsky and Pikuliak, 2019)",
"ref_id": "BIBREF5"
},
{
"start": 3412,
"end": 3435,
"text": "(Mozafari et al., 2019;",
"ref_id": "BIBREF27"
},
{
"start": 3436,
"end": 3464,
"text": "Nikolov and Radivchev, 2019)",
"ref_id": "BIBREF29"
},
{
"start": 3821,
"end": 3841,
"text": "(Risch et al., 2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "In this section, we will provide details of the data set provided for the shared task and present the procedure of our method in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Data",
"sec_num": "3."
},
{
"text": "The dataset (Bhattacharya et al., 2020 ) provided for the shared task contains 6529 text instances in total, which involves a training set of 4263 instances, a validation set of 1066 instances and a test set of 1200 instances. The characteristics of the data set are shown in Table 1 . For Sub-task A, the frequency distribution among the three classes 'NAG', 'CAG' and 'OAG' in the training set is 3375:453:435, whereas the distributions in the validation and test sets are 836:117:113 and 690:224:286, respectively. The above details indicate that the training set has a class frequency distribution very similar to the one in the validation set but the validation set and the test set show considerably different distributions, which may lead to the case that the performance obtained on the validation set is different from the one obtained on the test set. For Sub-task B, the frequency distribution between the two classes 'NGEN' and 'GEN' is 3954:309, whereas the distributions in the validation and test sets are 993:73 and 1025:175, respectively. Similar to the characteristic found for Sub-task A, the above details for Sub-task B indicate again a considerable difference on the class frequency distribution between the validation set and the test set, while the training set and the validation set show very similar distributions. The above characteristic may also result in the case that the performance obtained on the validation set is different from the one obtained on the test set.",
"cite_spans": [
{
"start": 12,
"end": 38,
"text": "(Bhattacharya et al., 2020",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 276,
"end": 283,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1."
},
{
"text": "The method used for Sub-task A on aggression identification involves two main steps, namely, extraction of embedding features and ensemble learning for classification. Before the two main steps, the text for each instance is preprocessed by removing hashtags, mentions and URLs, converting all words to their lower cases and transforming all emojis to their text descriptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.2."
},
{
"text": "In the feature extraction step, each text instance is transformed into a 768-dimensional feature vector by using the pre-trained Bert embedding model (Devlin et al., 2018) . In particular, we used the base uncased model of Bert, which consists of 12 layers alongside 768 units per layer. In this setting, each token (word) is transformed into a 768dimensional vector, so an instance that involves m tokens would be represented in the form of a m \u00d7 768 matrix (m vectors). On this basis, the 768-dimensional feature vector of each instance is obtained by averaging the abovementioned m word vectors. In the classification step, the classifier is trained in the setting of ensemble learning. In particular, the creation of an ensemble through our designed approach involves four levels, namely, feature sub-sampling, class imbalance handling, multi-class handling and training of base classifiers. The whole framework of ensemble setting is illustrated in In the top level for feature sub-sampling, the aim is to encourage the creation of diversity among base classifiers, which is achieved by adopting the random subspace (RS) method (Ho, 1998) to draw n subsets of the original feature set, such that n different classifiers are trained on the n feature subsets. In the second level for class imbalance handling, a costsensitive learning method is adopted to enable the classifier trained on each feature subset (drawn in the top level) to be cost-sensitive, no matter which one of the supervised learning algorithms is adopted for training classifiers.",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 1133,
"end": 1143,
"text": "(Ho, 1998)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.2."
},
{
"text": "In the third level for multi-class handling, the aim is to transform the 3-class classification problem for suiting a 2-class learning algorithm, i.e. some algorithms cannot directly perform multi-class learning, so a specific strategy of multi-class handling needs to be involved to enable that 2-class learning algorithms can work. Some popular strategies include 'one-against-all', 'one-against-one', 'random error correction code' and 'exhaustive error correction code'. In the fourth level for training of base classifiers, a supervised learning algorithm needs to be adopted, where the Stochastic Gradient Descent algorithm is chosen in our setting for training n linear classifiers on the n feature subsets produced by the RS method. The final classification is made by fusing the outputs of n linear classifiers through majority voting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.2."
},
{
"text": "The method used for Sub-task B on identification of misogynistic aggression is almost the same as the one adopted for Sub-task A, but the only difference is that the third level for multi-class handling is dropped, due to the fact that Sub-task B involves a binary classification problem. Therefore, the method used for Sub-task B involves three levels, namely, feature sub-sampling, class imbalance handling and training of base classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.2."
},
{
"text": "In this section, we describe the experimental setup and discuss the results obtained in the development and testing stages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
{
"text": "In the development stage, we conducted experiments by using the pre-trained Bert embedding model and various learning algorithms, namely, Support vector machine (SVM), Naive Bayes (NB), Stochastic Gradient Descent (SGD) and a fuzzy rule learning approach (Fuzzy) (Huehn and Huellermeier, 2009) , due to their relatively low computational complexity and the suitability of this kind of traditional learning algorithms for processing small data (Liu et al., 2019a) . In particular, the results shown in Tables 2 and 3 were obtained by using the validation set for evaluating the performance of classifiers produced by various algorithms and determining which algorithm is used to train the base classifiers in the setting of random subspace based ensemble learning. Before feature extraction, all the instances were preprocessed by removing hashtags, mentions and URLs and converting all words to their lower cases. Also, all the emojis were transformed into their text descriptions by using the emoji-java library 1 .",
"cite_spans": [
{
"start": 263,
"end": 293,
"text": "(Huehn and Huellermeier, 2009)",
"ref_id": "BIBREF16"
},
{
"start": 443,
"end": 462,
"text": "(Liu et al., 2019a)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 501,
"end": 516,
"text": "Tables 2 and 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Development Stage",
"sec_num": "4.1."
},
{
"text": "In the feature extraction stage, each text instance was transformed into a 768-dimensional feature vector using the pretrained base uncased model of Bert, which is based on the Java library of easy-bert 2 . The above decision is based on the considerations that a base Bert model requires less memory than a large Bert model and all words in the text for each instance have been converted to lower cases in the pre-processing stage leading to the unnecessity of using a cased Bert model. In the classification stage, we used the implementations of various algorithms from the Weka library (Hall et al., 2009) . In terms of hyper-parameter settings, SVM was set to normalize the training data and train a non-linear classifier using the polynomial kernel and the sequential minimal optimization algorithm (SMO) (Platt, 1998) , where the complexity parameter C is set to 1.0 and the batch size is set to 100. The fuzzy rule learning approach was set to involve 2 runs of rule optimization and using 1/3 of the training data for rule pruning, where the product T-norm was used to compute the degree to which an instance is covered by a fuzzy rule and the rule stretching method (Huehn and Huellermeier, 2009 ) is adopted to classify any instances that are not covered by any fuzzy rules. SGD was set to train a linear classifier using the Hinge loss with the learning rate (lr) of 0.01 through 500 epochs, where the batch size was set to 100 and the regularization constant is set to 0.0001. Moreover, all of the algorithms (SVM, NB, Fuzzy and SGD) were adopted for training classifiers in a cost sensitive setting, i.e. the trained classifiers are made costsensitive by assigning higher cost to the case of misclassifying instances of the minority class. In addition, due to the case that SGD is essentially a two-class learning algorithm, the three-class classification problem was transformed to suit classifiers trained by SGD through using the 'random error correction code' method. 
For Sub-task A, the results obtained on the validation set are shown in Table 2 , which indicates that SGD and SVM perform considerably better than NB and the fuzzy approach. Although SVM and SGD show almost the same performance in terms of weighted F-measure, SGD outperforms SVM for the minority class 'OAG'. Moreover, SGD is capable of training updateable classifiers in the setting of incremental learning, i.e., previously trained classifiers can be updated by learning incrementally from instances newly added into the training set. This is an essential advantage of SGD in comparison with SVM (based on SMO) that cannot effectively achieve incremental learning. Therefore, we chose to adopt the SGD algorithm for training and optimizing base classifiers in the setting of ensemble learning, in order to achieve a more effective way of advancing the per-formance further using a new/updated data set without the need to retrain each base classifier. The ensemble is created following the procedure shown in Fig. 1 . In particular, the RS method is adopted to draw 10 feature subsets, where the size of each subspace is set to 0.5, so there are totally 10 base classifiers trained on the 10 feature subsets. The hyper-parameter settings of SGD are exactly the same as the ones described above about training a single classifier. The results shown in Table 2 indicate that the creation of an ensemble in the above settings leads to a marginal improvement of the performance in comparison with the production of a single classifier by SGD. For Sub-task B, we followed the same procedure for text pre-processing, feature extraction and classification. For training of the classifiers, we adopted the same set of algorithms (with the same settings of hyper-parameters) for evaluating performance on the validation set. The results shown in Table 3 indicate again the phenomenon that SGD and SVM perform considerably better than NB and the fuzzy approach. 
Although SGD performs marginally worse than SVM in terms of weighted F-measure, SGD outperforms SVM for the minority class 'GEN'. As mentioned earlier in this section, SGD is capable of updating previously trained classifiers by learning incrementally from instances newly added into the training set, so we chose to adopt the SGD algorithm again for training and optimizing base classifiers in the setting of ensemble learning. Following the same ensemble settings adopted for Sub-task A, an ensemble of SGD classifiers is built with a costsensitive setting for Sub-task B, but the step for multi-class handling is dropped, given that Sub-task B is a binary classification task. The results shown in Table 3 indicate that the creation of an ensemble leads to an improvement of the performance on weighted F-measure and the score for the minority class, in comparison with the production of a single classifier by using any one of the standard learning algorithms.",
"cite_spans": [
{
"start": 589,
"end": 608,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF12"
},
{
"start": 810,
"end": 823,
"text": "(Platt, 1998)",
"ref_id": "BIBREF36"
},
{
"start": 1175,
"end": 1204,
"text": "(Huehn and Huellermeier, 2009",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 2057,
"end": 2064,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 2998,
"end": 3004,
"text": "Fig. 1",
"ref_id": "FIGREF1"
},
{
"start": 3340,
"end": 3347,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 3826,
"end": 3833,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 4642,
"end": 4649,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Development Stage",
"sec_num": "4.1."
},
{
"text": "Based on the results shown in Tables 2 and 3 for the two sub-tasks, we merged the training and validation sets for augmenting the sample size for creating an ensemble of classifiers in the above-described setting (based on Bert, RS and SGD). The results obtained on the test set for the two sub-tasks are shown in Table 4 . It can be seen from Table 4 that the performance obtained on the test set gets considerably lower (by about 7%) in comparison with the one obtained on the validation set for both Sub-tasks A and B, which is likely due to the difference on the data distribution between the two sets of instances, i.e. the weight of the majority class gets lower on the test set, in comparison with the weight on the validation set, for both Sub-tasks. For Sub-task A, comparing the results shown in Table 2 and Table 4 , we can see that the weighted F1-score gets lower on the test set, which seems to be due mainly to the case that the F1-score for the majority class 'NAG' gets lower. Moreover, the F1-scores for the other two classes 'CAG' and 'OAG' get much higher on the test set. Given that the class frequency distribution among the three classes 'NAG', 'CAG' and 'OAG' is 836:117:113 on the validation set and is 690:224:286 on the test set, it seems that the performance difference is likely to result from the difference on the data distribution. For Sub-task B, comparing the results shown in Table 3 and Table 4 , we can see again that the weighted F1-score gets lower on the test set, which seems to be due mainly to the case that the F1-score for the majority class 'NGEN' gets lower. Moreover, for the minority class 'GEN', the F1-score obtained on the test set is almost the same as the score obtained on the validation set. 
Given that the frequency distribution between the two classes 'NGEN' and 'GEN' is 993:73 on the validation set and is 1025:175 on the test set, it seems that the change in the data distribution does not really impact on the performance for the minority class 'GEN\" but shows a considerable impact on the performance for the majority class 'NGEN'. fusion matrixes, which indicate that the cases of incorrect classifications mainly result from false negatives for the minority class, i.e. some instances of aggressive language were not successfully detected due to the insufficient ability to generalize thoroughly on test instances. Based on the results shown in Table 4 and Figs. 2 and 3 , we tried to reduce the learning rate (lr) from 0.01 to 0.005 towards achieving better optimization of the parameters of the SGD classifiers, i.e. reducing the learning rate can generally help better avoid the case of local optimization. The results obtained by using the lower value of 'lr' are shown in Tables 5 and 6 , which indicate that the performance gets slightly lower after reducing the learning rate for both subtasks A and B. The results suggest that the reduction of the learning rate may increase the chance of overfitting on a small data set and thus lower the generalization performance on test data. ",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 44,
"text": "Tables 2 and 3",
"ref_id": "TABREF1"
},
{
"start": 314,
"end": 321,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 344,
"end": 351,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 806,
"end": 813,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 818,
"end": 825,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 1411,
"end": 1418,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1423,
"end": 1430,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 2410,
"end": 2417,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 2422,
"end": 2435,
"text": "Figs. 2 and 3",
"ref_id": null
},
{
"start": 2742,
"end": 2756,
"text": "Tables 5 and 6",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Testing Stage",
"sec_num": "4.2."
},
{
"text": "We participated in the shared task on aggression identification in the 2nd Workshop on Trolling, Aggression and Cyberbullying. In particular, we entered the two English sub-tasks (A and B) for identifying the intensity of aggression (i.e. 'Overtly Aggressive', 'Covertly Aggressive' or 'Non-aggressive') and detecting misogynistic aggression (i.e. 'gendered' or 'non-gendered'). We built two systems for the above-mentioned sub-tasks, both in the setting of ensemble learning based on the embedding features extracted using the pre-trained Bert model. We obtained a weighted F1-score of 0.664 for Sub-task A and 0.851 for Sub-task B. In the future, we will explore the effectiveness of extracting multiple types of embedding features using various embedding models (e.g. Bert and ELMo), towards more advanced settings of ensemble learning through both early fusion (at the feature level) and late fusion (at the classification level). It is also worth exploring the use of a larger volume of external data for updating the SGD classifiers in an incremental learning setting, to advance the generalization performance further. In addition, we will run a further experiment that selects a subset of the test set with the same class frequency distribution as the validation set, in order to investigate whether the performance obtained on this subset becomes more similar to the one obtained on the validation set once the class frequency distribution is consistent between the two data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "https://github.com/vdurmont/emoji-java",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/robrua/easy-bert",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The enemy among us: Detecting cyber hate speech with threats-based othering language embeddings",
"authors": [
{
"first": "W",
"middle": [],
"last": "Alorainy",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM Transactions on the Web",
"volume": "13",
"issue": "3",
"pages": "1--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alorainy, W., Burnap, P., Liu, H., and Williams, M. (2019). The enemy among us: Detecting cyber hate speech with threats-based othering language embeddings. ACM Transactions on the Web, 13(3):1-26.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Deep learning for hate speech detection in tweets",
"authors": [
{
"first": "P",
"middle": [],
"last": "Badjatiya",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web Companion",
"volume": "",
"issue": "",
"pages": "3--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Badjatiya, P., Gupta, S., Gupta, M., and Varma, V. (2017). Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 759-760, Perth, Australia, 3-7 April.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Regulating hate speech online",
"authors": [
{
"first": "J",
"middle": [],
"last": "Banks",
"suffix": ""
}
],
"year": 2010,
"venue": "International Review of Law, Computers and Technology",
"volume": "24",
"issue": "3",
"pages": "233--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Banks, J. (2010). Regulating hate speech online. Inter- national Review of Law, Computers and Technology, 24(3):233-239.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "European regulation of cross-border hate speech in cyberspace: The limits of legislation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Banks",
"suffix": ""
}
],
"year": 2011,
"venue": "European Journal of Crime, Criminal Law and Criminal Justice",
"volume": "19",
"issue": "1",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Banks, J. (2011). European regulation of cross-border hate speech in cyberspace: The limits of legislation. Euro- pean Journal of Crime, Criminal Law and Criminal Jus- tice, 19(1):1-13.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Developing a multilingual annotated corpus of misogyny and aggression",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bhagat",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Dawer",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lahiri",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhattacharya, S., Singh, S., Kumar, R., Bansal, A., Bhagat, A., Dawer, Y., Lahiri, B., and Ojha, A. K. (2020). Devel- oping a multilingual annotated corpus of misogyny and aggression.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "STUFIIT at SemEval-2019 Task 5: Multilingual hate speech detection on twitter with MUSE and ELMo embeddings",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bojkovsky",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pikuliak",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "6--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bojkovsky, M. and Pikuliak, M. (2019). STUFIIT at SemEval-2019 Task 5: Multilingual hate speech detec- tion on twitter with MUSE and ELMo embeddings. In Proceedings of the 13th International Workshop on Se- mantic Evaluation, pages 464-468, Minneapolis, Min- nesota, USA, 6-7 June.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Cyber hate speech on twitter: An application of machine classification and statistical modeling for policy and decision making",
"authors": [
{
"first": "P",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2015,
"venue": "Policy & Internet",
"volume": "7",
"issue": "2",
"pages": "223--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burnap, P. and Williams, M. L. (2015). Cyber hate speech on twitter: An application of machine classification and statistical modeling for policy and decision making. Pol- icy & Internet, 7(2):223-242.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Us and them: identifying cyber hate on twitter across multiple protected characteristics",
"authors": [
{
"first": "P",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2016,
"venue": "EPJ Data Science",
"volume": "5",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burnap, P. and Williams, M. (2016). Us and them: iden- tifying cyber hate on twitter across multiple protected characteristics. EPJ Data Science, 5(11).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automated Hate Speech Detection and the Problem of Offensive Language",
"authors": [
{
"first": "T",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davidson, T., Warmsley, D., Macy, M., and Weber, I. (2017). Automated Hate Speech Detection and the Prob- lem of Offensive Language. In Proceedings of ICWSM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional trans- formers for language understanding. In Marilyn Walker, et al., editors, Proceedings of Annual Conference of the North American Chapter of the Association for Compu- tational Linguistics, New Orleans, Louisiana. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Aggression identification and multi lingual word embeddings",
"authors": [
{
"first": "T",
"middle": [],
"last": "Galery",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Charitos",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tian",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Galery, T., Charitos, E., and Tian, Y. (2018). Aggression identification and multi lingual word embeddings. In Ritesh Kumar, et al., editors, Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Using Convolutional Neural Networks to Classify Hate-speech",
"authors": [
{
"first": "B",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "U",
"middle": [
"K"
],
"last": "Sikdar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "85--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gamb\u00e4ck, B. and Sikdar, U. K. (2017). Using Convolu- tional Neural Networks to Classify Hate-speech. In Pro- ceedings of the First Workshop on Abusive Language On- line, pages 85-90.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The WEKA data mining software: an update. SIGKDD Explorations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "11",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reute- mann, P., and Witten, I. H. (2009). The WEKA data mining software: an update. SIGKDD Explorations, 11(1):10-18.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Hate speech and abusive language classification using fastText",
"authors": [
{
"first": "G",
"middle": [
"B"
],
"last": "Herwanto",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Ningtyas",
"suffix": ""
},
{
"first": "K",
"middle": [
"E"
],
"last": "Nugraha",
"suffix": ""
},
{
"first": "I",
"middle": [
"N P"
],
"last": "Trisna",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI)",
"volume": "",
"issue": "",
"pages": "5--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herwanto, G. B., Ningtyas, A. M., Nugraha, K. E., and Trisna, I. N. P. (2019). Hate speech and abusive lan- guage classification using fastText. In 2019 Interna- tional Seminar on Research of Information Technology and Intelligent Systems (ISRITI), Yogyakarta, Indonesia, 5-6 December.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The random subspace method for constructing decision forests",
"authors": [
{
"first": "T",
"middle": [
"K"
],
"last": "Ho",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "20",
"issue": "8",
"pages": "832--844",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ho, T. K. (1998). The random subspace method for con- structing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8):832-844.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Cyberbullying intervention based on convolutional neural networks",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "D",
"middle": [
"V"
],
"last": "Bruwaene",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Q., Inkpen, D., Zhang, J., and Bruwaene, D. V. (2018). Cyberbullying intervention based on convolu- tional neural networks. In Ritesh Kumar, et al., editors, Proceedings of the First Workshop on Trolling, Aggres- sion and Cyberbullying (TRAC-2018), Santa Fe, New Mexico, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "FURIA: An algorithm for unordered fuzzy rule induction",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "Huehn",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Huellermeier",
"suffix": ""
}
],
"year": 2009,
"venue": "Data Mining and Knowledge Discovery",
"volume": "19",
"issue": "",
"pages": "293--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huehn, J. C. and Huellermeier, E. (2009). FURIA: An al- gorithm for unordered fuzzy rule induction. Data Min- ing and Knowledge Discovery, 19:293-319.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Fermi at SemEval-2019 Task 5: Using sentence embeddings to identify hate speech against immigrants and women on twitter",
"authors": [
{
"first": "V",
"middle": [],
"last": "Indurthi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Chakravartula",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "6--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Indurthi, V., Syed, B., Shrivastava, M., Chakravartula, N., Gupta, M., and Varma, V. (2019). Fermi at SemEval- 2019 Task 5: Using sentence embeddings to identify hate speech against immigrants and women on twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 70-74, Minneapolis, Min- nesota, USA, 6-7 June.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Predictive embeddings for hate speech detection on twitter",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kshirsagar",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cukuvac",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mcgregor",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Second Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "26--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kshirsagar, R., Cukuvac, T., McKeown, K., and McGre- gor, S. (2018). Predictive embeddings for hate speech detection on twitter. In Proceedings of the Second Work- shop on Abusive Language Online (ALW2), pages 26-32, Brussels, Belgium, 31 October.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "TRAC-1 shared task on aggression identification: IIT(ISM)@COLING'18",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Bhanodai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Pamula",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Chennuru",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Bhanodai, G., Pamula, R., and Chennuru, M. R. (2018). TRAC-1 shared task on aggression identifica- tion: IIT(ISM)@COLING'18. In Ritesh Kumar, et al., editors, Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), Santa Fe, New Mexico, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Evaluating aggression identification in social media",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2020). Evaluating aggression identification in social media. In Ritesh Kumar, et al., editors, Proceedings of the Second Workshop on Trolling, Aggression and Cy- berbullying (TRAC-2020), Paris, France, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Locate the hate: Detecting Tweets Against Blacks",
"authors": [
{
"first": "I",
"middle": [],
"last": "Kwok",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "Twenty-Seventh AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwok, I. and Wang, Y. (2013). Locate the hate: Detecting Tweets Against Blacks. In Twenty-Seventh AAAI Con- ference on Artificial Intelligence.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Combining N-gram based similarity analysis with sentiment analysis in web content classification",
"authors": [
{
"first": "S",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Forss",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management",
"volume": "",
"issue": "",
"pages": "21--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, S. and Forss, T. (2014). Combining N-gram based similarity analysis with sentiment analysis in web con- tent classification. In Proceedings of the International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, pages 530- 537, Rome, Italy, 21-24 October.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A fuzzy approach to text classification with two stage training for ambiguous instances",
"authors": [
{
"first": "H",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Alorainy",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Transactions on Computational Social Systems",
"volume": "6",
"issue": "2",
"pages": "227--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, H., Burnap, P., Alorainy, W., and Williams, M. L. (2019a). A fuzzy approach to text classification with two stage training for ambiguous instances. IEEE Transac- tions on Computational Social Systems, 6(2):227-240.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Fuzzy multi-task learning for hate speech type identification",
"authors": [
{
"first": "H",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Alorainy",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2019,
"venue": "WWW '19 The World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "13--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, H., Burnap, P., Alorainy, W., and Williams, M. L. (2019b). Fuzzy multi-task learning for hate speech type identification. In WWW '19 The World Wide Web Confer- ence, pages 3006-3012, San Francisco, CA, USA, 13-17 May.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Aggression detection in social media using deep neural networks",
"authors": [
{
"first": "S",
"middle": [],
"last": "Madisetty",
"suffix": ""
},
{
"first": "M",
"middle": [
"S"
],
"last": "Desarkar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Madisetty, S. and Desarkar, M. S. (2018). Aggression de- tection in social media using deep neural networks. In Ritesh Kumar, et al., editors, Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Challenges in Discriminating Profanity from Hate Speech",
"authors": [
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Experimental & Theoretical Artificial Intelligence",
"volume": "30",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malmasi, S. and Zampieri, M. (2018). Challenges in Dis- criminating Profanity from Hate Speech. Journal of Ex- perimental & Theoretical Artificial Intelligence, 30:1- 16.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A BERT-based transfer learning approach for hate speech detection in online social media",
"authors": [
{
"first": "M",
"middle": [],
"last": "Mozafari",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Farahbakhsh",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Crespi",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Complex Networks and Their Applications",
"volume": "",
"issue": "",
"pages": "10--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mozafari, M., Farahbakhsh, R., and Crespi, N. (2019). A BERT-based transfer learning approach for hate speech detection in online social media. In International Con- ference on Complex Networks and Their Applications, pages 928-940, Lisbon, Portugal, 10-12 December.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "LSTMs with attention for aggression detection",
"authors": [
{
"first": "N",
"middle": [],
"last": "Nikhil",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Pahwa",
"suffix": ""
},
{
"first": "M",
"middle": [
"K"
],
"last": "Nirala",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Khilnani",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil, N., Pahwa, R., Nirala, M. K., and Khilnani, R. (2018). LSTMs with attention for aggression detection. In Ritesh Kumar, et al., editors, Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Nikolov-Radivchev at SemEval-2019 Task 6: Offensive tweet classification with BERT and ensembles",
"authors": [
{
"first": "A",
"middle": [],
"last": "Nikolov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Radivchev",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "6--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolov, A. and Radivchev, V. (2019). Nikolov-Radivchev at SemEval-2019 Task 6: Offensive tweet classification with BERT and ensembles. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 691-695, Minneapolis, Minnesota, USA, 6-7 June.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "HATERecognizer at SemEval-2019 Task 5: Using features and neural networks to face hate recognition",
"authors": [
{
"first": "V",
"middle": [],
"last": "Nina-Alcocer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "6--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nina-Alcocer, V. (2019). HATERecognizer at SemEval- 2019 Task 5: Using features and neural networks to face hate recognition. In Proceedings of the 13th Interna- tional Workshop on Semantic Evaluation, pages 409- 415, Minneapolis, Minnesota, USA, 6-7 June.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Abusive Language Detection in Online User Content",
"authors": [
{
"first": "C",
"middle": [],
"last": "Nobata",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobata, C., Tetreault, J., Thomas, A., Mehdad, Y., and Chang, Y. (2016). Abusive Language Detection in On- line User Content. In Proceedings of the 25th Inter- national Conference on World Wide Web, pages 145- 153. International World Wide Web Conferences Steer- ing Committee.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Aggressive language identification using word embeddings and sentiment features",
"authors": [
{
"first": "C",
"middle": [],
"last": "Orasan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orasan, C. (2018). Aggressive language identification us- ing word embeddings and sentiment features. In Ritesh Kumar, et al., editors, Proceedings of the First Work- shop on Trolling, Aggression and Cyberbullying (TRAC- 2018), Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "One-step and two-step classification for abusive language detection on twitter",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Park",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2017,
"venue": "1st Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "41--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Park, J. H. and Fung, P. (2017). One-step and two-step classification for abusive language detection on twitter. In 1st Workshop on Abusive Language Online, pages 41- 45, Vancouver, Canada, 4 August.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Atalaya at SemEval 2019 Task 5: Robust embeddings for tweet classification",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Perez",
"suffix": ""
},
{
"first": "F",
"middle": [
"M"
],
"last": "Luque",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "6--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Perez, J. M. and Luque, F. M. (2019). Atalaya at SemEval 2019 Task 5: Robust embeddings for tweet classification. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 64-69, Minneapolis, Min- nesota, USA, 6-7 June.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Effective hate-speech detection in twitter data using recurrent neural networks",
"authors": [
{
"first": "G",
"middle": [
"K"
],
"last": "Pitsilis",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ramampiaro",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Langseth",
"suffix": ""
}
],
"year": 2018,
"venue": "Applied Intelligence",
"volume": "48",
"issue": "12",
"pages": "4730--4742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pitsilis, G. K., Ramampiaro, H., and Langseth, H. (2018). Effective hate-speech detection in twitter data using recurrent neural networks. Applied Intelligence, 48(12):4730-4742.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Fast training of support vector machines using sequential minimal optimization",
"authors": [
{
"first": "J",
"middle": [],
"last": "Platt",
"suffix": ""
}
],
"year": 1998,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Platt, J. (1998). Fast training of support vector machines using sequential minimal optimization. In Bernhard Scholkopf, et al., editors, Advances in Kernel Methods -Support Vector Learning, Cambridge, MA, USA. MIT Press.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Hate speech detection on indonesian instagram comments using FastText approach",
"authors": [
{
"first": "N",
"middle": [
"I"
],
"last": "Pratiwi",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Budi",
"suffix": ""
},
{
"first": "Alfina",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Advanced Computer Science and Information Systems (ICACSIS)",
"volume": "",
"issue": "",
"pages": "27--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pratiwi, N. I., Budi, I., and Alfina, I. (2018). Hate speech detection on indonesian instagram comments us- ing FastText approach. In International Conference on Advanced Computer Science and Information Systems (ICACSIS), Yogyakarta, Indonesia, 27-28 October.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "INF-HatEval at SemEval-2019 Task 5: Convolutional neural networks for hate speech detection against women and immigrants on twitter",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Silva",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "6--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ribeiro, A. and Silva, N. (2019). INF-HatEval at SemEval-2019 Task 5: Convolutional neural networks for hate speech detection against women and immigrants on twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 420-425, Minneapolis, Minnesota, USA, 6-7 June.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "hpiDEDIS at GermEval 2019: Offensive language identification using a German BERT model",
"authors": [
{
"first": "J",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stoll",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ziegele",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Krestel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 15th Conference on Natural Language Processing (KONVENS)",
"volume": "",
"issue": "",
"pages": "403--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Risch, J., Stoll, A., Ziegele, M., and Krestel, R. (2019). hpiDEDIS at GermEval 2019: Offensive language identification using a German BERT model. In Proceedings of the 15th Conference on Natural Language Processing (KONVENS), pages 403-408, Erlangen, Germany, 8-11 October. German Society for Computational Linguistics & Language Technology.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "An ensemble approach for aggression identification in English and Hindi text",
"authors": [
{
"first": "A",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kapil",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Basak",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ekbal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy, A., Kapil, P., Basak, K., and Ekbal, A. (2018). An ensemble approach for aggression identification in English and Hindi text. In Ritesh Kumar, et al., editors, Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A Dictionary-based Approach to Racism Detection in Dutch Social Media",
"authors": [
{
"first": "S",
"middle": [],
"last": "Tulkens",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hilte",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Lodewyckx",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Verhoeven",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop Text Analytics for Cybersecurity and Online Safety",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tulkens, S., Hilte, L., Lodewyckx, E., Verhoeven, B., and Daelemans, W. (2016). A Dictionary-based Approach to Racism Detection in Dutch Social Media. In Proceedings of the Workshop Text Analytics for Cybersecurity and Online Safety (TA-COS), Portoroz, Slovenia.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT 2016",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waseem, Z. and Hovy, D. (2016). Hateful symbols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of NAACL-HLT 2016, pages 88-93, San Diego, California, USA, 12-17 June.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Hate speech on twitter: A pragmatic approach to collect hateful and offensive expressions and perform hate speech detection",
"authors": [
{
"first": "H",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bouazizi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ohtsuki",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Access",
"volume": "",
"issue": "99",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Watanabe, H., Bouazizi, M., and Ohtsuki, T. (2018). Hate speech on twitter: A pragmatic approach to collect hateful and offensive expressions and perform hate speech detection. IEEE Access, PP(99):1-11.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Detecting offensive tweets via topical feature discovery over a large scale twitter corpus",
"authors": [
{
"first": "G",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st ACM international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "1980--1984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang, G., Fan, B., Wang, L., Hong, J., and Rose, C. (2012). Detecting offensive tweets via topical feature discovery over a large scale twitter corpus. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 1980-1984, Maui, Hawaii, USA, 29 October-2 November.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A two phase deep learning model for identifying discrimination from tweets",
"authors": [
{
"first": "S",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "19th International Conference on Extending Database Technology",
"volume": "",
"issue": "",
"pages": "696--697",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan, S., Wu, X., and Xiang, Y. (2016). A two phase deep learning model for identifying discrimination from tweets. In 19th International Conference on Extending Database Technology, pages 696-697, Bordeaux, France, 15-18 March.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Detecting Hate Speech on Twitter Using a Convolution-GRU Based Deep Neural Network",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tepper",
"suffix": ""
}
],
"year": 2018,
"venue": "Lecture Notes in Computer Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Z., Robinson, D., and Tepper, J. (2018). Detecting Hate Speech on Twitter Using a Convolution-GRU Based Deep Neural Network. In Lecture Notes in Computer Science. Springer Verlag.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": ""
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Framework of Ensemble Setting"
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Sub-task EN-A, scmhl5 CodaLab 571565 (An ensemble of SGD classifiers trained on embedding features prepared by Bert and RS); Sub-task EN-B, scmhl5 CodaLab 571564 (An ensemble of SGD classifiers trained on embedding features prepared by Bert and RS). More detailed results obtained on the test set for the two sub-tasks are shown in Figs. 2 and 3 in the form of confusion matrices."
},
"TABREF0": {
"content": "<table><tr><td>Task</td><td>Class</td><td colspan=\"3\">Training Set Validation Set Test set</td></tr><tr><td/><td>NAG</td><td>3375</td><td>836</td><td>690</td></tr><tr><td>Sub-task EN-A</td><td>CAG</td><td>453</td><td>117</td><td>224</td></tr><tr><td/><td>OAG</td><td>435</td><td>113</td><td>286</td></tr><tr><td>Sub-task EN-B</td><td>NGEN GEN</td><td>3954 309</td><td>993 73</td><td>1025 175</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Class Frequency on Training, Validation and Test Sets"
},
"TABREF1": {
"content": "<table><tr><td colspan=\"6\">Method F1(NAG) F1(CAG) F1(OAG) F1(Weighted) Accuracy</td></tr><tr><td>SVM</td><td>0.890</td><td>0.016</td><td>0.337</td><td>0.735</td><td>0.796</td></tr><tr><td>NB</td><td>0.557</td><td>0.261</td><td>0.084</td><td>0.475</td><td>0.414</td></tr><tr><td>Fuzzy</td><td>0.868</td><td>0.126</td><td>0.228</td><td>0.719</td><td>0.757</td></tr><tr><td>SGD</td><td>0.886</td><td>0.017</td><td>0.367</td><td>0.736</td><td>0.796</td></tr><tr><td>RS</td><td>0.891</td><td>0.101</td><td>0.269</td><td>0.738</td><td>0.794</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Results on Validation Data for Sub-task EN-A"
},
"TABREF2": {
"content": "<table><tr><td colspan=\"5\">Method F1(NGEN) F1(GEN) F1(Weighted) Accuracy</td></tr><tr><td>SVM</td><td>0.967</td><td>0.171</td><td>0.912</td><td>0.936</td></tr><tr><td>NB</td><td>0.566</td><td>0.152</td><td>0.538</td><td>0.426</td></tr><tr><td>Fuzzy</td><td>0.96</td><td>0.146</td><td>0.904</td><td>0.923</td></tr><tr><td>SGD</td><td>0.959</td><td>0.265</td><td>0.911</td><td>0.922</td></tr><tr><td>RS</td><td>0.965</td><td>0.417</td><td>0.928</td><td>0.934</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Results on Validation Data for Sub-task EN-B"
},
"TABREF3": {
"content": "<table><tr><td>Task</td><td>Class</td><td colspan=\"3\">F1(Class) F1(Weighted) Accuracy</td></tr><tr><td/><td>NAG</td><td>0.8152</td><td/><td/></tr><tr><td>Sub-task EN-A</td><td>CAG</td><td>0.3106</td><td>0.6637</td><td>0.7025</td></tr><tr><td/><td>OAG</td><td>0.5746</td><td/><td/></tr><tr><td>Sub-task EN-B</td><td>NGEN GEN</td><td>0.9264 0.4120</td><td>0.8514</td><td>0.8692</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Performance on Test Data"
},
"TABREF4": {
"content": "<table><tr><td>System</td><td colspan=\"2\">F1 (weighted) Accuracy</td></tr><tr><td>Bert+RS+SGD(lr=0.01)</td><td>0.6637</td><td>0.7025</td></tr><tr><td colspan=\"2\">Bert+RS+SGD(lr=0.005) 0.6300</td><td>0.6842</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Results for Sub-task EN-A (obtained by deploying an ensemble of SGD classifiers trained on embedding features prepared by Bert and RS)."
},
"TABREF5": {
"content": "<table><tr><td>System</td><td colspan=\"2\">F1 (weighted) Accuracy</td></tr><tr><td>Bert+RS+SGD(lr=0.01)</td><td>0.8514</td><td>0.8692</td></tr><tr><td colspan=\"2\">Bert+RS+SGD(lr=0.005) 0.8428</td><td>0.87</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Results for Sub-task EN-B (obtained by deploying an ensemble of SGD classifiers trained on embedding features prepared by Bert and RS)."
}
}
}
}