{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:24.625114Z"
},
"title": "Aggression Identification in English, Hindi and Bangla Text using BERT, RoBERTa and SVM",
"authors": [
{
"first": "Arup",
"middle": [],
"last": "Baruah",
"suffix": "",
"affiliation": {},
"email": "arup.baruah@gmail.com"
},
{
"first": "Kaushik",
"middle": [],
"last": "Amar",
"suffix": "",
"affiliation": {},
"email": "kaushikamardas@gmail.com"
},
{
"first": "Das",
"middle": [
"\u2666"
],
"last": "Ferdous",
"suffix": "",
"affiliation": {},
"email": "ferdous@iiitg.ac.in"
},
{
"first": "Ahmed",
"middle": [],
"last": "Barbhuiya",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kuntal",
"middle": [],
"last": "Dey",
"suffix": "",
"affiliation": {},
"email": "kuntadey@in.ibm.com"
},
{
"first": "Iiit",
"middle": [],
"last": "Guwahati",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ibm",
"middle": [],
"last": "Research",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "\u2666",
"middle": [],
"last": "Assam",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the results of the classifiers that the team 'abaruah' developed for the shared tasks in aggression identification and misogynistic aggression identification. These two shared tasks were held as part of the second workshop on Trolling, Aggression and Cyberbullying (TRAC). Both the subtasks were held for English, Hindi and Bangla language. In our study, we used English BERT (En-BERT), RoBERTa, DistilRoBERTa, and SVM based classifiers for the English language. For Hindi and Bangla language, multilingual BERT (M-BERT), XLM-RoBERTa and SVM classifiers were used. Our best performing models are EN-BERT for English Subtask A (Weighted F1 score of 0.73, Rank 5/16), SVM for English Subtask B (Weighted F1 score of 0.87, Rank 2/15), SVM for Hindi Subtask A (Weighted F1 score of 0.79, Rank 2/10), XLMRoBERTa for Hindi Subtask B (Weighted F1 score of 0.87, Rank 2/10), SVM for Bangla Subtask A (Weighted F1 score of 0.81, Rank 2/10), and SVM for Bangla Subtask B (Weighted F1 score of 0.93, Rank 4/8). It is seen that the superior performance of the SVM classifier was achieved mainly because of its better prediction of the majority class. BERT based classifiers were found to predict the minority classes better.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the results of the classifiers that the team 'abaruah' developed for the shared tasks in aggression identification and misogynistic aggression identification. These two shared tasks were held as part of the second workshop on Trolling, Aggression and Cyberbullying (TRAC). Both the subtasks were held for English, Hindi and Bangla language. In our study, we used English BERT (En-BERT), RoBERTa, DistilRoBERTa, and SVM based classifiers for the English language. For Hindi and Bangla language, multilingual BERT (M-BERT), XLM-RoBERTa and SVM classifiers were used. Our best performing models are EN-BERT for English Subtask A (Weighted F1 score of 0.73, Rank 5/16), SVM for English Subtask B (Weighted F1 score of 0.87, Rank 2/15), SVM for Hindi Subtask A (Weighted F1 score of 0.79, Rank 2/10), XLMRoBERTa for Hindi Subtask B (Weighted F1 score of 0.87, Rank 2/10), SVM for Bangla Subtask A (Weighted F1 score of 0.81, Rank 2/10), and SVM for Bangla Subtask B (Weighted F1 score of 0.93, Rank 4/8). It is seen that the superior performance of the SVM classifier was achieved mainly because of its better prediction of the majority class. BERT based classifiers were found to predict the minority classes better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Partisan antipathy in politics is on the rise. All over the world, societies are getting more and more politically polarized (Thomas Carothers, 2019) . It is partly fuelled by the echo chamber and filter bubble effect of social media. Anger is fast becoming a tool to lure voters. As the world gets polarized, the popularity and convenience of the social media platforms are turning them to a modern-day battlefield. This has led to an increase in aggressive content in social media. Some of the world leaders are also using social media as a platform for displaying their aggressiveness. An example of this is the following tweet addressed to North Korean leader Kim Jong-un by U.S. President Donald Trump, \"Will someone from his depleted and food starved regime please inform him that I too have a Nuclear Button, but it is a much bigger & more powerful one than his, and my Button works!\" Social media sites are grappling to remove aggressive content from their sites both to promote healthy discussions and also to comply with legal laws. However, the scale involved makes manual moderation a difficult task. The need of the hour is automated methods for detecting aggressive content. The second workshop on Trolling, Aggression, and Cyberbullying (TRAC-2) (Kumar et al., 2020) is an attempt to promote research in automated detection of aggression in text. This workshop had two shared tasks titled \"Aggression Identification\" (Subtask A) and \"Misogynistic Aggression Identification\" (Subtask B). Aggression identification is a 3-way classification problem where it is required to determine if a given comment is overtly, covertly or not aggressive. Misogynistic aggression is a binary classification problem where it is required to determine if the comment is gender-based or not. Both the subtasks were held for En-glish, Hindi, and Bangla language. We participated in both the subtasks for all the three languages. 
The classifiers we used in this study include En-BERT, M-BERT, RoBERTa, DistilRoBERTa, and XLM-RoBERTa.",
"cite_spans": [
{
"start": 125,
"end": 149,
"text": "(Thomas Carothers, 2019)",
"ref_id": "BIBREF19"
},
{
"start": 1261,
"end": 1281,
"text": "(Kumar et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Apart from automatic detection of aggression in text, considerable research has been performed for detection of offensive language, abusive language, hate speech, cyberbullying, profanity, and insults. Fortuna and Nunes (2018) provides definitions of the terms mentioned above, provides statistics of research performed for the detection of hate speech, lists the features, classification methods, and challenges in automated hate speech detection. Schmidt and Wiegand (2017) too discusses the different classification methods, features and the challenges involved in the detection of hate speech. Davidson et al. (2017) mentions that not all offensive language is hate speech. Their classifier was able to reduce the number of offensive tweets misclassified as hate speech to 5%. Malmasi and Zampieri (2017) worked on differentiating hate speech from profanity by using an SVM classifier trained on features such as character n-grams (2 to 8), word n-grams (1 to 3), and word skip-grams. Malmasi and Zampieri (2018) extended the above work to include Brown cluster features, ensemble classifiers and meta-classifiers in addition to single classifiers. Zampieri et al. (2019a) introduces a new dataset called Offensive Language Identification Dataset (OLID) where the data has been categorized as offensive or not, targeted or untargeted, and targets individual, group or other. SVM, BiLSTM and CNN classifiers were used in this study to predict the type and target of offensive posts. Zampieri et al. (2019b) (Liu et al., 2019b) and obtained a macro F1 score of 0.8286. Zhu et al. (2019) also used a BERT based model and obtained the 3 rd rank in subtask A of OffensEval 2019 with a macro F1 score of 0.8136. The results of the TRAC-1 has been summarized in Kumar et al. (2018) . As can be seen, both deep learning (LSTM, BiLSTM, CNN) and traditional machine learning classifiers (SVM, Logistic Regression, Random Forest, Naive Bayes) were used in this shared task. 
Similarly, the HASOC 1 (Mandl et al., 2019) workshop organized at FIRE2019 was also aimed at stimulating research the aforementioned areas in Hindi, English and German languages respectively. They note that the most widely used approach was LSTMs coupled with word embeddings. In this workshop, the participants used a wide variety of models such as BERT, SVM, CNN, LSTM with Attention, etc.",
"cite_spans": [
{
"start": 202,
"end": 226,
"text": "Fortuna and Nunes (2018)",
"ref_id": "BIBREF6"
},
{
"start": 449,
"end": 475,
"text": "Schmidt and Wiegand (2017)",
"ref_id": "BIBREF18"
},
{
"start": 598,
"end": 620,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 781,
"end": 808,
"text": "Malmasi and Zampieri (2017)",
"ref_id": "BIBREF13"
},
{
"start": 989,
"end": 1016,
"text": "Malmasi and Zampieri (2018)",
"ref_id": "BIBREF14"
},
{
"start": 1153,
"end": 1176,
"text": "Zampieri et al. (2019a)",
"ref_id": "BIBREF22"
},
{
"start": 1486,
"end": 1509,
"text": "Zampieri et al. (2019b)",
"ref_id": "BIBREF23"
},
{
"start": 1510,
"end": 1529,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF11"
},
{
"start": 1571,
"end": 1588,
"text": "Zhu et al. (2019)",
"ref_id": "BIBREF26"
},
{
"start": 1759,
"end": 1778,
"text": "Kumar et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 1990,
"end": 2010,
"text": "(Mandl et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "The dataset for subtask A has been labelled as either overtly aggressive (OAG), covertly aggressive (CAG) or not aggressive (NAG). The dataset for subtask B has been labelled as gendered (GEN) or non-gendered (NGEN). The dataset is further described in Bhattacharya et al. (2020) . Table 1 shows the statistics of the dataset used for the two shared tasks. As can be seen, the dataset is imbalanced with NAG (for subtask A) and NGEN (for subtask B) occurring more frequently in all the three languages. The NGEN category occurred as high as 93.15% in the English development dataset. This, however, is a true reflection of the proportion of aggressive and non-aggressive comments in real 1 https://hasocfire.github.io/hasoc/2019/ life as has been mentioned in Gao et al. (2017) . The only exception is the Hindi test dataset. In this dataset, OAG is the most frequently occurring class for subtask A and this dataset is almost balanced for subtask B. As can be seen, the comments were also of varied length (in terms of the number of words). The longest comment of 1390 words occurred in the English test dataset. However, as can be seen from the table, the majority of the comments were of length less than 50 words.",
"cite_spans": [
{
"start": 253,
"end": 279,
"text": "Bhattacharya et al. (2020)",
"ref_id": "BIBREF0"
},
{
"start": 760,
"end": 777,
"text": "Gao et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 282,
"end": 289,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "In our work, before performing tokenization, the text was converted to lower case. This conversion to lower-case was performed through the BERT tokenizer and the TFIDF vectorizer. As mentioned in section 3, except for English and Hindi test set, more than 93% of the comments were of length less than 50 tokens. Hence, for En-BERT and M-BERT, the maximum sequence length of 50 was used. Comments of length beyond 50 tokens were truncated. In the RoBERTa models, the long sentences were split into multiple samples 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.1."
},
{
"text": "English BERT (Devlin et al., 2019 ) is a bi-directional model based on the transformer architecture. The transformer architecture is an architecture based solely on attention mechanism (Vaswani et al., 2017) . The transformer architecture overcomes the inherent sequential nature of Recurrent Neural Networks (RNN) and hence they are more conducive for parallelization. In our study, we used the uncased large version of En-BERT 2 . This version has 24 layers and 16 attention heads. This model generates 1024 dimensional vector for each word. We used 1024 dimensional vector of the Extract layer as the representation of the comment. Our classification layer consisted of a single Dense layer. For subtask A, the dense layer consisted of 3 units and the softmax activation function was used. The loss function used was sparse categorical crossentropy. For subtask B, the dense layer consisted of 1 unit and the sigmoid activation function was used. The loss function used was binary crossentropy. The Adam optimizer with a learning rate of 2e-5 was used for training the model. The model was trained for 15 epochs. Early stopping with patience of 5 was used for both the subtasks. Sparse categorical accuracy was monitored for early stopping.",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF4"
},
{
"start": 185,
"end": 207,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "English BERT (En-BERT)",
"sec_num": "4.2.1."
},
{
"text": "Multilingual BERT is BERT trained for multilingual tasks. It was trained on monolingual Wikipedia articles of 104 different languages. It is intended to enable M-BERT finetuned in one language to make predictions for another language. In our study, we used the M-BERT model having 12 layers and 12 heads. This model generates 768 dimensional vector for each word. We used the 768 dimensional vector of the Extract layer as the representation of the comment. Just like for the English language subtasks, a single Dense layer was used as the classification model. The hyperparameters used for training the model is the same as mentioned for the English language. E \u2190 E + p 7: end for 8: preds \u2190 Index of max element in each row of N training on a larger dataset, dynamically masking out tokens compared to the original static masking, etc. Distil-RoBERTa (Sanh et al., 2019 ) is a compressed version of the same which trains faster and preserves up to 95% of the performance of the original. For both of these models, we make use of the pre-trained base versions made available by the HuggingFace Transformers library (Wolf et al., 2019) . We make use of the RoBERTa model for English Task A and DistilRoBERTa for English Task B. We use an attention layer (Zhou et al., 2016) on top of the embeddings of the underlying pre-trained model. However, instead of the tanh activation function used in the original work, we used penalized \u2212 tanh which is demonstrated to work better for NLP tasks (Eger et al., 2019) combined with a crossentropy loss function. We also do not apply sof tmax on the output of the classifying layer as done in the original work and instead use argmax directly on the final layer outputs to make the prediction. We make use of the Ranger Optimizer which is a combination of RAdam (Liu et al., 2019a) wrapped with Lookahead (Zhang et al., 2019) to train the model. 
The entire model is fine-tuned with a tiny learning rate of 1e \u2212 4 for both of the English classification tasks. For task A and task B, lookahead's (k, \u03b1) is set to (5, 0.5) and (6, 0.5) with a weight decay of 1e \u2212 5 respectively. The models were set to run for 20 epochs with early stopping patience of 4. We made use of a naive checkpoint ensembling method (Chen et al., 2017) where we save the model weights and dev-set predictions (i.e. the final layer output) at each epoch. The method is given in Algorithm 1. The method is called once with reverse set to T rue and once with F alse. The ensembled model which maximize our chosen metric (weighted-f1) value is chosen. If the ensemble does not improve the metric, we simply choose the best model found during training. Once we have chosen the model, we use Algorithm 2 to make the final prediction on the test set. This Algorithm 2 simply describes adding the weights of the final classifying layer of the model and using argmax along each row to get the prediction. Naive ensembling increases the weighted f1 on the dev-set on English task A from 0.8070 to 0.8124. We did not use it for English task B as it degraded the performance.",
"cite_spans": [
{
"start": 853,
"end": 871,
"text": "(Sanh et al., 2019",
"ref_id": null
},
{
"start": 1116,
"end": 1135,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 1254,
"end": 1273,
"text": "(Zhou et al., 2016)",
"ref_id": "BIBREF25"
},
{
"start": 1488,
"end": 1507,
"text": "(Eger et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 1801,
"end": 1820,
"text": "(Liu et al., 2019a)",
"ref_id": "BIBREF10"
},
{
"start": 1844,
"end": 1864,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 2244,
"end": 2263,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual BERT (M-BERT)",
"sec_num": "4.2.2."
},
{
"text": "XLM-RoBERTa (Conneau et al., 2019) is a cross-lingual model that aims to tackle the curse-of-multilinguality problem of cross-lingual models. It is inspired by (Liu et al., 2019c) and is trained on up-to 100 languages and outperforms M-BERT in multiple cross-lingual benchmarks.",
"cite_spans": [
{
"start": 12,
"end": 34,
"text": "(Conneau et al., 2019)",
"ref_id": null
},
{
"start": 160,
"end": 179,
"text": "(Liu et al., 2019c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-RoBERTa",
"sec_num": "4.2.4."
},
{
"text": "Similar to Section 4.2.3, we use 3 the base version coupled with an attention head classifier, the same optimizer, epochs, and early stopping. Lookahead's (k, \u03b1) is set to (6, 0.5) with weight-decay of 1e \u2212 5. Batch-size is set to (22,24) for Bangla tasks (A, B) and 32 for both Hindi tasks. This model is used in the sub-tasks of the Hindi and Bangla languages. For the Hindi models, we use the naive checkpoint ensembling method described in Section 4.2.3. This increased the weighted f1 from 0.7146 to 0.7160 for Hindi task A and from 0.8908 to 0.8969 for Hindi task B. Naive ensembling did not yield any performance boosts in Bangla tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-RoBERTa",
"sec_num": "4.2.4."
},
{
"text": "We also used the Support Vector Machine (SVM) model for both the subtasks in all the 3 languages. The SVM model was trained using TF-IDF features of word and character ngrams. Word n-grams of size 1 to 3 and character n-grams of size 1 to 6 were used. The linear kernel was used for the classifier and hyperparameter C was set to 1.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SVM",
"sec_num": "4.2.5."
},
{
"text": "As has been mentioned in section 4, the classifiers we used include En-BERT, RoBERTa, DistilRoBERTa and SVM for the subtasks in the English language, and M-BERT, XLM-RoBERTa and SVM for the subtasks in Hindi and Bangla language. Table 2 and 3 show the results we obtained on the development and test set respectively. Both the table shows the precision, recall, macro F1, weighted F1, and accuracy. Weighted F1 score is the metric that has officially been used to rank the submissions. As can be seen from table 2, the best performing classifiers on the development set were RoBERTa for English subtask A, En-BERT for English subtask B, XLM-RoBERTa for Hindi subtask A, Bangla subtask A, and Bangla subtask B, and M-BERT for Hindi subtask B. As can be seen from table 3, the SVM classifier which was not the best on the development set, actually performed well on the test set for English subtask B (ranked 2 nd ), Hindi subtask A (ranked 2 nd ), Bangla subtask A (ranked 2 nd ), and Bangla subtask B (ranked 4 th ). The other bestperforming classifiers are En-BERT for English subtask A (ranked 5 th ), and XLM-RoBERTa for Hindi subtask B (ranked 2 nd ). The results of M-BERT for Hindi subtask A are not shown as an error was made for this run (binary classification was performed instead of performing 3-class classification). It can also be seen from table 3 that for subtask B, the best performance of all the classifiers (SVM, BERT-based, and RoBERTa-based) was obtained for the Bangla language. For subtask B, the SVM classifier had the weighted F1 score of 0.87, 0.84 and 0.92, the RoBERTa-based classifiers had a score of 0.86, 0.87 and 0.92, and the BERT-based classifiers had a score of 0.85, 0.84 and 0.92 for English, Hindi and Bangla language respectively. 
Even for subtask A, the classifiers obtained better score for the Bangla 3 Code for this particular model available at https:// github.com/cozek/trac2020_submission language (except for RoBERTa-based classifier which obtained a slightly better score for Hindi language as compared to Bangla language). The confusion matrices of the classifiers on the test set are shown in table 4 to 9. As can be seen from table 4, the strength of En-BERT which was our best performing classifier for English subtask A, was that it predicted the minority classes better than the other two classifiers. In fact, it was the worst in predicting the majority NAG class. But because of its correct predictions for the minority classes, it was our best performing classifier for this subtask. RoBERTa too predicted the OAG class better than SVM. However, RoBERTa did not perform well in predicting the CAG class. Detecting covertly aggressive comments is very difficult and En-BERT performed better than the other two classifiers in predicting this class. As can be seen from table 7, SVM which was our best performing classifier for English subtask B, predicted the majority class better than the other two classifiers. SVM, however, was the worst in predicting the minority class. En-BERT again was the best in predicting the minority class. En-BERT also had the best recall score for this subtask. As mentioned in section 3, for Hindi subtask A, OAG was the majority class. XLM-RoBERTa performed better than SVM in predicting the majority class. However, SVM performed better in predicting the CAG and NAG class and hence was the best performing classifier in this subtask. For Hindi subtask B, the dataset was quite balanced, and in this dataset, XLM-RoBERTa performed the best. For Bangla subtask A, SVM performed the best in predicting the majority NAG class as well as the CAG class. As such, it was the best performing classifier in this subtask. 
For Bangla subtask B, SVM again performed better in predicting the majority class. In this subtask, M-BERT and XLM-RoBERTa performed better than SVM in predicting the minority class. The best performing classifier for this subtask was SVM.",
"cite_spans": [
{
"start": 1844,
"end": 1845,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 229,
"end": 242,
"text": "Table 2 and 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5."
},
{
"text": "On analysis of the predictions made by our classifiers on the development set, we found that our classifiers were not able to handle intentional or unintentional orthographic variations of toxic words and spelling mistakes. For example, both the SVM and En-BERT classifiers wrongly classified the comment \"Fuuck your music\" as not aggressive. This comment has been labelled by the annotators as overtly aggressive. However, after changing the toxic word 'Fuuck' to 'Fuck', both the classifiers were able to make the correct prediction for the comment. Similarly, both the classifiers were not able to handle the spelling mistake for the word 'prostitute' in the comment 'So sad she is a professional prostatiut'. The comment was wrongly classified as not gendered. After correcting the spelling mistake, both the classifiers were able to classify the comment correctly. Annotators have labelled comments such as 'Im homosexual and really proud of it' and 'I. Gay' where the user is attributing homosexuality to oneself as not gendered. However, our SVM wrongly classifies these comments as gendered based on the presence of the words homosexual and gay. So, the SVM classifier has not been able to detect the benign use of these words. The En-BERT classifier however correctly classified these comments correctly as not gendered. Our classifiers were not able to correctly classify comments such as 'There are only 2 genders' that require world knowledge. The above comment was labelled by the annotators as gendered. However, because of the absence of any toxic words, the above comment was classified by both the SVM and En-BERT classifier as not gendered. There were also certain comments such as 'Hot' that were labelled as gendered by the annotators. These comments are ambiguous and can belong to either of the two categories. Most likely, these comments we labelled so based on some contextual information. 
In the absence of contextual information, our classifiers did not classify these comments correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6."
},
{
"text": "We used BERT, RoBERTa and SVM based classifiers for detection of aggression in English, Hindi and Bangla text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "Our SVM classifier performed remarkably well on the test set and obtained 2 nd rank in the official results for 3 of the 6 tests and obtained 4 th in another. However, on closer analysis, it is seen that the superior performance of the SVM classifier was mainly due to the better prediction of the majority class. BERT based classifiers were found to predict the minority classes better. It was also found that our clas-sifiers did not handle spelling mistakes and intentional orthographic variations correctly. FastText word embeddings are better in handling orthographic variations. As a future study, it can be checked if FastText embeddings improve performance on this dataset. Another option would be to use automatic methods for correcting grammatical and spelling mistakes. Use of contextual information and world knowledge for automatic detection of aggression needs further investigation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "https://github.com/google-research/bert",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Developing a multilingual annotated corpus of misogyny and aggression",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bhagat",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Dawer",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lahiri",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhattacharya, S., Singh, S., Kumar, R., Bansal, A., Bhagat, A., Dawer, Y., Lahiri, B., and Ojha, A. K. (2020). Devel- oping a multilingual annotated corpus of misogyny and aggression.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Checkpoint ensembles: Ensemble methods from a single training process",
"authors": [
{
"first": "H",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lundberg",
"suffix": ""
},
{
"first": "S.-I",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, H., Lundberg, S., and Lee, S.-I. (2017). Check- point ensembles: Ensemble methods from a single train- ing process.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automated Hate Speech Detection and the Problem of Offensive Language",
"authors": [
{
"first": "T",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davidson, T., Warmsley, D., Macy, M., and Weber, I. (2017). Automated Hate Speech Detection and the Prob- lem of Offensive Language. In Proceedings of ICWSM.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional trans- formers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Is it time to swish? comparing deep learning activation functions across nlp tasks",
"authors": [
{
"first": "S",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Youssef",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eger, S., Youssef, P., and Gurevych, I. (2019). Is it time to swish? comparing deep learning activation functions across nlp tasks.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Survey on Automatic Detection of Hate Speech in Text",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "51",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fortuna, P. and Nunes, S. (2018). A Survey on Automatic Detection of Hate Speech in Text. ACM Computing Sur- veys (CSUR), 51(4):85.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recognizing Explicit and Implicit Hate Speech Using a Weakly Supervised Two-path Bootstrapping Approach",
"authors": [
{
"first": "L",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kuppersmith",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2017,
"venue": "IJC-NLP 2017",
"volume": "",
"issue": "",
"pages": "774--782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao, L., Kuppersmith, A., and Huang, R. (2017). Recog- nizing Explicit and Implicit Hate Speech Using a Weakly Supervised Two-path Bootstrapping Approach. In IJC- NLP 2017, pages 774-782, Taipei, Taiwan.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Benchmarking Aggression Identification in Social Media",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbulling (TRAC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2018). Benchmarking Aggression Identification in So- cial Media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbulling (TRAC), Santa Fe, USA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Evaluating Aggression Identification in Social Media",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2020). Evaluating Aggression Identification in Social Media. In Ritesh Kumar, et al., editors, Proceedings of the Second Workshop on Trolling, Aggression and Cy- berbullying (TRAC-2020), Paris, France, may. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On the variance of the adaptive learning rate and beyond",
"authors": [
{
"first": "L",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.03265"
]
},
"num": null,
"urls": [],
"raw_text": "Liu, L., Jiang, H., He, P., Chen, W., Liu, X., Gao, J., and Han, J. (2019a). On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "NULI at SemEval-2019 task 6: Transfer learning for offensive language detection using bidirectional transformers",
"authors": [
{
"first": "P",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "87--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, P., Li, W., and Zou, L. (2019b). NULI at SemEval- 2019 task 6: Transfer learning for offensive language de- tection using bidirectional transformers. In Proceedings of the 13th International Workshop on Semantic Evalua- tion, pages 87-91, Minneapolis, Minnesota, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Detecting Hate Speech in Social Media",
"authors": [
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP)",
"volume": "",
"issue": "",
"pages": "467--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malmasi, S. and Zampieri, M. (2017). Detecting Hate Speech in Social Media. In Proceedings of the Interna- tional Conference Recent Advances in Natural Language Processing (RANLP), pages 467-472.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Challenges in Discriminating Profanity from Hate Speech",
"authors": [
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Experimental & Theoretical Artificial Intelligence",
"volume": "30",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malmasi, S. and Zampieri, M. (2018). Challenges in Dis- criminating Profanity from Hate Speech. Journal of Ex- perimental & Theoretical Artificial Intelligence, 30:1- 16.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Overview of the hasoc track at fire 2019: Hate speech and offensive content identification in indo-european languages",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mandl",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Modha",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dave",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mandlia",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Patel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 11th Forum for Information Retrieval Evaluation, FIRE '19",
"volume": "",
"issue": "",
"pages": "14--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandl, T., Modha, S., Majumder, P., Patel, D., Dave, M., Mandlia, C., and Patel, A. (2019). Overview of the hasoc track at fire 2019: Hate speech and offensive con- tent identification in indo-european languages. In Pro- ceedings of the 11th Forum for Information Retrieval Evaluation, FIRE '19, page 14-17, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Survey on Hate Speech Detection Using Natural Language Processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schmidt, A. and Wiegand, M. (2017). A Survey on Hate Speech Detection Using Natural Language Processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media. Associ- ation for Computational Linguistics, pages 1-10, Valen- cia, Spain.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "How to Understand the Global Spread of Political Polarization",
"authors": [
{
"first": "T",
"middle": [],
"last": "Carothers",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "O'Donohue",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Carothers, A. O. (2019). How to Understand the Global Spread of Political Polarization. https: //carnegieendowment.org/2019/10/01/ how-to-understand-global-spread-of- political-polarization-pub-79893. [On- line; accessed 15-April-2020].",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attention is All you Need",
"authors": [
{
"first": "A",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "A",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is All you Need. In I. Guyon, et al., editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtow- icz, M., and Brew, J. (2019). Huggingface's transform- ers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Predicting the Type and Target of Offensive Posts in Social Media",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019a). Predicting the Type and Tar- get of Offensive Posts in Social Media. In Proceedings of NAACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval)",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of The 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019b). SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in So- cial Media (OffensEval). In Proceedings of The 13th International Workshop on Semantic Evaluation (Se- mEval).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Lookahead optimizer: k steps forward, 1 step back",
"authors": [
{
"first": "M",
"middle": [
"R"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lucas",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, M. R., Lucas, J., Hinton, G., and Ba, J. (2019). Lookahead optimizer: k steps forward, 1 step back.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention-based bidirectional long short-term memory networks for relation classification",
"authors": [
{
"first": "P",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "207--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, P., Shi, W., Tian, J., Qi, Z., Li, B., Hao, H., and Xu, B. (2016). Attention-based bidirectional long short-term memory networks for relation classification. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 207-212, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "UM-IU@LING at SemEval-2019 task 6: Identifying offensive tweets using BERT and SVMs",
"authors": [
{
"first": "J",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "788--795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhu, J., Tian, Z., and K\u00fcbler, S. (2019). UM-IU@LING at SemEval-2019 task 6: Identifying offensive tweets us- ing BERT and SVMs. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 788- 795, Minneapolis, Minnesota, USA, June. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td colspan=\"4\">Language Type Total NAG</td><td>CAG</td><td>OAG</td><td>NGEN</td><td>GEN</td><td>Max</td><td>Length below</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Length 50 words</td></tr><tr><td>English</td><td colspan=\"3\">Train 4263 3375</td><td>453</td><td>435</td><td>3954</td><td>309</td><td>806</td><td>93.31%</td></tr><tr><td/><td/><td/><td colspan=\"5\">(79.17%) (10.63%) (10.20%) (92.75%) (7.25%)</td><td/><td/></tr><tr><td>English</td><td>Dev</td><td colspan=\"2\">1066 836</td><td>117</td><td>113</td><td>993</td><td>73</td><td>457</td><td>93.34%</td></tr><tr><td/><td/><td/><td colspan=\"5\">(78.42%) (10.98%) (10.60%) (93.15%) (6.85%)</td><td/><td/></tr><tr><td>English</td><td>Test</td><td colspan=\"2\">1200 690</td><td>224</td><td>286</td><td>1025</td><td>175</td><td>1390</td><td>77.41%</td></tr><tr><td/><td/><td/><td colspan=\"5\">(57.50%) (18.67%) (23.83%) (85.42%) (14.58%)</td><td/><td/></tr><tr><td>Hindi</td><td colspan=\"3\">Train 3984 2245</td><td>829</td><td>910</td><td>3323</td><td>661</td><td>557</td><td>95.41%</td></tr><tr><td/><td/><td/><td colspan=\"5\">(56.35%) (20.81%) (22.84%) (83.41%) (16.59%)</td><td/><td/></tr><tr><td>Hindi</td><td>Dev</td><td>997</td><td>578</td><td>211</td><td>208</td><td>845</td><td>152</td><td>230</td><td>93.98%</td></tr><tr><td/><td/><td/><td colspan=\"5\">(57.97%) (21.16%) (20.86%) (84.75%) (15.26%)</td><td/><td/></tr><tr><td>Hindi</td><td>Test</td><td colspan=\"2\">1200 325</td><td>191</td><td>684</td><td>633</td><td>567</td><td>669</td><td>89.92%</td></tr><tr><td/><td/><td/><td colspan=\"5\">(27.08%) (15.92%) (57.00%) (52.75%) (47.25%)</td><td/><td/></tr><tr><td>Bangla</td><td colspan=\"3\">Train 3826 2078</td><td>898</td><td>850</td><td>3114</td><td>712</td><td>154</td><td>98.64%</td></tr><tr><td/><td/><td/><td colspan=\"5\">(54.31%) (23.47%) (22.22%) (81.39%) 
(18.61%)</td><td/><td/></tr><tr><td>Bangla</td><td>Dev</td><td>957</td><td>522</td><td>218</td><td>217</td><td>766</td><td>191</td><td>182</td><td>98.64%</td></tr><tr><td/><td/><td/><td colspan=\"5\">(54.55%) (22.78%) (22.68%) (80.04%) (19.96%)</td><td/><td/></tr><tr><td>Bangla</td><td>Test</td><td colspan=\"2\">1188 712</td><td>225</td><td>251</td><td>986</td><td>202</td><td>113</td><td>99.24%</td></tr><tr><td/><td/><td/><td colspan=\"5\">(59.93%) (18.94%) (21.13%) (83.00%) (17.00%)</td><td/><td/></tr></table>",
"text": "summarizes the results from the shared task on",
"num": null,
"type_str": "table",
"html": null
},
"TABREF1": {
"content": "<table><tr><td>: Dataset Statistics</td></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table><tr><td>Algorithm 1 Naive Checkpoint Ensemble 1: A \u2190 True labels 2: P \u2190 Model predictions at each epoch 3: N \u2190 Num samples, C \u2190 Num classes 4: reverse \u2190 boolean 5: function ENSEMBLE(P, A, N, C, reverse) 6: models \u2190 {}, val \u2190 0 7: Z[N ][C] \u2190 Zero Matrix 8: \u2190 len(P ) Num Epochs 9: if reverse then 10: range \u2190 to 0 11: else 12: range \u2190 0 to 13: end if 14: for (e \u2190 range) do 15: temp \u2190 Z 16: temp \u2190 temp + P [e] 17: if metric(A, temp) &gt; val then 18: Z \u2190 Z + P 19: models \u2190 models \u222a e 20: val \u2190 metric(A, temp) 21: else 22: continue 23: end if 24: end for 25: return models, val 26: end function 4.2.3. RoBERTa and DistilRoBERTa RoBERTa (4: 5: 6:</td><td>Load model with weights at epoch i p \u2190 model.predict(samples)</td></tr></table>",
"text": "Liu et al., 2019c) improves upon BERT by adding a few modifications to the original model such asAlgorithm 2 Make Prediction 1: m \u2190 model ids chosen for ensemble 2: E[N ][C] \u2190 Zero Matrix 3: for i in m do",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>Task</td><td>System</td><td colspan=\"2\">Precision Recall</td><td>F1</td><td>F1</td><td colspan=\"2\">Accuracy Rank</td></tr><tr><td/><td/><td>(Macro)</td><td colspan=\"3\">(Macro) (macro) (weighted)</td><td/><td/></tr><tr><td colspan=\"2\">English A SVM</td><td>0.7923</td><td>0.6077</td><td>0.6489</td><td>0.7173</td><td>0.7450</td><td/></tr><tr><td colspan=\"2\">English A RoBERTa</td><td>0.6722</td><td>0.5921</td><td>0.6130</td><td>0.6986</td><td>0.7233</td><td/></tr><tr><td colspan=\"2\">English A En-BERT</td><td>0.6880</td><td>0.6415</td><td>0.6501</td><td>0.7289</td><td>0.7350</td><td>5 th</td></tr><tr><td colspan=\"2\">English B SVM</td><td>0.7980</td><td>0.6744</td><td>0.7121</td><td>0.8701</td><td>0.8850</td><td>2 nd</td></tr><tr><td colspan=\"2\">English B DistilRoBERTa</td><td>0.7277</td><td>0.7101</td><td>0.7183</td><td>0.8623</td><td>0.8650</td><td/></tr><tr><td colspan=\"2\">English B En-BERT</td><td>0.6980</td><td>0.7226</td><td>0.7089</td><td>0.8503</td><td>0.8458</td><td/></tr><tr><td>Hindi A</td><td>SVM</td><td>0.7252</td><td>0.7592</td><td>0.7363</td><td>0.7944</td><td>0.7867</td><td>2 nd</td></tr><tr><td>Hindi A</td><td colspan=\"2\">XLM-RoBERTa 0.7129</td><td>0.7269</td><td>0.7188</td><td>0.7927</td><td>0.7892</td><td/></tr><tr><td>Hindi B</td><td>SVM</td><td>0.8597</td><td>0.8373</td><td>0.8395</td><td>0.8408</td><td>0.8433</td><td/></tr><tr><td>Hindi B</td><td colspan=\"2\">XLM-RoBERTa 0.8704</td><td>0.8673</td><td>0.8683</td><td>0.8689</td><td>0.8692</td><td>2 nd</td></tr><tr><td>Hindi B</td><td>M-BERT</td><td>0.8395</td><td>0.8363</td><td>0.8372</td><td>0.8379</td><td>0.8383</td><td/></tr><tr><td colspan=\"2\">Bangla A SVM</td><td>0.8385</td><td>0.7171</td><td>0.7586</td><td>0.8083</td><td>0.8199</td><td>2 nd</td></tr><tr><td colspan=\"3\">Bangla A XLM-RoBERTa 0.7434</td><td>0.7136</td><td>0.7264</td><td>0.7880</td><td>0.7938</td><td/></tr><tr><td colspan=\"2\">Bangla A 
M-BERT</td><td>0.7265</td><td>0.6945</td><td>0.7074</td><td>0.7740</td><td>0.7820</td><td/></tr><tr><td colspan=\"2\">Bangla B SVM</td><td>0.9299</td><td>0.8167</td><td>0.8600</td><td>0.9258</td><td>0.9310</td><td>4 th</td></tr><tr><td colspan=\"3\">Bangla B XLM-RoBERTa 0.8431</td><td>0.8617</td><td>0.8519</td><td>0.9153</td><td>0.9141</td><td/></tr><tr><td colspan=\"2\">Bangla B M-BERT</td><td>0.8619</td><td>0.8648</td><td>0.8633</td><td>0.9227</td><td>0.9226</td><td/></tr></table>",
"text": "Dev Set Results",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table><tr><td/><td>SVM</td><td/><td/><td>RoBERTa</td><td/><td/><td>En-BERT</td><td/></tr><tr><td colspan=\"9\">Pred Pred Pred Pred Pred Pred Pred Pred Pred</td></tr><tr><td colspan=\"9\">CAG NAG OAG CAG NAG OAG CAG NAG OAG</td></tr><tr><td>True CAG 86</td><td>135</td><td>3</td><td>64</td><td>132</td><td>28</td><td>122</td><td>83</td><td>19</td></tr><tr><td>True NAG 3</td><td>677</td><td>10</td><td>26</td><td>645</td><td>19</td><td>48</td><td>624</td><td>18</td></tr><tr><td>True OAG 26</td><td>129</td><td>131</td><td>38</td><td>89</td><td>159</td><td>97</td><td>53</td><td>136</td></tr></table>",
"text": "Official Results on Test Set",
"num": null,
"type_str": "table",
"html": null
},
"TABREF6": {
"content": "<table><tr><td/><td>SVM</td><td/><td colspan=\"3\">XLM-RoBERTa</td></tr><tr><td colspan=\"6\">Pred Pred Pred Pred Pred Pred</td></tr><tr><td colspan=\"6\">CAG NAG OAG CAG NAG OAG</td></tr><tr><td>True CAG 121</td><td>52</td><td>18</td><td>101</td><td>53</td><td>37</td></tr><tr><td>True NAG 42</td><td>273</td><td>10</td><td>54</td><td>257</td><td>14</td></tr><tr><td>True OAG 64</td><td>70</td><td>550</td><td>46</td><td>49</td><td/></tr></table>",
"text": "Confusion Matrix on Test Set for English Subtask A",
"num": null,
"type_str": "table",
"html": null
},
"TABREF7": {
"content": "<table><tr><td/><td>SVM</td><td/><td colspan=\"3\">XLM-RoBERTa</td><td/><td>M-BERT</td><td/></tr><tr><td colspan=\"9\">Pred Pred Pred Pred Pred Pred Pred Pred Pred</td></tr><tr><td colspan=\"9\">CAG NAG OAG CAG NAG OAG CAG NAG OAG</td></tr><tr><td>True CAG 116</td><td>101</td><td>8</td><td>115</td><td>82</td><td>28</td><td>100</td><td>90</td><td>35</td></tr><tr><td>True NAG 14</td><td>691</td><td>7</td><td>42</td><td>647</td><td>23</td><td>53</td><td>645</td><td>14</td></tr><tr><td>True OAG 16</td><td>68</td><td>167</td><td>33</td><td>37</td><td>181</td><td>26</td><td>41</td><td>184</td></tr></table>",
"text": "Confusion Matrix on Test Set for Hindi Subtask A",
"num": null,
"type_str": "table",
"html": null
},
"TABREF8": {
"content": "<table><tr><td/><td/><td>SVM</td><td colspan=\"2\">RoBERTa</td><td colspan=\"2\">En-BERT</td></tr><tr><td/><td colspan=\"2\">Pred Pred</td><td colspan=\"2\">Pred Pred</td><td colspan=\"2\">Pred Pred</td></tr><tr><td/><td colspan=\"6\">GEN NGEN GEN NGEN GEN NGEN</td></tr><tr><td>True GEN</td><td>66</td><td>109</td><td>86</td><td>89</td><td>96</td><td>79</td></tr><tr><td colspan=\"2\">True NGEN 29</td><td>996</td><td>73</td><td>952</td><td>106</td><td>919</td></tr></table>",
"text": "Confusion Matrix on Test Set for Bangla Subtask A",
"num": null,
"type_str": "table",
"html": null
},
"TABREF9": {
"content": "<table><tr><td/><td/><td>SVM</td><td colspan=\"2\">XLM-RoBERTa</td><td colspan=\"2\">M-BERT</td></tr><tr><td/><td colspan=\"2\">Pred Pred</td><td colspan=\"2\">Pred Pred</td><td colspan=\"2\">Pred Pred</td></tr><tr><td/><td colspan=\"4\">GEN NGEN GEN NGEN</td><td colspan=\"2\">GEN NGEN</td></tr><tr><td>True GEN</td><td>413</td><td>154</td><td>473</td><td>94</td><td>453</td><td>114</td></tr><tr><td colspan=\"2\">True NGEN 34</td><td>599</td><td>63</td><td>570</td><td>80</td><td>553</td></tr></table>",
"text": "Confusion Matrix on Test Set for English Subtask B",
"num": null,
"type_str": "table",
"html": null
},
"TABREF10": {
"content": "<table><tr><td/><td/><td>SVM</td><td colspan=\"2\">XLM-RoBERTa</td><td colspan=\"2\">M-BERT</td></tr><tr><td/><td colspan=\"2\">Pred Pred</td><td colspan=\"2\">Pred Pred</td><td colspan=\"2\">Pred Pred</td></tr><tr><td/><td colspan=\"4\">GEN NGEN GEN NGEN</td><td colspan=\"2\">GEN NGEN</td></tr><tr><td>True GEN</td><td>130</td><td>72</td><td>158</td><td>44</td><td>157</td><td>45</td></tr><tr><td colspan=\"2\">True NGEN 10</td><td>976</td><td>58</td><td>928</td><td>47</td><td>939</td></tr></table>",
"text": "Confusion Matrix on Test Set for Hindi Subtask B",
"num": null,
"type_str": "table",
"html": null
},
"TABREF11": {
"content": "<table/>",
"text": "Confusion Matrix on Test Set for Bangla Subtask B",
"num": null,
"type_str": "table",
"html": null
}
}
}
}