| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:12:28.495961Z" |
| }, |
| "title": "Spartans@LT-EDI-EACL2021: Inclusive Speech Detection using Pretrained Language Models", |
| "authors": [ |
| { |
| "first": "Megha", |
| "middle": [], |
| "last": "Sharma", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Gaurav", |
| "middle": [], |
| "last": "Arora", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "gaurav@haptik.ai" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We describe our system that ranked first in the Hope Speech Detection (HSD) shared task and fourth in the Offensive Language Identification (OLI) shared task, both in the Tamil language. The goal of HSD and OLI is to identify whether a code-mixed comment or post contains hope speech or offensive content, respectively. Our work extends that of (Arora, 2020a): we use their strategy to synthetically generate code-mixed data for training a transformer-based model, RoBERTa, and use it in an ensemble along with their pre-trained ULMFiT model.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We describe our system that ranked first in the Hope Speech Detection (HSD) shared task and fourth in the Offensive Language Identification (OLI) shared task, both in the Tamil language. The goal of HSD and OLI is to identify whether a code-mixed comment or post contains hope speech or offensive content, respectively. Our work extends that of (Arora, 2020a): we use their strategy to synthetically generate code-mixed data for training a transformer-based model, RoBERTa, and use it in an ensemble along with their pre-trained ULMFiT model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Language has the ability to build relationships and forge connections but it is equally liable for creating barriers and impacting someone's sense of belonging. The language used on the internet has an impact on people across the globe. It is important to build language technology which makes everyone feel valued and included. We make contributions to this field by competing in two shared tasks: Hope Speech Detection (HSD) and Offensive Language Identification (OLI). Hope is considered significant for the well-being, recuperation and restoration of human life by health professionals. Hope speech reflects the belief that one can discover pathways to one's desired objectives and become motivated to utilize those pathways (Snyder et al., 1991; Chang, 1998). The goal of the HSD task is to identify whether a YouTube comment contains hope speech or not. The datasets are available in English, code-mixed Tamil-English and Malayalam-English. The OLI task intends to identify offensive language content in datasets comprising comments/posts in code-mixed Tamil-English, Malayalam-English and Kannada-English, collected from social media. Both datasets have been annotated at the comment level, wherein a comment could comprise more than one sentence but on average contains a single sentence.", |
| "cite_spans": [ |
| { |
| "start": 729, |
| "end": 750, |
| "text": "(Snyder et al., 1991;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 751, |
| "end": 763, |
| "text": "Chang, 1998)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our work is an extension of the work done in (Arora, 2020a), as we use their synthetic code-mixed dataset for Tamil and the ULMFiT model trained on that dataset. We pre-train a transformer-based model, RoBERTa (Liu et al., 2019), from scratch on code-mixed data and build an ensemble using ULMFiT and RoBERTa to achieve:", |
| "cite_spans": [ |
| { |
| "start": 210, |
| "end": 228, |
| "text": "(Liu et al., 2019)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Weighted F1 score of 0.61 for Tamil HSD and Rank 1 amongst 30 participating teams", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Weighted F1 score of 0.75 for Tamil OLI and Rank 4 amongst 30 participating teams", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We review some related work from the literature before explaining the details of our approach and the results. All experiments described in this paper can be reproduced using the source code available on GitHub 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As noted on the LT-EDI 2021 website 4, this is the first shared task on HSD. Some work has been previously done for HSD (Palakodety et al., 2019).", |
| "cite_spans": [ |
| { |
| "start": 120, |
| "end": 145, |
| "text": "(Palakodety et al., 2019)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "However, we are not aware of any work for HSD in the Tamil language. OLI has been an area of active research in both academia and industry for the past two decades. Recent work has been done for OLI in Dravidian languages in the HASOC task at FIRE (Mandl et al., 2020). The HASOC task, which attracted over 40 research groups, consisted of building Hate Speech and Offensive Language identification systems using datasets prepared by extracting comments/posts from YouTube and Twitter. In this paper, we build classification models for HSD and OLI using transfer learning, details of which are explained in Section 3.", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 268, |
| "text": "(Mandl et al., 2020)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this section we describe the classification datasets, the details of the RoBERTa and ULMFiT models, and the classifiers trained on top of these language models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Dataset for RoBERTa pre-training. We use synthetically generated code-mixed data for Tamil 5 prepared in (Arora, 2020a) to pre-train RoBERTa from scratch. The dataset is a collection of Tamil sentences written in Latin script. It was prepared by transliterating Tamil Wikipedia articles using the Indic-Trans 6 library.", |
| "cite_spans": [ |
| { |
| "start": 105, |
| "end": 119, |
| "text": "(Arora, 2020a)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Classification datasets. Table 1 shows statistics of the datasets for both tasks. We observe that the statistics are fairly consistent across the train, validation and test sets. The classification dataset for HSD (Chakravarthi, 2020) has 3 classes, whereas that for OLI has 6 classes. Both classification datasets have significant class imbalance, reflecting real-world scenarios. Additionally, they contain code-mixed comments/posts in both Latin and native scripts, making the tasks challenging.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 25, |
| "end": 32, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We take a two-step approach to the problem by pre-training ULMFiT (Howard and Ruder, 2018) and RoBERTa (Liu et al., 2019) models on synthetically generated code-mixed language, followed by an ensemble of two classifiers which are trained on top of the ULMFiT and RoBERTa language models respectively.", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 90, |
| "text": "(Howard and Ruder, 2018)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 103, |
| "end": 121, |
| "text": "(Liu et al., 2019)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Details", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We use a pre-trained ULMFiT model for code-mixed Tamil similar to the one used in (Arora, 2020b). Its embedding size is 400, the number of hidden activations per layer is 1152 and the number of layers is 3. Two linear blocks with batch normalization and dropout have been added as a custom head for the classifier, with rectified linear unit activations for the intermediate layer and a softmax activation at the last layer.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 96, |
| "text": "(Arora, 2020b)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ULMFiT Model", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "The RoBERTa model builds on BERT (Devlin et al., 2019) and modifies BERT's key hyperparameters, removes the next-sentence pre-training objective and trains with much larger mini-batches and learning rates. RoBERTa has the same architecture as BERT but it uses a different pre-training scheme and tokenizes text using Byte-Pair Encoding (Sennrich et al., 2016). We use the implementation of RoBERTa from Hugging Face's Transformers library 7 to pre-train the model from scratch. We train it for 7 epochs using a learning rate of 5e-5 and a dropout of 0.1 for attention and hidden layers. Table 2 compares the perplexity of our pre-trained RoBERTa model with that of the ULMFiT model, which is also trained on the same code-mixed data.", |
| "cite_spans": [ |
| { |
| "start": 33, |
| "end": 54, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 336, |
| "end": 359, |
| "text": "(Sennrich et al., 2016)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 587, |
| "end": 594, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "RoBERTa Model", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "We pre-process the classification datasets of both tasks by transliterating comments in native script into Latin script using the Indic-Trans library. This step is required because both of our pre-trained language models, ULMFiT and RoBERTa, are trained on code-mixed data in Latin script. We also perform other basic pre-processing steps like lowercasing and removing @username mentions. We did not apply pre-processing steps such as stop-word removal or removal of very short words, since both of our pre-trained language models are trained on complete sentences and we wanted the model to figure out on its own whether stop/short words are important for classification. 7 https://huggingface.co/transformers/model_doc/roberta.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification data pre-processing", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In this section we discuss the details and results of our baseline model, classification models and ensemble strategy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment and Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Baseline Model. Our baseline uses a KNN classifier on embeddings generated using the code-mixed ULMFiT model from iNLTK 8 (Arora, 2020b). We set k=5 for all our experiments with uniform weighting on neighbors. Ensemble. Our final model is a weighted ensemble of two classifiers where their weights sum to 1. Training of classifiers happens in two steps. First, we fine-tune our language model on the downstream task of OLI and then train a classifier on the fine-tuned language model. Table 3 contains details of hyperparameters of the first classifier which is trained on our pre-trained RoBERTa. We train the second classifier using fine-tuned ULMFiT language model which is available in iNLTK. We experiment with different weights of classifiers in the ensemble. Best results on the validation set are obtained by setting a weight of 0.5 for both classifiers. Figure 1 shows the variation of weighted F1 score with changing weights of the RoBERTa-based classifier.", |
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 136, |
| "text": "(Arora, 2020b)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 485, |
| "end": 492, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 863, |
| "end": 871, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "OLI Task", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "We use an approach similar to that used for the OLI task. The baseline model is built using the KNN algorithm, and the final model is a classifier trained on a fine-tuned ULMFiT language model. Due to time and resource constraints, we were not able to train a RoBERTa-based classifier.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HSD Task", |
| "sec_num": "4.1.2" |
| }, |
| { |
| "text": "In both tasks, weighted averaged Precision, weighted averaged Recall and weighted averaged F-Score are used as evaluation and ranking criteria. We participated in the sub-task for Tamil and got Rank 4 in the OLI task and Rank 1 in the HSD task (Chakravarthi and Muralidaran, 2021). Table 4 shows the performance of different models on the validation set of the former task. The best F1 score of 0.76 is obtained by using the ensemble of classifiers trained on the RoBERTa and ULMFiT models, which are pre-trained on code-mixed data. Table 5 contains results of models on the validation set of the latter task. We obtain an F1 score of 0.63 with the ULMFiT-based classifier. Results on the test set for both tasks are shown in Table 6.", |
| "cite_spans": [ |
| { |
| "start": 244, |
| "end": 280, |
| "text": "(Chakravarthi and Muralidaran, 2021)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 282, |
| "end": 289, |
| "text": "Table 4", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 533, |
| "end": 540, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 726, |
| "end": 733, |
| "text": "Table 6", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In this paper we present a RoBERTa language model for code-mixed Tamil, which we pre-trained from scratch. Using transfer learning, we fine-tune the RoBERTa and ULMFiT language models on the downstream tasks of OLI and HSD. We got Rank 4 in the former task using an ensemble of classifiers trained on RoBERTa and ULMFiT, and Rank 1 in the latter task using a classifier based on ULMFiT. In future research we will explore other transformer architectures like BERT (Devlin et al., 2018), T5 (Raffel et al., 2020) and XLM (Conneau et al., 2019). We will work on improving code-mixed data generation strategies. We plan to create a dataset using a combination of native Tamil sentences, their transliterations and their translations into English.", |
| "cite_spans": [ |
| { |
| "start": 464, |
| "end": 485, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 490, |
| "end": 511, |
| "text": "(Raffel et al., 2020)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 520, |
| "end": 542, |
| "text": "(Conneau et al., 2019)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "3 https://github.com/goru001/nlp-for-tanglish 4 https://sites.google.com/view/lt-edi-2021", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Pre-training dataset can be downloaded from https://github.com/goru001/nlp-for-tanglish", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/libindic/indic-trans", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/goru001/inltk", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank our teams at Amazon and Jio Haptik for motivating us to participate in these shared tasks. Please note that this work is not a byproduct of any formal collaboration with Amazon or Jio Haptik. We participated in these tasks out of personal interest and in our own time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Gauravarora@hasoc-dravidiancodemix-fire2020: Pre-training ulmfit on synthetically generated code-mixed data for hate speech detection", |
| "authors": [ |
| { |
| "first": "Gaurav", |
| "middle": [], |
| "last": "Arora", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gaurav Arora. 2020a. Gauravarora@hasoc-dravidian-codemix-fire2020: Pre-training ulmfit on synthetically generated code-mixed data for hate speech detection.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "iNLTK: Natural language toolkit for indic languages", |
| "authors": [ |
| { |
| "first": "Gaurav", |
| "middle": [], |
| "last": "Arora", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)", |
| "volume": "", |
| "issue": "", |
| "pages": "66--71", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.nlposs-1.10" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gaurav Arora. 2020b. iNLTK: Natural language toolkit for indic languages. In Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS), pages 66-71, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "HopeEDI: A multilingual hope speech detection dataset for equality, diversity, and inclusion", |
| "authors": [ |
| { |
| "first": "Bharathi Raja", |
| "middle": [], |
| "last": "Chakravarthi", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media", |
| "volume": "", |
| "issue": "", |
| "pages": "41--53", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bharathi Raja Chakravarthi. 2020. HopeEDI: A multilingual hope speech detection dataset for equality, diversity, and inclusion. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media, pages 41-53, Barcelona, Spain (Online). Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclusion", |
| "authors": [ |
| { |
| "first": "Bharathi Raja", |
| "middle": [], |
| "last": "Chakravarthi", |
| "suffix": "" |
| }, |
| { |
| "first": "Vigneshwaran", |
| "middle": [], |
| "last": "Muralidaran", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bharathi Raja Chakravarthi and Vigneshwaran Muralidaran. 2021. Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclusion. In Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Hope, problem-solving ability, and coping in a college student population: some implications for theory and practice", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [ |
| "C" |
| ], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "J Clin Psychol", |
| "volume": "54", |
| "issue": "7", |
| "pages": "953--962", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. C. Chang. 1998. Hope, problem-solving ability, and coping in a college student population: some implications for theory and practice. J Clin Psychol, 54(7):953-962.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Unsupervised cross-lingual representation learning at scale", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kartikay", |
| "middle": [], |
| "last": "Khandelwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Vishrav", |
| "middle": [], |
| "last": "Chaudhary", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Wenzek", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Guzm\u00e1n", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Myle", |
| "middle": [], |
| "last": "Ott", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1911.02116" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "BERT: pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Finetuned language models for text classification", |
| "authors": [ |
| { |
| "first": "Jeremy", |
| "middle": [], |
| "last": "Howard", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Ruder", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Fine-tuned language models for text classification. CoRR, abs/1801.06146.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Overview of the hasoc track at fire 2020: Hate speech and offensive language identification in tamil, malayalam, hindi, english and german", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Mandl", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandip", |
| "middle": [], |
| "last": "Modha", |
| "suffix": "" |
| }, |
| { |
| "first": "Anand", |
| "middle": [], |
| "last": "Kumar M", |
| "suffix": "" |
| }, |
| { |
| "first": "Bharathi Raja", |
| "middle": [], |
| "last": "Chakravarthi", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Forum for Information Retrieval Evaluation", |
| "volume": "2020", |
| "issue": "", |
| "pages": "29--32", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3441501.3441517" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Mandl, Sandip Modha, Anand Kumar M, and Bharathi Raja Chakravarthi. 2020. Overview of the hasoc track at fire 2020: Hate speech and offensive language identification in tamil, malayalam, hindi, english and german. In Forum for Information Retrieval Evaluation, FIRE 2020, pages 29-32, New York, NY, USA. Association for Computing Machinery.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Kashmir: A computational analysis of the voice of peace", |
| "authors": [ |
| { |
| "first": "Shriphani", |
| "middle": [], |
| "last": "Palakodety", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashiqur", |
| "middle": [ |
| "R" |
| ], |
| "last": "Khudabukhsh", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [ |
| "G" |
| ], |
| "last": "Carbonell", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shriphani Palakodety, Ashiqur R. KhudaBukhsh, and Jaime G. Carbonell. 2019. Kashmir: A computational analysis of the voice of peace. CoRR, abs/1909.12940.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Raffel", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Roberts", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharan", |
| "middle": [], |
| "last": "Narang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Matena", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanqi", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "J" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Neural machine translation of rare words with subword units", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1715--1725", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P16-1162" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The will and the ways: development and validation of an individualdifferences measure of hope", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "R" |
| ], |
| "last": "Snyder", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Harris", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Anderson", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Holleran", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [ |
| "M" |
| ], |
| "last": "Irving", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "T" |
| ], |
| "last": "Sigmon", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Yoshinobu", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Gibb", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Langelle", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Harney", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "J Pers Soc Psychol", |
| "volume": "60", |
| "issue": "4", |
| "pages": "570--585", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. R. Snyder, C. Harris, J. R. Anderson, S. A. Holleran, L. M. Irving, S. T. Sigmon, L. Yoshinobu, J. Gibb, C. Langelle, and P. Harney. 1991. The will and the ways: development and validation of an individual-differences measure of hope. J Pers Soc Psychol, 60(4):570-585.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "Change in Tamil OLI validation set F1 score on y-axis with change in RoBERTa weight shown on x-axis", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td>Model Architecture</td><td colspan=\"2\">Perplexity Vocab size</td></tr><tr><td>RoBERTa</td><td>8.4</td><td>10000</td></tr><tr><td>ULMFiT</td><td>37.50</td><td>8000</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null, |
| "text": "Dataset statistics for OLI and HSD tasks" |
| }, |
| "TABREF3": { |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "num": null, |
| "text": "" |
| }, |
| "TABREF4": { |
| "content": "<table><tr><td>Dropout Multiplicity</td><td>Batch Size</td><td>Epochs</td><td>Learning Rate</td><td>Adam Beta1</td><td>Adam Beta2</td><td>Adam Epsilon</td><td>LR Scheduler Type</td></tr><tr><td>0.1</td><td>8</td><td>3</td><td>5e-05</td><td>0.9</td><td>0.999</td><td>1e-08</td><td>LINEAR</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null, |
| "text": "" |
| }, |
| "TABREF5": { |
| "content": "<table><tr><td>Model</td><td colspan=\"4\">Precision Recall F1 Score Accuracy</td></tr><tr><td>Baseline KNN</td><td>0.62</td><td>0.72</td><td>0.65</td><td>0.72</td></tr><tr><td>ULMFit</td><td>0.73</td><td>0.78</td><td>0.73</td><td>0.78</td></tr><tr><td>RoBERTa</td><td>0.74</td><td>0.77</td><td>0.75</td><td>0.77</td></tr><tr><td>Ensemble</td><td>0.75</td><td>0.79</td><td>0.76</td><td>0.79</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null, |
| "text": "RoBERTa Classification Model Hyperparams for OLI" |
| }, |
| "TABREF6": { |
| "content": "<table><tr><td>Model</td><td colspan=\"4\">Precision Recall F1 Score Accuracy</td></tr><tr><td colspan=\"2\">Baseline KNN Model 0.53</td><td>0.53</td><td>0.53</td><td>0.53</td></tr><tr><td>ULMFit</td><td>0.63</td><td>0.63</td><td>0.63</td><td>0.63</td></tr></table>", |
| "type_str": "table", |
| "html": null, |
| "num": null, |
| "text": "Validation set results for OLI" |
| }, |
| "TABREF7": { |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "num": null, |
| "text": "" |
| }, |
| "TABREF9": { |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "num": null, |
| "text": "Test set results for OLI task and HSD task" |
| } |
| } |
| } |
| } |