| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:20:48.574668Z" |
| }, |
| "title": "professionals@DravidianLangTech-EACL2021: Malayalam Offensive Language Identification -A Minimalistic Approach", |
| "authors": [ |
| { |
| "first": "Srinath", |
| "middle": [], |
| "last": "Nair", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "srinath.nair@research.iiit.ac.in" |
| }, |
| { |
| "first": "Dolton", |
| "middle": [], |
| "last": "Fernandes", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "dolton.fernandes@research.iiit.ac.in" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper is a description of the system that was designed by team \"professionals\" for the task Offensive Language Identification in Dravidian Languages-EACL 2021. Our system Dravidian Offensive Language Identifier Classifier (DrOLIC) uses Indic-BERT to generate word embeddings which is then fed into a 4layer Multi Layer Perceptron (MLP) which does the multi-class classification task. The system helped us achieve an F1 score of 85% in the shared task for the Malayalam language. 1", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper is a description of the system that was designed by team \"professionals\" for the task Offensive Language Identification in Dravidian Languages-EACL 2021. Our system Dravidian Offensive Language Identifier Classifier (DrOLIC) uses Indic-BERT to generate word embeddings which is then fed into a 4layer Multi Layer Perceptron (MLP) which does the multi-class classification task. The system helped us achieve an F1 score of 85% in the shared task for the Malayalam language. 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Offensive language identification is a very popular multi-class classification research problem in NLP. A proper solution to the research problem could give the world ways and means to identify and restrict offensive content on the internet, especially on public platforms and social media. The rising demand for a system to identify offensive language on social media and in multiple languages is a testimony to the relevance of the Offensive Language Identification in Dravidian Languages task (Chakravarthi et al., 2021) .", |
| "cite_spans": [ |
| { |
| "start": 496, |
| "end": 523, |
| "text": "(Chakravarthi et al., 2021)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Dravidian languages are spoken in South Asia, especially in the Indian subcontinent and are a family of multiple languages including Tamil, Malayalam and Kannada, and more. Research on NLP in Indian languages was largely focused on Hindi which is spoken by a large section of people in India. Dravidian languages have now recently started getting attention from the research community which is viewed as a positive change.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we propose Dravidian Offensive Language Identifier Classifier (DrOLIC), a 4layered Multi Layer Perceptron (MLP) to do the * Indicates equal contribution 1 Link to code: https://github.com/ snath99920/Professionals-EACL-2021 multi-class classification task of identifying offensive language for the Malayalam language. The reason for choosing a simple MLP over more complex neural networks was because of the simplicity that we see in an MLP. Our initial experimentation with MLP gave us pretty good results which were much better than what we were initially expecting out of a simple system like a Multi Layer Perceptron. This made us want to stick to the system and explore how well it would perform under different conditions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A lot of work has been put into identification of offensive language task in recent years with an aim to enable machines and platforms to automatically filter content, especially in social media platforms. SemEval Task-6 (Zampieri et al., 2019) is a notable shared task done in this direction where participants identified and categorized offensive language in social media. The dataset contained over 14,000 tweets in English and was divided into three sub-tasks. The sub-task A required participants to identify offensive language while sub-task B was an automatic categorization and sub-task C required participants to identify the offense target. The top teams experimented extensively with LSTM (Hochreiter and Schmidhuber, 1997) , pretrained BERT (Devlin et al., 2018) models, pretrained GloVe vectors (Pennington et al., 2014) to achieve the results.", |
| "cite_spans": [ |
| { |
| "start": 700, |
| "end": 734, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 753, |
| "end": 774, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 808, |
| "end": 833, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "With the evolution of technology it has been made possible to prepare and circulate content in different languages and recent years have seen a lot of importance being given to Indian languages. The availability of content in Indian languages brought in a need to expand the task of offensive language identification into Indian languages. This resulted in much work being done in the domain, but we can see most work clearly leaning towards Hindi and a few other languages spoken mostly in the Northern and Central parts of India. One such work does an interesting comparative study of offensive and aggressive language in Hindi, Bangla and English . They have used SVM, BERT and its derivatives like ALBERT (Lan et al., 2019) and DistilBERT (Sanh et al., 2019) to develop the classifiers. The performance of the classifiers were judged based on F1 score where they managed to achieve an F1 score as high as 0.80 using BERT.", |
| "cite_spans": [ |
| { |
| "start": 709, |
| "end": 727, |
| "text": "(Lan et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 743, |
| "end": 762, |
| "text": "(Sanh et al., 2019)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "HASOC (Hate Speech and Offensive Content identification in Indo-European Languages) track at FIRE 2019 (Mandl et al., 2019) and Fire 2020 (Mandl et al., 2020) is another such interesting initiative. The HASOC dataset was prepared from publicly available posts from Twitter Facebook and allows the participants to develop supervised models. The limitation to this task again lies in the fact that the task and the dataset are specific to three languages: English, Hindi and German.", |
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 123, |
| "text": "(Mandl et al., 2019)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 138, |
| "end": 158, |
| "text": "(Mandl et al., 2020)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We approach the problem as a multi-class classification of embeddings and for this we present the DrOLIC architecture as depicted in figure 1. For every comment represented as an indic-BERT embedding, we pass it through a series of dense layers with ReLU activation to learn the representations. In order to make the layers do the learning more independently and avoid overfitting of the model, we introduce batch normalization and dropout layers respectively. To do the classification we introduce a dense softmax layer to output class probabilities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The data (Chakravarthi et al., 2021) (Chakravarthi et al., 2020a) (Chakravarthi et al., 2020b) (Hande et al., 2020) that was made available by the organizers of the task consisted of English-Malayalam code-mixed text. The data was prepared by collecting comments appearing on the trailers of various Malayalam movies on YouTube. The data was present in the form of a CSV file with the sentences belonging to the classes \"Not offensive\", \"Offensive targeted insult group\", \"Offensive targeted insult individual\", \"Offensive untargeted\" and \"Not Malayalam\". As one can see in figures 2, the training set has a noticeable class imbalance with most of the sentences belonging to the \"Not offensive\" class.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 115, |
| "text": "(Hande et al., 2020)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The sentences in the dataset are code-mixed in Malayalam-English. We first transliterated the code-mixed sentences to English with the help of indic transliterator (Bhat et al., 2015) . The sentences were then processed to remove hashtags, emojis and other unrecognised symbols. Now, out of these processed sentences, those whose length had significantly reduced (we kept the threshold as 4 words) were removed from the train and validation sets. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 164, |
| "end": 183, |
| "text": "(Bhat et al., 2015)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Preprocessing", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Indic-BERT", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generating Word Embeddings using", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "IndicBERT ) , a multilingual NLU model that was pretrained on 12 Indian languages and evaluated on IndicGLUE. IndicBERT can be used for some of the most popularly spoken Dravidian languages including Malayalam, Tamil, Kannada, Telugu, etc. Notably, the model used here is ALBERT, a much more compact version of BERT with fewer parameters making it much more convenient to use. For a comment X, we extract embeddings using indicBERT to get a 768 sized vector which is then used for training the MLP.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generating Word Embeddings using", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We used a 4-layer Multi Layer Perceptron (MLP) to do the classification. The dimension and configuration of each layer of the model can be seen in table 1. We have 3 dense layers with ReLU activation followed by Batch Normalization layers to help the layers learn more independently. We add a dropout of 0.2 to two of the layers to reduce overfitting. At the end we have a dense layer with softmax activation to output the class probabilities. Our model has a total of 470,789 parameters out of which 469,381 are learnable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We first perform min-max scaling on the generated embedding vectors to not let any bias due to huge/small values propagate further into the network. The class labels are one hot encoded to make it suitable for the loss function. A random stratified split of 0.1 is done on the training data to get a validation set for our model. We don't use the original validation set provided for training in any way. It was only used to check our model's performance before submitting the final results. As mentioned earlier, there was a huge class imbalance in the data provided. To not let this affect our model, we kept only 300 samples from the class with the highest samples in the validation set and transferred them to the the training set. The final training and validation set distribution are shown in figures 2 and 3 respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "The model is trained using Adam optimizer with a learning rate of 1e \u2212 5 and a decay of 1e \u2212 4 for 100 epochs. The learning rate was modified according to the following rule:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "lr i = initialLR 1 + decay * i (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "where lr i is the learning rate at the i th iteration, initialLR is the initial learning rate, decay is the decay factor and i is the iteration number.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "We use the categorical cross-entropy loss for evaluating how good our method is. The model computes categorical cross-entropy loss L between the predicted class probabilities and the correct class for the sample, as given below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L(y,\u0233) = y i .log(\u0233 i )", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Training", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "where y i is the i th value in the model output y and\u0233 i is the i th value in the target\u0233.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "The organisers of the shared task used weighted average F1 score for getting an overall quality for the classification, so we used the same metric to seek a balance between precision and recall. So the weighted F1 score is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metric", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "F 1 = 2 * P recision * Recall P recision + Recall", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Evaluation Metric", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "F 1 = n i=1 p i x i (4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metric", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "where p i is the class probability for class i and n is the total number of classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metric", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "The results of our method stands at 85% in the contest for the Malayalam language, but we have fine tuned our model since then and the latest results are reported here. The original test set was used for evaluation. The loss and accuracy curves for the training can be seen in figures 4 and 5 respectively. We noticed that MLP stops learning after some epochs, which is bound to happen because MLP's don't take care of vanishing gradients unlike for eg. LSTM. From table 2 we can see the importance of text processing. Without preprocessing we get a F1 score of 85%, but on preprocessing the text we get a F1 score of 88%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "This shared task is one of the first in the area of offensive language identification in Dravidian languages. With the advent of a need for localization of the internet by introducing content in multiple languages, this shared task is a positive step.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The most interesting aspect of our work lies in the fact that we have used a simple 4-layered MLP to train the multi-class classifier. While keeping the system, we managed to get an F1 score as high as 0.85 and achieve a shared 11 th position in the shared task. The sentences in the dataset were represented as IndicBERT embeddings. The use of IndicBERT gives us an added advantage by letting us use the same setup for around 12 popularly spoken Indian languages. To overcome the challenges of using a codemixed dataset, we transliterated the sentences into Malayalam before generating the IndicBERT embeddings. The system proposed by us can be further improved especially because, like we have mentioned earlier, this is a very simple setup. The first step that can be taken is replacing the MLP with an LSTM or an SVM which could give a significantly better result.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We would like to begin by thanking Dr. Radhika Mamidi from Language Technology Research Center (LTRC), IIIT Hyderabad for her support and guidance throughout the course of the shared task in both building the system and writing the paper for review. We would also like to thank the organizers of the Offensive Language Identification in Dravidian Languages for bringing us this opportunity. Lastly, we would like to thank the anonymous reviewers for their quality suggestions that helped us shape the paper better and for all their constructive feedback.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": "7" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Iiit-h system submission for fire2014 shared task on transliterated search", |
| "authors": [ |
| { |
| "first": "Ahmad", |
| "middle": [], |
| "last": "Irshad", |
| "suffix": "" |
| }, |
| { |
| "first": "Vandan", |
| "middle": [], |
| "last": "Bhat", |
| "suffix": "" |
| }, |
| { |
| "first": "Aniruddha", |
| "middle": [], |
| "last": "Mujadia", |
| "suffix": "" |
| }, |
| { |
| "first": "Riyaz", |
| "middle": [ |
| "Ahmad" |
| ], |
| "last": "Tammewar", |
| "suffix": "" |
| }, |
| { |
| "first": "Manish", |
| "middle": [], |
| "last": "Bhat", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Shrivastava", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Forum for Information Retrieval Evaluation, FIRE '14", |
| "volume": "", |
| "issue": "", |
| "pages": "48--53", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/2824864.2824872" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Irshad Ahmad Bhat, Vandan Mujadia, Aniruddha Tam- mewar, Riyaz Ahmad Bhat, and Manish Shrivastava. 2015. Iiit-h system submission for fire2014 shared task on transliterated search. In Proceedings of the Forum for Information Retrieval Evaluation, FIRE '14, pages 48-53, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A sentiment analysis dataset for codemixed Malayalam-English", |
| "authors": [ |
| { |
| "first": "Navya", |
| "middle": [], |
| "last": "Bharathi Raja Chakravarthi", |
| "suffix": "" |
| }, |
| { |
| "first": "Shardul", |
| "middle": [], |
| "last": "Jose", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Suryawanshi", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "Philip" |
| ], |
| "last": "Sherly", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mc-Crae", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", |
| "volume": "", |
| "issue": "", |
| "pages": "177--184", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bharathi Raja Chakravarthi, Navya Jose, Shardul Suryawanshi, Elizabeth Sherly, and John Philip Mc- Crae. 2020a. A sentiment analysis dataset for code- mixed Malayalam-English. In Proceedings of the 1st Joint Workshop on Spoken Language Technolo- gies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 177-184, Marseille, France. European Language Resources association.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Corpus creation for sentiment analysis in code-mixed Tamil-English text", |
| "authors": [ |
| { |
| "first": "Vigneshwaran", |
| "middle": [], |
| "last": "Bharathi Raja Chakravarthi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruba", |
| "middle": [], |
| "last": "Muralidaran", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "Philip" |
| ], |
| "last": "Priyadharshini", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mc-Crae", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", |
| "volume": "", |
| "issue": "", |
| "pages": "202--210", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bharathi Raja Chakravarthi, Vigneshwaran Murali- daran, Ruba Priyadharshini, and John Philip Mc- Crae. 2020b. Corpus creation for sentiment anal- ysis in code-mixed Tamil-English text. In Pro- ceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced lan- guages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 202-210, Marseille, France. European Language Re- sources association.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Findings of the shared task on Offensive Language Identification in Tamil, Malayalam, and Kannada", |
| "authors": [ |
| { |
| "first": "Ruba", |
| "middle": [], |
| "last": "Bharathi Raja Chakravarthi", |
| "suffix": "" |
| }, |
| { |
| "first": "Navya", |
| "middle": [], |
| "last": "Priyadharshini", |
| "suffix": "" |
| }, |
| { |
| "first": "Anand", |
| "middle": [], |
| "last": "Jose", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Mandl", |
| "suffix": "" |
| }, |
| { |
| "first": "Prasanna", |
| "middle": [], |
| "last": "Kumar Kumaresan", |
| "suffix": "" |
| }, |
| { |
| "first": "Rahul", |
| "middle": [], |
| "last": "Ponnusamy", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Hariharan", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Sherly", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "Philip" |
| ], |
| "last": "Mc-Crae", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Navya Jose, Anand Kumar M, Thomas Mandl, Prasanna Kumar Kumaresan, Rahul Ponnusamy, Hariharan V, Elizabeth Sherly, and John Philip Mc- Crae. 2021. Findings of the shared task on Offen- sive Language Identification in Tamil, Malayalam, and Kannada. In Proceedings of the First Workshop on Speech and Language Technologies for Dravid- ian Languages. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "BERT: pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "KanCMD: Kannada CodeMixed dataset for sentiment analysis and offensive language detection", |
| "authors": [ |
| { |
| "first": "Adeep", |
| "middle": [], |
| "last": "Hande", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruba", |
| "middle": [], |
| "last": "Priyadharshini", |
| "suffix": "" |
| }, |
| { |
| "first": "Bharathi Raja", |
| "middle": [], |
| "last": "Chakravarthi", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media", |
| "volume": "", |
| "issue": "", |
| "pages": "54--63", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adeep Hande, Ruba Priyadharshini, and Bharathi Raja Chakravarthi. 2020. KanCMD: Kannada CodeMixed dataset for sentiment analysis and offensive language detection. In Proceedings of the Third Workshop on Computational Modeling of Peo- ple's Opinions, Personality, and Emotion's in Social Media, pages 54-63, Barcelona, Spain (Online). Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "", |
| "pages": "1735--80", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/neco.1997.9.8.1735" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9:1735- 80.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages", |
| "authors": [ |
| { |
| "first": "Divyanshu", |
| "middle": [], |
| "last": "Kakwani", |
| "suffix": "" |
| }, |
| { |
| "first": "Anoop", |
| "middle": [], |
| "last": "Kunchukuttan", |
| "suffix": "" |
| }, |
| { |
| "first": "Satish", |
| "middle": [], |
| "last": "Golla", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "C" |
| ], |
| "last": "Gokul", |
| "suffix": "" |
| }, |
| { |
| "first": "Avik", |
| "middle": [], |
| "last": "Bhattacharyya", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Mitesh", |
| "suffix": "" |
| }, |
| { |
| "first": "Pratyush", |
| "middle": [], |
| "last": "Khapra", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Findings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for In- dian Languages. In Findings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Aggressive and offensive language identification in hindi, bangla, and english: A comparative study", |
| "authors": [ |
| { |
| "first": "Ritesh", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Bornini", |
| "middle": [], |
| "last": "Lahiri", |
| "suffix": "" |
| }, |
| { |
| "first": "Atul", |
| "middle": [], |
| "last": "Kr", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ojha", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "SN Computer Science", |
| "volume": "2", |
| "issue": "1", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/s42979-020-00414-6" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ritesh Kumar, Bornini Lahiri, and Atul Kr. Ojha. 2021. Aggressive and offensive language identification in hindi, bangla, and english: A comparative study. SN Computer Science, 2(1):26.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Ai4bharat-indicnlp corpus: Monolingual corpora and word embeddings for indic languages", |
| "authors": [ |
| { |
| "first": "Anoop", |
| "middle": [], |
| "last": "Kunchukuttan", |
| "suffix": "" |
| }, |
| { |
| "first": "Divyanshu", |
| "middle": [], |
| "last": "Kakwani", |
| "suffix": "" |
| }, |
| { |
| "first": "Satish", |
| "middle": [], |
| "last": "Golla", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "C" |
| ], |
| "last": "Gokul", |
| "suffix": "" |
| }, |
| { |
| "first": "Avik", |
| "middle": [], |
| "last": "Bhattacharyya", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Mitesh", |
| "suffix": "" |
| }, |
| { |
| "first": "Pratyush", |
| "middle": [], |
| "last": "Khapra", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anoop Kunchukuttan, Divyanshu Kakwani, Satish Golla, Gokul N. C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. Ai4bharat- indicnlp corpus: Monolingual corpora and word em- beddings for indic languages.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "ALBERT: A lite BERT for selfsupervised learning of language representations", |
| "authors": [ |
| { |
| "first": "Zhenzhong", |
| "middle": [], |
| "last": "Lan", |
| "suffix": "" |
| }, |
| { |
| "first": "Mingda", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Sori- cut. 2019. ALBERT: A lite BERT for self- supervised learning of language representations. CoRR, abs/1909.11942.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Overview of the hasoc track at fire 2020: Hate speech and offensive language identification in tamil, malayalam, hindi, english and german", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Mandl", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandip", |
| "middle": [], |
| "last": "Modha", |
| "suffix": "" |
| }, |
| { |
| "first": "Anand Kumar", |
| "middle": [], |
| "last": "M", |
| "suffix": "" |
| }, |
| { |
| "first": "Bharathi Raja", |
| "middle": [], |
| "last": "Chakravarthi", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Forum for Information Retrieval Evaluation", |
| "volume": "2020", |
| "issue": "", |
| "pages": "29--32", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3441501.3441517" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Mandl, Sandip Modha, Anand Kumar M, and Bharathi Raja Chakravarthi. 2020. Overview of the HASOC track at FIRE 2020: Hate speech and offensive language identification in Tamil, Malayalam, Hindi, English and German. In Forum for Information Retrieval Evaluation, FIRE 2020, pages 29-32, New York, NY, USA. Association for Computing Machinery.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in Indo-European languages", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Mandl", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandip", |
| "middle": [], |
| "last": "Modha", |
| "suffix": "" |
| }, |
| { |
| "first": "Prasenjit", |
| "middle": [], |
| "last": "Majumder", |
| "suffix": "" |
| }, |
| { |
| "first": "Daksh", |
| "middle": [], |
| "last": "Patel", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohana", |
| "middle": [], |
| "last": "Dave", |
| "suffix": "" |
| }, |
| { |
| "first": "Chintak", |
| "middle": [], |
| "last": "Mandlia", |
| "suffix": "" |
| }, |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Patel", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 11th Forum for Information Retrieval Evaluation, FIRE '19", |
| "volume": "", |
| "issue": "", |
| "pages": "14--17", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3368567.3368584" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Mandl, Sandip Modha, Prasenjit Majumder, Daksh Patel, Mohana Dave, Chintak Mandlia, and Aditya Patel. 2019. Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in Indo-European languages. In Proceedings of the 11th Forum for Information Retrieval Evaluation, FIRE '19, pages 14-17, New York, NY, USA. Association for Computing Machinery.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "GloVe: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/D14-1162" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", |
| "authors": [ |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval)", |
| "authors": [ |
| { |
| "first": "Marcos", |
| "middle": [], |
| "last": "Zampieri", |
| "suffix": "" |
| }, |
| { |
| "first": "Shervin", |
| "middle": [], |
| "last": "Malmasi", |
| "suffix": "" |
| }, |
| { |
| "first": "Preslav", |
| "middle": [], |
| "last": "Nakov", |
| "suffix": "" |
| }, |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Rosenthal", |
| "suffix": "" |
| }, |
| { |
| "first": "Noura", |
| "middle": [], |
| "last": "Farra", |
| "suffix": "" |
| }, |
| { |
| "first": "Ritesh", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "75--86", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S19-2010" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 75-86, Minneapolis, Minnesota, USA. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Overview of our method.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Visualization of train data.", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Visualization of validation data.", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Loss.", |
| "uris": null |
| }, |
| "TABREF1": { |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null, |
| "text": "The detailed architecture of our model used for the multiclass classification of embeddings." |
| }, |
| "TABREF3": { |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null, |
| "text": "The model descriptions and the results arranged in descending order of test F1 score. Here P r stands for preprocessing." |
| } |
| } |
| } |
| } |