{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:11:57.255039Z"
},
"title": "IIITK@LT-EDI-EACL2021: Hope Speech Detection for Equality, Diversity, and Inclusion in Tamil , Malayalam and English",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Kumar Ghanghor",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Information Technology Kottayam",
"location": {}
},
"email": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Ponnusamy",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Prasanna",
"middle": [
"Kumar"
],
"last": "Kumaresan",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ULTRA",
"location": {
"settlement": "Madurai",
"country": "India"
}
},
"email": ""
},
{
"first": "Sajeetha",
"middle": [],
"last": "Thavareesan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Eastern University",
"location": {
"country": "Sri Lanka"
}
},
"email": ""
},
{
"first": "Bharathi",
"middle": [],
"last": "Raja Chakravarthi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Ireland",
"location": {
"settlement": "Galway"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the IIITK's team submissions to the hope speech detection for equality, diversity and inclusion in Dravidian languages shared task organized by LT-EDI 2021 workshop@EACL 2021. We have used the transformer-based pretrained models along with the customized versions of those models with custom loss functions. Our best configurations for the shared tasks achieve weighted F1 scores of 0.60 for Tamil, 0.83 for Malayalam, and 0.93 for English. We have secured ranks of 4, 3, 2 in Tamil, Malayalam and English respectively. We have open-sourced our code implementations for all the models across both the tasks on GitHub 1 .",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the IIITK's team submissions to the hope speech detection for equality, diversity and inclusion in Dravidian languages shared task organized by LT-EDI 2021 workshop@EACL 2021. We have used the transformer-based pretrained models along with the customized versions of those models with custom loss functions. Our best configurations for the shared tasks achieve weighted F1 scores of 0.60 for Tamil, 0.83 for Malayalam, and 0.93 for English. We have secured ranks of 4, 3, 2 in Tamil, Malayalam and English respectively. We have open-sourced our code implementations for all the models across both the tasks on GitHub 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "According to Wikipedia hope is being defined as an optimistic state of mind that is based on the expectation of outcomes with respect to events and circumstances in one's life or the world at large. The hope speech detection shared task 2 organized by LT-EDI aimed to detect hope speeches in the given corpus for English, Tamil, and Malayalam (Chakravarthi and Muralidaran, 2021) . The data set has been gathered from some social media remarks. We participated in this task given a social media remarks in hope speech, frameworks need to characterize if a post is hope speech or not.",
"cite_spans": [
{
"start": 313,
"end": 379,
"text": "English, Tamil, and Malayalam (Chakravarthi and Muralidaran, 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Tamil and Malayalam (ISO 639-3: tam) belong to same family. Tamil was the first to be listed as a classical language of India, one of 22 scheduled languages in the Constitution of India, also official language of Tamil Nadu, Puducherry, Singapore and Sri Lanka and is one of the world's longest-surviving classical languages (Norman, 1977; Stein, 1977; Hart III, 2015) . The oldest epigraphic documents discovered date from about the 6th century BC on pottery, rock edicts and hero blocks. Over 55 percent of the epigraphic inscriptions discovered by the Archaeological Survey of India (about 55,000) are in the Tamil language (Maloney, 1970; Abraham, 2003) . A Tamil prayer book in ancient Tamil script called Thambiran Vanakkam was written by Portuguese Christian missionaries in 1578, thereby rendering Tamil the first Indian language to be printed and published (Balachandran, 2005) . Malayalam split from Tamil during 16th century by Thunchaththu Ramanujan Ezhuthachan until then it was west coast dialect of Tamil (Menon, 1938; Steever, 1998) .",
"cite_spans": [
{
"start": 325,
"end": 339,
"text": "(Norman, 1977;",
"ref_id": "BIBREF31"
},
{
"start": 340,
"end": 352,
"text": "Stein, 1977;",
"ref_id": "BIBREF38"
},
{
"start": 353,
"end": 368,
"text": "Hart III, 2015)",
"ref_id": "BIBREF18"
},
{
"start": 627,
"end": 642,
"text": "(Maloney, 1970;",
"ref_id": "BIBREF27"
},
{
"start": 643,
"end": 657,
"text": "Abraham, 2003)",
"ref_id": "BIBREF0"
},
{
"start": 866,
"end": 886,
"text": "(Balachandran, 2005)",
"ref_id": "BIBREF2"
},
{
"start": 1020,
"end": 1033,
"text": "(Menon, 1938;",
"ref_id": "BIBREF29"
},
{
"start": 1034,
"end": 1048,
"text": "Steever, 1998)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Over time various methodologies are being proposed by the researchers throughout the Natural Language Processing (NLP) community for building better textual analysis systems. Solving the text classification problem has been improvised throughout by building better architectures and better representation techniques for texts. The community has also benefitted by borrowing the ideas from other domains like computer vision and incorporating those in these systems which have given promising results. Initially, the models used to deal with the Bag Of Words (BOW) representations, then came the ideas of lemmatization and stemming, which helped in improving the representation techniques further. Then around the early 2010s word embeddings were proposed by (Mikolov et al., 2013) .",
"cite_spans": [
{
"start": 758,
"end": 780,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The NLP domain has also observed many architectural innovations which have further pushed the performances to give state of the art (SOTA) results. Some of them are the LSTMs (Hochreiter and Schmidhuber, 1997) , BiLSTMs (Ghaeini et al., 2018) , GRUs (Chung et al., 2014) and then the mighty transformers (Vaswani et al., 2017) . The introduction of transformers changed the entire land-scape, and the models built upon the transformer architecture are consistently pushing the results on the GLUE (Wang et al., 2019) benchmarks. There have been instances where the researchers have tried to incorporate the architectural innovations from different domains to NLP. In this paper, we have tried several architectures built upon the transformer architecture and have fine-tuned them on our task, details of which are being discussed in the later sections of the paper.",
"cite_spans": [
{
"start": 175,
"end": 209,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF20"
},
{
"start": 220,
"end": 242,
"text": "(Ghaeini et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 250,
"end": 270,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 304,
"end": 326,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 497,
"end": 516,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Hope speech detection is a relatively new field and an active area of research in the NLP domain. With the rise of the Internet and the social media platforms, people from various places around the globe are now connected through these platforms which have given them a common place to express their views. These views can often be specifically targeted to a particular person or community that can convey either of a positive, neutral or negative emotion to the concerned person or community. This makes it an important aspect to have systems that can automatically classify these content and filter out the ones having a negative impact on the society. In other words, this also means that we have systems that explicitly detect positive content and help it stay in the social good system. As defined earlier, hope speech can also be considered a piece of text conveying a positive sentiment to the reader of it. One of the first works on hope speech detection is done by Chakravarthi (2020a), Puranik et al. (2021) , and Palakodety et al. (2020) . Palakodety et al. (2020) used the polyglot word embeddings to have clusters of texts that conveys similar sentiments and obtained promising results. Hope speech detection can also be considered as the opposite task of hate speech detection.",
"cite_spans": [
{
"start": 996,
"end": 1017,
"text": "Puranik et al. (2021)",
"ref_id": "BIBREF36"
},
{
"start": 1024,
"end": 1048,
"text": "Palakodety et al. (2020)",
"ref_id": "BIBREF32"
},
{
"start": 1051,
"end": 1075,
"text": "Palakodety et al. (2020)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There has been a significant amount of work done for the hate speech detection task (Mandl et al., 2020; Chakravarthi et al., 2020c; Yasaswini et al., 2021; Ghanghor et al., 2021; Hegde et al., 2021) . It has even been a part of several conferences like SemEval 3 as challenges. However, these conferences mainly focused on datasets which were constructed for resource abundant languages. However, in the mid-2020s several competitions have been organized which centred around these underresourced languages. To build a system that performs well on under-resourced languages like Dra-vidian languages, several researchers have developed systems that have given noticeable results on these tasks (Hande et al., 2020; Chakravarthi, 2020b; Chakravarthi et al., 2020d,b,a) . HASOC-Dravidian-CodeMix-FIRE2020 participants used traditional ml methods like Naive Bayes Classifier, Support Vector Machines (SVMs) and Random Forest along with the pretrained transformers models like XLM-Roberta (XLMR) (Conneau et al., 2020) and BERT (Devlin et al., 2019) for the offensive content identification in code-mixed datasets (Tamil-English and Malayalam-English). (Arora, 2020) at HASOC-Dravidian-CodeMix-FIRE2020 used ULMFit (Howard and Ruder, 2018) to pretrain on a synthetically generated code-mixed dataset and then fine-tuned it to the downstream tasks of text classification. ",
"cite_spans": [
{
"start": 84,
"end": 104,
"text": "(Mandl et al., 2020;",
"ref_id": "BIBREF28"
},
{
"start": 105,
"end": 132,
"text": "Chakravarthi et al., 2020c;",
"ref_id": "BIBREF9"
},
{
"start": 133,
"end": 156,
"text": "Yasaswini et al., 2021;",
"ref_id": null
},
{
"start": 157,
"end": 179,
"text": "Ghanghor et al., 2021;",
"ref_id": null
},
{
"start": 180,
"end": 199,
"text": "Hegde et al., 2021)",
"ref_id": "BIBREF19"
},
{
"start": 695,
"end": 715,
"text": "(Hande et al., 2020;",
"ref_id": "BIBREF17"
},
{
"start": 716,
"end": 736,
"text": "Chakravarthi, 2020b;",
"ref_id": "BIBREF5"
},
{
"start": 737,
"end": 768,
"text": "Chakravarthi et al., 2020d,b,a)",
"ref_id": null
},
{
"start": 993,
"end": 1015,
"text": "(Conneau et al., 2020)",
"ref_id": null
},
{
"start": 1025,
"end": 1046,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 1150,
"end": 1163,
"text": "(Arora, 2020)",
"ref_id": "BIBREF1"
},
{
"start": 1212,
"end": 1236,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The analysis of the nature of texts has been one of the central tasks in NLP. Textual analysis can be defined as a separation of the texts into different classes based upon the underlying meaning they convey . The NLP domain has observed many advancements for solving the textual analysis problem. However, it remains an unsolved problem because of the linguistic diversity worldwide and the difficulties in expressing texts to a suitable format for feeding it into the textual analysis systems (Jose et al., 2020) . Over time, various methods have been proposed for representation of texts, ranging from Bag of Words, TF-IDF to word embeddings. The word embeddings were introduced with the Word2Vec model, which gives a vectorized representation for a word. After the introduction of the Word2Vec word embedding model, different word embedding techniques have been proposed throughout the NLP domain such as Glove (Pennington et al., 2014) , Doc2Vec (Le and Mikolov, 2014), Fasttext (Bojanowski et al., 2017) . Using these different kinds of word representations, various different models have been proposed for solving the textual analysis problem. These models consisted of the primitive machine learning models like Naive Bayes (NB), Logistic Regression (LR), Multinomial Naive Bayes (MNB), Support Vector Machines (SVMs). Apart from these models, various models based upon neural networks were also being used such as LSTMs, Bidirectional LSTMs, GRUs. However, the current State Of The Art (SOTA) models are based upon the transformer architecture. There are numerous models built upon the transformer architecture which were being trained on large corpora of texts and are available for fine-tuning to different downstream tasks like textual classification, question answering. These models based upon the transformer architecture uses their tokenizers for the conversion of texts into embeddings which are based upon their own vocabularies. 
One major problem faced with these models built upon the transformer architecture is that they are only available for high resourced languages like English, German, and Chinese. To use these models for under-resourced languages, the researchers came up with the idea of cross-lingual transfer learning, which means training a model on a high resourced language and then fine-tuning it on a downstream task. A separate benchmark known as XNLI (Conneau et al., 2018) was being made to evaluate the model's performances across multiple languages.",
"cite_spans": [
{
"start": 495,
"end": 514,
"text": "(Jose et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 915,
"end": 940,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF33"
},
{
"start": 984,
"end": 1009,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 2390,
"end": 2412,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "The competition organizers have used the SVMs, MNBs , Decision Trees and other machine learning models as the baseline models for the given datasets. So, we went with using the models built upon the transformer architecture while approaching the problem. We have used the hugging face 4 transformers library for our implementations and used the original versions of the models as well as their customized versions with different loss functions. We have used multilingual-cased BERT (mBERT-cased), XLM-Roberta (XLMR), IndicBERT (Kakwani et al., 4 https://huggingface.co/transformers/ 2020) , BERT-base-cased (BERT-cased) and BERTbase-uncased (BERT-uncased) models for our implementations. Pertaining to the large size of mBERT-cased and XLMR models we have used their customized versions as well by freezing the original model and stacking a fully connected layer of 512 neurons with a final layer having the same number of neurons as the number of our output classes. With this customized versions, we have used two different loss functions the Negative Log Likelihood (NLL) loss function with class weights, and the Sadice (Li et al., 2020) Loss function both of which were used to handle the data imbalance in the datasets. A pictorial representation of our customized architecture can be seen in the figure 1. Out of all the models mentioned above, mBERTcased, XLMR, and IndicBERT are multilingual models, and BERT-cased and BERT-uncased models are monolingual models. We defined our custom architecture apart from the original transformer models as being built upon the transformer models as the base unit. The output attention heads from the transformer layers are further connected to a 512 neuron fully connected (FC) layer which is finally connected to another fully connected layer having number of neurons same as the number of classes denoted by nc",
"cite_spans": [
{
"start": 527,
"end": 543,
"text": "(Kakwani et al.,",
"ref_id": null
},
{
"start": 544,
"end": 545,
"text": "4",
"ref_id": null
},
{
"start": 1124,
"end": 1141,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
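The class-weighted NLL loss used with the customized heads can be sketched in plain Python. This is a minimal, framework-agnostic sketch rather than the paper's released code: the function names and the inverse-frequency weighting scheme below are illustrative assumptions. In a PyTorch implementation this corresponds to applying `torch.nn.NLLLoss(weight=...)` to the log-softmax output of the stacked FC layers.

```python
import math

def log_softmax(logits):
    # Numerically stable log-softmax over one example's class logits.
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def weighted_nll(batch_logits, labels, class_weights):
    # Class-weighted negative log-likelihood, normalized by the summed
    # weights of the true classes (mirroring PyTorch's NLLLoss with a
    # `weight` tensor): mistakes on rare, heavily weighted classes cost more.
    total = wsum = 0.0
    for logits, y in zip(batch_logits, labels):
        logp = log_softmax(logits)
        total += -class_weights[y] * logp[y]
        wsum += class_weights[y]
    return total / wsum

def inverse_frequency_weights(labels, n_classes):
    # One common weighting choice (an assumption here, not the paper's
    # stated scheme): w_c = N / (K * count_c) for K classes.
    counts = [max(labels.count(c), 1) for c in range(n_classes)]
    n = len(labels)
    return [n / (n_classes * c) for c in counts]
```

For uniform logits over two classes the loss is log 2 regardless of the common scale of the weights, since the normalization cancels it.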
{
"text": "We have tried several different combinations of models discussed in the section 4 across the datasets of each language and have reported our results on the development set and the test set in the Table 2 and Table 3 respectively. The results are being reported in terms of weighted F1 scores as it was the evaluation measure being used by the competition organizers. We have used mBERT-cased, XLMR and IndicBERT as our models common across all the three datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 215,
"text": "Table 2 and Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
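Weighted F1 averages the per-class F1 scores, each weighted by the class's support (its number of true instances). A minimal reference implementation, assumed to match scikit-learn's `f1_score(..., average='weighted')` (the paper does not name a specific implementation):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    # Per-class F1 weighted by true-class support, then averaged.
    classes = set(y_true) | set(y_pred)
    support = Counter(y_true)
    total = 0.0
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += support[c] * f1  # classes never in y_true contribute 0
    return total / len(y_true)
```

Unlike macro F1, this average is dominated by the majority class, which matters for skewed hope-speech datasets like these.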
{
"text": "The original versions of these models as well as their customized versions, were also being used for finetuning purposes over the datasets. Attributing to the huge model size of mBERT-cased and XLMR we have also tried out their customized versions with the NLL loss and Sadice loss functions. Since the IndicBERT model is comparatively smaller than the other models, its original version was only being considered. Apart from these multilingual models two different monolingual models for the English dataset were also considered. Out of all the models, the original versions outperformed the customized versions of the models. For the Tamil dataset XLMR, mBERT-cased and IndicBERT gave similar results on the development dataset. However, mBERT-cased gave comparatively better performances than XLMR and IndicBERT on the test dataset. For the Malayalam dataset, XLMR, mBERT-cased and IndicBERT had almost equivalent performances on the development dataset, but mBERT-cased gave much better results on the test set. Surprisingly, XLMR performed even worse than the IndicBERT model on the test set for Malayalam. For the English dataset, apart from the XLMR, IndicBERT, mBERT-cased the BERT-cased and BERT-uncased versions were also being tried and almost each model performed equivalently on the development dataset as well as the test datasets. The superior performance of mBERT over the other two models can be attributed to the training strategy of mBERT. It employs zeroshot cross-lingual model transfer, in which taskspecific annotations in one language are used to fine-tune the model for evaluation in another language. A brief explaination of the multilingual nature of mBERT is being discussed in (Pires et al., 2019) . On the other hand XLMR although being trained over much more data and having the same training strategy as (Liu et al., 2019) was expected to perform better across the multilingual tasks but it hasn't. 
We hypothesize the reason behind the degradation in it's performance can be attributed to the code-mixed nature of our dataset in hand. Since the XLMR model was being trained over the Com-monCrawl data it could be possible that the data being utilised for the pretraining had very fewer instances of code-mixed data which thus leads to an overall inferior performance as compared to other models. The customized versions of these models were expected to address the skewness of the dataset but failed to do so. When inspecting these model's performances the reason for this performance degradation turned out to be the freezening of the base layers of the transformer models. The performance of these custom models can be further improved by having unfreezed layers which can further increase the performance of these models and can be considered for the future works on this task.",
"cite_spans": [
{
"start": 1706,
"end": 1726,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 1836,
"end": 1854,
"text": "(Liu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "We have presented the IIITK team's approach for the hope speech detection shared task organized by DravidianLangTech. Our approach consisted of using the existing pretrained models and finetuning their original as well as the custom versions on the datasets. Out of all the models, the mBERTcased model gave the best results for the Tamil and Malayalam datasets as 0.60 and 0.83 weighted F1 scores. For the English dataset, mBERT-cased and BERT-cased gave exactly similar results of 0.93 weighted F1 scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://semeval.github.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Chera, Chola, Pandya: Using archaeological evidence to identify the Tamil kingdoms of early historic South India. Asian Perspectives",
"authors": [
{
"first": "A",
"middle": [],
"last": "Shinu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Abraham",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "207--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shinu A Abraham. 2003. Chera, Chola, Pandya: Using archaeological evidence to identify the Tamil king- doms of early historic South India. Asian Perspec- tives, pages 207-223.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Gauravarora@hasoc-dravidiancodemix-fire2020: Pre-training ulmfit on synthetically generated code-mixed data for hate speech detection",
"authors": [
{
"first": "Gaurav",
"middle": [],
"last": "Arora",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaurav Arora. 2020. Gauravarora@hasoc-dravidian- codemix-fire2020: Pre-training ulmfit on syntheti- cally generated code-mixed data for hate speech de- tection.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Pioneers of tamil literature: Transition to modernity",
"authors": [
{
"first": "",
"middle": [],
"last": "Balachandran",
"suffix": ""
}
],
"year": 2005,
"venue": "Indian Literature",
"volume": "49",
"issue": "2",
"pages": "179--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Balachandran. 2005. Pioneers of tamil literature: Transition to modernity. Indian Literature, 49(2 (226):179-184.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "HopeEDI: A multilingual hope speech detection dataset for equality, diversity, and inclusion",
"authors": [
{
"first": "Chakravarthi",
"middle": [],
"last": "Bharathi Raja",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media",
"volume": "",
"issue": "",
"pages": "41--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi. 2020a. HopeEDI: A mul- tilingual hope speech detection dataset for equality, diversity, and inclusion. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Me- dia, pages 41-53, Barcelona, Spain (Online). Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Leveraging orthographic information to improve machine translation of under-resourced languages",
"authors": [
{
"first": "Chakravarthi",
"middle": [],
"last": "Bharathi Raja",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi. 2020b. Leveraging ortho- graphic information to improve machine translation of under-resourced languages. Ph.D. thesis, NUI Galway.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A sentiment analysis dataset for codemixed Malayalam-English",
"authors": [
{
"first": "Navya",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "Shardul",
"middle": [],
"last": "Jose",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Suryawanshi",
"suffix": ""
},
{
"first": "John",
"middle": [
"Philip"
],
"last": "Sherly",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mc-Crae",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)",
"volume": "",
"issue": "",
"pages": "177--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Navya Jose, Shardul Suryawanshi, Elizabeth Sherly, and John Philip Mc- Crae. 2020a. A sentiment analysis dataset for code- mixed Malayalam-English. In Proceedings of the 1st Joint Workshop on Spoken Language Technolo- gies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 177-184, Marseille, France. European Language Resources association.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclusion",
"authors": [
{
"first": "Vigneshwaran",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Muralidaran",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi and Vigneshwaran Mural- idaran. 2021. Findings of the shared task on Hope Speech Detection for Equality, Diversity, and Inclu- sion. In Proceedings of the First Workshop on Lan- guage Technology for Equality, Diversity and Inclu- sion. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Corpus creation for sentiment analysis in code-mixed Tamil-English text",
"authors": [
{
"first": "Vigneshwaran",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Muralidaran",
"suffix": ""
},
{
"first": "John",
"middle": [
"Philip"
],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mc-Crae",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)",
"volume": "",
"issue": "",
"pages": "202--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Vigneshwaran Murali- daran, Ruba Priyadharshini, and John Philip Mc- Crae. 2020b. Corpus creation for sentiment anal- ysis in code-mixed Tamil-English text. In Pro- ceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced lan- guages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 202-210, Marseille, France. European Language Re- sources association.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Overview of the Track on Sentiment Analysis for Dravidian Languages in Code-Mixed Text",
"authors": [
{
"first": "Ruba",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "Vigneshwaran",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Shardul",
"middle": [],
"last": "Muralidaran",
"suffix": ""
},
{
"first": "Navya",
"middle": [],
"last": "Suryawanshi",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Jose",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "Sherly",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mccrae",
"suffix": ""
}
],
"year": 2020,
"venue": "In Forum for Information Retrieval Evaluation",
"volume": "2020",
"issue": "",
"pages": "21--24",
"other_ids": {
"DOI": [
"10.1145/3441501.3441515"
]
},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Vigneshwaran Muralidaran, Shardul Suryawanshi, Navya Jose, Elizabeth Sherly, and John P. McCrae. 2020c. Overview of the Track on Sentiment Analy- sis for Dravidian Languages in Code-Mixed Text. In Forum for Information Retrieval Evaluation, FIRE 2020, page 21-24, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bilingual lexicon induction across orthographicallydistinct under-resourced Dravidian languages",
"authors": [
{
"first": "Navaneethan",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "Mihael",
"middle": [],
"last": "Rajasekaran",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Arcan",
"suffix": ""
},
{
"first": "Noel",
"middle": [
"E"
],
"last": "Mcguinness",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "O'connor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mccrae",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "57--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Navaneethan Ra- jasekaran, Mihael Arcan, Kevin McGuinness, Noel E. O'Connor, and John P. McCrae. 2020d. Bilingual lexicon induction across orthographically- distinct under-resourced Dravidian languages. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 57-69, Barcelona, Spain (Online). International Committee on Computational Linguistics (ICCL).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Xnli: Evaluating crosslingual sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Ruty Rinott, Ad- ina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating cross- lingual sentence representations.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Dr-bilstm: Dependent reading bidirectional lstm for natural language inference. Cite arxiv:1802.05577Comment: 18 pages",
"authors": [
{
"first": "Reza",
"middle": [],
"last": "Ghaeini",
"suffix": ""
},
{
"first": "Sadid",
"middle": [
"A"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Datla",
"suffix": ""
},
{
"first": "Joey",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kathy",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ashequl",
"middle": [],
"last": "Qadir",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Aaditya",
"middle": [],
"last": "Prakash",
"suffix": ""
},
{
"first": "Xiaoli",
"middle": [
"Z"
],
"last": "Fern",
"suffix": ""
},
{
"first": "Oladimeji",
"middle": [],
"last": "Farri",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reza Ghaeini, Sadid A. Hasan, Vivek Datla, Joey Liu, Kathy Lee, Ashequl Qadir, Yuan Ling, Aa- ditya Prakash, Xiaoli Z. Fern, and Oladimeji Farri. 2018. Dr-bilstm: Dependent reading bidirec- tional lstm for natural language inference. Cite arxiv:1802.05577Comment: 18 pages, Accepted as a long paper at NAACL HLT 2018.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Ruba Priyadharshini, and Bharathi Raja Chakravarthi. 2021. IIITK@DravidianLangTech-EACL2021: Offensive Language Identification and Meme Classification in Tamil, Malayalam and Kannada",
"authors": [
{
"first": "Nikhil Kumar",
"middle": [],
"last": "Ghanghor",
"suffix": ""
},
{
"first": "Parameswari",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Sajeetha",
"middle": [],
"last": "Thavareesan",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Kumar Ghanghor, Parameswari Krishna- murthy, Sajeetha Thavareesan, Ruba Priyad- harshini, and Bharathi Raja Chakravarthi. 2021. IIITK@DravidianLangTech-EACL2021: Offensive Language Identification and Meme Classification in Tamil, Malayalam and Kannada. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "KanCMD: Kannada CodeMixed dataset for sentiment analysis and offensive language detection",
"authors": [
{
"first": "Adeep",
"middle": [],
"last": "Hande",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adeep Hande, Ruba Priyadharshini, and Bharathi Raja Chakravarthi. 2020. KanCMD: Kannada CodeMixed dataset for sentiment analysis and offensive language detection. In Proceedings of the Third Workshop on Computational Modeling of Peo- ple's Opinions, Personality, and Emotion's in Social Media, pages 54-63, Barcelona, Spain (Online). Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Poets of the Tamil anthologies: Ancient poems of love and war",
"authors": [
{
"first": "George",
"middle": [
"L"
],
"last": "Hart",
"suffix": "III"
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George L Hart III. 2015. Poets of the Tamil antholo- gies: Ancient poems of love and war. Princeton Uni- versity Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "UVCE-IIITT@DravidianLangTech-EACL2021: Tamil Troll Meme Classification: You need to Pay more Attention",
"authors": [
{
"first": "Siddhanth",
"middle": [
"U"
],
"last": "Hegde",
"suffix": ""
},
{
"first": "Adeep",
"middle": [],
"last": "Hande",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Sajeetha",
"middle": [],
"last": "Thavareesan",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siddhanth U Hegde, Adeep Hande, Ruba Priyadharshini, Sajeetha Thavareesan, and Bharathi Raja Chakravarthi. 2021. UVCE- IIITT@DravidianLangTech-EACL2021: Tamil Troll Meme Classification: You need to Pay more Attention. In Proceedings of the First Workshop on Speech and Language Technologies for Dra- vidian Languages. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Survey of Current Datasets for Code-Switching Research",
"authors": [
{
"first": "Navya",
"middle": [],
"last": "Jose",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
},
{
"first": "Shardul",
"middle": [],
"last": "Suryawanshi",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Sherly",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "McCrae",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS)",
"volume": "",
"issue": "",
"pages": "136--141",
"other_ids": {
"DOI": [
"10.1109/ICACCS48705.2020.9074205"
]
},
"num": null,
"urls": [],
"raw_text": "Navya Jose, Bharathi Raja Chakravarthi, Shardul Suryawanshi, Elizabeth Sherly, and John P. McCrae. 2020. A Survey of Current Datasets for Code- Switching Research. In 2020 6th International Con- ference on Advanced Computing and Communica- tion Systems (ICACCS), pages 136-141.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages",
"authors": [
{
"first": "Divyanshu",
"middle": [],
"last": "Kakwani",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Satish",
"middle": [],
"last": "Golla",
"suffix": ""
},
{
"first": "Gokul",
"middle": [],
"last": "N.C.",
"suffix": ""
},
{
"first": "Avik",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [
"M"
],
"last": "Khapra",
"suffix": ""
},
{
"first": "Pratyush",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for In- dian Languages. In Findings of EMNLP.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc V. Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Dice loss for dataimbalanced nlp tasks",
"authors": [
{
"first": "Xiaoya",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaofei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yuxian",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Junjun",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoya Li, Xiaofei Sun, Yuxian Meng, Junjun Liang, Fei Wu, and Jiwei Li. 2020. Dice loss for data- imbalanced nlp tasks.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The beginnings of civilization in South India",
"authors": [
{
"first": "Clarence",
"middle": [],
"last": "Maloney",
"suffix": ""
}
],
"year": 1970,
"venue": "The Journal of Asian Studies",
"volume": "",
"issue": "",
"pages": "603--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clarence Maloney. 1970. The beginnings of civiliza- tion in South India. The Journal of Asian Studies, pages 603-616.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Mandl",
"suffix": ""
},
{
"first": "Sandip",
"middle": [],
"last": "Modha",
"suffix": ""
},
{
"first": "Anand",
"middle": [],
"last": "Kumar M",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": 2020,
"venue": "Forum for Information Retrieval Evaluation",
"volume": "2020",
"issue": "",
"pages": "29--32",
"other_ids": {
"DOI": [
"10.1145/3441501.3441517"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Mandl, Sandip Modha, Anand Kumar M, and Bharathi Raja Chakravarthi. 2020. Overview of the HASOC Track at FIRE 2020: Hate Speech and Offensive Language Identification in Tamil, Malay- alam, Hindi, English and German. In Forum for Information Retrieval Evaluation, FIRE 2020, page 29-32, New York, NY, USA. Association for Com- puting Machinery.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Ezuttaccan and his age",
"authors": [
{
"first": "Chelnat Achyuta",
"middle": [],
"last": "Menon",
"suffix": ""
}
],
"year": 1938,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelnat Achyuta Menon. 1938. Ezuttaccan and his age. Ph.D. thesis, SOAS University of London.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The Poems of Ancient Tamil: Their Milieu and Their Sanskrit Counterparts",
"authors": [
{
"first": "K",
"middle": [
"R"
],
"last": "Norman",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "KR Norman. 1977. The Poems of Ancient Tamil: Their Milieu and Their Sanskrit Counterparts.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Hope speech detection: A computational analysis of the voice of peace",
"authors": [
{
"first": "Shriphani",
"middle": [],
"last": "Palakodety",
"suffix": ""
},
{
"first": "Ashiqur",
"middle": [
"R"
],
"last": "Khudabukhsh",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shriphani Palakodety, Ashiqur R. KhudaBukhsh, and Jaime G. Carbonell. 2020. Hope speech detection: A computational analysis of the voice of peace.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1532-1543, Doha, Qatar. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "How multilingual is multilingual bert?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert?",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Named Entity Recognition for Code-Mixed Indian Corpus using Meta Embedding",
"authors": [
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
},
{
"first": "Mani",
"middle": [],
"last": "Vegupatti",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "McCrae",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS)",
"volume": "",
"issue": "",
"pages": "68--72",
"other_ids": {
"DOI": [
"10.1109/ICACCS48705.2020.9074379"
]
},
"num": null,
"urls": [],
"raw_text": "Ruba Priyadharshini, Bharathi Raja Chakravarthi, Mani Vegupatti, and John P. McCrae. 2020. Named Entity Recognition for Code-Mixed Indian Corpus using Meta Embedding. In 2020 6th International Conference on Advanced Computing and Communi- cation Systems (ICACCS), pages 68-72.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "IIITT@LT-EDI-EACL2021-Hope Speech Detection: There is always hope in Transformers",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Puranik",
"suffix": ""
},
{
"first": "Adeep",
"middle": [],
"last": "Hande",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Sajeetha",
"middle": [],
"last": "Thavareesan",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Puranik, Adeep Hande, Ruba Priyad- harshini, Sajeetha Thavareesan, and Bharathi Raja Chakravarthi. 2021. IIITT@LT-EDI-EACL2021- Hope Speech Detection: There is always hope in Transformers. In Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Introduction to the Dravidian languages",
"authors": [
{
"first": "Sanford",
"middle": [
"B"
],
"last": "Steever",
"suffix": ""
}
],
"year": 1998,
"venue": "The Dravidian languages",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanford B Steever. 1998. Introduction to the Dravidian languages. The Dravidian languages, 1:39.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Circulation and the historical geography of Tamil country",
"authors": [
{
"first": "Burton",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 1977,
"venue": "The journal of Asian studies",
"volume": "37",
"issue": "1",
"pages": "7--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burton Stein. 1977. Circulation and the historical geog- raphy of Tamil country. The journal of Asian studies, 37(1):7-26.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Sajeetha Thavareesan, and Bharathi Raja Chakravarthi. 2021. IIITT@DravidianLangTech-EACL2021: Transfer Learning for Offensive Language Detection in Dravidian Languages",
"authors": [
{
"first": "Konthala",
"middle": [],
"last": "Yasaswini",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Puranik",
"suffix": ""
},
{
"first": "Adeep",
"middle": [],
"last": "Hande",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Sajeetha",
"middle": [],
"last": "Thavareesan",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Konthala Yasaswini, Karthik Puranik, Adeep Hande, Ruba Priyadharshini, Sajeetha Thava- reesan, and Bharathi Raja Chakravarthi. 2021. IIITT@DravidianLangTech-EACL2021: Transfer Learning for Offensive Language Detection in Dravidian Languages. In Proceedings of the First Workshop on Speech and Language Technolo- gies for Dravidian Languages. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Custom architecture :",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Hope Speech EDI DatasetThe competition organizers have provided us with datasets (Chakravarthi, 2020a) for three different languages Tamil, Malayalam and English. Across each dataset we had three different classes Hope Speech, Non Hope Speech and not lang where lang can be either of Tamil, Malayalam or English depending upon the dataset we are dealing with. The train, dev and test set distributions of the dataset are as shown in the table 1."
},
"TABREF3": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Model</td></tr></table>",
"text": "Experiments with development dataset (in terms of weighted F1 scores)"
},
"TABREF4": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Experiments with test dataset (in terms of weighted F1 scores)"
}
}
}
}