| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:17:31.473831Z" |
| }, |
| "title": "MSR India at SemEval-2020 Task 9: Multilingual Models can do Code-Mixing too", |
| "authors": [ |
| { |
| "first": "Anirudh", |
| "middle": [], |
| "last": "Srinivasan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Microsoft Research", |
| "location": { |
| "country": "India" |
| } |
| }, |
| "email": "anirudhsriniv@gmail.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper, we present our system for the SemEval 2020 task on code-mixed sentiment analysis. Our system makes use of large transformer-based multilingual embeddings like mBERT. Recent work has shown that these models possess the ability to solve code-mixed tasks in addition to their originally demonstrated cross-lingual abilities. We evaluate the stock versions of these models for the sentiment analysis task and also show that their performance can be improved by using unlabelled code-mixed data. Our submission (username Genius1237) achieved the second rank on the English-Hindi subtask with an F1 score of 0.726.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper, we present our system for the SemEval 2020 task on code-mixed sentiment analysis. Our system makes use of large transformer-based multilingual embeddings like mBERT. Recent work has shown that these models possess the ability to solve code-mixed tasks in addition to their originally demonstrated cross-lingual abilities. We evaluate the stock versions of these models for the sentiment analysis task and also show that their performance can be improved by using unlabelled code-mixed data. Our submission (username Genius1237) achieved the second rank on the English-Hindi subtask with an F1 score of 0.726.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The task of identifying sentiment from text is extremely important in this age where large volumes of text content are being consumed via social media. The task becomes even more interesting when it comes to bilingual communities as these communities exhibit the phenomenon of code-mixing online (Rijhwani et al., 2017) .", |
| "cite_spans": [ |
| { |
| "start": 296, |
| "end": 319, |
| "text": "(Rijhwani et al., 2017)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Existing approaches to tackling this problem have mainly been based on statistical methods (Vilares et al., 2016; Patra et al., 2018) . These methods have used features like n-gram counts and TF-IDF vectors along with a linear classifier. There have been very few approaches to this problem using deep learning as the amount of labelled code-mixed data available has always been quite low. Methods like the one in Pratapa et al. (2018b) train word embeddings using unlabelled code-mixed data, the availability of which is not as problematic as labelled data, and use these embeddings along with a recurrent neural network-based model. Recent advancements in natural language processing have shown that large transformer-based models like BERT (Devlin et al., 2019) , when pre-trained on large corpora, are easily adaptable for downstream tasks with small datasets. These models even perform well in a cross-lingual manner (Conneau et al., 2018) when pre-trained on corpora spanning multiple languages. Our experiments show that these multilingual models perform well even on code-mixing tasks, having had no exposure to any code-mixing during pre-training. We use such a system to solve the code-mixed sentiment analysis problem. We also show that its performance can be improved by using a combination of generated and real code-mixed text.", |
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 113, |
| "text": "(Vilares et al., 2016;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 114, |
| "end": 133, |
| "text": "Patra et al., 2018)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 414, |
| "end": 436, |
| "text": "Pratapa et al. (2018b)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 743, |
| "end": 764, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 922, |
| "end": 944, |
| "text": "(Conneau et al., 2018)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of the paper is organized as follows. Section 2 describes the dataset for the task and the pre-processing applied to it. Section 3 talks about the different systems we evaluated, with Section 3.3 in particular going into how we improved the multilingual models using code-mixed data. Section 4 describes the performance of the different models and Section 5 concludes our discussion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The details about the datasets (Patwa et al., 2020) for both the English-Hindi (En-Hi) and English-Spanish (En-Es) tasks are described in Table 1 . The dataset consists of tweets where the Hindi is written in the Roman script. We make use of the language identification tool by Gella et al. (2014) to identify the Romanized Hindi sections and transliterate them to Devanagari using the Bing Translator API 1 . The language tags provided along with the data are not used. No other pre-processing is done to the data.", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 51, |
| "text": "(Patwa et al., 2020)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 278, |
| "end": 297, |
| "text": "Gella et al. (2014)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 138, |
| "end": 145, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset and Preprocessing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Figure 1 describes the model used for sentiment analysis. The model is a classification model that comprises a pretrained transformer-based multilingual embedding (like BERT) and a linear layer acting as a classification head. The embedding takes in a tokenized sentence and outputs a single embedding for that sentence. This embedding is then run through the linear layer that outputs scores for each of the 3 classes. The entire system was implemented using the Huggingface Transformers library (Wolf et al., 2019) . We experimented with different models for the embedding. We also experimented with different pooling techniques that are used to obtain the sentence embedding and these are detailed below. Finally, as a baseline, we report the results from the method in Pratapa et al. (2018b) , using Word2vec embeddings trained on code-mixed data along with a BiLSTM.", |
| "cite_spans": [ |
| { |
| "start": 497, |
| "end": 516, |
| "text": "(Wolf et al., 2019)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 773, |
| "end": 795, |
| "text": "Pratapa et al. (2018b)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "System Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Multilingual BERT (mBERT) (Devlin et al., 2019) is a transformer-based model that is pre-trained on a corpus comprising 104 languages. This performs well on cross-lingual tasks like XNLI and this was taken as our baseline model. A more recent model is XLM-Roberta (XLM-R) (Conneau et al., 2019) and this has been shown to outperform BERT on many cross-lingual tasks. This differs from BERT in the type of tokenization it uses and the amount of data it is pre-trained on. Table 2 contains a list of differences between the two models. We use the bert-base-multilingual-cased model for BERT and the xlm-roberta-base model for XLM-R.", |
| "cite_spans": [ |
| { |
| "start": 26, |
| "end": 47, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 272, |
| "end": 294, |
| "text": "(Conneau et al., 2019)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 471, |
| "end": 478, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Multilingual Embeddings", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The aforementioned multilingual models output one embedding per input token. These need to be pooled together to obtain a sentence embedding to use for the sequence classification task. There have been multiple works proposing different methods to obtain a sentence embedding from BERT (Reimers and Gurevych, 2019; Wang and Kuo, 2020) . The two most popular (and simplest) methods are performing average pooling over the embeddings of every token or using the embedding of the first token ([CLS] token in case of BERT, <s> in case of XLM-R). We evaluate both of these methods and report the performance of each.", |
| "cite_spans": [ |
| { |
| "start": 286, |
| "end": 314, |
| "text": "(Reimers and Gurevych, 2019;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 315, |
| "end": 334, |
| "text": "Wang and Kuo, 2020)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence Embedding Technique", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "There have been multiple works proposing techniques to create domain specific versions of models like BERT (Sun et al., 2019; Alsentzer et al., 2019) . Khanuja et al. (2020) showed that when models like mBERT are finetuned on synthetic and non-synthetic code-mixed data, they perform much better on downstream code-mixed tasks. Along these lines, we finetune both mBERT and XLM-R with code-mixed data on the masked language modeling task. We follow a 2-stage curriculum, first finetuning on a large corpus of 2 million generated (synthetic) code-mixed sentences and then with a smaller corpus of 90,000 real (non-synthetic) code-mixed sentences. The curriculum followed and synthetic sentences generated are based on the technique in Pratapa et al. (2018a) . We create one model each for En-Es and En-Hi, finetuned on code-mixed data from that pair. We call these Modified mBERT and Modified XLM-R.", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 125, |
| "text": "(Sun et al., 2019;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 126, |
| "end": 149, |
| "text": "Alsentzer et al., 2019)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 152, |
| "end": 173, |
| "text": "Khanuja et al. (2020)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 734, |
| "end": 756, |
| "text": "Pratapa et al. (2018a)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Finetuning Multilingual Embeddings on Code-Mixed Data", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The results are presented in Tables 3 and 4 . Each table contains F1 scores averaged over 5 different seeds. For all the runs, a batch size of 64 was used along with the Adam optimizer with a learning rate of 5e-5. Each batch was made to have an equal number of samples from all 3 classes. Training was performed for 10 epochs. Right away, we are able to observe that the stock versions of mBERT and XLM-R, which were not exposed to any form of code-mixing during their pre-training, show impressive F1 scores. This is discussed further in Section 4.2. We present an analysis of the sentence embedding techniques first.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 29, |
| "end": 43, |
| "text": "Tables 3 and 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Both the sentence embedding methods experimented with are shown as separate columns in Tables 3 and 4 . Using average pooling does bring in improvements in some cases, mainly on the Dev sets, but the corresponding Test set numbers are not better. The embedding of the first token ([CLS]/<s>) in the final layer is computed as a weighted sum over the embeddings of each of the tokens of the n \u2212 1 st layer. Given such a mechanism, the embedding of the first token may be able to capture enough information over all the tokens of the sentence and is able to perform as well as the average pooling method for a simple sequence classification task. Our results are in line with the results in Wang and Kuo (2020) , where most simple downstream tasks show little difference between the two embedding methods, and only more complex sentence similarity or probing tasks show average pooling performing better. ", |
| "cite_spans": [ |
| { |
| "start": 690, |
| "end": 709, |
| "text": "Wang and Kuo (2020)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 87, |
| "end": 102, |
| "text": "Tables 3 and 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sentence Embedding Methods", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "That both mBERT and XLM-R perform well on these tasks is impressive. Finetuning 2 these models with code-mixed data improves the performance of the stock models. We observe an improvement in almost all cases, ranging from 1-5%. Our results resonate with those in Khanuja et al. (2020) , suggesting that most code-mixed tasks can be solved by simply using multilingual embeddings like mBERT, finetuning them on any available code-mixed data if better performance is needed.", |
| "cite_spans": [ |
| { |
| "start": 263, |
| "end": 284, |
| "text": "Khanuja et al. (2020)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Finetuning on Code-Mixed Data", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We take the best-performing model (on the test set this is Stock XLM-R) for both tasks and analyse the class-wise precision, recall and F1-scores. These are depicted in Tables 5 and 6 . Given that training was with data balanced across the 3 classes, similar performance across them is expected. This is observed in the En-Hi task, with all 3 classes having precision and recall within a small range. Similar numbers are observed between the dev and test sets too. However, when it comes to the En-Es test set, there is a large gap between the classes. The precision value for the neutral class is extremely low, and this drags down the overall F1 scores. Interestingly, this gap in scores is not present on the dev set, suggesting that there is some aspect of the test set that the model is unable to learn from the train set during training.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 169, |
| "end": 183, |
| "text": "Tables 5 and 6", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Class-Wise Performance Analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In this paper, we present our system for the SemEval 2020 task on code-mixed sentiment analysis. We make use of multilingual models like mBERT and show that they work well for code-mixing tasks. The best performance is extracted from these models by finetuning them on code-mixed data and using this version instead of their stock versions. We also find that for simple sequence classification tasks, the choice of sentence embedding technique does not have a significant impact on the result. There are multiple paths for further exploration of this work. While finetuning mBERT on code-mixed data, we have created one model per language pair and used a relatively small amount of data (compared to the amount of data BERT is pretrained on). Both of these could be looked into, creating a single model for multiple language pairs, and using much more data for this purpose. In this process, one may be able to obtain a universal model that works for a large number of code-mixed pairs in addition to the large number of languages that mBERT already supports.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "https://aka.ms/translatordevdoc", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "MLM finetuning", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Monojit Choudhury and Sebastin Santy for their feedback during the model evaluation process and Simran Khanuja for setting up the model training process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Publicly available clinical BERT embeddings", |
| "authors": [ |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Alsentzer", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Murphy", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Boag", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Hung", |
| "middle": [], |
| "last": "Weng", |
| "suffix": "" |
| }, |
| { |
| "first": "Di", |
| "middle": [], |
| "last": "Jindi", |
| "suffix": "" |
| }, |
| { |
| "first": "Tristan", |
| "middle": [], |
| "last": "Naumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Mc-Dermott", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "72--78", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei-Hung Weng, Di Jindi, Tristan Naumann, and Matthew Mc- Dermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA, June. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "XNLI: Evaluating cross-lingual sentence representations", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruty", |
| "middle": [], |
| "last": "Rinott", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Lample", |
| "suffix": "" |
| }, |
| { |
| "first": "Adina", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Bowman", |
| "suffix": "" |
| }, |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2475--2485", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium, October- November. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "ye word kis lang ka hai bhai?\" testing the limits of word level language identification", |
| "authors": [ |
| { |
| "first": "Spandana", |
| "middle": [], |
| "last": "Gella", |
| "suffix": "" |
| }, |
| { |
| "first": "Kalika", |
| "middle": [], |
| "last": "Bali", |
| "suffix": "" |
| }, |
| { |
| "first": "Monojit", |
| "middle": [], |
| "last": "Choudhury", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 11th International Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "368--377", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Spandana Gella, Kalika Bali, and Monojit Choudhury. 2014. \"ye word kis lang ka hai bhai?\" testing the limits of word level language identification. In Proceedings of the 11th International Conference on Natural Language Processing, pages 368-377, Goa, India, December. NLP Association of India.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "GLUECoS: An evaluation benchmark for code-switched NLP", |
| "authors": [ |
| { |
| "first": "Simran", |
| "middle": [], |
| "last": "Khanuja", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandipan", |
| "middle": [], |
| "last": "Dandapat", |
| "suffix": "" |
| }, |
| { |
| "first": "Anirudh", |
| "middle": [], |
| "last": "Srinivasan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sunayana", |
| "middle": [], |
| "last": "Sitaram", |
| "suffix": "" |
| }, |
| { |
| "first": "Monojit", |
| "middle": [], |
| "last": "Choudhury", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "3575--3585", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simran Khanuja, Sandipan Dandapat, Anirudh Srinivasan, Sunayana Sitaram, and Monojit Choudhury. 2020. GLUECoS: An evaluation benchmark for code-switched NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3575-3585, Online, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
| "authors": [ |
| { |
| "first": "Jinhyuk", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Wonjin", |
| "middle": [], |
| "last": "Yoon", |
| "suffix": "" |
| }, |
| { |
| "first": "Sungdong", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Donghyeon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Sunkyu", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Chan", |
| "middle": [], |
| "last": "Ho So", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaewoo", |
| "middle": [], |
| "last": "Kang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Bioinformatics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, Sep.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Sentiment analysis of code-mixed Indian languages: An overview of SAIL code-mixed shared task @ICON-2017", |
| "authors": [ |
| { |
| "first": "Braja", |
| "middle": [ |
| "Gopal" |
| ], |
| "last": "Patra", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipankar", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Amitava", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Braja Gopal Patra, Dipankar Das, and Amitava Das. 2018. Sentiment analysis of code-mixed indian languages: An overview of sail code-mixed shared task @icon-2017.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Semeval-2020 task 9: Overview of sentiment analysis of code-mixed tweets", |
| "authors": [ |
| { |
| "first": "Parth", |
| "middle": [], |
| "last": "Patwa", |
| "suffix": "" |
| }, |
| { |
| "first": "Gustavo", |
| "middle": [], |
| "last": "Aguilar", |
| "suffix": "" |
| }, |
| { |
| "first": "Sudipta", |
| "middle": [], |
| "last": "Kar", |
| "suffix": "" |
| }, |
| { |
| "first": "Suraj", |
| "middle": [], |
| "last": "Pandey", |
| "suffix": "" |
| }, |
| { |
| "first": "Pykl", |
| "middle": [], |
| "last": "Srinivas", |
| "suffix": "" |
| }, |
| { |
| "first": "Bj\u00f6rn", |
| "middle": [], |
| "last": "Gamb\u00e4ck", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanmoy", |
| "middle": [], |
| "last": "Chakraborty", |
| "suffix": "" |
| }, |
| { |
| "first": "Thamar", |
| "middle": [], |
| "last": "Solorio", |
| "suffix": "" |
| }, |
| { |
| "first": "Amitava", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Parth Patwa, Gustavo Aguilar, Sudipta Kar, Suraj Pandey, Srinivas PYKL, Bj\u00f6rn Gamb\u00e4ck, Tanmoy Chakraborty, Thamar Solorio, and Amitava Das. 2020. Semeval-2020 task 9: Overview of sentiment analysis of code-mixed tweets. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Language modeling for code-mixing: The role of linguistic theory based synthetic data", |
| "authors": [ |
| { |
| "first": "Adithya", |
| "middle": [], |
| "last": "Pratapa", |
| "suffix": "" |
| }, |
| { |
| "first": "Gayatri", |
| "middle": [], |
| "last": "Bhat", |
| "suffix": "" |
| }, |
| { |
| "first": "Monojit", |
| "middle": [], |
| "last": "Choudhury", |
| "suffix": "" |
| }, |
| { |
| "first": "Sunayana", |
| "middle": [], |
| "last": "Sitaram", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandipan", |
| "middle": [], |
| "last": "Dandapat", |
| "suffix": "" |
| }, |
| { |
| "first": "Kalika", |
| "middle": [], |
| "last": "Bali", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1543--1553", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adithya Pratapa, Gayatri Bhat, Monojit Choudhury, Sunayana Sitaram, Sandipan Dandapat, and Kalika Bali. 2018a. Language modeling for code-mixing: The role of linguistic theory based synthetic data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1543-1553, Melbourne, Australia, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Word embeddings for code-mixed language processing", |
| "authors": [ |
| { |
| "first": "Adithya", |
| "middle": [], |
| "last": "Pratapa", |
| "suffix": "" |
| }, |
| { |
| "first": "Monojit", |
| "middle": [], |
| "last": "Choudhury", |
| "suffix": "" |
| }, |
| { |
| "first": "Sunayana", |
| "middle": [], |
| "last": "Sitaram", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "3067--3072", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adithya Pratapa, Monojit Choudhury, and Sunayana Sitaram. 2018b. Word embeddings for code-mixed language processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3067-3072, Brussels, Belgium, October-November. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Sentence-BERT: Sentence embeddings using Siamese BERT-networks", |
| "authors": [ |
| { |
| "first": "Nils", |
| "middle": [], |
| "last": "Reimers", |
| "suffix": "" |
| }, |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "3982--3992", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China, November. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Estimating code-switching on twitter with a novel generalized word-level language detection technique", |
| "authors": [ |
| { |
| "first": "Shruti", |
| "middle": [], |
| "last": "Rijhwani", |
| "suffix": "" |
| }, |
| { |
| "first": "Royal", |
| "middle": [], |
| "last": "Sequiera", |
| "suffix": "" |
| }, |
| { |
| "first": "Monojit", |
| "middle": [], |
| "last": "Choudhury", |
| "suffix": "" |
| }, |
| { |
| "first": "Kalika", |
| "middle": [], |
| "last": "Bali", |
| "suffix": "" |
| }, |
| { |
| "first": "Chandra Shekhar", |
| "middle": [], |
| "last": "Maddila", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1971--1982", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shruti Rijhwani, Royal Sequiera, Monojit Choudhury, Kalika Bali, and Chandra Shekhar Maddila. 2017. Estimat- ing code-switching on twitter with a novel generalized word-level language detection technique. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1971-1982, Vancouver, Canada, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "How to fine-tune bert for text classification?", |
| "authors": [ |
| { |
| "first": "Chi", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Xipeng", |
| "middle": [], |
| "last": "Qiu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yige", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuanjing", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification?", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "EN-ES-CS: An English-Spanish codeswitching twitter corpus for multilingual sentiment analysis", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Vilares", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [ |
| "A" |
| ], |
| "last": "Alonso", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "G\u00f3mez-Rodr\u00edguez", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", |
| "volume": "", |
| "issue": "", |
| "pages": "4149--4153", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Vilares, Miguel A. Alonso, and Carlos G\u00f3mez-Rodr\u00edguez. 2016. EN-ES-CS: An English-Spanish code- switching twitter corpus for multilingual sentiment analysis. In Proceedings of the Tenth International Confer- ence on Language Resources and Evaluation (LREC'16), pages 4149-4153, Portoro\u017e, Slovenia, May. European Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Sbert-wk: A sentence embedding method by dissecting bert-based word models", |
| "authors": [ |
| { |
| "first": "Bin", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "C.", |
| "middle": [ |
| "C.", |
| "Jay" |
| ], |
| "last": "Kuo", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bin Wang and C. C. Jay Kuo. 2020. Sbert-wk: A sentence embedding method by dissecting bert-based word models.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Huggingface's transformers: State-of-the-art natural language processing", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Delangue", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Moi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierric", |
| "middle": [], |
| "last": "Cistac", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rault", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9mi", |
| "middle": [], |
| "last": "Louf", |
| "suffix": "" |
| }, |
| { |
| "first": "Morgan", |
| "middle": [], |
| "last": "Funtowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Jamie", |
| "middle": [], |
| "last": "Brew", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"2\">Language Train Dev Test</td></tr><tr><td>En-Es</td><td>12002 2998 3789</td></tr><tr><td>En-Hi</td><td>14000 3000 3000</td></tr></table>", |
| "num": null, |
| "html": null, |
| "text": "The datasets comprise entirely of tweets. The English-Hindi" |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "content": "<table><tr><td/><td colspan=\"2\">: Dataset details</td></tr><tr><td>Feature</td><td>mBERT</td><td>XLM-R</td></tr><tr><td colspan=\"2\">Tokenization WordPiece</td><td>SPM</td></tr><tr><td>Languages</td><td>104</td><td>100</td></tr><tr><td>Vocab</td><td>30k</td><td>250k</td></tr><tr><td>Num. Layers</td><td>12</td><td>12</td></tr><tr><td>Params</td><td>110M</td><td>270M</td></tr></table>", |
| "num": null, |
| "html": null, |
| "text": "" |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "content": "<table/>", |
| "num": null, |
| "html": null, |
| "text": "Model Differences" |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"4\">: F1 scores on En-Hi Dataset</td><td/></tr><tr><td colspan=\"5\">Model/Sent. Embedding First Token Avg. Pooling</td></tr><tr><td/><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td></tr><tr><td>Word2vec + BiLSTM</td><td>54.50</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Stock mBERT</td><td>60.06</td><td colspan=\"2\">-60.31</td><td>-</td></tr><tr><td>Modified mBERT</td><td colspan=\"3\">60.66 63.73 61.94</td><td>-</td></tr><tr><td>Stock XLM-R</td><td colspan=\"3\">57.45 68.44 61.23</td><td>-</td></tr><tr><td>Modified XLM-R</td><td>62.00</td><td colspan=\"2\">-56.84</td><td>-</td></tr></table>", |
| "num": null, |
| "html": null, |
| "text": "" |
| }, |
| "TABREF5": { |
| "type_str": "table", |
| "content": "<table/>", |
| "num": null, |
| "html": null, |
| "text": "F1 scores on En-Es Dataset a" |
| }, |
| "TABREF7": { |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"3\">Class/Measure Precision Recall</td><td>F1</td></tr><tr><td>Dev</td><td/><td/></tr><tr><td>Positive</td><td>0.67</td><td colspan=\"2\">0.61 0.64</td></tr><tr><td>Neutral</td><td>0.46</td><td colspan=\"2\">0.40 0.43</td></tr><tr><td>Negative</td><td>0.43</td><td colspan=\"2\">0.66 0.52</td></tr><tr><td>Test</td><td/><td/></tr><tr><td>Positive</td><td>0.92</td><td colspan=\"2\">0.64 0.76</td></tr><tr><td>Neutral</td><td>0.09</td><td colspan=\"2\">0.61 0.16</td></tr><tr><td>Negative</td><td>0.53</td><td colspan=\"2\">0.35 0.42</td></tr><tr><td>: En-Hi Task: Class-wise performance</td><td/><td/></tr><tr><td>with Stock XLM-R</td><td/><td/></tr></table>", |
| "num": null, |
| "html": null, |
| "text": "" |
| } |
| } |
| } |
| } |