ACL-OCL / Base_JSON /prefixS /json /semeval /2020.semeval-1.119.json
Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:14:34.388540Z"
},
"title": "HinglishNLP: Fine-tuned Language Models for Hinglish Sentiment Detection",
"authors": [
{
"first": "Meghana",
"middle": [],
"last": "Bhange",
"suffix": "",
"affiliation": {},
"email": "meghana@verloop.io"
},
{
"first": "Verloop",
"middle": [],
"last": "Io",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Nirant",
"middle": [],
"last": "Kasliwal",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Sentiment analysis for code-mixed social media text continues to be an under-explored area. This work adds two common approaches: fine-tuning large transformer models and sample efficient methods like ULMFiT (Howard and Ruder, 2018). Prior work demonstrates the efficacy of classical ML methods for polarity detection. Fine-tuned general-purpose language representation models, such as those of the BERT family are benchmarked along with classical machine learning and ensemble methods. We show that NB-SVM beats RoBERTa by 6.2% (relative) F1. The best performing model is a majority-vote ensemble which achieves an F1 of 0.707. The leaderboard submission was made under the codalab username nirantk, with F1 of 0.689.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Sentiment analysis for code-mixed social media text continues to be an under-explored area. This work adds two common approaches: fine-tuning large transformer models and sample efficient methods like ULMFiT (Howard and Ruder, 2018). Prior work demonstrates the efficacy of classical ML methods for polarity detection. Fine-tuned general-purpose language representation models, such as those of the BERT family are benchmarked along with classical machine learning and ensemble methods. We show that NB-SVM beats RoBERTa by 6.2% (relative) F1. The best performing model is a majority-vote ensemble which achieves an F1 of 0.707. The leaderboard submission was made under the codalab username nirantk, with F1 of 0.689.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Code-mixing or code-switching refers to the use of two or more languages or speech variants together (Contini-Morava, 1995) . This is commonly observed in informal conversations, especially those on social media, e.g. Twitter (Rudra et al., 2016) (Rijhwani et al., 2017) . While a small body of work does exist on code-mixing detection, this task focuses on polarity detection (Sentiment Analysis). We demonstrate the impressive performance of transfer learning for the task of sentiment detection in code-mixed context. We discuss limitations of existing deep learning pre-trained models which are trained on \"monolingual\" textwhich has sentences in only one language.",
"cite_spans": [
{
"start": 101,
"end": 123,
"text": "(Contini-Morava, 1995)",
"ref_id": "BIBREF3"
},
{
"start": 226,
"end": 246,
"text": "(Rudra et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 247,
"end": 270,
"text": "(Rijhwani et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we demonstrate how it is beneficial to fine-tune the language model (LM) for a code-mixed setting like Hinglish, when Hindi is written in the Roman script. The labelled data for the sentiment classifier is from the task paper - (Patwa et al., 2020) . It consists of a total of 17000 tweets. The sentiment labels are positive, negative, or neutral, and the code-mixed languages are English-Hindi. The train data after the split has 14k tweets. The validation and test data contain 3k tweets each.",
"cite_spans": [
{
"start": 242,
"end": 262,
"text": "(Patwa et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We used two primary datasets. The data used for fine-tuning LM is from Twitter stream. The tweet stream data was used which contains \u223c1.9 million Hinglish tweets. We manually sampled 3k tweets to verify what fraction of them are Hinglish. This dataset consists of \u223c86.5% Hinglish tweets. The 17k tweet Sentimix data (Patwa et al., 2020) was used to further fine-tune the sentiment classifier. The Sentimix data contains 14,000 tagged tweets marked as positive, negative and neutral for training, and 3,000 for development and testing each.",
"cite_spans": [
{
"start": 316,
"end": 336,
"text": "(Patwa et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "Pre-training LM require large datasets. In order to enable this, we gathered \u223c1.9 Million tweets from the Twitter 1% sample stream for the entire year of 2018. We curated a seed dictionary of Hinglish words and their spelling variants. Next, we calculate the Jaccard Index (Jaccard, 1912) between our seed dictionary and every tweet. For values above 0.6, we mark the tweet as \"Hinglish tweet\". This threshold value 0.6 was selected empirically, by evaluating Jaccard values for 200 tweets.",
"cite_spans": [
{
"start": 273,
"end": 288,
"text": "(Jaccard, 1912)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mining and Filtering Twitter Corpus",
"sec_num": "2.1"
},
{
"text": "Next, every token which is present in the tweet, but missing in our dictionary is marked as a Candidate token C t . We remove duplicates to get our set of unique candidate tokens C. For every unique C t in C, we manually review and add to our dictionary. A known limitation of this iterative-expansion dictionary based approach is that it starts out with a bias for smaller tweets with fewer total tokens. Hence, we repeated this exercise in batches of 10,000 tweets each -till we saw two batches of full 280 character tweets. We marked these tweets as \"highly likely\" to be Hinglish. These were roughly 160,000 tweets. A secondary split of roughly 380,000 tweets was marked as \"possibly\" Hinglish. Both of these were primarily used for training or fine-tuning the LM backbone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining and Filtering Twitter Corpus",
"sec_num": "2.1"
},
{
"text": "The \u223c1.9M tweets are composed of 3 different splits, with differing Hinglish percentage. The first split of 162K tweets is enriched to 86.5% Hinglish, with 13% tweets being empty, or non-Hinglish in other ways. The second split of 384K tweets is enriched to 89.6%. The remaining1.4M tweets are expected to have 83% Hinglish tweets. We randomly pulled 1K tweets from each of these splits to get these estimates. The estimate is hence, prone to sampling biases/errors. To re-iterate, this added dump is neither tagged with polarity nor pure Hinglish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining and Filtering Twitter Corpus",
"sec_num": "2.1"
},
{
"text": "The dataset is released on Github. 1 Since Twitter discourages releasing the text directly, tweet ids are shared. This leaves the user to pull the specific tweets using Twitter's Developer API.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Release",
"sec_num": "2.2"
},
{
"text": "We de-duplicated the 1.9M tweets corpus using a string equality check. We also de-duplicated tweets using the meta-information in the JSON when a \"retweeted\" text is included twice. The text was pre-processed before data were introduced to the model. The pre-processing included removal of both external links and shortened twitter links. The \"@\" was replaced with with \"mention\". Similarly, \"#\" was replaced with the word \"hashtag\". Emojis were converted to text equivalent using the emoji package (Taehoon Kim and Kevin Wurster, 2019) . During this stage, both the datasets (SemEval and Twitter Large Supervised Dataset) are pre-processed with identical code, both during training/fine-tuning and inference.",
"cite_spans": [
{
"start": 508,
"end": 536,
"text": "Kim and Kevin Wurster, 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cleaning and Pre-processing",
"sec_num": "2.3"
},
{
"text": "The Language Models (LMs) were fine-tuned on the entire Twitter Stream Sample. We used held out about 10% for measuring LM perplexity. Classifiers were trained using 14000 tweets from the 17000 tweets in the SemEval training corpus. The linear layers were fine-tuned on the SemEval training corpus for 3 epochs for all experiments. The fine-tuning parameters for the BERT-family sentiment classifiers are referenced in Table 1 . For Attention dropout and hidden dropout, the parameters were empirically chosen using random grid-search with a range of 0.1 to 0.9. The range considered for Adam Epsilon was 1e-8 to 9e-8 with 1e-8 granularity. Learning rates varied from 1e-7 to 1e-4. These parameters were combined with two learning rate schedulers, a linear learning rate scheduler and a cosine learning rate scheduler. The training for models which took place in two steps: First, the pre-trained language model was fine-tuned using the 1.9M tweets. Second, this fine-tuned deep LM was used as an encoder for training the polarity classifier using the 14K tagged tweets from Sentimix.",
"cite_spans": [],
"ref_spans": [
{
"start": 419,
"end": 426,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training and Fine-tuning",
"sec_num": "3.1"
},
{
"text": "During the competition, we used multiple methods of evaluation and different train-test splits. In this work, the test dataset refers to the officially released test set of 3,000 tweets. The performance numbers have been updated to reflect the same. We chose to ignore the validation set from SemEval for evaluation because most of our LMs had consistently very high performance of 0.95 F1 or more on the set. The F1 4 Modeling Approaches",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.2"
},
{
"text": "NBSVM is the approach proposed by Wang and Manning (2012) , which performs well on text classification in tasks like sentiment classification. It takes a linear model such as SVM (or logistic regression) and incorporates the possibilities of Bayesian by replacing terms with Naive Bayes log-count ratios. The NBSVM implementation was borrowed as is from zaxcie (2018). The motivation for using NBSVM is that they are comparatively faster to train as opposed to deep learning models. We chose C = 4, the inverse regularization parameter.",
"cite_spans": [
{
"start": 34,
"end": 57,
"text": "Wang and Manning (2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NB-SVM",
"sec_num": "4.1"
},
{
"text": "We used AWD-QRNN (Bradbury et al., 2016) instead of AWD-LSTM (Howard and Ruder, 2018) for pre-training and fine-tuning. We used Sentence Piece (Kudo and Richardson, 2018 ) for text tokenisation. The intent was to capture sub-word level features. Vocabulary Size of the sentencepiece tokenizer was 8000 and was trained on 540k tweets to save compute time. For the ULMFiT-QRNN model batch size of 1024 was used while training both the LM and classifier (linear) layers. AWD-LSTM gives an F1 0.48 on the test set where as AWD-QRNN performs with an F1 of 0.650. The hypothesis which could explain this is that tweet length, which is typically less than 140 characters, is too short for LSTM is learn a meaningful pattern.",
"cite_spans": [
{
"start": 17,
"end": 40,
"text": "(Bradbury et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 143,
"end": 169,
"text": "(Kudo and Richardson, 2018",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ULMFiT: Universal Language Model Fine-tuning for Text Classification",
"sec_num": "4.2"
},
{
"text": "BERT-base-multilingual-cased (Devlin et al., 2018) , without any fine-tuning of LM on Hinglish data, was used to train the sentiment classifier. It is trained on cased text in the top 104 languages with the largest Wikipedia corpora. The linear layers were trained/fine-tuned without updating the frozen backbone for 3 epochs.",
"cite_spans": [
{
"start": 29,
"end": 50,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT Multilingual",
"sec_num": "4.3"
},
{
"text": "The base model for fine-tuning BERT LM on Hinglish data was BERT-base-multilingual-cased (Devlin et al., 2018) . Both the backbone and linear layers of the LM were fine-tuned. This was on a pre-processed Twitter Stream Sample (described in the previous section) over 26,000 iterations. It was trained for a total of 4 epochs. Training batch size was four and vocab size 119,547. The perplexity of the fine-tuned LM was 8.2. The trained BERT tokenizer and model were utilized for fine-tuning classifier.",
"cite_spans": [
{
"start": 89,
"end": 110,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hinglish Fine-tuned BERT",
"sec_num": "4.4"
},
{
"text": "RoBERTa (Liu et al., 2019 ) is a robustly optimized BERT pre-training approach. It is trained over longer sequences and removes the next sentence prediction task from BERT pre-training. The base model for fine-tuning the LM-backbone for RoBERTa on Hinglish data was RoBERTa-base. The LM was fine-tuned Figure 1 : Once pre-processed, the data is used for predicting results which are then passed to the ensemble described in section 4.7. Table 2 : Results on sentiment classification where the F1 is performances of the model on test-data provided by Sentimix. The models with LM-backbones are provided with the perplexity of the fine-tuned LM where as the ones without are denoted by NA.",
"cite_spans": [
{
"start": 8,
"end": 25,
"text": "(Liu et al., 2019",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 302,
"end": 310,
"text": "Figure 1",
"ref_id": null
},
{
"start": 437,
"end": 444,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "RoBERTa",
"sec_num": "4.5"
},
{
"text": "on a pre-processed unsupervised twitter dataset over 25,000 iterations. It was trained for a total of 3 epochs. Training batch size was four and vocab size 50265. The perplexity of this model was 7.54.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "DistilBERT (Sanh et al., 2019) uses the technique of knowledge distillation to improve the performance of BERT and create a smaller distilled version of the model. The LM-backbone was fine-tuned on a pre-processed unsupervised twitter dataset over 49,000 iterations. It was trained for a total of 6 epochs. Training batch size was four and vocab size 28996. The perplexity of the fine-tuned LM-backbone for distilBERT was 6.51 and the base model used for fine-tuning the LM was distilbert-base-cased.",
"cite_spans": [
{
"start": 11,
"end": 30,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DistilBERT",
"sec_num": "4.6"
},
{
"text": "For the final submissions, three variations of BERTs and two variations of DistilBERT were used. These were the top 5 selected based on their validation accuracy. For the ensemble, Weighted Majority Voting, by using the prediction confidence (0 to 1 scale) as the weight; The ensemble methodology and its usage in our case is described in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 339,
"end": 347,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ensemble",
"sec_num": "4.7"
},
{
"text": "The result for the experiments are summarized in Table 2 . Out of all the techniques used on test-data, Weighted majority vote ensemble with LR funneling gained a significant edge when it comes to F1 score. Traditional machine learning models like NB-SVM show a comparative performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "There are three main incremental directions of improvements: data, methods, adopting techniques from text classification. For instance, initial tweet data had a lot of truncated tweets, using tweet ids to get an entire tweet would enrich our inputs. The training data can also be augmented in a wide variety of ways such as using vector similarity (Ma, 2019) . We can investigate other methods which might help in understanding missed case. Sentence embeddings for Hinglish, similar to InferSent (Conneau et al., 2017) or Universal Sentence Encoding (Cer et al., 2018 ) may be promising, in addition to Skip Thought or other sentence vectorisation methods, as well as exploring the performance of models which do not focus on transfer learning like R-CNN, and LSTMs.",
"cite_spans": [
{
"start": 348,
"end": 358,
"text": "(Ma, 2019)",
"ref_id": "BIBREF9"
},
{
"start": 496,
"end": 518,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 550,
"end": 567,
"text": "(Cer et al., 2018",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "6"
},
{
"text": "Lastly, a wide variety of deep learning tricks and methods could be used, such as label smoothing (M\u00fcller et al., 2019) , which can help in generalising better beyond the small training sample.",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(M\u00fcller et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "6"
},
{
"text": "We demonstrate that ensembles of classical Machine Learning models, even NB-SVM exhibit competitive performance and can in fact be better than some Transformer baselines. It is still worthwhile to implement simple classical baselines. Additionally, we hope that the released dataset and models 2 will encourage readers to investigate this further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://github.com/NirantK/Hinglish",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/NirantK/Hinglish",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Quasi-recurrent neural networks",
"authors": [
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2016. Quasi-recurrent neural networks. CoRR, abs/1611.01576.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. CoRR, abs/1803.11175.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. ArXiv, abs/1705.02364.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Duelling languages: Grammatical structure in codeswitching",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Contini-Morava",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of Linguistic Anthropology",
"volume": "5",
"issue": "2",
"pages": "246--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Contini-Morava. 1995. Duelling languages: Grammatical structure in codeswitching. Journal of Linguistic Anthropology, 5(2):246-247.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirec- tional transformers for language understanding. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fine-tuned language models for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Fine-tuned language models for text classification. CoRR, abs/1801.06146.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The distribution of the flora in the alpine zone",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Jaccard",
"suffix": ""
}
],
"year": 1912,
"venue": "New Phytologist",
"volume": "1",
"issue": "2",
"pages": "37--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Jaccard. 1912. The distribution of the flora in the alpine zone.1. New Phytologist, 11(2):37-50.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. CoRR, abs/1808.06226.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Nlp augmentation",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Ma. 2019. Nlp augmentation. https://github.com/makcedward/nlpaug.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "When does label smoothing help? CoRR",
"authors": [
{
"first": "Rafael",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Kornblith",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafael M\u00fcller, Simon Kornblith, and Geoffrey E. Hinton. 2019. When does label smoothing help? CoRR, abs/1906.02629.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semeval-2020 task 9: Overview of sentiment analysis of code-mixed tweets",
"authors": [
{
"first": "Parth",
"middle": [],
"last": "Patwa",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Sudipta",
"middle": [],
"last": "Kar",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Pandey",
"suffix": ""
},
{
"first": "Pykl",
"middle": [],
"last": "Srinivas",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parth Patwa, Gustavo Aguilar, Sudipta Kar, Suraj Pandey, Srinivas PYKL, Bj\u00f6rn Gamb\u00e4ck, Tanmoy Chakraborty, Thamar Solorio, and Amitava Das. 2020. Semeval-2020 task 9: Overview of sentiment analysis of code-mixed tweets. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Estimating code-switching on twitter with a novel generalized word-level language detection technique",
"authors": [
{
"first": "Shruti",
"middle": [],
"last": "Rijhwani",
"suffix": ""
},
{
"first": "Royal",
"middle": [],
"last": "Sequiera",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Chandra Shekhar",
"middle": [],
"last": "Maddila",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1971--1982",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shruti Rijhwani, Royal Sequiera, Monojit Choudhury, Kalika Bali, and Chandra Shekhar Maddila. 2017. Estimat- ing code-switching on twitter with a novel generalized word-level language detection technique. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1971-1982, Vancouver, Canada, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Understanding language preference for expression of opinion and sentiment: What do Hindi-English speakers do on twitter?",
"authors": [
{
"first": "Koustav",
"middle": [],
"last": "Rudra",
"suffix": ""
},
{
"first": "Shruti",
"middle": [],
"last": "Rijhwani",
"suffix": ""
},
{
"first": "Rafiya",
"middle": [],
"last": "Begum",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Niloy",
"middle": [],
"last": "Ganguly",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1131--1141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koustav Rudra, Shruti Rijhwani, Rafiya Begum, Kalika Bali, Monojit Choudhury, and Niloy Ganguly. 2016. Understanding language preference for expression of opinion and sentiment: What do Hindi-English speakers do on twitter? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1131-1141, Austin, Texas, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Baselines and bigrams: Simple, good sentiment and topic classification",
"authors": [
{
"first": "Sida",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "90--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sida Wang and Christopher D. Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic clas- sification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers -Volume 2, ACL '12, page 90-94, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Nb-svm: Strong linear baseline",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "zaxcie. 2018. Nb-svm: Strong linear baseline.",
"links": null
}
},
"ref_entries": {}
}
}