{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:14:43.082616Z"
},
"title": "Comparing the Performance of CNNs and Shallow Models for Language Identification",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Ceolin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e0 di Modena e Reggio Emilia",
"location": {}
},
"email": "ceolin@unimore.it"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this work we compare the performance of convolutional neural networks and shallow models on three out of the four language identification shared tasks proposed in the Var-Dial Evaluation Campaign 2021. In our experiments, convolutional neural networks and shallow models yielded comparable performance in the Romanian Dialect Identification (RDI) and the Dravidian Language Identification (DLI) shared tasks, after the training data was augmented, while an ensemble of support vector machines and Na\u00efve Bayes models was the best performing model in the Uralic Language Identification (ULI) task. While the deep learning models did not achieve state-ofthe-art performance at the tasks and tended to overfit the data, the ensemble method was one of two methods that beat the existing baseline for the first track of the ULI shared task. 1",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this work we compare the performance of convolutional neural networks and shallow models on three out of the four language identification shared tasks proposed in the Var-Dial Evaluation Campaign 2021. In our experiments, convolutional neural networks and shallow models yielded comparable performance in the Romanian Dialect Identification (RDI) and the Dravidian Language Identification (DLI) shared tasks, after the training data was augmented, while an ensemble of support vector machines and Na\u00efve Bayes models was the best performing model in the Uralic Language Identification (ULI) task. While the deep learning models did not achieve state-ofthe-art performance at the tasks and tended to overfit the data, the ensemble method was one of two methods that beat the existing baseline for the first track of the ULI shared task. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we present the submissions of Team Phlyers to the VarDial Evaluation Campaign 2021 (Chakravarthi et al., 2021) . The campaign is part of a conference series, the Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), which has reached its eighth edition, five of which have included several shared tasks (Zampieri et al., 2017 (Zampieri et al., , 2018 (Zampieri et al., , 2019 G\u0203man et al., 2020) . The shared tasks typically involve the categorization of texts according to their language or their dialect. This task, known as language identification, is a classical NLP problem (House and Neuburg, 1977; Dunning, 1994; McNamee, 2005) , because of its importance for information retrieval, machine translation, and more recently categorization of social media posts (Bergsma et al., 2012; Lui and Baldwin, 2014; Zubiaga et al., 2016) .",
"cite_spans": [
{
"start": 98,
"end": 125,
"text": "(Chakravarthi et al., 2021)",
"ref_id": null
},
{
"start": 337,
"end": 359,
"text": "(Zampieri et al., 2017",
"ref_id": "BIBREF39"
},
{
"start": 360,
"end": 384,
"text": "(Zampieri et al., , 2018",
"ref_id": "BIBREF40"
},
{
"start": 385,
"end": 409,
"text": "(Zampieri et al., , 2019",
"ref_id": "BIBREF41"
},
{
"start": 410,
"end": 429,
"text": "G\u0203man et al., 2020)",
"ref_id": null
},
{
"start": 613,
"end": 638,
"text": "(House and Neuburg, 1977;",
"ref_id": "BIBREF20"
},
{
"start": 639,
"end": 653,
"text": "Dunning, 1994;",
"ref_id": "BIBREF15"
},
{
"start": 654,
"end": 668,
"text": "McNamee, 2005)",
"ref_id": "BIBREF29"
},
{
"start": 800,
"end": 822,
"text": "(Bergsma et al., 2012;",
"ref_id": "BIBREF1"
},
{
"start": 823,
"end": 845,
"text": "Lui and Baldwin, 2014;",
"ref_id": "BIBREF28"
},
{
"start": 846,
"end": 867,
"text": "Zubiaga et al., 2016)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the next sections, we briefly describe the three tasks that we participated in.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The VarDial 2019 edition (Zampieri et al., 2019) proposed the first shared task based on distinguishing standard Romanian from Moldavian newspaper articles. The task consisted in training a classifier on news articles in Romanian and Moldavian from the MOROCO corpus , and using it to classify other news articles yet to be added to the corpus. The best model achieved an F 1 score of 0.895 on the test set, using an ensemble method based on convolutional neural networks (CNN) and support vector machines (SVM) (Tudoreanu, 2019) . Last year's task asked participants to train a classifier on the news articles of the MOROCO corpus to distinguish standard Romanian from Moldavian tweets (G\u0203man et al., 2020) . The task was particularly interesting because the organizers provided a large validation dataset based on news articles (5923), and only a small validation dataset based on tweets (215). This made the validation stage challenging, because on the one hand a model trained on news articles which yields high accuracy on the news validation dataset could fail to generalize to a different domain, while on the other hand the size of the tweets validation dataset was so small that by training a model only on tweets, the risk of overfitting was considerable. The best result on the task was obtained by an ensemble of linear SVM classifiers (\u00c7\u00f6ltekin, 2020) , which yielded an accuracy of F 1 =0.788 on the test data. A similar accuracy (F 1 =0.775) was reached by fine-tuned Romanian BERT models (Popa and Stef\u0203nescu, 2020) . Attempts that relied on Na\u00efve Bayes (NB) models yielded lower accuracies (Jauhiainen et al., 2020a; Ceolin and Zhang, 2020) .",
"cite_spans": [
{
"start": 25,
"end": 48,
"text": "(Zampieri et al., 2019)",
"ref_id": "BIBREF41"
},
{
"start": 512,
"end": 529,
"text": "(Tudoreanu, 2019)",
"ref_id": "BIBREF34"
},
{
"start": 687,
"end": 707,
"text": "(G\u0203man et al., 2020)",
"ref_id": null
},
{
"start": 1348,
"end": 1364,
"text": "(\u00c7\u00f6ltekin, 2020)",
"ref_id": null
},
{
"start": 1504,
"end": 1531,
"text": "(Popa and Stef\u0203nescu, 2020)",
"ref_id": "BIBREF31"
},
{
"start": 1607,
"end": 1633,
"text": "(Jauhiainen et al., 2020a;",
"ref_id": "BIBREF23"
},
{
"start": 1634,
"end": 1657,
"text": "Ceolin and Zhang, 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RDI",
"sec_num": "1.1"
},
{
"text": "The shared task has been proposed again in this year's VarDial edition, in which a new dataset of tweets in standard Romanian and Moldavian was made available to the participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RDI",
"sec_num": "1.1"
},
{
"text": "The Uralic Language Identification task (ULI) differs from traditional tasks in the high number of language varieties that are present in the dataset (Jauhiainen et al., 2020b; G\u0203man et al., 2020) . In total, there are 178 languages that need to be distinguished, which include the 29 Uralic varieties which are the focus of the task, 3 extra Uralic languages (Finnish, Estonian, and Hungarian) and 146 other non-Uralic languages. Moreover, the classes are imbalanced: for the 29 Uralic varieties, the number of sentences vary from 19 to 214,225, while for the other languages the range is much higher (from 10K to 3 million). The shared task is divided in three separate subtasks. The first subtask requires a model to distinguish among the 29 Uralic varieties giving equal importance to each of the classes, and is evaluated through a macro F 1 score. In the second subtask, classes are weighted according to their frequency, and thus a micro F 1 score is used. In the third subtask, the models are evaluated on all 178 languages.",
"cite_spans": [
{
"start": 150,
"end": 176,
"text": "(Jauhiainen et al., 2020b;",
"ref_id": "BIBREF24"
},
{
"start": 177,
"end": 196,
"text": "G\u0203man et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ULI",
"sec_num": "1.2"
},
{
"text": "The high number of classes, their overlap, and the imbalanced dataset all represent challenges for deep learning algorithms: in particular, they can lead to overfitting if the traning set is not balanced (Bernier-Colborne and Goutte, 2020). On the other hand, the same problem might affect shallow methods like SVMs, since they need to identify many distinct separation hyperplanes in a domain where many languages are virtually indistinguishable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ULI",
"sec_num": "1.2"
},
{
"text": "The shared task was presented for the first time during the VarDial Evaluation Campaign 2020 (G\u0203man et al., 2020) , and in that case the best performing system for all tracks was the HeLI method (Jauhiainen et al., 2016) , which was the baseline presented by the organizers. Since the HeLI baselines were not improved, the task has been proposed again in this year's edition.",
"cite_spans": [
{
"start": 93,
"end": 113,
"text": "(G\u0203man et al., 2020)",
"ref_id": null
},
{
"start": 195,
"end": 220,
"text": "(Jauhiainen et al., 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ULI",
"sec_num": "1.2"
},
{
"text": "The Dravidian Language Identification task (DLI) requires participants to classify three South Dravidian languages (Tamil, Malayalam, and Kannada) using a dataset of 16,672 YouTube comments (Chakravarthi et al., 2020b,a; Hande et al., 2020) . The comments are written in Roman script. This task differs from the others in being focused on code-switching: all comments contain a mix of words from the target language and words from English, and in some cases native words can appear within an English grammatical structure. Classifiers must then be robust to this variability and be able to not be deceived by the English material. Another interesting feature of this task is the large class imbalance in the training set (with about 10K comments in Tamil, 4K in Malayalam, and only about 500 in Kannada) and the fact that both the training and the test datasets contain comments from other languages (under the label 'otherlanguage', approximately 1K comments).",
"cite_spans": [
{
"start": 190,
"end": 220,
"text": "(Chakravarthi et al., 2020b,a;",
"ref_id": null
},
{
"start": 221,
"end": 240,
"text": "Hande et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DLI",
"sec_num": "1.3"
},
{
"text": "This is the first edition of this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DLI",
"sec_num": "1.3"
},
{
"text": "Previous methods used for language identification typically involve SVMs (Goutte et al., 2014; \u00c7\u00f6ltekin and Rama, 2017; Medvedeva et al., 2017; Kreutz and Daelemans, 2018; Benites de Azevedo e Souza et al., 2018; Wu et al., 2019) and multinomial NB models applied to word and character ngrams (Barbaresi, 2016; Clematide and Makarov, 2017; Jauhiainen et al., 2016 Jauhiainen et al., , 2020a . Deep learning methods based on CNNs and LSTMs have also been successfully applied to language identification tasks (Jaech et al., 2016; Hu et al., 2019; Tudoreanu, 2019) , and the last two editions of VarDial also showed successful applications of BERT models (Bernier-Colborne et al., 2019; Popa and Stef\u0203nescu, 2020; Scherrer and Ljube\u0161i\u0107, 2020; Zaharia et al., 2020) . For the current tasks, we employed NB and SVM models trained on character ngrams as baselines, and compared their performance with that of a CNN. For the CNN, we decided to use the character-based model for text classification that was developed by Zhang et al. (2015) , and that was successfully adopted by to distinguish between standard Romanian and Moldavian news texts. The architecture of the CNN is summarized in Table 1 . All models were run using Google Colab, with 1 GPU. 2",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "(Goutte et al., 2014;",
"ref_id": "BIBREF16"
},
{
"start": 95,
"end": 119,
"text": "\u00c7\u00f6ltekin and Rama, 2017;",
"ref_id": "BIBREF11"
},
{
"start": 120,
"end": 143,
"text": "Medvedeva et al., 2017;",
"ref_id": "BIBREF30"
},
{
"start": 144,
"end": 171,
"text": "Kreutz and Daelemans, 2018;",
"ref_id": "BIBREF27"
},
{
"start": 172,
"end": 212,
"text": "Benites de Azevedo e Souza et al., 2018;",
"ref_id": "BIBREF33"
},
{
"start": 213,
"end": 229,
"text": "Wu et al., 2019)",
"ref_id": "BIBREF37"
},
{
"start": 293,
"end": 310,
"text": "(Barbaresi, 2016;",
"ref_id": "BIBREF0"
},
{
"start": 311,
"end": 339,
"text": "Clematide and Makarov, 2017;",
"ref_id": "BIBREF9"
},
{
"start": 340,
"end": 363,
"text": "Jauhiainen et al., 2016",
"ref_id": "BIBREF25"
},
{
"start": 364,
"end": 390,
"text": "Jauhiainen et al., , 2020a",
"ref_id": "BIBREF23"
},
{
"start": 508,
"end": 528,
"text": "(Jaech et al., 2016;",
"ref_id": "BIBREF22"
},
{
"start": 529,
"end": 545,
"text": "Hu et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 546,
"end": 562,
"text": "Tudoreanu, 2019)",
"ref_id": "BIBREF34"
},
{
"start": 653,
"end": 684,
"text": "(Bernier-Colborne et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 685,
"end": 711,
"text": "Popa and Stef\u0203nescu, 2020;",
"ref_id": "BIBREF31"
},
{
"start": 712,
"end": 740,
"text": "Scherrer and Ljube\u0161i\u0107, 2020;",
"ref_id": "BIBREF32"
},
{
"start": 741,
"end": 762,
"text": "Zaharia et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 1014,
"end": 1033,
"text": "Zhang et al. (2015)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 1185,
"end": 1192,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "This section summarizes our contributions to the three shared tasks, the evaluation of our models, and their performance on the test datasets. -6 . Output is passed through Softmax. The dimensions marked with (*) are task specific.",
"cite_spans": [
{
"start": 143,
"end": 145,
"text": "-6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "A question that remains open after last year's edition is whether deep learning methods can achieve a good accuracy at distinguishing between standard Romanian and Moldavian tweets even with limited training data. The deep learning models that were used in last year's task (Popa and Stef\u0203nescu, 2020; Zaharia et al., 2020) all relied on pre-trained BERT models (Dumitrescu et al., 2020) , which might not always be available when working on low-resource languages. Even though CNNs have been successfully applied to the task of distinguishing news texts between the two language varieties , in our contribution to last year's edition we have showed that the representations they learn fail to generalize to the tweets domain (Ceolin and Zhang, 2020) . This year we readdressed this issue using the new dataset.",
"cite_spans": [
{
"start": 274,
"end": 301,
"text": "(Popa and Stef\u0203nescu, 2020;",
"ref_id": "BIBREF31"
},
{
"start": 302,
"end": 323,
"text": "Zaharia et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 362,
"end": 387,
"text": "(Dumitrescu et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 726,
"end": 750,
"text": "(Ceolin and Zhang, 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RDI",
"sec_num": "3.1"
},
{
"text": "In last year's edition, a small dataset of 215 tweets was given to the participants to evaluate the model. Since this year's edition provides a larger amount of in-domain data (5237 tweets), we decided to use the model we trained last year and fine-tune it using the larger tweets validation dataset available for this year's task. We also decided to train a separate model on tweets-data only, to see if the use of out-of-domain news data leads to a better performance than a model trained only on tweets. In order to augment the data, we experimented with some of the data augmentation techniques proposed by Wei and Zou (2019) . The one which turned out to be the most successful was random swap, especially when it was used multiple times on the same sentence rather than just once (i.e., essentially shuffling the words in the sentence). See the Appendix for a more detailed summary of the data augmentation experiments.",
"cite_spans": [
{
"start": 611,
"end": 629,
"text": "Wei and Zou (2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "3.1.1"
},
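The shuffling-based augmentation described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, `n_copies` parameter, and seeding are our own assumptions; a single full shuffle stands in for random swap applied many times, as the text suggests the two are essentially equivalent.

```python
import random

def augment_by_shuffling(sentences, labels, n_copies=10, seed=0):
    """Return the original data plus n_copies word-shuffled variants of each
    sentence, each variant keeping the label of its source sentence."""
    rng = random.Random(seed)
    out_sents, out_labels = list(sentences), list(labels)
    for sent, label in zip(sentences, labels):
        words = sent.split()
        for _ in range(n_copies):
            shuffled = words[:]  # copy so each variant is shuffled independently
            rng.shuffle(shuffled)
            out_sents.append(" ".join(shuffled))
            out_labels.append(label)
    return out_sents, out_labels
```

With `n_copies=10` this reproduces the "x10" augmentation weight mentioned for the submitted runs.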
{
"text": "After some trial runs, we decided to set a batch size of 128, and a learning rate of 0.001. We used 1/5 of the data to create a validation dataset, while the rest was used for training. The training data is augmented with 10 replications that involve shuffled sentences. On the basis of training and validation accuracies, we decided to interrupted training after 10 epochs on the original training data, and after 5 epochs on the augmented dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "3.1.1"
},
{
"text": "We also trained a NB and a linear SVM model on TFIDF-transformed character ngrams in the [5] [6] [7] [8] range, which was determined to be the optimal range for these languages in Ceolin and Zhang (2020) . The models have not been fine-tuned, and have been evaluated using the same validation dataset selected to evaluate the CNN.",
"cite_spans": [
{
"start": 89,
"end": 92,
"text": "[5]",
"ref_id": null
},
{
"start": 93,
"end": 96,
"text": "[6]",
"ref_id": null
},
{
"start": 97,
"end": 100,
"text": "[7]",
"ref_id": null
},
{
"start": 101,
"end": 104,
"text": "[8]",
"ref_id": null
},
{
"start": 180,
"end": 203,
"text": "Ceolin and Zhang (2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow models",
"sec_num": "3.1.2"
},
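A minimal sketch of such shallow baselines, assuming scikit-learn (the helper name is ours; the paper specifies only TF-IDF over character n-grams in the 5-8 range, fed to NB and linear SVM classifiers):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def char_ngram_model(classifier, ngram_range=(5, 8)):
    """Shallow baseline: TF-IDF over character n-grams feeding a classifier."""
    return make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=ngram_range),
        classifier,
    )

nb_model = char_ngram_model(MultinomialNB())
svm_model = char_ngram_model(LinearSVC())
```

Both pipelines expose the usual `fit`/`predict` interface, so they can be evaluated on the same validation split used for the CNN.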
{
"text": "The tweets dataset was already balanced, with 2625 standard Romanian tweets and 2612 Moldavian tweets. As we see in Table 2 , data augmentation improved the performance of the CNNs dramatically, to the point that it became comparable to that of shallow models. For instance, if we take a look at the CNN (news+tweets) model, Figure 1 shows that after training for 10 epochs the performance on the validation set reaches a macro F 1 score of 0.709, while in Figure 2 we see that in the augmented dataset the accuracy is well above 0.7 after the first epoch, and converges to \u22480.76 after five epochs. The same is true for the model trained only on tweets, whose accuracy jumps from 0.7 to 0.75.",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 123,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 325,
"end": 333,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 457,
"end": 465,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.1.3"
},
{
"text": "We decided to submit two runs to the RDI shared task. Both contained the predictions of the CNN pre-trained on news articles and then fine-tuned on augmented tweets, but the second submission resulted from a model were data augmentation had an increased weight (from x10 to x12). Table 3 contains the results of the best runs of the three teams that took part in the task. The best accuracy (0.777) was reached by SUKI, who last year proposed a NB model trained on character ngrams (Jauhiainen et al., 2020a) , while the second best accuracy (0.732) was reached by the UPB team (Zaharia et al., 2020), who last year addressed the task using a BERT model. The CNN we presented did not reach comparable performance, even though an inspection of precision and recall did not point to any obvious explanation: the network just ended up overfitting the training data. Our second run was not competitive, because increased data augmentation without changing the other parameters of the CNN led to even more overfitting.",
"cite_spans": [
{
"start": 482,
"end": 508,
"text": "(Jauhiainen et al., 2020a)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 280,
"end": 287,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.1.4"
},
{
"text": "Macro F 1 score SUKI 0.777 UPB 0.732 Phlyers 0.653 Table 3 : Final performance of the teams' submissions to the RDI task.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Considering that the network was trained on the full tweets dataset rather than 80% of it before the submission, stopping the iterations after the first or the second epoch would have been a wiser choice, since we have seen that no considerable improvement of the valuation performance was made after the first few epochs, and therefore additional training might have led to overfitting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "As we previously mentioned, this task appeared to be the most challenging one, given the high amount of labels and the great class imbalance. In particular, the first subtask required the models to be able to accurately classify languages which were represented by a few dozen sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ULI",
"sec_num": "3.2"
},
{
"text": "The strategy we adopted to address these issues was to train two separate classifiers for two different classification problems. First, we want to be able to distinguish the 29 'target' Uralic languages from the 149 'non-target' languages. Once we have a model that can distinguish the two types of languages, we can train a second classifier to distinguish among the 29 target languages using features only extracted from such languages. In this way, the second model would be able to extract more features which are only needed to separate the 29 target languages, even though employing them in the first stage might produce some errors, in addition to more computational cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ULI",
"sec_num": "3.2"
},
{
"text": "Defining a unique alphabet for the entire training dataset is unfeasible, since languages make use of different writing systems, and this fact makes it practically difficult to train a character-based CNN for the task. Therefore, we decided to focus on shallow models in this first stage. For this binary classification task, we trained a linear SVM and a NB method. In addition to the 646,043 sentences for the target languages, we used 5000 sentences for each of the non-target language, obtaining a total of 1,391,043 sentences for the 178 languages of the task. We then used 5-fold cross-validation to evaluate the classifier. After some trial runs, we found that a linear SVM classifier trained on TF-IDF transformed character ngrams in the range [3, 4] , with the number of features limited to the 100,000 most frequent ones, converges to a 0.995 macro F 1 score. We will then use this classifier to single out the target languages in the test dataset. ",
"cite_spans": [
{
"start": 752,
"end": 755,
"text": "[3,",
"ref_id": null
},
{
"start": 756,
"end": 758,
"text": "4]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distinguishing 'target' and 'non-target' languages",
"sec_num": "3.2.1"
},
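The stage-1 filter can be sketched as a scikit-learn pipeline (variable name ours; the paper specifies the 3-4 character n-gram range and the 100,000-feature cap, but not the exact implementation):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Stage 1: binary filter deciding whether a sentence belongs to one of the
# 29 target Uralic languages or to a non-target language, using TF-IDF
# character 3-4 grams capped at the 100,000 most frequent features.
target_filter = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 4), max_features=100_000),
    LinearSVC(),
)
```

The 5-fold evaluation described above would correspond to something like `cross_val_score(target_filter, X, y, scoring="f1_macro", cv=5)`.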
{
"text": "We first attempted to train a CNN for this subtask, but we could not develop a system that was able to deal with the large class imbalance and with the high number of different labels that need to be learned. We then retrained our shallow models for the task of distinguishing among the target languages. The best performing model was a NB model trained on TFIDF character ngrams, with ngrams in the range [3,5] and alpha=10 -6 (cf. Table 5 ). Rare ngrams, those whose relative document frequency was less than <10 -4 , were excluded. The model parameters were selected through 5-fold cross-validation. ",
"cite_spans": [],
"ref_spans": [
{
"start": 433,
"end": 441,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Classifying 'target' languages",
"sec_num": "3.2.2"
},
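A sketch of the stage-2 model under the stated hyperparameters, again assuming scikit-learn (the variable name is ours; note that a float `min_df` in `TfidfVectorizer` is exactly a relative document-frequency cutoff, matching the 10^-4 exclusion rule):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Stage 2: classify the 29 target languages with char 3-5 grams.
# min_df=1e-4 drops n-grams whose relative document frequency is below
# 10^-4; alpha=1e-6 is the NB additive-smoothing parameter.
target_classifier = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5), min_df=1e-4),
    MultinomialNB(alpha=1e-6),
)
```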
{
"text": "In order to evaluate our system, we decided to create a 80/20 training-test split. First, the SVM is trained on 80% of the dataset in order to determine whether the sentences in the test set are from target or non-target languages. Then, the NB model assigns a label to the sentences that are recognized as target sentences. The results are in Table 6 . We see that the full model yields a macro F 1 score of 0.905. We noted that precision was higher than recall (0.958 versus 0.878), and in particular rare languages are associated to low recall scores. This means that the system does not make 'wrong' predictions often, but it can fail to identify the less common varieties.",
"cite_spans": [],
"ref_spans": [
{
"start": 344,
"end": 351,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Evaluation -Track 1 and 2",
"sec_num": "3.2.3"
},
{
"text": "Model Macro F 1 Micro F 1 Linear SVM + NB 0.905 0.988",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation -Track 1 and 2",
"sec_num": "3.2.3"
},
{
"text": "The micro F 1 score for the system is about 0.99, as expected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation -Track 1 and 2",
"sec_num": "3.2.3"
},
{
"text": "The model we devised could not be used for a submission to Track 3, because the non-target languages are excluded in the first step. For this reason, we decided to retrain our models to directly predict the labels, and to use 5-fold cross-validation to evaluate them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation -Track 3",
"sec_num": "3.2.4"
},
{
"text": "In this case, we need to increase the threshold for filtering rare ngrams to 10 -3 for memory constraints, and to limit the analysis to NB systems, since the high number of labels makes working with SVMs more challenging. The best model we selected was a NB model with TFIDF transformed character 5-grams (F 1 =0.949, see Table 7 ). ",
"cite_spans": [],
"ref_spans": [
{
"start": 322,
"end": 329,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Evaluation -Track 3",
"sec_num": "3.2.4"
},
{
"text": "We submitted the two systems we described in the preceding sections (SVM+NB for Track 1 and 2, and NB for Track 3) for evaluation. In addition, we submitted the prediction of two ensemble models, that we summarize in Table 8 . The models are derived from the two main models, but their predictions change when the two main models are in disagreement about which of two target languages is the correct label. The first ensemble (Ensemble 1 ) model uses all the predictions of the model developed for Track 1 and 2, but there is one case in which the prediction of the model developed for Track 3 are selected instead: when the first model predicts one of the five rare languages of the dataset, Ingrian (izh), Nganasan (nio), Kemi Sami (sjk), Ume Sami (sju), Votic (vot), and the second model predicts a different target language, then this second target language is selected instead. Our motivation is the following: if the model erroneously predicts a language which is not in the test set, precision for that language will go to zero, and the performance of the classifier will significantly drop. This strategy will make sure that the prediction is indeed accurate when dealing with rare languages. In this case, if the signal between the two classifiers is conflicting, it appears wiser to default to the prediction of the most common class. The second ensemble (Ensemble 2 ) takes the predictions of the NB model developed for Track 3, but when the SVM+NB model predicts a different target language, then this more refined prediction is chosen instead, under the assumption that since the SVM+NB model has been developed specifically for distinguishing among target languages, its predictions should be more accurate.",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 224,
"text": "Table 8",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2.5"
},
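The Ensemble 1 decision rule can be written as a small merging function (the function and variable names are hypothetical; the paper describes only the rule itself):

```python
# The five rare target languages whose predictions Ensemble 1 second-guesses.
RARE_LANGS = {"izh", "nio", "sjk", "sju", "vot"}

def ensemble_1(track12_preds, track3_preds, target_langs):
    """Keep the Track 1/2 prediction unless it names a rare language and the
    Track 3 model proposes a *different* target language; in that case,
    default to the more common class predicted by the Track 3 model."""
    merged = []
    for p12, p3 in zip(track12_preds, track3_preds):
        if p12 in RARE_LANGS and p3 != p12 and p3 in target_langs:
            merged.append(p3)
        else:
            merged.append(p12)
    return merged
```

Ensemble 2 is the mirror image: start from the Track 3 predictions and override with the SVM+NB prediction whenever the latter names a different target language.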
{
"text": "The results for the three tasks are summarized in Table 9, Table 10 and Table 11 .",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 80,
"text": "Table 9, Table 10 and Table 11",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2.5"
},
{
"text": "As for Track 1 (Table 9) , the first ensemble system was the best performing system among those we submitted, and the second best overall, although all teams provided systems with similar performances and architectures. It is noticeable how combining the predictions of the two different systems Note how the predictions of the two ensemble models are essentially the same, respectively, of the two main models, but they differ in how they assign a label to a target language when there is disagreement between the two main classifiers. we devised for the task, with the aim of improving the classification of the rare languages, led to a significant improvement over the performance of the two systems taken separately. As for Track 2 (Table 10) , Ensemble 1 and the SVM+NB model yielded the same performance (0.84), which was clearly below the baseline established by HeLI (Jauhiainen et al., 2020b) , and below the performance of the systems submitted by the other teams. During the evaluation of our submissions, the organizers also provided us with the precision and recall scores, and it was clear that the failure was entirely due to the low precision of the systems. Since our evaluation set was balanced between target and non-target languages (with about 30% of the sentences belonging to the target set), the precision scores looked acceptable, but an error analysis clearly showed that the systems had the tendency of assigning a target label to a non-target language more often than the opposite, even though it was precisely this behavior that we were hoping to avoid with the SVM classifier.",
"cite_spans": [
{
"start": 875,
"end": 901,
"text": "(Jauhiainen et al., 2020b)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 15,
"end": 24,
"text": "(Table 9)",
"ref_id": "TABREF13"
},
{
"start": 736,
"end": 746,
"text": "(Table 10)",
"ref_id": "TABREF15"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2.5"
},
{
"text": "Since our system still failed to filter out some non-target languages, precision was drastically reduced in the test phase, where non-target languages clearly outnumbered target languages (sentences belonging to target languages were about 2% of the whole sample, according to the SVM model).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2.5"
},
{
"text": "Indeed, we also submitted the model we developed for Track 3, and the second ensemble, but both attempts yielded an even lower performance, which suggests that the SVM filter was partially successful at filtering out non-target languages, even though it was not sufficient to achieve stateof-the-art performance. Finally, for Track 3 (Table 11 ) none of the systems submitted to the task were able to beat the strong baseline set by HeLI. Even though in this case the performance of our systems was close to that of the systems submitted by LAST and NRC, ours were not able to reach the performance of the other teams. ",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 343,
"text": "(Table 11",
"ref_id": "TABREF17"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2.5"
},
{
"text": "As we said above, this task is focused on codeswitching, and this means that many features that could be extracted are completely irrelevant to determine the original language of the text. The great class imbalance is another problem that needs to be addressed. Especially in the design of deep learning architectures, some strategy to prevent overfitting was required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DLI",
"sec_num": "3.3"
},
{
"text": "We addressed this task using the same CNN developed for the RDI task, with an important difference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "3.3.1"
},
{
"text": "Since in this case we have to deal with class imbalance, we decided to perform balanced sampling during the training phase. First, 1/5 of the labeled data was randomly selected for evaluation purposes as a validation dataset, and the rest was used for training. Then, we sampled a total of 25,000 sentences uniformly across the four categories by selecting each of the four classes with p=0.25, and each sentence with p=1/n C , with n C being the number of sentences available for each class. This will necessarily imply that many sentences will be picked more than once, especially for the classes which are not well represented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "3.3.1"
},
{
"text": "In order to avoid repeating the same sentences for the more uncommon classes, we decided to shuffle the order of the words in the sentences, essentially adopting the data augmentation strategy that was employed for the RDI task. This strategy had two purposes: dealing with class imbalance by augmenting the data of the classes which were not well represented, and addressing the problem of the influence of the English grammar, by exposing the network to sentences in which the order of the words was changed, with the aim of retrieving word sequences that were not in the training data, but were still possible in the language. We also trained a separate model where instead the order of the words was not shuffled, and therefore sentences in the training dataset were just repeated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "3.3.1"
},
{
"text": "Since most of the comments are short, only the first 160 characters per comment were used as input to the network. After some parameter tuning, we set the learning rate to 0.001, and the batch size is 256. We also reduced the output of the second linear layer to 500. Training was interrupted after 10 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "3.3.1"
},
{
"text": "Following the strategy adopted for the RDI task, we trained a NB and a linear SVM model on TFIDFtransformed character ngrams in the [5] [6] [7] [8] range. The models have not been fine-tuned, and have been evaluated using the same dataset used to evaluate the CNN. Table 12 shows the micro F 1 score, which was the metric used to rank the submissions, for the models evaluated.",
"cite_spans": [
{
"start": 132,
"end": 135,
"text": "[5]",
"ref_id": null
},
{
"start": 136,
"end": 139,
"text": "[6]",
"ref_id": null
},
{
"start": 140,
"end": 143,
"text": "[7]",
"ref_id": null
},
{
"start": 144,
"end": 147,
"text": "[8]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 265,
"end": 273,
"text": "Table 12",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Shallow models",
"sec_num": "3.3.2"
},
{
"text": "The patterns are similar to those we have obtained in the RDI task: shuffling words had the effect of improving the performance of the CNN. Figure 3 and Figure 4 show that in this case shuffling had only a marginal effect on the task, since in both cases training and validation performances were comparable. We then submitted our best model for evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 148,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 153,
"end": 161,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.3.3"
},
{
"text": "Micro F 1 score CNN (shuffle) 0.880 NB 0.878 CNN (no shuffle) 0.870 Linear SVM 0.848 Table 12 : Final performance of the models on the evaluation of the DLI task.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 93,
"text": "Table 12",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The results of the evaluation campaign are in Table 13 . Our model performed as expected, with a micro F 1 score of 0.9. The performance was comparable to the submission of the other three teams, which however all yielded a better F 1 score. The only systematic difference between our submission and the others was a low F 1 score for the 'other-languages' class (0.46), while all teams were able to achieve a score of at least 0.54. This suggests that our network was not able to obtain a rep-resentation of this class as robust as that obtained by the other classifiers.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 55,
"text": "Table 13",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3.4"
},
{
"text": "Micro F 1 score LAST 0.93 Nayel 0.92 HWR 0.92 Phlyers 0.90 Table 13 : Final performance of the teams' submissions to the DLI task.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Table 13",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The last editions of the VarDial evaluation campaign (Zampieri et al., 2019; G\u0203man et al., 2020) have seen an increased use of deep learning techniques for language identification, which in several cases yielded the best performance at the tasks (Tudoreanu, 2019; Bernier-Colborne et al., 2019) . In this work, we tried to compare the performance of CNNs and shallow models for three out of the four tasks at VarDial 2021. While for the ULI task developing a CNN turned out to be challenging, for both the RDI and the DLI task CNNs yielded performances which were in line with the baselines established by the more classic shallow models, even though the final results showed that they are prone to overfitting. It is interesting to note that shuffling the words in the training data improved the accuracy of our CNN classifiers, in particular for the RDI task. The procedure essentially introduces noise in the data, because the order of the words in the sentences will be ungrammatical after they are shuffled, so why it improves the performance of the classifier is not clear. One possibility is that it introduces the network to word combinations that would be possible in the language (for instance, but switching a subject and an object, or by juxtaposing words separated by modifiers), increasing the diversification of the training data. Another possibility is that since shuffling words does not affect character sequences within words, but at word boundaries, shuffling has the effect of preventing the network from focusing on sequences with spaces in the middle, which could be less meaningful than sequences within words to learn the lexicon and the morphology associated to each language variety. In the current experiments, this strategy had the effect of reducing overfitting. This outcome will require more investigation in the future.",
"cite_spans": [
{
"start": 53,
"end": 76,
"text": "(Zampieri et al., 2019;",
"ref_id": "BIBREF41"
},
{
"start": 77,
"end": 96,
"text": "G\u0203man et al., 2020)",
"ref_id": null
},
{
"start": 246,
"end": 263,
"text": "(Tudoreanu, 2019;",
"ref_id": "BIBREF34"
},
{
"start": 264,
"end": 294,
"text": "Bernier-Colborne et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "While data augmentation is popular in image classification (Wang and Perez, 2017; Cubuk et al., 2019) , it has so far had limited application in NLP (Coulombe, 2018; Kobayashi, 2018; Wei and Zou, 2019) . Our experiments on the VarDial 2021 shared tasks suggest that data augmentation can play an important role in adapting neural models to the task of language identification.",
"cite_spans": [
{
"start": 59,
"end": 81,
"text": "(Wang and Perez, 2017;",
"ref_id": "BIBREF35"
},
{
"start": 82,
"end": 101,
"text": "Cubuk et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 149,
"end": 165,
"text": "(Coulombe, 2018;",
"ref_id": "BIBREF12"
},
{
"start": 166,
"end": 182,
"text": "Kobayashi, 2018;",
"ref_id": "BIBREF26"
},
{
"start": 183,
"end": 201,
"text": "Wei and Zou, 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "A Appendix -Data augmentation Following Wei and Zou (2019), we ran some data augmentation experiments on the dataset of the RDI task. We experimented with Random Swap (swap the position of two words in the sentence), Random Delection (remove one word in the sentence), and Random Insertion (insert one extra word in the sentence). We did not experiment with Random Replacement, which involves the replacement of a word with a synonym. We also experimented with an additional technique, Shuffling, in which the words of the sentence are simply shuffled, and which is essentially a variation of Random Swap. We trained the network, on each augmented dataset, for 5 different times, and determined its test accuracy on the same hold-out dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "The results of the experiments on the model pretrained on news (the news+tweets model) are in Table 14 . All techniques led to an improvement of the performance of the network, and the best improvement was obtained by shuffling the words of the sentences. ",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 102,
"text": "Table 14",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "The code for the models employed in this work is found at https://github.com/AndreaCeolin/ VarDial2021.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://colab.research.google.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Thanks to the co-founder of Team Phlyers, Hong Zhang.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An Unsupervised Morphological Criterion for Discriminating Similar Languages",
"authors": [
{
"first": "Adrien",
"middle": [],
"last": "Barbaresi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3)",
"volume": "",
"issue": "",
"pages": "212--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrien Barbaresi. 2016. An Unsupervised Morpho- logical Criterion for Discriminating Similar Lan- guages. In Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3), pages 212-220, Osaka, Japan.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Language identification for creating language-specific Twitter collections",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "Mossaab",
"middle": [],
"last": "Bagdouri",
"suffix": ""
},
{
"first": "Clayton",
"middle": [],
"last": "Fink",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the second workshop on language in social media",
"volume": "",
"issue": "",
"pages": "65--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma, Paul McNamee, Mossaab Bagdouri, Clayton Fink, and Theresa Wilson. 2012. Language identification for creating language-specific Twitter collections. In Proceedings of the second workshop on language in social media, pages 65-74.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Challenges in neural language identification: NRC at VarDial 2020",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Bernier-Colborne",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "273--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Bernier-Colborne and Cyril Goutte. 2020. Challenges in neural language identification: NRC at VarDial 2020. In Proceedings of the 7th Work- shop on NLP for Similar Languages, Varieties and Dialects, pages 273-282.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Improving cuneiform language identification with BERT",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Bernier-Colborne",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "L\u00e9ger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "17--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Bernier-Colborne, Cyril Goutte, and Serge L\u00e9ger. 2019. Improving cuneiform language iden- tification with BERT. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 17-25.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "MOROCO: The Moldavian and Romanian Dialectal Corpus",
"authors": [
{
"first": "Andrei",
"middle": [
"M"
],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "688--698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei M. Butnaru and Radu Tudor Ionescu. 2019. MOROCO: The Moldavian and Romanian Dialectal Corpus. In Proceedings of ACL, pages 688-698.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discriminating between standard Romanian and Moldavian tweets using filtered character ngrams",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Ceolin",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "265--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Ceolin and Hong Zhang. 2020. Discriminating between standard Romanian and Moldavian tweets using filtered character ngrams. In Proceedings of the 7th Workshop on NLP for Similar Languages, Va- rieties and Dialects, pages 265-272.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Eswari Rajagopal, Yves Scherrer, and Marcos Zampieri. 2021. Findings of the VarDial Evaluation Campaign 2021",
"authors": [
{
"first": "Mihaela",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "G\u0203man",
"suffix": ""
},
{
"first": "Tudor",
"middle": [],
"last": "Radu",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Partanen",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Purschke",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Mihaela G\u0203man, Radu Tu- dor Ionescu, Heidi Jauhiainen, Tommi Jauhiainen, Krister Lind\u00e9n, Nikola Ljube\u0161i\u0107, Niko Partanen, Ruba Priyadharshini, Christoph Purschke, Eswari Rajagopal, Yves Scherrer, and Marcos Zampieri. 2021. Findings of the VarDial Evaluation Campaign 2021. In Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A sentiment analysis dataset for codemixed Malayalam-English",
"authors": [
{
"first": "Navya",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "Shardul",
"middle": [],
"last": "Jose",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Suryawanshi",
"suffix": ""
},
{
"first": "John",
"middle": [
"Philip"
],
"last": "Sherly",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mc-Crae",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "177--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Navya Jose, Shardul Suryawanshi, Elizabeth Sherly, and John Philip Mc- Crae. 2020a. A sentiment analysis dataset for code- mixed Malayalam-English. pages 177-184, Mar- seille, France. European Language Resources asso- ciation.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Corpus creation for sentiment analysis in code-mixed Tamil-English text",
"authors": [
{
"first": "Vigneshwaran",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Muralidaran",
"suffix": ""
},
{
"first": "John",
"middle": [
"Philip"
],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mc-Crae",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)",
"volume": "",
"issue": "",
"pages": "202--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Vigneshwaran Murali- daran, Ruba Priyadharshini, and John Philip Mc- Crae. 2020b. Corpus creation for sentiment anal- ysis in code-mixed Tamil-English text. In Pro- ceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced lan- guages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 202-210, Marseille, France. European Language Re- sources association.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "CLUZH at VarDial GDI 2017: Testing a variety of machine learning tools for the classification of swiss German dialects",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)",
"volume": "",
"issue": "",
"pages": "170--177",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1221"
]
},
"num": null,
"urls": [],
"raw_text": "Simon Clematide and Peter Makarov. 2017. CLUZH at VarDial GDI 2017: Testing a variety of machine learning tools for the classification of swiss German dialects. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 170-177, Valencia, Spain. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dialect identification under domain shift: Experiments with discriminating Romanian and Moldavian",
"authors": [
{
"first": "",
"middle": [],
"last": "\u00c7 Agr\u0131 \u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "186--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c7 agr\u0131 \u00c7\u00f6ltekin. 2020. Dialect identification under do- main shift: Experiments with discriminating Roma- nian and Moldavian. In Proceedings of the 7th Work- shop on NLP for Similar Languages, Varieties and Dialects, pages 186-192.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "T\u00fcbingen system in VarDial 2017 shared task: experiments with language identification and cross-lingual parsing",
"authors": [
{
"first": "\u00c7",
"middle": [],
"last": "Agr\u0131 \u00c7\u00f6ltekin",
"suffix": ""
},
{
"first": "Taraka",
"middle": [],
"last": "Rama",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)",
"volume": "",
"issue": "",
"pages": "146--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c7 agr\u0131 \u00c7\u00f6ltekin and Taraka Rama. 2017. T\u00fcbingen sys- tem in VarDial 2017 shared task: experiments with language identification and cross-lingual parsing. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 146-155, Valencia, Spain.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Text data augmentation made simple by leveraging NLP Cloud APIs",
"authors": [
{
"first": "Claude",
"middle": [],
"last": "Coulombe",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.04718"
]
},
"num": null,
"urls": [],
"raw_text": "Claude Coulombe. 2018. Text data augmentation made simple by leveraging NLP Cloud APIs. arXiv preprint arXiv:1812.04718.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Autoaugment: Learning augmentation strategies from data",
"authors": [
{
"first": "D",
"middle": [],
"last": "Ekin",
"suffix": ""
},
{
"first": "Barret",
"middle": [],
"last": "Cubuk",
"suffix": ""
},
{
"first": "Dandelion",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Mane",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Vasudevan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "113--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. 2019. Autoaugment: Learning augmentation strategies from data. In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 113-123.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The birth of Romanian BERT",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Dumitrescu",
"suffix": ""
},
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "4324--4328",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.387"
]
},
"num": null,
"urls": [],
"raw_text": "Stefan Dumitrescu, Andrei-Marius Avram, and Sampo Pyysalo. 2020. The birth of Romanian BERT. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 4324-4328, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Statistical identification of language",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Dunning. 1994. Statistical identification of lan- guage. Computing Research Laboratory, New Mex- ico State University Las Cruces, NM, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The NRC System for Discriminating Similar Languages",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "L\u00e9ger",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects (VarDial)",
"volume": "",
"issue": "",
"pages": "139--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyril Goutte, Serge L\u00e9ger, and Marine Carpuat. 2014. The NRC System for Discriminating Similar Lan- guages. In Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects (VarDial), pages 139-145, Dublin, Ire- land.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Report on the VarDial Evaluation Campaign 2020",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Purschke",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Seventh Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Purschke, Yves Scherrer, and Marcos Zampieri. 2020. A Report on the VarDial Evaluation Cam- paign 2020. In Proceedings of the Seventh Work- shop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 1-14.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "KanCMD: Kannada CodeMixed dataset for sentiment analysis and offensive language detection",
"authors": [
{
"first": "Adeep",
"middle": [],
"last": "Hande",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adeep Hande, Ruba Priyadharshini, and Bharathi Raja Chakravarthi. 2020. KanCMD: Kannada CodeMixed dataset for sentiment analysis and offensive language detection. In Proceedings of the Third Workshop on Computational Modeling of Peo- ple's Opinions, Personality, and Emotion's in Social Media, pages 54-63, Barcelona, Spain (Online). Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Toward automatic identification of the language of an utterance. I. Preliminary methodological considerations",
"authors": [
{
"first": "S",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"P"
],
"last": "House",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Neuburg",
"suffix": ""
}
],
"year": 1977,
"venue": "The Journal of the Acoustical Society of America",
"volume": "62",
"issue": "3",
"pages": "708--713",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur S House and Edward P Neuburg. 1977. Toward automatic identification of the language of an utter- ance. I. Preliminary methodological considerations. The Journal of the Acoustical Society of America, 62(3):708-713.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Ensemble Methods to Distinguish Mainland and Taiwan Chinese",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zuoyu",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Yiwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "165--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Hu, Wen Li, He Zhou, Zuoyu Tian, Yiwen Zhang, and Liang Zou. 2019. Ensemble Methods to Distin- guish Mainland and Taiwan Chinese. In Proceed- ings of the Sixth Workshop on NLP for Similar Lan- guages, Varieties and Dialects, pages 165-171.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A neural model for language identification in code-switched tweets",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Jaech",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The Second Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "60--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Jaech, George Mulcaire, Mari Ostendorf, and Noah A Smith. 2016. A neural model for language identification in code-switched tweets. In Proceed- ings of The Second Workshop on Computational Ap- proaches to Code Switching, pages 60-64.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Experiments in language variety geolocation and dialect identification",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "220--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommi Jauhiainen, Heidi Jauhiainen, and Krister Lind\u00e9n. 2020a. Experiments in language variety ge- olocation and dialect identification. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 220-231.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Uralic Language Identification (ULI) 2020 shared task dataset and the Wanca 2017 corpora",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Partanen",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "173--185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommi Jauhiainen, Heidi Jauhiainen, Niko Partanen, and Krister Lind\u00e9n. 2020b. Uralic Language Iden- tification (ULI) 2020 shared task dataset and the Wanca 2017 corpora. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 173-185.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "HeLI, a word-based backoff method for language identification",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "153--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommi Jauhiainen, Krister Lind\u00e9n, and Heidi Jauhi- ainen. 2016. HeLI, a word-based backoff method for language identification. In Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2016)., pages 153-162.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Contextual augmentation: Data augmentation by words with paradigmatic relations",
"authors": [
{
"first": "Sosuke",
"middle": [],
"last": "Kobayashi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "452--457",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2072"
]
},
"num": null,
"urls": [],
"raw_text": "Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic re- lations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 452-457, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Exploring classifier combinations for language variety identification",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Kreutz",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "191--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Kreutz and Walter Daelemans. 2018. Exploring classifier combinations for language variety identi- fication. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018), pages 191-198.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Accurate language identification of Twitter messages",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 5th workshop on language analysis for social media (LASM)",
"volume": "",
"issue": "",
"pages": "17--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lui and Timothy Baldwin. 2014. Accurate lan- guage identification of Twitter messages. In Pro- ceedings of the 5th workshop on language analysis for social media (LASM), pages 17-25.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Language identification: a solved problem suitable for undergraduate instruction",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of computing sciences in colleges",
"volume": "20",
"issue": "3",
"pages": "94--101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul McNamee. 2005. Language identification: a solved problem suitable for undergraduate instruc- tion. Journal of computing sciences in colleges, 20(3):94-101.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "When sparse traditional models outperform dense neural networks: the curious case of discriminating between similar languages",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Medvedeva",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Kroon",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)",
"volume": "",
"issue": "",
"pages": "156--163",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1219"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Medvedeva, Martin Kroon, and Barbara Plank. 2017. When sparse traditional models outperform dense neural networks: the curious case of discrimi- nating between similar languages. In Proceedings of the Fourth Workshop on NLP for Similar Lan- guages, Varieties and Dialects (VarDial), pages 156- 163, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Applying multilingual and monolingual transformer-based models for dialect identification",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Popa",
"suffix": ""
},
{
"first": "Vlad",
"middle": [],
"last": "Stef\u0203nescu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "193--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Popa and Vlad Stef\u0203nescu. 2020. Apply- ing multilingual and monolingual transformer-based models for dialect identification. In Proceedings of the 7th Workshop on NLP for Similar Languages, Va- rieties and Dialects, pages 193-201.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "HeLju@ VarDial 2020: Social media variety geolocation with BERT models",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "202--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Scherrer and Nikola Ljube\u0161i\u0107. 2020. HeLju@ VarDial 2020: Social media variety geolocation with BERT models. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Di- alects, pages 202-211.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Twist bytes: German dialect identification with data mining optimization",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Benites De Azevedo E Souza",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Grubenmann",
"suffix": ""
},
{
"first": "Pius",
"middle": [],
"last": "von D\u00e4niken",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Von Gruenigen",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"Milan"
],
"last": "Deriu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Cieliebak",
"suffix": ""
}
],
"year": 2018,
"venue": "27th International Conference on Computational Linguistics (COLING 2018)",
"volume": "",
"issue": "",
"pages": "218--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Benites de Azevedo e Souza, Ralf Gruben- mann, Pius von D\u00e4niken, Dirk Von Gruenigen, Jan Milan Deriu, and Mark Cieliebak. 2018. Twist bytes: German dialect identification with data min- ing optimization. In 27th International Confer- ence on Computational Linguistics (COLING 2018), Santa Fe, August 20-26, 2018, pages 218-227. Var- Dial.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Ensemble based on skip-gram and triplet loss neural networks for Moldavian vs. Romanian cross-dialect topic identification",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Tudoreanu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "202--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Tudoreanu. 2019. DTeam@ VarDial 2019: En- semble based on skip-gram and triplet loss neural networks for Moldavian vs. Romanian cross-dialect topic identification. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 202-208.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The effectiveness of data augmentation in image classification using deep learning",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Perez",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.04621"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Wang and Luis Perez. 2017. The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6382--6388",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1670"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Wei and Kai Zou. 2019. EDA: Easy data aug- mentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Language discrimination and transfer learning for similar languages: experiments with feature combinations and adaptation",
"authors": [
{
"first": "Nianheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Demattos",
"suffix": ""
},
{
"first": "Kwok",
"middle": [
"Him"
],
"last": "So",
"suffix": ""
},
{
"first": "Pin-zhen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "\u00c7a\u011fr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianheng Wu, Eric DeMattos, Kwok Him So, Pin-zhen Chen, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2019. Language discrim- ination and transfer learning for similar languages: experiments with feature combinations and adapta- tion. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 54-63.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Exploring the power of Romanian BERT for dialect identification",
"authors": [
{
"first": "George-Eduard",
"middle": [],
"last": "Zaharia",
"suffix": ""
},
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": ""
},
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Cercel",
"suffix": ""
},
{
"first": "Traian",
"middle": [],
"last": "Rebedea",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "232--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George-Eduard Zaharia, Andrei-Marius Avram, Dumitru-Clementin Cercel, and Traian Rebedea. 2020. Exploring the power of Romanian BERT for dialect identification. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 232-241.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Findings of the VarDial Evaluation Campaign",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "No\u00ebmi",
"middle": [],
"last": "Aepli",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Nikola Ljube\u0161i\u0107, Preslav Nakov, Ahmed Ali, J\u00f6rg Tiedemann, Yves Scherrer, and No\u00ebmi Aepli. 2017. Findings of the VarDial Evaluation Campaign 2017. In Proceedings of the Fourth Workshop on NLP for Similar Lan- guages, Varieties and Dialects (VarDial), pages 1- 15, Valencia, Spain.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Language Identification and Morphosyntactic Tagging: The Second VarDial Evaluation Campaign",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "Suwon",
"middle": [],
"last": "Shuon",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Samard\u017ei\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Van Der Lee",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Grondelaers",
"suffix": ""
},
{
"first": "Nelleke",
"middle": [],
"last": "Oostdijk",
"suffix": ""
},
{
"first": "Antal",
"middle": [],
"last": "Van Den Bosch",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Bornini",
"middle": [],
"last": "Lahiri",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Jain",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)",
"volume": "",
"issue": "",
"pages": "1--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Ahmed Ali, Suwon Shuon, James Glass, Yves Scherrer, Tanja Samard\u017ei\u0107, Nikola Ljube\u0161i\u0107, J\u00f6rg Tiedemann, Chris van der Lee, Stefan Grondelaers, Nelleke Oostdijk, Antal van den Bosch, Ritesh Ku- mar, Bornini Lahiri, and Mayank Jain. 2018. Lan- guage Identification and Morphosyntactic Tagging: The Second VarDial Evaluation Campaign. In Pro- ceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 1-17, Santa Fe, USA.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A Report on the Third VarDial Evaluation Campaign",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Samard\u017ei\u0107",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Klyueva",
"suffix": ""
},
{
"first": "Tung-Le",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Yves Scherrer, Tanja Samard\u017ei\u0107, Francis Tyers, Miikka Silfverberg, Natalia Klyueva, Tung-Le Pan, Chu-Ren Huang, Radu Tudor Ionescu, Andrei Butnaru, and Tommi Jauhiainen. 2019. A Report on the Third VarDial Evaluation Campaign. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial). Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in neural information pro- cessing systems, pages 649-657.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "TweetLID: a benchmark for tweet language identification. Language Resources and Evaluation",
"authors": [
{
"first": "Arkaitz",
"middle": [],
"last": "Zubiaga",
"suffix": ""
},
{
"first": "Inaki",
"middle": [],
"last": "San Vicente",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Gamallo",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"Ramom"
],
"last": "Pichel",
"suffix": ""
},
{
"first": "Inaki",
"middle": [],
"last": "Alegria",
"suffix": ""
},
{
"first": "Nora",
"middle": [],
"last": "Aranberri",
"suffix": ""
},
{
"first": "Aitzol",
"middle": [],
"last": "Ezeiza",
"suffix": ""
},
{
"first": "V\u00edctor",
"middle": [],
"last": "Fresno",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "50",
"issue": "",
"pages": "729--766",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arkaitz Zubiaga, Inaki San Vicente, Pablo Gamallo, Jos\u00e9 Ramom Pichel, Inaki Alegria, Nora Aranberri, Aitzol Ezeiza, and V\u00edctor Fresno. 2016. TweetLID: a benchmark for tweet language identification. Lan- guage Resources and Evaluation, 50(4):729-766.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "CNN news + tweets model. Comparing the training and validation performance measured by macro F 1 score through 10 epochs of training. Blue line: training; orange line: validation. The training dataset is not augmented."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "CNN news + tweets model. Comparing the training and validation performance measured by macro F 1 score through 5 epochs of training. Blue line: training; orange line: validation. Training set augmented."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "CNN model. Comparing the training and validation performance measured by micro F 1 score through 10 epochs of training. Blue line: training; orange line: validation. Sentences are not shuffled."
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "CNN model. Comparing the training and validation performance measured by micro F 1 score through 10 epochs of training. Blue line: training; orange line: validation. Sentences are shuffled."
},
"TABREF2": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Final performance of the models on the evaluation of the RDI task.",
"html": null
},
"TABREF4": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF6": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Evaluation of the models used to distinguish among the 'target' languages.",
"html": null
},
"TABREF7": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF9": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Evaluation of the models developed for Track 3.",
"html": null
},
"TABREF11": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Ensemble methods developed for the three subtasks. Target a refers to the prediction of a language on target made by the classifier developed for Track 1 and 2, while Target b refers to the prediction of a language on target made by the classifier developed for Track 3.",
"html": null
},
"TABREF13": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Results for Track 1.",
"html": null
},
"TABREF15": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Results for Track 2.",
"html": null
},
"TABREF17": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Results for Track 3.",
"html": null
},
"TABREF19": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Summary of our experiments on data augmentation in the RDI task, on the news+tweets model.",
"html": null
}
}
}
}