{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:13:48.774621Z"
},
"title": "A Language-aware Approach to Code-switched Morphological Taggin\u0123",
"authors": [
{
"first": "Saziye",
"middle": [
"Bet\u00fcl"
],
"last": "\u00d6zate\u015f",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "saziye.oezates@ims.uni-stuttgart.de"
},
{
"first": "\u00d6zlem",
"middle": [],
"last": "\u00c7etinoglu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "ozlem.cetinoglu@ims.uni-stuttgart.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Morphological tagging of code-switching (CS) data becomes more challenging especially when language pairs composing the CS data have different morphological representations. In this paper, we explore a number of ways of implementing a language-aware morphological tagging method and present our approach for integrating language IDs into a transformerbased framework for CS morphological tagging. We perform our set of experiments on the Turkish-German SAGT Treebank. Experimental results show that including language IDs to the learning model significantly improves accuracy over other approaches.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Morphological tagging of code-switching (CS) data becomes more challenging especially when language pairs composing the CS data have different morphological representations. In this paper, we explore a number of ways of implementing a language-aware morphological tagging method and present our approach for integrating language IDs into a transformerbased framework for CS morphological tagging. We perform our set of experiments on the Turkish-German SAGT Treebank. Experimental results show that including language IDs to the learning model significantly improves accuracy over other approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Morphological tagging is a well known sequence labelling task in Natural Language Processing (NLP). It is the task of finding the correct morphological analysis for a given word form. The analysis is usually represented with a set of morphological features. Tagging these features is beneficial in solving most NLP tasks since having knowledge about the morphological analysis of natural language words gives clues about their syntactic nature and their roles in context (M\u00fcller and Sch\u00fctze, 2015) . Morphological tagging becomes more important when the language in question is a morphologically rich one and the part-of-speech (POS) information about word forms is not sufficient to syntactically classify them (Tsarfaty et al., 2013) .",
"cite_spans": [
{
"start": 471,
"end": 497,
"text": "(M\u00fcller and Sch\u00fctze, 2015)",
"ref_id": "BIBREF20"
},
{
"start": 712,
"end": 735,
"text": "(Tsarfaty et al., 2013)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Morphological tagging is challenging in itself 1 and it becomes more challenging when the processed language is code-switched, a phenomenon that occurs when bilingual speakers frequently switch between languages and produce utterances 1 For instance, in the CoNLL 2018 Shared Task of Multilingual Parsing from Raw Text to Universal Dependencies, morphological tagging has the lowest range of scores among sentence segmentation, word segmentation, tokenisation, lemmatisation, and POS tagging. universaldependencies. org/conll18/results.html that include word forms and phrases from both languages. The challenge amplifies as the linguistic difference between the composing languages increases. This is because unlike POS annotation that can be made common across languages (e.g. Universal Dependencies ), morphological annotation is more language-specific.",
"cite_spans": [
{
"start": 235,
"end": 236,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The example in Figure 1 shows this difference explicitly. Even though both Autos in German and arabalarda in Turkish share the same POS tag as NOUN, they have different morphological analyses. This difference stems from inherent properties of these languages. German employs grammatical gender while Turkish does not. Additionally in the example, the Turkish locative case corresponds to German dative. Such structural differences, combined with the rich morphology of individual languages taking part in CS data, make CS morphological tagging even more challenging with respect to CS POS tagging, a task that is a more common and more studied NLP task (cf. Section 2). In fact, there has not been any research focused on CS morphological tagging before. We hypothesise that the language-dependent nature of morphological tagging can be solved more successfully for the case of CS data when the learning model has the knowledge of which language a word form belongs to. Starting from this hypothesis, we search ways of including the language ID (LID) information to tagging and present a language-aware approach. The proposed approach integrates LIDs to the dense representation of input tokens in a transformer-based learning model. We conducted experiments on the only CS dataset with complete morphological annotation (Turkish-German SAGT Treebank (\u00c7etinoglu and \u00c7\u00f6ltekin, 2019) ). 2 Results show that the proposed approach outperforms all of the baselines significantly and the use of LIDs is beneficial in tagging morphology for CS data. Our contributions are twofold: We present the first study on CS morphological tagging, and our data-driven method of integrating LIDs is applicable to any CS dataset and task that can exploit language IDs.",
"cite_spans": [
{
"start": 1351,
"end": 1381,
"text": "(\u00c7etinoglu and \u00c7\u00f6ltekin, 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although there does not exist any prior study on CS morphological tagging, utilising language IDs in other CS tasks has been quite common. We divide how LID is utilised into three methods: as part of a pipeline, as part of joint processing, and as Machine Learning (ML) features. While one or more of these techniques have been applied to many CS tasks, e.g. parsing (Bhat et al., 2017) , sentiment analysis (Vilares et al., 2016) , and normalisation (van der Goot and \u00c7etinoglu, 2021), we focus here mainly on POS tagging, as it is a sequence labelling task and the closest one to morphological tagging.",
"cite_spans": [
{
"start": 367,
"end": 386,
"text": "(Bhat et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 408,
"end": 430,
"text": "(Vilares et al., 2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "One of the most commonly used pipeline approach is processing the data as monolingual fragments (Vyas et al., 2014; Jamatia et al., 2015; Barman et al., 2016; Bhat et al., 2017; AlGhamdi et al., 2016) . For each language in the mixed data, a monolingual model is trained. During prediction, the input is split into fragments according to their language IDs and each fragment is processed by the respective monolingual model. The output is then merged into its original form. The advantage of this approach is to eliminate the need of CS data for training. However, context information is lost.",
"cite_spans": [
{
"start": 96,
"end": 115,
"text": "(Vyas et al., 2014;",
"ref_id": "BIBREF35"
},
{
"start": 116,
"end": 137,
"text": "Jamatia et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 138,
"end": 158,
"text": "Barman et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 159,
"end": 177,
"text": "Bhat et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 178,
"end": 200,
"text": "AlGhamdi et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The other common pipeline approach is using LIDs in decision-making after getting predictions from monolingual models. In this setup the mixed input is given to both monolingual models. The predicted LID is then used to select the model output of the corresponding language. Solorio and Liu (2008) is the first to use this approach on English-Spanish POS tagging. Later Barman et al. (2016) and AlGhamdi et al. (2016) used this setup for English-Bengali-Hindi, and for English-Spanish and Modern Standard Arabic-Egyptian Arabic, as well as the first pipeline technique. While in Barman et al.'s (2016) case using the second pipeline method slightly outperforms the first one, AlGhamdi et al. (2016) show the first pipeline outperforms by a large margin. Thus we opted for the first architecture as one of our baselines.",
"cite_spans": [
{
"start": 275,
"end": 297,
"text": "Solorio and Liu (2008)",
"ref_id": "BIBREF26"
},
{
"start": 370,
"end": 390,
"text": "Barman et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 395,
"end": 417,
"text": "AlGhamdi et al. (2016)",
"ref_id": "BIBREF2"
},
{
"start": 579,
"end": 601,
"text": "Barman et al.'s (2016)",
"ref_id": null
},
{
"start": 676,
"end": 698,
"text": "AlGhamdi et al. (2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another model of Barman et al.'s (2016) was jointly trained LID and POS taggers that achieve a quite large improvement over their pipeline models. Soto and Hirschberg (2018) also trained LID and POS taggers together in their BiLSTM architecture. AlGhamdi and Diab (2019) choose joint LID and POS tagging as one of their architectures and show that distant language pairs Spanish-English and Hindi-English benefit from multi-task learning.",
"cite_spans": [
{
"start": 17,
"end": 39,
"text": "Barman et al.'s (2016)",
"ref_id": null
},
{
"start": 147,
"end": 173,
"text": "Soto and Hirschberg (2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In many work from pre-neural era, LIDs are given as one of the features to ML models. While Solorio and Liu (2008) did not observe any significant improvement in doing so, Jamatia et al. (2015) shows that adding the LID of a token improves its POS tagging for English-Hindi. Sequiera et al. (2015) and Bhat et al. (2017) also inserted LID as a feature into their ML models. As a neural approach, Soto and Hirschberg (2018) represented the six LID labels existing in their data as boolean features and concatenated them with word vectors in a BiLSTM along with other features they used.",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "Solorio and Liu (2008)",
"ref_id": "BIBREF26"
},
{
"start": 172,
"end": 193,
"text": "Jamatia et al. (2015)",
"ref_id": "BIBREF13"
},
{
"start": 275,
"end": 297,
"text": "Sequiera et al. (2015)",
"ref_id": "BIBREF24"
},
{
"start": 302,
"end": 320,
"text": "Bhat et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 396,
"end": 422,
"text": "Soto and Hirschberg (2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Different from the previous approaches, Aguilar and Solorio (2020) use language identification to create a code-switching ELMo from English ELMo (Peters et al., 2018) . Later they show the effectiveness of their CS-ELMo by achieving state-of-theart POS tagging results on a Hindi-English dataset (Singh et al., 2018) . They also employ multi-task learning where their auxiliary task is language identification with a simplified LID tag set for LID, POS, and NER tagging.",
"cite_spans": [
{
"start": 145,
"end": 166,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 296,
"end": 316,
"text": "(Singh et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For morphological tagging of CS data, we chose to use STEPS 3 (Gr\u00fcnewald et al., 2020) as our framework. STEPS is an NLP tool for tagging and syntactic parsing in Universal Dependencies (UD) style . Our motivation behind deciding on STEPS as our framework is based on two reasons. First, for token representation it utilises transformer-based language models, which have recently become famous for their outstanding success in various NLP tasks (Kondratyuk and Straka, 2019; Hoang et al., 2019) . Second, STEPS is an open-source system with a minimum use of black-box modules that make the modification of the source codes very challenging, if not impossible. Moreover, STEPS is a current state-of-the-art NLP tool that outperformed other state-of-the-art tools Udify (Kondratyuk and Straka, 2019) and UD-Pipe 2.0 in tagging and parsing of several languages (Gr\u00fcnewald et al., 2020) . Section 3.1 gives a brief description about STEPS. Sections 3.2 and 3.3 describe the baseline methods and our proposed approach for integrating LIDs to CS morphological tagging, respectively.",
"cite_spans": [
{
"start": 62,
"end": 86,
"text": "(Gr\u00fcnewald et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 445,
"end": 474,
"text": "(Kondratyuk and Straka, 2019;",
"ref_id": "BIBREF15"
},
{
"start": 475,
"end": 494,
"text": "Hoang et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 768,
"end": 797,
"text": "(Kondratyuk and Straka, 2019)",
"ref_id": "BIBREF15"
},
{
"start": 858,
"end": 882,
"text": "(Gr\u00fcnewald et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "STEPS is mainly developed as a multilingual system for parsing. It also performs sequence labelling tasks such as POS and morphological tagging in a multi-task learning (MTL) setup. For our purposes, we adapted STEPS to solely perform sequence labelling. When this adapted version is used standalone, it becomes a baseline for our task. We mention this version as the Standalone approach throughout the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "3.1"
},
{
"text": "The STEPS architecture follows Kondratyuk and Straka (2019) for computing token embeddings from the transformer-based language model and performing tagging and parsing. Token embeddings are calculated as a weighted sum of all intermediate outputs of the transformer layers. Coefficients of this weighted sum are learned during training. For sequence labelling, STEPS utilises a single-layer feed-forward neural network on top of token representations to extract the logit vectors for respective label vocabularies. More detailed information about the STEPS architecture can be found in (Gr\u00fcnewald et al., 2020) .",
"cite_spans": [
{
"start": 31,
"end": 59,
"text": "Kondratyuk and Straka (2019)",
"ref_id": "BIBREF15"
},
{
"start": 586,
"end": 610,
"text": "(Gr\u00fcnewald et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "3.1"
},
{
"text": "In a given dataset, the language-dependent morphological annotation of words that share the same POS tag gives us the intuition that feeding a model with token-wise LID information can help improve its accuracy for CS morphological tagging. Starting from this hypothesis, we designed and experimented with three ways of using token-level LID information in the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines for Language ID Integration",
"sec_num": "3.2"
},
{
"text": "One of the first methods that come to mind when dealing with CS data is splitting the data from CS points and treating the split parts as monolingual data as in the first pipeline method mentioned in Section 2. For our case, this method consists of three steps. First, input data is split to sub-parts containing monolingual data only. Second, monolingual models for each sub-part are trained. Each trained model processes its corresponding sub-part separately. In the last step, the output of models are joined to reach the processed version of the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Split (DSplit)",
"sec_num": "3.2.1"
},
{
"text": "To achieve the split of CS data into monolingual parts, we created a simple algorithm. Starting from the first token in a sentence, the algorithm creates sentence fragments whenever it encounters a switch between tokens with LIDs denoting one of the main languages in the CS data. Tokens with other LIDs (e.g., punctuation or mixed tokens where intra-word CS occurs) stay in the fragment created at that moment. Figure 2 depicts this process on a Turkish-German sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 412,
"end": 420,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Split (DSplit)",
"sec_num": "3.2.1"
},
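The splitting procedure just described can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code; the function name `split_by_lid` is hypothetical, and the LID labels follow the SAGT Treebank scheme (TR and DE as main languages, with OTHER, MIXED, and LANG3 tokens staying in the fragment open at that moment).

```python
# Sketch of the DSplit fragmentation step: a sentence is cut into
# monolingual fragments whenever the LID switches between the two main
# languages; tokens with other LIDs stay in the current fragment.
MAIN_LIDS = {"TR", "DE"}

def split_by_lid(tokens, lids):
    """Split a token sequence into (main_lid, tokens) fragments."""
    fragments = []
    cur_tokens, cur_lid = [], None
    for tok, lid in zip(tokens, lids):
        if lid in MAIN_LIDS:
            if cur_lid is not None and lid != cur_lid:
                # switch between the main languages: close the fragment
                fragments.append((cur_lid, cur_tokens))
                cur_tokens = []
            cur_lid = lid
        # OTHER/MIXED/LANG3 tokens simply join the open fragment
        cur_tokens.append(tok)
    if cur_tokens:
        fragments.append((cur_lid, cur_tokens))
    return fragments
```

Each fragment is then tagged by the monolingual model matching its main LID, and the outputs are re-joined in the original order.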
{
"text": "Another frequently applied method is the multi-task learning approach when two or more related tasks have the potential of benefitting each other through the domain information they contain. The main idea of this approach is improving the learning of a model for a task with the help of the knowledge contained by another task (Zhang and Yang, 2017) . MTL has been shown effective in various areas in NLP (Collobert and Weston, 2008; Fang et al., 2019) , especially in low-resource scenarios, usually as a way of transferring knowledge from a high-resource auxiliary task to a low-resource target task as in Lin et al. (2018) . Our case is also a low-resource scenario where we have two related tasks, morphological tagging as the target and LID tagging as a simpler auxiliary task. In our setup, these two tasks are trained together with the same model and the loss is computed by summing losses of each task. The loss for LID tagging is scaled down 5% in training, as it was done for simpler tasks in (Gr\u00fcnewald et al., 2020) . This loss scaling is for preventing the validation accuracy for LID tagging to go up too quickly and cause an underfitting for morphological tagging.",
"cite_spans": [
{
"start": 327,
"end": 349,
"text": "(Zhang and Yang, 2017)",
"ref_id": "BIBREF36"
},
{
"start": 405,
"end": 433,
"text": "(Collobert and Weston, 2008;",
"ref_id": "BIBREF7"
},
{
"start": 434,
"end": 452,
"text": "Fang et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 608,
"end": 625,
"text": "Lin et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 1003,
"end": 1027,
"text": "(Gr\u00fcnewald et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Learning (MTL)",
"sec_num": "3.2.2"
},
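The joint loss computation can be sketched as below. Note this is a sketch under an assumption: the paper says the LID loss is "scaled down 5%", which we interpret here as multiplying it by a small constant factor before summing; the exact factor and scheme follow Grünewald et al. (2020) and may differ.

```python
# Sketch of the MTL loss: the morphological tagging loss and a
# down-scaled LID tagging loss are summed into one training loss.
# LID_LOSS_SCALE = 0.05 is an assumed reading of "scaled down 5%".
LID_LOSS_SCALE = 0.05

def joint_loss(morph_loss: float, lid_loss: float) -> float:
    """Combined loss backpropagated through the shared model."""
    return morph_loss + LID_LOSS_SCALE * lid_loss
```

Down-weighting the auxiliary term keeps the easy LID task from dominating training, which is the underfitting concern raised above.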
{
"text": "Our proposal to integrate LIDs to the model is via creating LID embeddings and concatenating them to the embeddings of input tokens. The motivation behind this approach is to directly encode the LID information to each token inside the learning model and by this way to lessen the model's confusion caused by the tokens with different LIDs having different morphological annotations. Moreover, this way we can represent each LID label in contrast to DSplit that uses only main LID labels. There are more than one method to represent LIDs as vectors inside the model. One-hot encoding of each LID is one of them. 4 Another method would be starting from a random embedding for each LID and training these embeddings with the rest of the model. Instead of random initialisation, LID embeddings can also be initialised with the average vectors of token embeddings in the training set, calculated for each LID label. Our motivation behind this clustering method is to see whether starting the training of the LID vectors from a more reasonable point will improve accuracy. We experimented with all of these models and chose to continue with the randomly initialised LID embeddings method based on our observation that this method works best among others. The comparison of these methods is discussed in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposal: LID Vectors (LIDVec)",
"sec_num": "3.3"
},
{
"text": "In LIDVec, each LID label is assigned a 100dimensional embedding vector at the beginning of training. The embedding of each input token is then concatenated with its corresponding LID embedding. These concatenated vectors are then given to the model for training. The loss at each epoch is backpropagated to both the token embeddings and the LID embeddings. We apply batch normalisation to token embeddings right after the concatenation. 4 Soto and Hirschberg (2018) use a similar way. They represent LIDs as boolean features concatenated to word vectors in a BiLSTM architecture.",
"cite_spans": [
{
"start": 438,
"end": 439,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposal: LID Vectors (LIDVec)",
"sec_num": "3.3"
},
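The concatenation step can be sketched in plain Python. This is a hypothetical illustration, not the STEPS implementation: in the real model the LID vectors are trainable parameters updated by backpropagation, and the dimensions below match those reported later (768-dimensional token embeddings extended by 100-dimensional LID embeddings).

```python
import random

TOKEN_DIM, LID_DIM = 768, 100
LID_LABELS = ["TR", "DE", "LANG3", "OTHER", "MIXED"]  # SAGT LID tag set

# Randomly initialised LID embeddings, one per LID label; during
# training these would be updated together with the rest of the model.
rng = random.Random(0)
lid_embeddings = {
    lid: [rng.uniform(-0.1, 0.1) for _ in range(LID_DIM)]
    for lid in LID_LABELS
}

def lid_augmented(token_embedding, lid):
    """Concatenate a token embedding with its corresponding LID embedding."""
    return list(token_embedding) + lid_embeddings[lid]
```

The resulting 868-dimensional vectors are what the tagger would consume in place of the plain token embeddings.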
{
"text": "We evaluate our approaches on the Turkish-German SAGT Treebank (\u00c7etinoglu and \u00c7\u00f6ltekin, 2019) UD version 2.7.1. 5 It is based on a Turkish-German code-switching corpus created from conversation recordings of bilinguals. Although the treebank consists of spoken sentences, the transcriptions are normalised and hence the orthography does not pose a challenge in terms of morphological tagging. The SAGT Treebank includes five LID labels: TR for Turkish, DE for German, LANG3 for tokens that belong to a third language other than Turkish and German, OTHER for punctuation, and MIXED for tokens with intra-word code-switching. Example (1) shows the structure of a mixed word from Figure 2. (1) Abendgymnasiumdan night school.from 'from the night school' Here the first part (Abendgymnasium) is a German noun and the second part (-dan) is a Turkish suffix. Although they are from different languages, the token Abendgymnasiumdan has a single language ID since the two parts of the token are written orthographically together.",
"cite_spans": [
{
"start": 63,
"end": 93,
"text": "(\u00c7etinoglu and \u00c7\u00f6ltekin, 2019)",
"ref_id": null
},
{
"start": 112,
"end": 113,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 677,
"end": 686,
"text": "Figure 2.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments 4.1 Data",
"sec_num": "4"
},
{
"text": "We use the original training, development, and test splits in experiments, only further splitting a small part from the development set as the finetuning set. 6 Sentence counts and LID distribution is given in 2.19 on the whole treebank. The counts of unique morphological tags and morphological features that constitute the tags are depicted in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 346,
"end": 353,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments 4.1 Data",
"sec_num": "4"
},
{
"text": "Note that previous studies that follow a similar approach to DSplit use monolingual data that are usually available in large amounts in training (Vyas et al., 2014; Jamatia et al., 2015; Barman et al., 2016; Bhat et al., 2017; AlGhamdi et al., 2016 ). However we do not utilise monolingual Turkish and German data in the current setting of DSplit experiments. We experimented with using morphological features of two Turkish treebanks -IMST (Sulubacak et al., 2016) and BOUN (T\u00fcrk et al., 2020) and two German treebanks -GSD (McDonald et al., 2013) and HDT (Borges V\u00f6lker et al., 2019) as additional monolingual data but this resulted in a decrease in DSplit's accuracy possibly due to conflicting morphological annotations of these treebanks. So, we only use the corresponding parts of the SAGT Treebank in training and evaluation of DSplit. We also experimented with the second pipeline approach mentioned in Section 2. In line with our expectations, it gives worse performance. So, we stick to our current DSplit method (cf. Table 8 in Appendix A for a comparison of two approaches).",
"cite_spans": [
{
"start": 145,
"end": 164,
"text": "(Vyas et al., 2014;",
"ref_id": "BIBREF35"
},
{
"start": 165,
"end": 186,
"text": "Jamatia et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 187,
"end": 207,
"text": "Barman et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 208,
"end": 226,
"text": "Bhat et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 227,
"end": 248,
"text": "AlGhamdi et al., 2016",
"ref_id": "BIBREF2"
},
{
"start": 441,
"end": 465,
"text": "(Sulubacak et al., 2016)",
"ref_id": "BIBREF29"
},
{
"start": 475,
"end": 494,
"text": "(T\u00fcrk et al., 2020)",
"ref_id": null
},
{
"start": 525,
"end": 548,
"text": "(McDonald et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 553,
"end": 585,
"text": "HDT (Borges V\u00f6lker et al., 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1028,
"end": 1035,
"text": "Table 8",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Experiments 4.1 Data",
"sec_num": "4"
},
{
"text": "STEPS can be used with both BERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) . We chose to use multilingual XLM-R observing it outperforms multilingual BERT in our preliminary experiments, which is in line with previous findings (Liang et al., 2020; Conneau et al., 2020) . We use XLM-R Base with 12 layers and 768 hidden states in all the experiments. We stick to the default configuration of STEPS (Gr\u00fcnewald et al., 2020) for all the models except LIDVec. For LIDVec, token embedding size was changed from 768 to 868 since embeddings are expanded with the concatenation of 100-dimensional LID embeddings.",
"cite_spans": [
{
"start": 33,
"end": 54,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 65,
"end": 87,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 240,
"end": 260,
"text": "(Liang et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 261,
"end": 282,
"text": "Conneau et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 411,
"end": 435,
"text": "(Gr\u00fcnewald et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configuration",
"sec_num": "4.2"
},
{
"text": "DSplit and LIDVec need LIDs; the former during splitting the dataset into languages, the latter during the concatenation of a token embedding with its corresponding LID vector. We evaluate these models with both gold and predicted LIDs. Predicted labels are obtained by training the STEPS Standalone model for LID tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicted Language IDs",
"sec_num": "4.3"
},
{
"text": "We use accuracy as the evaluation metric. We count a morphological tag prediction of a token correct only when it is an exact match with the gold one. In addition to reporting the overall accuracy, we also provide accuracy on each LID label separately. This enables us to easily observe the parts each model has the most difficulty with. The significance between the performance of the models is measured using the randomisation test (van der Voet, 1994). When we mention a performance difference being significant, it means the difference is found statistically significant with p < 0.05. Table 3 shows experimental results for each model on the development and test sets. 7 It also demonstrates the evaluation of another baseline -Udify, a well-known, state-of-the-art transformer-based multi-task tool, which uses multilingual BERT as its language model (Kondratyuk and Straka, 2019) .",
"cite_spans": [
{
"start": 857,
"end": 886,
"text": "(Kondratyuk and Straka, 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 590,
"end": 597,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.4"
},
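The exact-match criterion above can be stated as a small helper function. This is an illustrative sketch, not the authors' evaluation script; the tag strings in the usage example follow UD feature-bundle notation.

```python
def exact_match_accuracy(gold_tags, pred_tags):
    """Accuracy where a predicted morphological tag counts as correct
    only when the whole feature bundle exactly matches the gold tag."""
    if len(gold_tags) != len(pred_tags):
        raise ValueError("gold and predicted sequences must align")
    if not gold_tags:
        return 0.0
    correct = sum(g == p for g, p in zip(gold_tags, pred_tags))
    return correct / len(gold_tags)
```

A partially correct bundle (e.g. predicting `Case=Dat` where the gold tag is `Case=Dat|Gender=Neut`) therefore receives no credit.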
{
"text": "We see that all three models that utilise LIDs outperform Standalone as well as Udify on both development and test sets. Although Standalone and Udify have similar architectures, the performance of the former surpasses that of the latter in terms of accuracy. Besides some design decisions, the main difference between these two models is the choice of the pretrained lan- The best performing model is LIDVec as we expected. It outperforms Standalone more than 2 and 3 points on the development and test sets, respectively. The two baselines for LID integration, DSplit and MTL, perform better than Standalone although they are less successful than LIDVec. We observe that integrating LIDs to the system improves the accuracy in morphological tagging in all three scenarios, although the amount of the improvement differs across the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.5"
},
{
"text": "To see how LID prediction affects DSplit and LIDVec, we repeated the same experiments with predicted LIDs. The results are given in Table 4 . As introduced in Section 4.3, Standalone is used for LID tagging. Its performance on the development and test sets is shown in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 140,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 270,
"end": 277,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.5"
},
{
"text": "In Table 4 , we see that LID accuracy has a stronger influence on DSplit while LIDVec stays almost unaffected. This might stem from LIDs playing a key role in DSplit by splitting the data into monolingual parts that are then used to train two separate models. So, the errors in LIDs are more explicitly propagated to the two models that learn to predict the morphological features of monolingual data only. However, LIDs have a more implicit effect in LIDVec. The errors in LIDs cause the wrong LID vector to be concatenated to the embeddings of some tokens but this error can later be compensated through the training of the whole model where both token and LID embeddings being updated at each step. Considering the high overall accuracy in LID prediction in Table 5 , LIDVec seems to compensate the small error rate in predicted LIDs. Although LANG3 prediction accuracy is low, this does not cause a substantial effect in the overall accuracy of LID prediction since this label is rare in the treebank.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 761,
"end": 768,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.5"
},
{
"text": "LID representation and initialisation In Section 3.3, we mention two more ways in addition to our preferred approach for the representation of LIDs as vectors. The first way is representing LIDs as one-hot vectors. We define each LID label as a one-hot vector and concatenate these vectors with token embeddings provided by the lan-guage model as in LIDVec. We experimented with this approach on the development set. However, this method showed poorer performance than Standalone which does not utilise LIDs in any way. We believe that one-hot vector representation might be too rigid to be used together with token embeddings due to the fact that the range of the values in these two representations greatly vary. The second method for the LID vector representation includes the initialisation of LID embeddings by averaging the embeddings of same-LID tokens in the training set. In the initial experiments we see that when we use the average initialisation instead of a random initialisation, the training phase progresses faster and the learning stops early when the training accuracy is around 85%, in contrast to the random initialisation in which the training phase ends after a higher number of epochs and with a higher training accuracy. So, we extended the training time by changing the early stop criteria from 15 epochs to 50 epochs to give the average initialisation an opportunity to show its true capacity. Figure 3 compares the performance of these two initialisation methods for two different early stop criteria on the development set. We see that the underfitting in the average initialisation method is eliminated as the number of epochs increases. Overall, the performance of both initialisation methods is the same when they are trained sufficiently. We conclude that random initialisation can be preferred if there are time restrictions. The impact of LID prediction We proposed three different approaches for LID integration. 
In terms of resources needed, MTL does not require external LID prediction by definition, since it predicts LIDs and morphology jointly. However, it is also the worst-performing of the three approaches. DSplit and LIDVec both outperform MTL, but require predicted LIDs to function. To test how sensitive these models are to LID prediction accuracy, we evaluated DSplit and LIDVec with MarMoT, a CRF-based sequence tagger (M\u00fcller et al., 2013) with ~96% accuracy in LID prediction, instead of the STEPS LID model with ~99% accuracy (cf. Table 9 in Appendix B for complete results). While LIDVec's performance is almost unaffected by the drop in LID prediction accuracy, DSplit's accuracy drops by approximately 1 point on the development set and by more than 2 points on the test set. We conclude that DSplit is more vulnerable to LID accuracy, whereas LIDVec can be paired with a faster and computationally cheaper LID model if need be. Another disadvantage of DSplit is the need to train multiple monolingual models to handle the different languages in CS data, in contrast to the single-model architecture of LIDVec.",
"cite_spans": [
{
"start": 2377,
"end": 2398,
"text": "(M\u00fcller et al., 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1421,
"end": 1429,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 2494,
"end": 2501,
"text": "Table 9",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Analysis on LID Integration",
"sec_num": "5"
},
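The LID-embedding initialisation and the LIDVec-style concatenation described above can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation; the function names are hypothetical, and the LID embedding dimension is taken to equal the token embedding dimension (which is what averaging token embeddings yields).

```python
import numpy as np

def init_lid_embeddings(token_embs, lids, lid_set, dim, seed=0):
    """Average initialisation (illustrative): each LID vector starts as the
    mean of the token embeddings carrying that LID in the training data,
    falling back to a random vector for LIDs with no training tokens."""
    rng = np.random.default_rng(seed)
    table = {}
    for lid in lid_set:
        rows = [e for e, l in zip(token_embs, lids) if l == lid]
        table[lid] = np.mean(rows, axis=0) if rows else rng.normal(size=dim)
    return table

def concat_lid(token_embs, lids, lid_table):
    """LIDVec-style input: concatenate each token embedding with the
    embedding of its (predicted) language ID."""
    return np.stack([np.concatenate([e, lid_table[l]])
                     for e, l in zip(token_embs, lids)])
```

Random initialisation would simply replace the mean with `rng.normal(size=dim)` for every LID; as discussed above, both converge to the same performance given enough epochs.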
{
"text": "DSplit also requires pre- and post-processing of the input and output, respectively. Considering the superior performance of LIDVec, together with the robustness and compactness of its architecture, we suggest LIDVec as the best approach to CS morphological tagging among the models discussed in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on LID Integration",
"sec_num": "5"
},
{
"text": "The impact of LIDs on POS tagging We also performed experiments on POS tagging, the other sequence labelling task in which we can employ LID integration. Table 6 shows the overall accuracies of each model on the development and test sets of the SAGT Treebank. We do not observe any significant difference between the accuracies of the models, which is in line with our expectations: the universal POS tags used in the SAGT Treebank are common to all languages, in contrast to morphological tags, which include many language-specific features. Hence, identifying the language a token belongs to does not provide extra benefit in POS prediction. ",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Analysis on LID Integration",
"sec_num": "5"
},
{
"text": "Most Common Improvements We observe that integrating language IDs contributes a 10% increase in predicting the presence of possessive markers on Turkish nouns, a feature German nouns lack. This is expected, since providing LIDs enables the model to better differentiate between the distinct sets of morphological features of the two languages. Similarly, LID knowledge yields a 4% improvement in predicting the existence of the Gender feature, which is present in German nouns but absent in Turkish ones (cf. Figure 1 ). To understand this better, we compared LIDVec and Standalone in terms of their feature-based success. In this feature-based performance measurement, partial matches are also scored, in contrast to the evaluation metric we adopted, which counts a predicted morphological tag as correct only if it is an exact match, i.e., all the features that constitute the morphological tag are predicted correctly. We measure the feature-based performance of the models by splitting each morphological tag into features and counting each feature match as a point. Looking at which categories benefit most from including LIDs, we see that for Turkish these are verbs and nouns, with improvements of 11% and 10%, respectively; for German they are pronouns and nouns, with 9% improvement. Morphology prediction for German verbs is already strong across all models, hence there is little improvement there. We observe that all nouns and pronouns in both languages, as well as the Turkish verbal nouns derived from verbs, carry the Case feature in their morphological analyses.",
"cite_spans": [],
"ref_spans": [
{
"start": 532,
"end": 540,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6"
},
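The difference between the exact-match metric and the feature-based, partial-credit scoring described above can be sketched as follows — a minimal illustration under our own assumptions (UD-style `Feature=Value` tags joined by `|`, with `_` for an empty analysis); the function names are hypothetical.

```python
def split_tag(tag):
    """Split a UD-style morphological tag, e.g. 'Case=Acc|Number=Sing',
    into a set of Feature=Value pairs ('_' denotes an empty analysis)."""
    return set() if tag == "_" else set(tag.split("|"))

def exact_match_accuracy(gold, pred):
    """Strict metric: a predicted tag counts only if all features match."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def feature_score(gold, pred):
    """Partial-credit metric: each correctly predicted Feature=Value
    pair in a tag scores a point, normalised by the gold feature count."""
    hits = total = 0
    for g, p in zip(gold, pred):
        gset, pset = split_tag(g), split_tag(p)
        hits += len(gset & pset)
        total += len(gset)
    return hits / total
```

A tag with one wrong feature, e.g. `Case=Nom|Number=Sing` predicted for gold `Case=Acc|Number=Sing`, scores zero under exact match but still earns a point for `Number=Sing` under feature-based scoring.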
{
"text": "Case feature values Although all models easily predicted the existence of the Case feature, they had the most trouble deciding its value. We therefore created confusion matrices of Standalone and LIDVec for the different values of the Case feature on the development set, given in Figure 4 . There are only four case markers in German: nominative, accusative, dative, and genitive. Turkish has three additional case markers, namely ablative, instrumental, and locative. Despite having German lemmas, MIXED tokens in the SAGT Treebank are annotated in the Turkish morphological annotation style due to the Turkish suffixes they carry. We observe that the most confusion occurs between the nominative and accusative cases for all three token types. For TR and MIXED tokens, this confusion results from the fact that the accusative suffix and the possessive suffix on nominative nouns sometimes take the same form in Turkish. For DE tokens, the situation is similar in that the nominative and accusative forms of German articles differ only in the masculine gender, while they share the same form in the feminine, the neuter, and the plural. LIDVec consistently reduces this confusion and predicts the correct cases, which plays an important role in its overall performance. Improvement on MIXED tokens When examining the results in Tables 3 and 4 , the notable success of LIDVec in predicting the morphological analyses of MIXED tokens caught our attention. Even when predicted LIDs are used, LIDVec outperforms Standalone by a large margin on both the development and test sets. We observe that MIXED tokens in the SAGT Treebank are mostly nouns; MIXED tokens therefore benefit from the overall Case improvements. When proportioned to the total number of cases in each category, the success of LIDVec is most visible in MIXED tokens.",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 293,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 1426,
"end": 1440,
"text": "Tables 3 and 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Confusion in",
"sec_num": null
},
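Building the kind of confusion matrix analysed above is straightforward; the sketch below is a generic illustration (not tied to the authors' tooling), where rows are gold Case values and columns are predicted ones.

```python
from collections import Counter

def case_confusion(gold_cases, pred_cases, labels):
    """Confusion matrix over Case values: entry [i][j] counts tokens whose
    gold case is labels[i] and whose predicted case is labels[j]."""
    counts = Counter(zip(gold_cases, pred_cases))
    return [[counts[(g, p)] for p in labels] for g in labels]
```

Off-diagonal mass concentrated in the `(Nom, Acc)` and `(Acc, Nom)` cells is exactly the nominative/accusative confusion discussed above.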
{
"text": "Performance of LIDVec on LANG3 and OTHER tokens We observe a pattern in the results that suggests a trade-off between success on TR, DE, and MIXED tokens and success on LANG3 and OTHER tokens; this is most visible in LIDVec. We do not see the consistent improvement over Standalone in LANG3 and OTHER accuracies that we see in TR, DE, and MIXED accuracies. To inspect this, we compare the confusion matrices of Standalone and LIDVec for the LANG3 and OTHER types in Figure 5 . Both models confuse LANG3 mostly with DE. We believe this stems from the fact that LANG3 tokens in the treebank are mostly English proper nouns, some of which are also common in German. Nonetheless, the low success rates of all models on this token type demonstrate once again how important the amount of training data is for data-driven models.",
"cite_spans": [],
"ref_spans": [
{
"start": 430,
"end": 438,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Confusion in",
"sec_num": null
},
{
"text": "On the other hand, all models perform very well at predicting the absence of morphology in OTHER tokens, although LIDVec makes a few more false predictions than Standalone. We believe this might stem from a slight overfitting of LIDVec towards TR tokens. Still, the accuracy of all models is above 98% for this type, and we would need more data to verify whether there is a real difference between the models in morphology prediction for OTHER tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confusion in",
"sec_num": null
},
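One standard way to check whether such a small accuracy gap is real, given more data, is a paired bootstrap test over tokens. The sketch below is a generic illustration of that test, not a procedure reported in this paper; the function name and resampling scheme are our own assumptions.

```python
import random

def paired_bootstrap_p(gold, pred_a, pred_b, n_resamples=1000, seed=0):
    """Paired bootstrap (illustrative): estimate how often model A's accuracy
    advantage over model B disappears or reverses when the evaluation
    tokens are resampled with replacement. A small return value suggests
    the observed gap is unlikely to be a sampling artefact."""
    rng = random.Random(seed)
    n = len(gold)
    not_better = 0
    for _ in range(n_resamples):
        sample = [rng.randrange(n) for _ in range(n)]
        acc_a = sum(pred_a[i] == gold[i] for i in sample) / n
        acc_b = sum(pred_b[i] == gold[i] for i in sample) / n
        if acc_a - acc_b <= 0:
            not_better += 1
    return not_better / n_resamples
```

With only a handful of OTHER tokens, such a test has little power, which is precisely why more data would be needed to separate the models on this type.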
{
"text": "In this paper, we tackle the morphological tagging problem for CS data. We present some challenging aspects of the task and suggest the use of token-wise LID information. We experiment with different ways of using LIDs in a transformer-based model and propose the LID Vectors approach. Our proposed model significantly outperforms all the baselines and proves to be a robust and compact way of integrating LIDs. As the first study to focus on morphological tagging of CS data, our work shows that utilising LIDs is an effective method for this task. We also give the first results on LID, POS, and morphological tagging on the Turkish-German SAGT dataset. An implementation of our model is available at https://github.com/sb-b/steps-parser. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "There is also the NArabizi Treebank (Seddah et al., 2020), which includes partial morphological annotation with only 46 unique annotations in total, in contrast to the 795 unique morphological annotations of the SAGT Treebank. Hence, we did not use this treebank in our study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "github.com/boschresearch/steps-parser",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "github.com/UniversalDependencies/UD_Turkish_German-SAGT/tree/dev. The fine-tuning set is created by randomly extracting an equal number of sentences from each document in the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The scores on the development set are the average of three separate runs, while the scores on the test set are obtained using the run that gives the best result on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Stefan Gr\u00fcnewald for his valuable help in using the STEPS tool. This work is funded by DFG via project CE 326/1-1 \"Computational Structural Analysis of German-Turkish Code-Switching\" (SAGT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "From English to code-switching: Transfer learning with strong morphological clues",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8033--8044",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.716"
]
},
"num": null,
"urls": [],
"raw_text": "Gustavo Aguilar and Thamar Solorio. 2020. From English to code-switching: Transfer learning with strong morphological clues. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 8033-8044, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Leveraging pretrained word embeddings for part-of-speech tagging of code switching data",
"authors": [
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "99--109",
"other_ids": {
"DOI": [
"10.18653/v1/W19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Fahad AlGhamdi and Mona Diab. 2019. Leveraging pretrained word embeddings for part-of-speech tag- ging of code switching data. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Vari- eties and Dialects, pages 99-109, Ann Arbor, Michi- gan. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Part of speech tagging for code switched data",
"authors": [
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Molina",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Abdelati",
"middle": [],
"last": "Hawwari",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Second Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "98--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fahad AlGhamdi, Giovanni Molina, Mona Diab, Thamar Solorio, Abdelati Hawwari, Victor Soto, and Julia Hirschberg. 2016. Part of speech tagging for code switched data. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 98-107, Austin, Texas. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Part-of-speech tagging of code-mixed social media content: Pipeline, stacking and joint modelling",
"authors": [
{
"first": "Utsab",
"middle": [],
"last": "Barman",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Second Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "30--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Utsab Barman, Joachim Wagner, and Jennifer Foster. 2016. Part-of-speech tagging of code-mixed social media content: Pipeline, stacking and joint mod- elling. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 30-39, Austin, Texas. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Joining hands: Exploiting monolingual treebanks for parsing of code-mixing data",
"authors": [
{
"first": "Irshad",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Riyaz",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Dipti",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "324--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irshad Bhat, Riyaz A. Bhat, Manish Shrivastava, and Dipti Sharma. 2017. Joining hands: Exploiting monolingual treebanks for parsing of code-mixing data. In Proceedings of the 15th Conference of the European Chapter of the Association for Computa- tional Linguistics: Volume 2, Short Papers, pages 324-330, Valencia, Spain. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "HDT-UD: A very large Universal Dependencies treebank for German",
"authors": [
{
"first": "Emanuel",
"middle": [],
"last": "Borges V\u00f6lker",
"suffix": ""
},
{
"first": "Maximilian",
"middle": [],
"last": "Wendt",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hennig",
"suffix": ""
},
{
"first": "Arne",
"middle": [],
"last": "K\u00f6hn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Universal Dependencies",
"volume": "",
"issue": "",
"pages": "46--57",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8006"
]
},
"num": null,
"urls": [],
"raw_text": "Emanuel Borges V\u00f6lker, Maximilian Wendt, Felix Hen- nig, and Arne K\u00f6hn. 2019. HDT-UD: A very large Universal Dependencies treebank for German. In Proceedings of the Third Workshop on Universal De- pendencies (UDW, SyntaxFest 2019), pages 46-57, Paris, France. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Challenges of annotating a code-switching treebank",
"authors": [
{
"first": "\u00d6zlem",
"middle": [],
"last": "\u00c7etinoglu",
"suffix": ""
},
{
"first": "\u00c7agr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th International Workshop on Treebanks and Linguistic Theories (TLT, SyntaxFest 2019)",
"volume": "",
"issue": "",
"pages": "82--90",
"other_ids": {
"DOI": [
"10.18653/v1/W19-7809"
]
},
"num": null,
"urls": [],
"raw_text": "\u00d6zlem \u00c7etinoglu and \u00c7agr\u0131 \u00c7\u00f6ltekin. 2019. Chal- lenges of annotating a code-switching treebank. In Proceedings of the 18th International Workshop on Treebanks and Linguistic Theories (TLT, SyntaxFest 2019), pages 82-90, Paris, France. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th International Conference on Machine Learning, ICML '08",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {
"DOI": [
"10.1145/1390156.1390177"
]
},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Pro- ceedings of the 25th International Conference on Machine Learning, ICML '08, page 160-167, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural multi-task learning for stance prediction",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Moin",
"middle": [],
"last": "Nadeem",
"suffix": ""
},
{
"first": "Mitra",
"middle": [],
"last": "Mohtarami",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "13--19",
"other_ids": {
"DOI": [
"10.18653/v1/D19-6603"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Fang, Moin Nadeem, Mitra Mohtarami, and James Glass. 2019. Neural multi-task learning for stance prediction. In Proceedings of the Second Work- shop on Fact Extraction and VERification (FEVER), pages 13-19, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Graph-based universal dependency parsing in the age of the transformer: What works",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Gr\u00fcnewald",
"suffix": ""
},
{
"first": "Annemarie",
"middle": [],
"last": "Friedrich",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.12699"
]
},
"num": null,
"urls": [],
"raw_text": "Stefan Gr\u00fcnewald, Annemarie Friedrich, and Jonas Kuhn. 2020. Graph-based universal dependency parsing in the age of the transformer: What works, and what doesn't. arXiv preprint arXiv:2010.12699.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Aspect-based sentiment analysis using BERT",
"authors": [
{
"first": "Mickel",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alija",
"middle": [],
"last": "Oskar",
"suffix": ""
},
{
"first": "Jacobo",
"middle": [],
"last": "Bihorac",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rouces",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "187--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mickel Hoang, Oskar Alija Bihorac, and Jacobo Rouces. 2019. Aspect-based sentiment analysis us- ing BERT. In Proceedings of the 22nd Nordic Con- ference on Computational Linguistics, pages 187- 196, Turku, Finland. Link\u00f6ping University Elec- tronic Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Part-of-speech tagging for code-mixed",
"authors": [
{
"first": "Anupam",
"middle": [],
"last": "Jamatia",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anupam Jamatia, Bj\u00f6rn Gamb\u00e4ck, and Amitava Das. 2015. Part-of-speech tagging for code-mixed",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Twitter and Facebook chat messages",
"authors": [
{
"first": "",
"middle": [],
"last": "English-Hindi",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "239--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "English-Hindi Twitter and Facebook chat messages. In Proceedings of the International Conference Re- cent Advances in Natural Language Processing, pages 239-248, Hissar, Bulgaria. INCOMA Ltd. Shoumen, BULGARIA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "75 languages, 1 model: Parsing Universal Dependencies universally",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Kondratyuk",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2779--2795",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1279"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Kondratyuk and Milan Straka. 2019. 75 lan- guages, 1 model: Parsing Universal Dependencies universally. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2779-2795, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "XGLUE: A new benchmark dataset for cross-lingual pretraining, understanding and generation",
"authors": [
{
"first": "Yaobo",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Yeyun",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fenfei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Weizhen",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Linjun",
"middle": [],
"last": "Shou",
"suffix": ""
},
{
"first": "Daxin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Guihong",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Ruofei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Sining",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Taroon",
"middle": [],
"last": "Bharti",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Qiao",
"suffix": ""
},
{
"first": "Jiun-Hung",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Winnie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shuguang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6008--6018",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.484"
]
},
"num": null,
"urls": [],
"raw_text": "Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fen- fei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Win- nie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre- training, understanding and generation. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6008-6018, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A multi-lingual multi-task architecture for low-resource sequence labeling",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shengqi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "799--809",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1074"
]
},
"num": null,
"urls": [],
"raw_text": "Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A multi-lingual multi-task architecture for low-resource sequence labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 799-809, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Universal Dependency annotation for multilingual parsing",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yvonne",
"middle": [],
"last": "Quirmbach-Brundage",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Bedini",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Bertomeu Castell\u00f3",
"suffix": ""
},
{
"first": "Jungmee",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "92--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach- Brundage, Yoav Goldberg, Dipanjan Das, Kuz- man Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria Bertomeu Castell\u00f3, and Jungmee Lee. 2013. Uni- versal Dependency annotation for multilingual pars- ing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 92-97, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Efficient higher-order CRFs for morphological tagging",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "322--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas M\u00fcller, Helmut Schmid, and Hinrich Sch\u00fctze. 2013. Efficient higher-order CRFs for morphologi- cal tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process- ing, pages 322-332, Seattle, Washington, USA. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Robust morphological tagging with word representations",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "526--536",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1055"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas M\u00fcller and Hinrich Sch\u00fctze. 2015. Robust morphological tagging with word representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 526-536, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Universal Dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1659--1666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Haji\u010d, Christopher D. Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth In- ternational Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666, Portoro\u017e, Slovenia. European Language Resources Associa- tion (ELRA).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Building a user-generated content North-African Arabizi treebank: Tackling hell",
"authors": [
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
},
{
"first": "Farah",
"middle": [],
"last": "Essaidi",
"suffix": ""
},
{
"first": "Amal",
"middle": [],
"last": "Fethi",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Futeral",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Srivastava",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1139--1150",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.107"
]
},
"num": null,
"urls": [],
"raw_text": "Djam\u00e9 Seddah, Farah Essaidi, Amal Fethi, Matthieu Futeral, Benjamin Muller, Pedro Javier Ortiz Su\u00e1rez, Beno\u00eet Sagot, and Abhishek Srivastava. 2020. Build- ing a user-generated content North-African Arabizi treebank: Tackling hell. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 1139-1150, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "POS tagging of Hindi-English code mixed text from social media: Some machine learning experiments",
"authors": [
{
"first": "Royal",
"middle": [],
"last": "Sequiera",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 12th International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "237--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Royal Sequiera, Monojit Choudhury, and Kalika Bali. 2015. POS tagging of Hindi-English code mixed text from social media: Some machine learning ex- periments. In Proceedings of the 12th International Conference on Natural Language Processing, pages 237-246, Trivandrum, India. NLP Association of In- dia.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A Twitter corpus for Hindi-English code mixed POS tagging",
"authors": [
{
"first": "Kushagra",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Indira",
"middle": [],
"last": "Sen",
"suffix": ""
},
{
"first": "Ponnurangam",
"middle": [],
"last": "Kumaraguru",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "12--17",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3503"
]
},
"num": null,
"urls": [],
"raw_text": "Kushagra Singh, Indira Sen, and Ponnurangam Ku- maraguru. 2018. A Twitter corpus for Hindi-English code mixed POS tagging. In Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media, pages 12-17, Mel- bourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Part-of-speech tagging for English-Spanish code-switched text",
"authors": [
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1051--1060",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thamar Solorio and Yang Liu. 2008. Part-of-speech tagging for English-Spanish code-switched text. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1051-1060, Honolulu, Hawaii. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Joint part-ofspeech and language ID tagging for code-switched data",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3201"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Soto and Julia Hirschberg. 2018. Joint part-of- speech and language ID tagging for code-switched data. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code- Switching, pages 1-10, Melbourne, Australia. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Evaluating contextualized embeddings on 54 languages in POS tagging, lemmatization and dependency parsing",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.07448"
]
},
"num": null,
"urls": [],
"raw_text": "Milan Straka, Jana Strakov\u00e1, and Jan Haji\u010d. 2019. Eval- uating contextualized embeddings on 54 languages in POS tagging, lemmatization and dependency pars- ing. arXiv preprint arXiv:1908.07448.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Universal Dependencies for Turkish",
"authors": [
{
"first": "Umut",
"middle": [],
"last": "Sulubacak",
"suffix": ""
},
{
"first": "Memduh",
"middle": [],
"last": "G\u00f6k\u0131rmak",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "\u00c7agr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "G\u00fcl\u015fen",
"middle": [],
"last": "Eryigit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "3444--3454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Umut Sulubacak, Memduh G\u00f6k\u0131rmak, Francis Tyers, \u00c7agr\u0131 \u00c7\u00f6ltekin, Joakim Nivre, and G\u00fcl\u015fen Eryigit. 2016. Universal Dependencies for Turkish. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Techni- cal Papers, pages 3444-3454, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Parsing morphologically rich languages: Introduction to the special issue",
"authors": [
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "1",
"pages": "15--22",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00133"
]
},
"num": null,
"urls": [],
"raw_text": "Reut Tsarfaty, Djam\u00e9 Seddah, Sandra K\u00fcbler, and Joakim Nivre. 2013. Parsing morphologically rich languages: Introduction to the special issue. Com- putational Linguistics, 39(1):15-22.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Seyyit Talha Bedir, Abdullatif K\u00f6ksal, Balk\u0131z \u00d6zt\u00fcrk Ba\u015faran, Tunga G\u00fcng\u00f6r, and Arzucan \u00d6zg\u00fcr. 2020. Resources for Turkish dependency parsing: Introducing the BOUN treebank and the BoAT annotation tool",
"authors": [
{
"first": "Utku",
"middle": [],
"last": "T\u00fcrk",
"suffix": ""
},
{
"first": "Furkan",
"middle": [],
"last": "Atmaca",
"suffix": ""
},
{
"first": "\u015eaziye",
"middle": [],
"last": "Bet\u00fcl \u00d6zate\u015f",
"suffix": ""
},
{
"first": "G\u00f6zde",
"middle": [],
"last": "Berk",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.10416"
]
},
"num": null,
"urls": [],
"raw_text": "Utku T\u00fcrk, Furkan Atmaca,\u015eaziye Bet\u00fcl \u00d6zate\u015f, G\u00f6zde Berk, Seyyit Talha Bedir, Abdullatif K\u00f6k- sal, Balk\u0131z \u00d6zt\u00fcrk Ba\u015faran, Tunga G\u00fcng\u00f6r, and Arzucan \u00d6zg\u00fcr. 2020. Resources for Turkish de- pendency parsing: Introducing the BOUN treebank and the BoAT annotation tool. arXiv preprint arXiv:2002.10416.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Lexical normalization for code-switched data and its effect on POS tagging",
"authors": [
{
"first": "Rob",
"middle": [],
"last": "Van Der Goot",
"suffix": ""
},
{
"first": "\u00d6zlem",
"middle": [],
"last": "\u00c7etinoglu",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rob van der Goot and \u00d6zlem \u00c7etinoglu. 2021. Lexical normalization for code-switched data and its effect on POS tagging. In Proceedings of the 16th Confer- ence of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Comparing the predictive accuracy of models using a simple randomization test",
"authors": [
{
"first": "Hilko",
"middle": [],
"last": "van der Voet",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "25",
"issue": "",
"pages": "313--323",
"other_ids": {
"DOI": [
"10.1016/0169-7439(94)85050-X"
]
},
"num": null,
"urls": [],
"raw_text": "Hilko van der Voet. 1994. Comparing the predictive ac- curacy of models using a simple randomization test. Chemometrics and Intelligent Laboratory Systems, 25(2):313-323.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "EN-ES-CS: An English-Spanish code-switching twitter corpus for multilingual sentiment analysis",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vilares",
"suffix": ""
},
{
"first": "Miguel",
"middle": [
"A"
],
"last": "Alonso",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "4149--4153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Vilares, Miguel A. Alonso, and Carlos G\u00f3mez- Rodr\u00edguez. 2016. EN-ES-CS: An English-Spanish code-switching twitter corpus for multilingual sen- timent analysis. In Proceedings of the Tenth In- ternational Conference on Language Resources and Evaluation (LREC'16), pages 4149-4153, Portoro\u017e, Slovenia. European Language Resources Associa- tion (ELRA).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "POS tagging of English-Hindi code-mixed social media content",
"authors": [
{
"first": "Yogarshi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Spandana",
"middle": [],
"last": "Gella",
"suffix": ""
},
{
"first": "Jatin",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "974--979",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yogarshi Vyas, Spandana Gella, Jatin Sharma, Kalika Bali, and Monojit Choudhury. 2014. POS tagging of English-Hindi code-mixed social media content. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 974-979, Doha, Qatar. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A survey on multitask learning",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.08114"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Zhang and Qiang Yang. 2017. A survey on multi- task learning. arXiv preprint arXiv:1707.08114.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "The morphological analyses of German (a) and Turkish (b) translations of the phrase in cars.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Comparison of random vs. average initialisation in the LIDVec model when the early stop criteria is 15 epochs vs. 50 epochs.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Confusion matrices of Standalone and LIDVec for different Case values.",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "Confusion matrices for the tokens with LANG3 and OTHER LID labels on the development set.",
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td>CS sentence</td><td/><td/><td>TR fragments</td><td/></tr><tr><td>1</td><td colspan=\"2\">Abendgymnasiumdan MIXED</td><td>1</td><td colspan=\"2\">Abendgymnasiumdan MIXED</td></tr><tr><td>2</td><td>sonra</td><td>TR</td><td>2</td><td>sonra</td><td>TR</td></tr><tr><td>3</td><td>da</td><td>TR</td><td>3</td><td>da</td><td>TR</td></tr><tr><td>4</td><td>Evangelische</td><td>DE</td><td>6</td><td>zaten</td><td>TR</td></tr><tr><td>5 6 7</td><td>Hochschule'de zaten Soziale</td><td>MIXED DE TR</td><td colspan=\"2\">10 . 9 okudum</td><td>OTHER TR</td><td>4 Evangelische DE fragments DE</td></tr><tr><td>8</td><td>Arbeit</td><td>DE</td><td/><td/><td/><td>5 Hochschule'de MIXED</td></tr><tr><td>9</td><td>okudum</td><td>TR</td><td/><td/><td/><td>7 Soziale</td><td>DE</td></tr><tr><td colspan=\"2\">10 .</td><td>OTHER</td><td/><td/><td/><td>8 Arbeit</td><td>DE</td></tr><tr><td>Figure 2:</td><td/><td/><td/><td/><td/></tr></table>",
"text": "Splitting an example code-switching sentence to Turkish (TR) and German (DE) fragments. German tokens and token parts are shown in bold. (Sentence translation: After the night school, I studied Social Work in the Protestant University.)"
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td>Sent</td><td/><td/><td colspan=\"2\">Token Count</td><td/></tr><tr><td/><td>Count</td><td>TR</td><td colspan=\"4\">DE MIXED LANG3 OTHER Total</td></tr><tr><td colspan=\"4\">Tra 578 3,727 5,149</td><td>105</td><td colspan=\"2\">69 1,034 10,084</td></tr><tr><td/><td/><td>37%</td><td>51%</td><td colspan=\"3\">1% 0.7% 10.3%</td></tr><tr><td>FT</td><td>101</td><td>721</td><td>864</td><td>21</td><td>16</td><td>158 1,780</td></tr><tr><td/><td/><td>41%</td><td colspan=\"4\">49% 1.2% 0.9% 8.9%</td></tr><tr><td colspan=\"4\">Dev 700 4,389 5,589</td><td>122</td><td colspan=\"2\">48 1,128 11,276</td></tr><tr><td/><td/><td colspan=\"2\">39% 49.6%</td><td colspan=\"2\">1% 0.4%</td><td>10%</td></tr><tr><td colspan=\"4\">Test 805 5,341 7,139</td><td>183</td><td colspan=\"2\">46 1,384 14,093</td></tr><tr><td/><td/><td colspan=\"5\">38% 50,6% 1.3% 0.3% 9.8%</td></tr><tr><td colspan=\"4\">Total 2,184 14,178 18,741</td><td>431</td><td colspan=\"2\">179 3,704 37,233</td></tr><tr><td/><td/><td colspan=\"4\">38% 50.3% 1.2% 0.5%</td><td>10%</td></tr></table>",
"text": "The average sentence length is 15.35 and the average code switches per sentence is"
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td>TR</td><td colspan=\"4\">DE MIXED LANG3 OTHER</td></tr><tr><td>Tags</td><td colspan=\"2\">526 293</td><td>53</td><td>5</td><td>1</td></tr><tr><td>Features</td><td>61</td><td>37</td><td>22</td><td>5</td><td>1</td></tr></table>",
"text": "Sentence and token counts of the Turkish-German SAGT Treebank used in the experiments (Tra: training, FT: fine-tuning, Dev: development)."
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": ""
},
"TABREF5": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>Model</td><td colspan=\"4\">Accuracy on the Development Set</td></tr><tr><td>TR</td><td>DE</td><td colspan=\"4\">MIXED LANG3 OTHER ALL</td></tr><tr><td colspan=\"2\">STEPS -DSplit (w. pred. LIDs) 80.66 82.95</td><td>70.43</td><td>41.50</td><td>100.0</td><td>83.43</td></tr><tr><td colspan=\"2\">STEPS -LIDVec (w. pred. LIDs) 81.85 83.53</td><td>73.22</td><td>49.31</td><td>99.17</td><td>84.18</td></tr><tr><td/><td colspan=\"3\">Accuracy on the Test Set</td><td/></tr><tr><td>TR</td><td>DE</td><td colspan=\"4\">MIXED LANG3 OTHER ALL</td></tr><tr><td colspan=\"2\">STEPS -DSplit (w. pred. LIDs) 77.65 80.01</td><td>65.59</td><td>48.78</td><td>100.0</td><td>80.78</td></tr><tr><td colspan=\"2\">STEPS -LIDVec (w. pred. LIDs) 79.22 80.73</td><td>78.69</td><td>34.78</td><td>98.77</td><td>81.75</td></tr></table>",
"text": "Morphological tagging accuracy of the models on the Turkish-German SAGT Treebank."
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td>Accuracy</td><td/></tr><tr><td/><td colspan=\"2\">Development set Test set</td></tr><tr><td>TR</td><td>99.09</td><td>99.42</td></tr><tr><td>DE</td><td>98.43</td><td>98.80</td></tr><tr><td>MIXED</td><td>90.16</td><td>92.90</td></tr><tr><td>LANG3</td><td>52.08</td><td>67.39</td></tr><tr><td>OTHER</td><td>99.91</td><td>99.86</td></tr><tr><td>ALL</td><td>98.55</td><td>98.96</td></tr></table>",
"text": "Morphological tagging accuracy of DSplit and LIDVec when predicted LID labels were used."
},
"TABREF7": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "LID prediction accuracy of STEPS on the development and test sets of the SAGT Treebank."
},
"TABREF8": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>Model</td><td colspan=\"2\">Accuracy</td></tr><tr><td/><td>Dev</td><td>Test</td></tr><tr><td>STEPS -Standalone</td><td colspan=\"2\">93.72 92.27</td></tr><tr><td>STEPS -MTL</td><td colspan=\"2\">93.74 92.10</td></tr><tr><td colspan=\"3\">STEPS -DSplit (w. gold LIDs) 93.53 92.07</td></tr><tr><td>STEPS -</td><td/></tr></table>",
"text": "LIDVec (w. gold LIDs) 93.94 92.24"
},
"TABREF9": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "The POS tagging accuracy scores of the models on the development and test sets of the Turkish-German SAGT Treebank."
},
"TABREF10": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>Model</td><td colspan=\"2\">Precision Recall</td><td>F1</td><td>Acc</td></tr><tr><td>Standalone</td><td>87.72</td><td>87.19</td><td colspan=\"2\">87.24 82.03</td></tr><tr><td>LIDVec</td><td>89.96</td><td>89.41</td><td colspan=\"2\">89.50 84.20</td></tr></table>",
"text": "compares featurebased results of LIDVec and Standalone. We observe that LIDVec improves both Precision and Recall by more than 2%. These results suggest that LIDVec facilitates predicting the full set of features."
},
"TABREF11": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Feature-based partial scores of Standalone and LIDVec models on the development set of the Turkish-German SAGT Treebank."
},
"TABREF12": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"5\">B Comparison of MarMoT and STEPS</td></tr><tr><td colspan=\"3\">for LID Prediction</td><td/><td/></tr><tr><td/><td/><td colspan=\"2\">Accuracy</td><td/></tr><tr><td/><td colspan=\"2\">Development set</td><td colspan=\"2\">Test set</td></tr><tr><td/><td colspan=\"4\">MarMoT STEPS MarMoT STEPS</td></tr><tr><td>TR</td><td>96.40</td><td>99.09</td><td>97.38</td><td>99.42</td></tr><tr><td>DE</td><td>97.84</td><td>98.43</td><td>97.88</td><td>98.80</td></tr><tr><td>MIXED</td><td>23.77</td><td>90.16</td><td>27.32</td><td>92.90</td></tr><tr><td>LANG3</td><td>41.67</td><td>52.08</td><td>0.0</td><td>67.39</td></tr><tr><td>OTHER</td><td>98.23</td><td>99.91</td><td>99.06</td><td>99.86</td></tr><tr><td>ALL</td><td>96.12</td><td>98.55</td><td>96.57</td><td>98.96</td></tr></table>",
"text": "Morphological tagging accuracy of the two pipeline approaches for the DSplit method. The first part shows the scores in the existence of gold LIDs and the second part demonstrates the results when predicted LIDs are used instead of gold ones."
},
"TABREF13": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Comparsion of MarMoT and STEPS tools for LID prediction on the development and test sets of the SAGT Treebank."
}
}
}
}