{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:30:49.021039Z"
},
"title": "Transliteration for Cross-Lingual Morphological Inflection",
"authors": [
{
"first": "Nikitha",
"middle": [],
"last": "Murikinati",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "gneubig@cs.cmu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Cross-lingual transfer between typologically related languages has been proven successful for the task of morphological inflection. However, if the languages do not share the same script, current methods yield more modest improvements. We explore the use of transliteration between related languages, as well as grapheme-to-phoneme conversion, as data preprocessing methods in order to alleviate this issue. We experimented with several diverse language pairs, finding that in most cases transliterating the transfer language data into the target one leads to accuracy improvements, even up to 9 percentage points. Converting both languages into a shared space like the International Phonetic Alphabet or the Latin alphabet is also beneficial, leading to improvements of up to 16 percentage points. 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Cross-lingual transfer between typologically related languages has been proven successful for the task of morphological inflection. However, if the languages do not share the same script, current methods yield more modest improvements. We explore the use of transliteration between related languages, as well as grapheme-to-phoneme conversion, as data preprocessing methods in order to alleviate this issue. We experimented with several diverse language pairs, finding that in most cases transliterating the transfer language data into the target one leads to accuracy improvements, even up to 9 percentage points. Converting both languages into a shared space like the International Phonetic Alphabet or the Latin alphabet is also beneficial, leading to improvements of up to 16 percentage points. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The majority of the world's languages are synthetic, meaning they have rich morphology. As a result, modeling morphological inflection computationally can have a significant impact on downstream quality, not only in analysis tasks such as named entity recognition and morphological analysis (Zhu et al., 2019) , but also for language generation systems for morphologically-rich languages.",
"cite_spans": [
{
"start": 291,
"end": 309,
"text": "(Zhu et al., 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, morphological inflection has been extensively studied in monolingual high resource settings, especially through the recent SIG-MORPHON challenges (Cotterell et al., 2016 (Cotterell et al., , 2017 (Cotterell et al., , 2018 . The latest SIGMOPRHON 2019 challenge (McCarthy et al., 2019) focused on lowresource settings and encouraged cross-lingual training, an approach that has been successfully applied in other low-resource tasks such as Machine 1 Our code and data are available at https://github. com/nikim99/Inflection-Transliteration. Table 1 : The languages' script can affect the effectiveness of cross-lingual transfer (using L 1 data to train a L 2 inflection system). Bengali results display low variance, as all transfer languages differ in script. Maltese is typologically closer to Arabic and Hebrew than Italian, but accuracy is higher when transferring from a same-script language.",
"cite_spans": [
{
"start": 163,
"end": 186,
"text": "(Cotterell et al., 2016",
"ref_id": null
},
{
"start": 187,
"end": 212,
"text": "(Cotterell et al., , 2017",
"ref_id": "BIBREF7"
},
{
"start": 213,
"end": 238,
"text": "(Cotterell et al., , 2018",
"ref_id": null
},
{
"start": 278,
"end": 301,
"text": "(McCarthy et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 464,
"end": 465,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 557,
"end": 564,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Translation (MT) or parsing. Cross-lingual learning is a particularly promising direction, due to its potential to utilize similarities across languages (often languages from the same linguistic family, which we will refer to as \"related\") in order to overcome the lack of training data. In fact, leveraging data from several related languages was crucial for the current state-of-the-art system over the SIGMORPHON 2019 dataset . However, as Anastasopoulos and Neubig (2019) point out, cross-lingual learning even between closely related languages can be impeded if the languages do not use the same script. We present a few examples taken from in Table 1 . The first example presents cross-lingual transfer for Bengali, with the transfer languages varying from very related (Hindi, Sanskrit, Urdu) to only distantly related (Greek). Nevertheless, there is notably little variance in the performance of the systems. We believe that the culprit is the difference in writing systems between all the transfer and test languages, which does not allow the system to easily leverage cross-lingual information: the Bengali data uses the Bengali script, the Urdu data uses the Nastaliq script (a derivative of the Arabic alphabet), the Hindi and Sanskrit data uses Devanagari, and the Greek data uses the Greek alphabet. In the second example, with transfer from Arabic, Hebrew, and Italian for morphological inflection in Maltese, we note that although Maltese is much closer typologically to Arabic and Hebrew (they are all Semitic languages), the test accuracy is higher when transferring from Italian, which despite only sharing a few typological elements with Maltese happens to also share the same script.",
"cite_spans": [],
"ref_spans": [
{
"start": 649,
"end": 656,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The aim of this work is to investigate this potential issue further. We first quantify the effect of script differences on the accuracy of morphological inflection systems through a series of controlled experiments ( \u00a72). Then, we attempt to remedy this problem by bringing the representations of the transfer and the test languages in the same, shared space before training the morphological inflection system. In one setting, we achieve this through transliteration of the transfer language into the test language's script as a preprocessing step. In another setting, we convert both languages into a shared space, using grapheme-to-phoneme (G2P) conversion into the International Phonetic Alphabet (IPA) as well as romanization. We discuss both settings and their effects on morphological inflection in low-resource settings ( \u00a73).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach bears similarities to pseudo-corpus approaches that have been used in machine translation (MT), where low-resource language data are augmented with data generated from a related highresource language. Among many, for instance, De Gispert and Marino (2006) built a Catalan-English MT by bridging through Spanish, while Xia et al. (2019) show that word-level substitutions can convert a high-resource (related) language corpus into a pseudo low-resource one leading to large improvements in MT quality. Such approaches typically operate at the word level, hence they do not need to handle script differences explicitly. NLP models that handle script differences do exist, but focus mostly on analysis tasks such as named entity recognition Chaudhary et al., 2018; Rahimi et al., 2019) or entity linking (Rijhwani et al., 2019 ), whereas we focus in a generation task. Character-level transliteration was typically incorporated in phrase-based statistical MT systems (Durrani et al., 2014) , but was only used to handle named entity translation. Notably, there exist NLP approaches such as the document classification approach of showing that indeed shared character-level information can facilitate cross-lingual transfer, but limit their analysis to same-script languages only. Specific to the the morphological inflection task, (Hauer et al., 2019) use cognate projection to augment low-resource data, while (Wiemerslage et al., 2018) explore the inflection task using inputs in phonological space as well as bundles of phonological features from PanPhon , showing improvements for both settings. Our work, in contrast, focuses on better cross-lingual transfer, attempting to combine the phonological and the orthographic space.",
"cite_spans": [
{
"start": 331,
"end": 348,
"text": "Xia et al. (2019)",
"ref_id": "BIBREF31"
},
{
"start": 751,
"end": 774,
"text": "Chaudhary et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 775,
"end": 795,
"text": "Rahimi et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 814,
"end": 836,
"text": "(Rijhwani et al., 2019",
"ref_id": "BIBREF25"
},
{
"start": 977,
"end": 999,
"text": "(Durrani et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 1341,
"end": 1361,
"text": "(Hauer et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 1421,
"end": 1447,
"text": "(Wiemerslage et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Table 1 we offered a few examples from the literature to indicate that differences in script between the transfer and test language in a cross-lingual learning setting can be a potential issue. In this section, we provide additional evidence that this is indeed the case.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quantifying the Issue",
"sec_num": "2"
},
{
"text": "The intuition behind our analysis is that a model trained cross-lingually can only claim to indeed learn cross-lingually if it ends up sharing the representations of the different inputs, at least to some extent. This observation of a learned shared space has also been noted in massively multilingual models like the multilingual BERT (Pires et al., 2019) , or for cross-lingual learning of word-level representations (Wang et al., 2020) . For a character-level model, such as the ones typically used for neural morphological inflection, this implies a learned mapping between the characters of the two inputs. Our hypothesis is that such a learned character mapping, and in particular between related languages, should resemble a transliteration mapping, assuming that both languages use a phonographic writing system (such as the Latin or the Cyrillic alphabet and their variations), to use the notation of Faber (1992). 2 To verify whether this intuition holds, we trained Figure 1 : 2-D projection of the character embeddings learned after cross-lingual learning in two settings (Armenian-Kabardian and Bashkir-Tatar). The shaded area denotes the mean \u00b1 three standard deviations. models on Armenian-Kabardian and Bashkir-Tatar (see details in Section \u00a73). In the first setting, the transfer language (Armenian) uses the Armenian alphabet, while the test language (Kabardian) uses the Cyrillic one. In the second, we are transferring from Bashkir, which currently uses the Cyrillic alphabet, to Tatar, which is written with the Latin alphabet. We obtain the character representations from the final trained models, and we perform a simple search over the embedding space, returning for each of the transfer language characters the nearest neighbor from the test language alphabet. Our findings are that this type of mapping does not resemble a transliteration one, at all.",
"cite_spans": [
{
"start": 336,
"end": 356,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 419,
"end": 438,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF28"
},
{
"start": 924,
"end": 925,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 977,
"end": 985,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quantifying the Issue",
"sec_num": "2"
},
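The nearest-neighbor search over learned character embeddings described above can be sketched as follows. This is a toy illustration: the 2-D vectors and the character choices are made up for demonstration and are not the paper's actual trained embeddings.

```python
# Toy sketch: for each transfer-language character, find its nearest
# test-language character in a (hypothetical) learned embedding space.
import math

def nearest_neighbor_map(src_emb, tgt_emb):
    """Map each source character to its closest target character
    by Euclidean distance over the embedding vectors."""
    mapping = {}
    for s_char, s_vec in src_emb.items():
        best_char, best_dist = None, float("inf")
        for t_char, t_vec in tgt_emb.items():
            dist = math.dist(s_vec, t_vec)
            if dist < best_dist:
                best_char, best_dist = t_char, dist
        mapping[s_char] = best_char
    return mapping

# Hypothetical embeddings for a few Bashkir (Cyrillic) and Tatar (Latin)
# characters, arranged so that the learned mapping mirrors the kind of
# error discussed in the text (Bashkir э ending up nearest to Tatar r).
bashkir = {"е": (0.9, 0.1), "э": (0.2, 0.8)}
tatar = {"e": (1.0, 0.0), "r": (0.1, 0.9)}

print(nearest_neighbor_map(bashkir, tatar))  # {'е': 'e', 'э': 'r'}
```

Evaluating such a mapping against a reference transliteration table then gives the accuracy figures reported in the text.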
{
"text": "For example, one would expect that the Bashkir characters \u0435, \u04d9, or \u044d would map to the Tatar e character, or at least to another vowel. Bashkir \u0435 indeed maps to Tatar e, but \u04d9 maps to Tatar i (which might be somewhat fine since they are both vowels), while Bashkir \u044d maps to Tatar r. After a manual annotation of the mappings in both language pairs, we find that the absolute accuracy is less than 5% in both settings (2 of 54 are correct in Bashkir-Tatar, and 1 of 47 in Armenian-Kabardian). We also present a visualization (obtained through PCA (Wold et al., 1987) ) of the character embeddings in Figure 1 for these two settings, which shows that the two languages are still, to an extent, separable.",
"cite_spans": [
{
"start": 546,
"end": 565,
"text": "(Wold et al., 1987)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 599,
"end": 607,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quantifying the Issue",
"sec_num": "2"
},
{
"text": "In an attempt to also take into account potential slight differences in pronunciation, which are common across related languages, we also count mappings that agree in coarse phonetic categories as correct. We obtain rough grapheme-to-phoneme mappings from Omniglot 3 (Ager, 2008) which allows us to classify each character as mapping to a vowel, or a consonant category (we devise categories across both manner and place). For instance, the Bashkir characters \u0441,\u04ab,\u04bb,\u0499,\u0448 map to sibilant",
"cite_spans": [
{
"start": 267,
"end": 279,
"text": "(Ager, 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifying the Issue",
"sec_num": "2"
},
{
"text": "The previous section ( \u00a72) showcases that different scripts can inhibit the model's ability to represent both languages in a shared space, which can be damaging for downstream performance in crosslingual learning scenarios. In order to bring the transfer and test languages into a shared space we explore two straightforward approaches:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "1. We first transliterate the transfer language data into the script of the test language, and then use the data to train an inflection model. As our baseline or control experiment, we use the exact same data, model, and process, only removing the transliteration preprocessing step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
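A minimal sketch of the transliteration preprocessing in step 1, assuming a character-level transliteration table is available. The Cyrillic-to-Latin fragment below is a tiny hypothetical table for illustration only, not the mapping used by the actual libraries in our experiments.

```python
# Sketch: rewrite transfer-language (L1) lemmas/forms into the test
# language's script before concatenating with the L2 training data.

# Hypothetical fragment of a Cyrillic-to-Latin table (illustrative only).
CYR2LAT = {"б": "b", "а": "a", "ш": "ş", "к": "k", "и": "i", "р": "r", "ы": "y"}

def transliterate(word, table):
    # Characters missing from the table are passed through unchanged.
    return "".join(table.get(ch, ch) for ch in word)

def preprocess(l1_rows, l2_rows, table):
    """Transliterate (lemma, inflected form, tags) triples of L1,
    then merge with the L2 data to form the training set."""
    tr_l1 = [(transliterate(lemma, table), transliterate(form, table), tags)
             for lemma, form, tags in l1_rows]
    return tr_l1 + l2_rows

l1 = [("башкир", "башкиры", "N;NOM;PL")]
print(preprocess(l1, [], CYR2LAT))  # [('başkir', 'başkiry', 'N;NOM;PL')]
```

The control experiment is obtained by skipping the `transliterate` calls and concatenating the raw L1 rows instead.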
{
"text": "2. We convert both languages into a shared space, such as the International Phonetic Alphabet (IPA) or the Latin alphabet. In this case, we use both the converted and the original datasets during training. We note that this approach is perhaps the most viable one, for cases in which a transliteration tool between the transfer and the test scripts is not available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "The following sections provide details on transliteration, grapheme-to-phoneme conversion, the inflection model, and the data that we use for training and evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Transliteration In the absence of some sort of a universal transliteration approach, we rely on various libraries for our experiments. For transliterating between the Indic scripts (Devanagari, Bengali, ) into the test language (L 2 ) improves accuracy in some cases (top), with and without hallucinated data (H). In some language pairs (bottom) it can be harmful. We report exact match accuracy on the test set. We highlight statistically significant improvements (p < 0.05) over the baseline. \"both\" denotes that both L 1 languages are used for transfer. * marks an additional control experiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 204,
"text": ")",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Transfer Test Baseline with Transliteration Baseline with Transliteration L 1 L 2 L 1 +L 2 L 1 Conversion Tr(L 1 )+L 2 L 1 +L 2 +H Tr(L 1 )+L 2 +H",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Kannada, and Telugu in our experiments) we rely on the IndicNLP library. 4 We also use the URoman 5 library (Hermjakob et al., 2018) to transliterate into the Roman alphabet for the Arabic, Hebrew, Armenian, and Cyrillic scripts. The lack of resources and transliteration tools for some directions severely limited the extent of the experiments that we could conduct. Notably, even though romanization is fairly well-studied and are easily attainable through tools like URoman, the opposite direction is fairly understudied. Most of the related work has focused on either to-English transliteration specifically (Lin et al., 2016; Durrani et al., 2014) or on named entity transliteration (Kundu et al., 2018; Grundkiewicz and Heafield, 2018) . Even then, the state-of-the-art results on the recent NEWS named entity transliteration task (Chen et al., 2018) ranged from 10% to 80% in terms of accuracy across several scripts. The high variance in expected quality depending on the transliteration direction showcases the need for further work towards tackling hard transliteration problems.",
"cite_spans": [
{
"start": 73,
"end": 74,
"text": "4",
"ref_id": null
},
{
"start": 108,
"end": 132,
"text": "(Hermjakob et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 612,
"end": 630,
"text": "(Lin et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 631,
"end": 652,
"text": "Durrani et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 688,
"end": 708,
"text": "(Kundu et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 709,
"end": 741,
"text": "Grundkiewicz and Heafield, 2018)",
"ref_id": "BIBREF12"
},
{
"start": 837,
"end": 856,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Conversion For G2P conversion, we used the Epitran 6 library 4 https://github.com/anoopkunchukuttan/ indic_nlp_library 5 https://github.com/isi-nlp/uroman 6 https://github.com/dmort27/epitran for transliteration into IPA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grapheme-to-Phoneme",
"sec_num": null
},
{
"text": "Since the library's script coverage is not extensive, it imposed another limitation on the amount of experiments we could conduct. Also, note that the library does not account for vowelization phenomena in Perso-Arabic scripts such as Arabic, Persian, and Urdu, which presents an avenue for further work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grapheme-to-Phoneme",
"sec_num": null
},
{
"text": "We use the morphological inflection model of which achieved the highest rank in terms of average accuracy in the SIGMORPHON 2019 shared task, using the publicly available code. 7 The neural character-level LSTM-based model uses decoupled representations of the morphological tags and the lemma learned from separate encoders. To generate the inflected form, the model first attends over tag sequence, before using the updated decoder state to attend over the character sequence of the lemma. In addition to standard cross-entropy loss, the model is trained with additional adversarial objectives and heavy regularization, in order to encourage attention monotonicity and cross-lingual learning. The authors also use a data hallucination technique similar to the one of Silfverberg et al. Table 3 : G2P Conversion of both the transfer (L 1 ) and the test languages (L 2 ) into IPA improves accuracy in almost all cases, with and without hallucinated data (H). Romanization of the both languages improves accuracy in all cases, with and without hallucinated data. We report exact match accuracy on the test set, and highlight statistically significant improvements (p < 0.05) over the baseline.",
"cite_spans": [
{
"start": 177,
"end": 178,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 788,
"end": 795,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inflection Model",
"sec_num": null
},
{
"text": "Transfer Test Baseline with g2p Baseline with g2p L 1 L 2 L 1 +L 2 +g2p(L 1 ) +g2p(L 2 ) L 1 +L 2 +H +g2p(L 1 )+ g2p(L 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grapheme-to-Phoneme Conversion",
"sec_num": null
},
{
"text": "L 2 L 1 +L 2 +Rom(L 1 )+Rom(L 2 ) L 1 +L 2 +H +Rom(L 1 )+Rom(L 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grapheme-to-Phoneme Conversion",
"sec_num": null
},
{
"text": "(2017), which we also use in ablation experiments. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grapheme-to-Phoneme Conversion",
"sec_num": null
},
{
"text": "Data and Evaluation We use the data from the SIGMORPHON 2019 Shared Task on Morphological Inflection (McCarthy et al., 2019) . We stick to the transfer learning cases that were studied in the shared task, but limit ourselves to the language pairs where (1) the two languages use different writing scripts, and (2) we have access to a transliteration model from the transfer to the test language. As a result, we evaluate our approach on the following language pairs: {Hindi,Sanskrit}-Bengali, Kannada-Telugu, {Arabic,Hebrew}-Maltese, Bashkir-Tatar, Bashkir-Crimean Tatar, Armenian-Kabardian, and Russian-Portuguese. We compare our systems' performance with the baselines using exact match accuracy over the test set. We also perform statistical significance testing using bootstrap resampling (Koehn, 2004) . 9",
"cite_spans": [
{
"start": 76,
"end": 124,
"text": "Morphological Inflection (McCarthy et al., 2019)",
"ref_id": null
},
{
"start": 793,
"end": 806,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grapheme-to-Phoneme Conversion",
"sec_num": null
},
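The bootstrap resampling test (Koehn, 2004) used for significance testing can be sketched as follows. This is a simplified illustration with our own function and parameter names; it follows the text's setup of 10,000 samples and a 1/2 sampling ratio, scaled down here for the toy run.

```python
# Sketch of bootstrap resampling for exact-match accuracy (Koehn, 2004):
# repeatedly resample test items with replacement, count how often system A
# outscores system B, and report 1 minus that fraction as an approximate p-value.
import random

def paired_bootstrap(correct_a, correct_b, n_samples=10000, ratio=0.5, seed=0):
    """correct_a / correct_b: per-item 0/1 exact-match indicators, same item order.
    Both systems are scored on the same resampled indices in each iteration."""
    rng = random.Random(seed)
    n = len(correct_a)
    k = max(1, int(n * ratio))  # sample size per iteration
    wins = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(k)]
        if sum(correct_a[i] for i in idx) > sum(correct_b[i] for i in idx):
            wins += 1
    return 1.0 - wins / n_samples  # small value => A significantly better than B

# Toy example: system A (80% accurate) vs. system B (50% accurate).
a = [1] * 80 + [0] * 20
b = [1] * 50 + [0] * 50
p = paired_bootstrap(a, b, n_samples=1000)
print(p < 0.05)
```

With a clear accuracy gap like this, nearly every resample favors system A, so the approximate p-value falls well below 0.05.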
{
"text": "We perform experiments both with single-language transfer as well as transfer from multiple related languages, if available. We also perform ablations in two settings, with and without hallucinated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Transliterating the Transfer into the Test language We first focus on the setting where a 8 We direct the reader to for further details on the model. 9 We use 10,000 bootstrap samples and a 1 2 ratio of samples in each iteration. transliteration tool between the transfer and the target language is available (in all cases, the target language data do not get converted -only the transfer language data are transliterated). Table 2 presents the exact match accuracy obtained on the test set for a total of 12 language settings. In 7 of them, we observe improvements due to our transliteration preprocessing step, some of them statistically significant.",
"cite_spans": [
{
"start": 90,
"end": 91,
"text": "8",
"ref_id": null
},
{
"start": 150,
"end": 151,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 424,
"end": 431,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Specifically, in the top two cases (for Bengali and Maltese as test languages) where the transfer and test languages are closely related, we see improvements across the board. In fact, for Hindi-Bengali and Arabic-Maltese the improvement is statistically significant with p < 0.05. Interestingly, the improvements are significant also when we use hallucinated data, which indicates that our transliteration preprocessing step is orthogonal to monolingual data augmentation through hallucination. For the case of Kannada-Telugu, despite the exact match accuracy being the same (66%) for the case without hallucinated data, we observed small improvements on the average Levenshtein distance between the produced and the gold forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
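The average Levenshtein distance mentioned above (between produced and gold forms) can be computed with the standard dynamic program; the version below is a minimal sketch of ours, not the paper's exact evaluation script.

```python
def levenshtein(a, b):
    """Edit distance between strings a and b (insert/delete/substitute, cost 1),
    computed row by row to keep memory at O(len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution (free on match)
        prev = curr
    return prev[-1]

def mean_edit_distance(predictions, gold):
    """Average character edit distance over paired predicted and gold forms."""
    return sum(levenshtein(p, g) for p, g in zip(predictions, gold)) / len(gold)

print(levenshtein("kitten", "sitting"))  # 3
```

This metric can separate systems that tie on exact match accuracy, as in the Kannada-Telugu case above, because a prediction that is one character off scores better than one that is entirely wrong.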
{
"text": "On the other hand, when transferring from Bashkir to Tatar and Crimean Tatar, even though all three languages belong to the same branch (Kipchak) of the Turkic language family, transliterating Bashkir into the Roman alphabet that Tatar and Crimean Tatar use leads to performance degradation. In the case of Bashkir-Tatar, the degrada- tion is statistically significant. It is of note, though, that hallucination also does not offer any improvements in these language pairs. In a surprising result, transliterating Russian into the Roman alphabet, and using it for cross-lingual transfer to Portuguese also leads to statistically significant improvements. Both languages are Indo-European ones, but belong to different branches (Slavic and Romance). Nevertheless, both with and without hallucinated data the performance improves with transliteration, a finding that surely warrants further study. Last, we discuss the control experiment of Armenian-Kabardian. Kabardian (and Adyghe, displayed for comparison) belong to the Circassian branch of the Northwest Caucasian languages, and are considered closely related, both using the Cyrillic alphabet; Armenian, in contrast, is an Indo-European language spoken in the same re-gion. First, transferring from Adyghe leads to better performance compared to transfer from Armenian. Converting Armenian to the Roman script has no effect on downstream performance, as expected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "In the second exploratory thread, we focus on cases where the shared space is not the one of the test language. In the first set of experiments, we use a G2P model to transliterate both languages into IPA. The results in three language pairs are shown in Table 3 (top), where we observe statistically significant improvements in two cases (Hindi-Bengali and Russian-Portuguese). In fact, in the case of Russian-Portuguese, one can increase the performance by almost 60% (in the case without hallucinated data) from 33.5 to 53.9.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Converting both Transfer and Test Languages",
"sec_num": null
},
{
"text": "Similarly, using the Roman alphabet as the shared space is also beneficial in almost all cases. As the bottom part of Table 3 showcases, the increase can be significant. Our best Kannada-Telugu system, for example, is the one trained using additional romanized versions of both language data, improving even over the cases where hallucinated data are used (cf. accuracy of 84% to 72%). 10 Last, we note that the trend of somewhat surprising results continues in these settings too, as we observe that transfer between Russian and Portuguese (and vice versa) is very beneficial. The improvement of 19.6 accuracy points that we observe in the G2P Russian-Portuguese experiment is in fact the largest we observe in our experiments.",
"cite_spans": [
{
"start": 386,
"end": 388,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Converting both Transfer and Test Languages",
"sec_num": null
},
{
"text": "We further analyze the results of the Russian-Portuguese and Portuguese-Russian experiments, in the hopes of understanding where the improvements come from, when using cross-lingual transfer. For each of the experiments (transliteration into the test languages, G2P conversion, and romanization), we compute the percentage of times that an inflection with each morphological tag failed. Table 4 reports the tags with the highest difference in these ratios, between the baseline and our models for each method. The higher the number, the larger the improvements for this particular tag. For inflecting Portuguese (top and bottom sets of results), we find it hard to make any conclusions: both noun, adjective, and verb tags appear in the top lists. For inflecting Russian (middle set), it is mostly noun/adjective tags pertaining to animacy (ANIM, INAN), gender (MASC) and case (GEN, DAT) that show the largest improvements. We still cannot explain the improvements we see in these language pairs, except for vague hypotheses that either the languages do share some similar inflection processes (besides, they are both Indo-European) or that the harder multi-task training setting regularizes the model leading to better accuracy overall.",
"cite_spans": [],
"ref_spans": [
{
"start": 387,
"end": 394,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Russian-Portuguese Investigation",
"sec_num": null
},
{
"text": "With this work we study whether using transliteration as a preprocessing step can improve the accuracy of morphological inflection models under cross-lingual learning regimes. With a few exceptions, most cases indeed show accuracy improvements, some of them statistically significant. We also note that the improvements are orthogonal to those obtained by data augmentation through hallucination, even in typologically distant languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "While this work represents a first step in the direction of understanding the effect of script differences in morphological inflection, it is still limited in scope, as the experiments were restricted by the lack of reliable transliteration tools for most scripts. The SIGMORPHON 2020 Shared Task on Morphological Inflection also provides more languages and better systems are being developed, so we plan to expand our analysis to the latest stateof-the-art models (Vylomova et al., 2020) . Additionally, some of the transliteration models do not account for phenomena that could have an impact in downstream performance, such as vowelization for Abjad scripts like Arabic. As we aim to expand the scale of this study, a future direction will involve training transliteration models between most scripts of the world. This will allow more extensive experimentation, both by incorporating more language pairs and by allowing more control experiments across various scripts. We will also further explore the usage of more advanced G2P systems, such as those developed for the SIGMORPHON 2020 Shared Task on Grapheme-to-Phoneme conversion, or the models of .",
"cite_spans": [
{
"start": 465,
"end": 488,
"text": "(Vylomova et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In contrast, one should not expect this to hold if one of the scripts is logographic, like the Chinese one, or if the two languages are coded differently, e.g. one script is syllabic and segmentally coded, like the Japanese kana, but the other is segmentally linear using a complete alphabet like the Latin script. If both scripts use the same level of coding, then the intuition holds (i.e. between Hebrew and Arabic).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://omniglot.com/ fricatives, so we count any mapping to Tatar characters that also map to sibilant fricatives (\u00e7, z, s, \u015f) as correct. Overall, however, even this more flexible evaluation only leads to an accuracy of less than 30% (16 out of 54 characters for Bashkir-Tatar, 12 out of 47 for Armenian-Kabardian).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/antonisa/ inflection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In fact, our submission to the SIGMORPHON 2020 Shared Task (Murikinati and Anastasopoulos, 2020) following this approach tied for first for Telugu (Vylomova et al., 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors extend their gratitude to the anonymous reviewers for their constructive remarks and suggestions. This work is supported by the National Science Foundation under grant 1761548.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Omniglot writing systems and languages of the world",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Ager",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Ager. 2008. Omniglot writing systems and languages of the world. Retrieved March 30, 2020.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pushing the limits of low-resource morphological inflection",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos and Graham Neubig. 2019. Pushing the limits of low-resource morphological inflection. In Proc. EMNLP, Hong Kong.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Phonologically aware neural model for named entity recognition in low resource transfer settings",
"authors": [
{
"first": "Akash",
"middle": [],
"last": "Bharadwaj",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Mortensen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1462--1472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akash Bharadwaj, David R Mortensen, Chris Dyer, and Jaime G Carbonell. 2016. Phonologically aware neural model for named entity recognition in low resource transfer settings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1462-1472.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Adapting word embeddings to new languages with morphological and phonological subword representations",
"authors": [
{
"first": "Aditi",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Chunting",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Mortensen",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3285--3295",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1366"
]
},
"num": null,
"urls": [],
"raw_text": "Aditi Chaudhary, Chunting Zhou, Lori Levin, Graham Neubig, David R. Mortensen, and Jaime Carbonell. 2018. Adapting word embeddings to new languages with morphological and phonological subword representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3285-3295, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Report of NEWS 2018 named entity transliteration shared task",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "55--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nancy Chen, Rafael E. Banchs, Min Zhang, Xiangyu Duan, and Haizhou Li. 2018. Report of NEWS 2018 named entity transliteration shared task. In Proceedings of the Seventh Named Entities Workshop, pages 55-73, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The CoNLL-SIGMORPHON 2018 shared task: Universal morphological reinflection",
"authors": [
{
"first": "",
"middle": [],
"last": "McCarthy",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Mielke",
"suffix": ""
},
{
"first": "Garrett",
"middle": [],
"last": "Nicolai",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. CoNLL-SIGMORPHON",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCarthy, Katharina Kann, Sebastian Mielke, Garrett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. The CoNLL-SIGMORPHON 2018 shared task: Universal morphological reinflection. In Proc. CoNLL-SIGMORPHON.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "G\u00e9raldine",
"middle": [],
"last": "Walther",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. CoNLL SIGMORPHON",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K\u00fcbler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In Proc. CoNLL-SIGMORPHON.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The SIGMORPHON 2016 shared task-morphological reinflection",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. SIGMORPHON",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task-morphological reinflection. In Proc. SIGMORPHON.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Catalan-English statistical machine translation without parallel corpus: bridging through Spanish",
"authors": [
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "Jose B",
"middle": [],
"last": "Marino",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of 5th International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "65--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adri\u00e0 De Gispert and Jose B Marino. 2006. Catalan-English statistical machine translation without parallel corpus: bridging through Spanish. In Proc. of 5th International Conference on Language Resources and Evaluation (LREC), pages 65-68. Citeseer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Integrating an unsupervised transliteration model into statistical machine translation",
"authors": [
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "148--153",
"other_ids": {
"DOI": [
"10.3115/v1/E14-4029"
]
},
"num": null,
"urls": [],
"raw_text": "Nadir Durrani, Hassan Sajjad, Hieu Hoang, and Philipp Koehn. 2014. Integrating an unsupervised transliteration model into statistical machine translation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers, pages 148-153, Gothenburg, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Phonemic segmentation as epiphenomenon. The linguistics of literacy",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Faber",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "21",
"issue": "",
"pages": "111--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice Faber. 1992. Phonemic segmentation as epiphenomenon. The linguistics of literacy, 21:111-134.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural machine translation techniques for named entity transliteration",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "89--94",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2413"
]
},
"num": null,
"urls": [],
"raw_text": "Roman Grundkiewicz and Kenneth Heafield. 2018. Neural machine translation techniques for named entity transliteration. In Proceedings of the Seventh Named Entities Workshop, pages 89-94, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Cognate projection for low-resource inflection generation",
"authors": [
{
"first": "Bradley",
"middle": [],
"last": "Hauer",
"suffix": ""
},
{
"first": "Amir",
"middle": [
"Ahmad"
],
"last": "Habibi",
"suffix": ""
},
{
"first": "Yixing",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Rashed",
"middle": [
"Rubby"
],
"last": "Riyadh",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "6--11",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4202"
]
},
"num": null,
"urls": [],
"raw_text": "Bradley Hauer, Amir Ahmad Habibi, Yixing Luan, Rashed Rubby Riyadh, and Grzegorz Kondrak. 2019. Cognate projection for low-resource inflection generation. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 6-11, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Out-of-the-box universal Romanization tool uroman",
"authors": [
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018, System Demonstrations",
"volume": "",
"issue": "",
"pages": "13--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulf Hermjakob, Jonathan May, and Kevin Knight. 2018. Out-of-the-box universal Romanization tool uroman. In Proceedings of ACL 2018, System Demonstrations, pages 13-18, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A deep learning based approach to transliteration",
"authors": [
{
"first": "Soumyadeep",
"middle": [],
"last": "Kundu",
"suffix": ""
},
{
"first": "Sayantan",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Santanu",
"middle": [],
"last": "Pal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "79--83",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2411"
]
},
"num": null,
"urls": [],
"raw_text": "Soumyadeep Kundu, Sayantan Paul, and Santanu Pal. 2018. A deep learning based approach to transliteration. In Proceedings of the Seventh Named Entities Workshop, pages 79-83, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Leveraging entity linking and related language projection to improve name transliteration",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiaoman",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Aliya",
"middle": [],
"last": "Deri",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Sixth Named Entity Workshop",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Lin, Xiaoman Pan, Aliya Deri, Heng Ji, and Kevin Knight. 2016. Leveraging entity linking and related language projection to improve name transliteration. In Proceedings of the Sixth Named Entity Workshop, pages 1-10, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection",
"authors": [
{
"first": "Arya",
"middle": [
"D"
],
"last": "McCarthy",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Wolf-Sonkin",
"suffix": ""
},
{
"first": "Garrett",
"middle": [],
"last": "Nicolai",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "229--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Miikka Silfverberg, Sebastian J. Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229-244, Florence, Italy.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Epitran: Precision G2P for many languages",
"authors": [
{
"first": "David",
"middle": [
"R"
],
"last": "Mortensen",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Dalmia",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Littell",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David R. Mortensen, Siddharth Dalmia, and Patrick Littell. 2018. Epitran: Precision G2P for many languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Panphon: A resource for mapping IPA segments to articulatory feature vectors",
"authors": [
{
"first": "David",
"middle": [
"R"
],
"last": "Mortensen",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Littell",
"suffix": ""
},
{
"first": "Akash",
"middle": [],
"last": "Bharadwaj",
"suffix": ""
},
{
"first": "Kartik",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COL-ING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "3475--3484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David R Mortensen, Patrick Littell, Akash Bharadwaj, Kartik Goyal, Chris Dyer, and Lori Levin. 2016. Panphon: A resource for mapping IPA segments to articulatory feature vectors. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3475-3484.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The CMU-LTI submission to the SIGMORPHON 2020 shared task 0: Language-specific cross-lingual transfer",
"authors": [
{
"first": "Nikitha",
"middle": [],
"last": "Murikinati",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikitha Murikinati and Antonios Anastasopoulos. 2020. The CMU-LTI submission to the SIGMORPHON 2020 shared task 0: Language-specific cross-lingual transfer. In Proceedings of the 17th Workshop on Computational Research in Phonetics, Phonology, and Morphology.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "String transduction with target language models and insertion handling",
"authors": [
{
"first": "Garrett",
"middle": [],
"last": "Nicolai",
"suffix": ""
},
{
"first": "Saeed",
"middle": [],
"last": "Najafi",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "43--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garrett Nicolai, Saeed Najafi, and Grzegorz Kondrak. 2018. String transduction with target language models and insertion handling. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 43-53.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "How multilingual is multilingual BERT?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1493"
]
},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Massively multilingual transfer for NER",
"authors": [
{
"first": "Afshin",
"middle": [],
"last": "Rahimi",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "151--164",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1015"
]
},
"num": null,
"urls": [],
"raw_text": "Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151-164, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Zero-shot neural transfer for cross-lingual entity linking",
"authors": [
{
"first": "Shruti",
"middle": [],
"last": "Rijhwani",
"suffix": ""
},
{
"first": "Jiateng",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6924--6931",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shruti Rijhwani, Jiateng Xie, Graham Neubig, and Jaime Carbonell. 2019. Zero-shot neural transfer for cross-lingual entity linking. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6924-6931.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Data augmentation for morphological reinflection",
"authors": [
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Wiemerslage",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lingshuang Jack",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. SIGMORPHON",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miikka Silfverberg, Adam Wiemerslage, Ling Liu, and Lingshuang Jack Mao. 2017. Data augmentation for morphological reinflection. Proc. SIGMORPHON.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The SIGMORPHON 2020 Shared Task 0: Typologically diverse morphological inflection",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Salesky",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Edoardo",
"middle": [],
"last": "Ponti",
"suffix": ""
},
{
"first": "Rowan",
"middle": [],
"last": "Hall Maudslay",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Zmigrod",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Valvoda",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Toldova",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Klyachko",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Yegorov",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Krizhanovsky",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Czarnowska",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Nikkarinen",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Krizhanovsky",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Pimentel",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Torroba Hennigen",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "Garrett",
"middle": [],
"last": "Nicolai",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Hilaria",
"middle": [],
"last": "Cruz",
"suffix": ""
},
{
"first": "Eleanor",
"middle": [],
"last": "Chodroff",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekaterina Vylomova, Jennifer White, Elizabeth Salesky, Sabrina J. Mielke, Shijie Wu, Edoardo Ponti, Rowan Hall Maudslay, Ran Zmigrod, Joseph Valvoda, Svetlana Toldova, Francis Tyers, Elena Klyachko, Ilya Yegorov, Natalia Krizhanovsky, Paula Czarnowska, Irene Nikkarinen, Andrej Krizhanovsky, Tiago Pimentel, Lucas Torroba Hennigen, Christo Kirov, Garrett Nicolai, Adina Williams, Antonios Anastasopoulos, Hilaria Cruz, Eleanor Chodroff, Ryan Cotterell, Miikka Silfverberg, and Mans Hulden. 2020. The SIGMORPHON 2020 Shared Task 0: Typologically diverse morphological inflection. In Proceedings of the 17th Workshop on Computational Research in Phonetics, Phonology, and Morphology.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Cross-lingual alignment vs joint training: A comparative study and a simple unified framework",
"authors": [
{
"first": "Zirui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiateng",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Ruochen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zirui Wang, Jiateng Xie, Ruochen Xu, Yiming Yang, Graham Neubig, and Jaime G. Carbonell. 2020. Cross-lingual alignment vs joint training: A comparative study and a simple unified framework. In International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Phonological features for morphological inflection",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Wiemerslage",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "161--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Wiemerslage, Miikka Silfverberg, and Mans Hulden. 2018. Phonological features for morphological inflection. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 161-166.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Principal component analysis. Chemometrics and intelligent laboratory systems",
"authors": [
{
"first": "Svante",
"middle": [],
"last": "Wold",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Esbensen",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Geladi",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "2",
"issue": "",
"pages": "37--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svante Wold, Kim Esbensen, and Paul Geladi. 1987. Principal component analysis. Chemometrics and intelligent laboratory systems, 2(1-3):37-52.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Generalized data augmentation for low-resource translation",
"authors": [
{
"first": "Mengzhou",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5786--5796",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1579"
]
},
"num": null,
"urls": [],
"raw_text": "Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, and Graham Neubig. 2019. Generalized data augmentation for low-resource translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5786-5796, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Exploiting cross-lingual subword similarities in low-resource document classification",
"authors": [
{
"first": "Mozhi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshinari",
"middle": [],
"last": "Fujinuma",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.09617"
]
},
"num": null,
"urls": [],
"raw_text": "Mozhi Zhang, Yoshinari Fujinuma, and Jordan Boyd-Graber. 2018. Exploiting cross-lingual subword similarities in low-resource document classification. arXiv preprint arXiv:1812.09617.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "On the importance of subword information for morphological tasks in truly low-resource languages",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Heinzerling",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "216--226",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1021"
]
},
"num": null,
"urls": [],
"raw_text": "Yi Zhu, Benjamin Heinzerling, Ivan Vuli\u0107, Michael Strube, Roi Reichart, and Anna Korhonen. 2019. On the importance of subword information for morphological tasks in truly low-resource languages. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 216-226, Hong Kong, China. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"text": "",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF6": {
"text": "",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>: The top 5 tags on which performance was im-</td></tr><tr><td>proved the most, compared to the simple cross-lingual</td></tr><tr><td>transfer baseline, in our Portuguese-Russian experi-</td></tr><tr><td>ments. The number reflects the proportion of forms</td></tr><tr><td>that were improved in the Russian-Portuguese combi-</td></tr><tr><td>nations using each of our techniques under each setting.</td></tr></table>"
}
}
}
}