{
"paper_id": "K17-2001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:07:15.463072Z"
},
"title": "CoNLL-SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection in 52 Languages",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "G\u00e9raldine",
"middle": [],
"last": "Walther",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {}
},
"email": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne",
"location": {}
},
"email": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google",
"location": {}
},
"email": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indiana University",
"location": {}
},
"email": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado",
"location": {}
},
"email": ""
}
],
"year": "2017",
"venue": null,
"identifiers": {},
"abstract": "The CoNLL-SIGMORPHON 2017 shared task on supervised morphological generation required systems to be trained and tested in each of 52 typologically diverse languages. In sub-task 1, submitted systems were asked to predict a specific inflected form of a given lemma. In sub-task 2, systems were given a lemma and some of its specific inflected forms, and asked to complete the inflectional paradigm by predicting all of the remaining inflected forms. Both sub-tasks included high, medium, and low-resource conditions. Sub-task 1 received 24 system submissions, while sub-task 2 received 3 system submissions. Following the success of neural sequence-to-sequence models in the SIGMORPHON 2016 shared task, all but one of the submissions included a neural component. The results show that high performance can be achieved with small training datasets, so long as models have appropriate inductive bias or make use of additional unlabeled data or synthetic data. However, different biasing and data augmentation resulted in non-identical sets of inflected forms being predicted correctly, suggesting that there is room for future improvement.",
"pdf_parse": {
"paper_id": "K17-2001",
"_pdf_hash": "",
"abstract": [
{
"text": "The CoNLL-SIGMORPHON 2017 shared task on supervised morphological generation required systems to be trained and tested in each of 52 typologically diverse languages. In sub-task 1, submitted systems were asked to predict a specific inflected form of a given lemma. In sub-task 2, systems were given a lemma and some of its specific inflected forms, and asked to complete the inflectional paradigm by predicting all of the remaining inflected forms. Both sub-tasks included high, medium, and low-resource conditions. Sub-task 1 received 24 system submissions, while sub-task 2 received 3 system submissions. Following the success of neural sequence-to-sequence models in the SIGMORPHON 2016 shared task, all but one of the submissions included a neural component. The results show that high performance can be achieved with small training datasets, so long as models have appropriate inductive bias or make use of additional unlabeled data or synthetic data. However, different biasing and data augmentation resulted in non-identical sets of inflected forms being predicted correctly, suggesting that there is room for future improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Morphology interacts with both syntax and phonology. As a result, explicitly modeling morphology has been shown to aid a number of tasks in human language technology (HLT), including machine translation (MT) (Dyer et al., 2008) , speech recognition (Creutz et al., 2007) , parsing (Seeker and \u00c7etino\u011flu, 2015) , keyword spotting (Narasimhan et al., 2014) , and word embedding (Cotterell et al., 2016b) . Dedicated systems for modeling morphological patterns and complex word forms have received less attention from the HLT community than tasks that target other levels of linguistic structure. Recently, however, there has been a surge of work in this area (Durrett and DeNero, 2013; Ahlberg et al., 2014; Nicolai et al., 2015; Faruqui et al., 2016) , representing a renewed interest in morphology and the potential to use advances in machine learning to attack a fundamental problem in string-to-string transformations: the prediction of one morphologically complex word form from another. This increased interest in morphology as an independent set of problems within HLT arrives at a particularly opportune time, as morphology is also undergoing a methodological renewal within theoretical linguistics where it is moving towards increased interdisciplinary work and quantitative methodologies (Moscoso del Prado Mart\u00edn et al., 2004; Milin et al., 2009; Ackerman et al., 2009; Sagot and Walther, 2011; Ackerman and Malouf, 2013; Baayen et al., 2013; Blevins, 2013; Pirrelli et al., 2015; Blevins, 2016) . Pushing the HLT research agenda forward in the domain of morphology promises to lead to a mutually beneficial dialogue between the two fields.",
"cite_spans": [
{
"start": 208,
"end": 227,
"text": "(Dyer et al., 2008)",
"ref_id": null
},
{
"start": 249,
"end": 270,
"text": "(Creutz et al., 2007)",
"ref_id": null
},
{
"start": 281,
"end": 309,
"text": "(Seeker and \u00c7etino\u011flu, 2015)",
"ref_id": null
},
{
"start": 329,
"end": 354,
"text": "(Narasimhan et al., 2014)",
"ref_id": null
},
{
"start": 376,
"end": 401,
"text": "(Cotterell et al., 2016b)",
"ref_id": null
},
{
"start": 657,
"end": 683,
"text": "(Durrett and DeNero, 2013;",
"ref_id": null
},
{
"start": 684,
"end": 705,
"text": "Ahlberg et al., 2014;",
"ref_id": null
},
{
"start": 706,
"end": 727,
"text": "Nicolai et al., 2015;",
"ref_id": null
},
{
"start": 728,
"end": 749,
"text": "Faruqui et al., 2016)",
"ref_id": null
},
{
"start": 1296,
"end": 1335,
"text": "(Moscoso del Prado Mart\u00edn et al., 2004;",
"ref_id": null
},
{
"start": 1336,
"end": 1355,
"text": "Milin et al., 2009;",
"ref_id": null
},
{
"start": 1356,
"end": 1378,
"text": "Ackerman et al., 2009;",
"ref_id": null
},
{
"start": 1379,
"end": 1403,
"text": "Sagot and Walther, 2011;",
"ref_id": null
},
{
"start": 1404,
"end": 1430,
"text": "Ackerman and Malouf, 2013;",
"ref_id": null
},
{
"start": 1431,
"end": 1451,
"text": "Baayen et al., 2013;",
"ref_id": null
},
{
"start": 1452,
"end": 1466,
"text": "Blevins, 2013;",
"ref_id": null
},
{
"start": 1467,
"end": 1489,
"text": "Pirrelli et al., 2015;",
"ref_id": null
},
{
"start": 1490,
"end": 1504,
"text": "Blevins, 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Rich morphology is the norm among the languages of the world. The linguistic typology database WALS shows that 80% of the world's languages mark verb tense through morphology while 65% mark grammatical case (Haspelmath et al., 2005) . The more limited inflectional system of English may help to explain the fact that morphology has received less attention in the computational literature than it is arguably due.",
"cite_spans": [
{
"start": 207,
"end": 232,
"text": "(Haspelmath et al., 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Table 1: Example training data from sub-task 1. Each training example maps a lemma and inflection to an inflected form. The inflection is a bundle of morphosyntactic features. Note that inflected forms (and lemmata) can encompass multiple words. In the test data, the last column (the inflected form) must be predicted by the system. The CoNLL-SIGMORPHON 2017 shared task worked to promote the development of robust systems that can learn to perform cross-linguistically reliable morphological inflection and morphological paradigm cell filling using varying amounts of training data. We note that this is also the first CoNLL-hosted shared task to focus on morphology. The task itself featured training and development data from 52 languages representing a range of language families. Many of the languages included were extremely low-resource, e.g., Quechua, Navajo, and Haida. The chosen languages also encompassed diverse morphological properties and inflection processes. Whenever possible, three data conditions were given for each language: low, medium, and high. In the inflection sub-task, these corresponded to seeing 100 examples, 1,000 examples, and 10,000 examples respectively in the training data for almost all languages. The results show that encoder-decoder recurrent neural network models (RNNs) can perform very well even with small training sets, if they are augmented with various mechanisms to cope with the low-resource setting. The shared task training, development, and test data are released publicly. 1",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This year's shared task contained two sub-tasks, which represented slightly different learning scenarios that might be faced by an HLT engineer or (roughly speaking) a human learner. Beyond manually vetted 2 data for training, development and test, monolingual corpus data (Wikipedia dumps) was also provided for both of the sub-tasks. Figure 1 illustrates the two tasks and defines some terminology. Table 2: Example training and test data from sub-task 2 in Spanish. At training time, the system is provided with complete paradigms, i.e., tables of all inflections for a given lemma, like the example at top. At test time, the system is asked to complete partially filled paradigms, like the example at bottom; note that the inflectional features for the missing paradigm cells are provided in the input.",
"cite_spans": [],
"ref_spans": [
{
"start": 336,
"end": 342,
"text": "Figure",
"ref_id": null
},
{
"start": 401,
"end": 408,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task and Evaluation Details",
"sec_num": "2"
},
{
"text": "The CoNLL-SIGMORPHON 2017 shared task is the second shared task in a series that began with the SIGMORPHON 2016 shared task on morphological reinflection (Cotterell et al., 2016a) . In contrast to 2016, it happens that both of the 2017 sub-tasks actually involve only inflection, not reinflection. 3 Nonetheless, we kept \"reinflection\" in this year's title to make it easier to refer to the series of tasks.",
"cite_spans": [
{
"start": 154,
"end": 179,
"text": "(Cotterell et al., 2016a)",
"ref_id": null
},
{
"start": 298,
"end": 299,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Evaluation Details",
"sec_num": "2"
},
{
"text": "The first sub-task in Figure 1 required morphological generation with sparse training data, something that can be practically useful for MT and other downstream tasks in NLP. Here, participants were given examples of inflected forms as shown in Table 1 . Each test example asked them to produce some other inflected form when given a lemma and a bundle of morphosyntactic features.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "Figure 1",
"ref_id": null
},
{
"start": 245,
"end": 252,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sub-Task 1: Inflected Form from Lemma",
"sec_num": "2.1"
},
{
"text": "Figure 1: Overview of sub-tasks. Each large rectangle represents a paradigm, i.e., the full set of inflected forms for some lemma. Each small rectangle within the paradigm is a cell that is associated with a known morphological feature bundle, and lists a string that either is observed (shaded background) or must be predicted (white background). Sub-task 1 featured sparse training data and asked systems to inflect individual forms at test time. Sub-task 2 provided dense paradigms as training data and asked for full paradigm completion of unseen items. The training data was sparse in the sense that it included only a few inflected forms from each lemma. That is, as in human L1 learning, the learner does not necessarily observe any complete paradigms in a language where the paradigms are",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sub-Task 1: Inflected Form from Lemma",
"sec_num": "2.1"
},
{
"text": "large (e.g., dozens of inflected forms per lemma). 4 Key points:",
"cite_spans": [
{
"start": 51,
"end": 52,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 1: Inflected Form from Lemma",
"sec_num": "2.1"
},
{
"text": "1. Our sub-task 1 is similar to sub-task 1 of the SIGMORPHON 2016 shared task (Cotterell et al., 2016a) , but with structured inflectional tags (Sylak-Glassman et al., 2015a) , learning curve assessment, and many new typologically diverse languages, including low-resource languages.",
"cite_spans": [
{
"start": 78,
"end": 103,
"text": "(Cotterell et al., 2016a)",
"ref_id": null
},
{
"start": 144,
"end": 174,
"text": "(Sylak-Glassman et al., 2015a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 1: Inflected Form from Lemma",
"sec_num": "2.1"
},
{
"text": "2. The task is inflection: Given an input lemma and desired output tag, participants had to generate the correct output inflected form (a string).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 1: Inflected Form from Lemma",
"sec_num": "2.1"
},
{
"text": "3. The supervised training data consisted of individual forms (Table 1 ) that were sparsely sampled from a large number of paradigms.",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "(Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sub-Task 1: Inflected Form from Lemma",
"sec_num": "2.1"
},
{
"text": "4. Forms that are empirically more frequent were more likely to appear in both training and test data (see \u00a73 for details).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 1: Inflected Form from Lemma",
"sec_num": "2.1"
},
{
"text": "5. Unannotated corpus data was also provided to participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 1: Inflected Form from Lemma",
"sec_num": "2.1"
},
{
"text": "6. Systems were evaluated after training on 10\u00b2, 10\u00b3, and 10\u2074 forms. 4 Of course, human L1 learners do not get to observe explicit morphological feature bundles for the types that they observe. Rather, they analyze inflected tokens in context to discover both morphological features (including inherent features such as noun gender (Arnon and Ramscar, 2012)) and paradigmatic structure (number of forms per lemma, number of expressed featural contrasts such as tense, number, person...).",
"cite_spans": [
{
"start": 69,
"end": 70,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 1: Inflected Form from Lemma",
"sec_num": "2.1"
},
{
"text": "The second sub-task in Figure 1 focused on paradigm completion, also known as \"the paradigm cell filling problem\" (Ackerman et al., 2009) .",
"cite_spans": [
{
"start": 114,
"end": 137,
"text": "(Ackerman et al., 2009)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sub-Task 2: Paradigm Completion",
"sec_num": "2.2"
},
{
"text": "Here, participants were given a few complete inflectional paradigms as training data. At test time, partially filled paradigms, i.e. paradigms with significant gaps in them, were to be completed by filling out the missing cells. Table 2 gives examples. Thus, sub-task 2 requires predicting many inflections of the same lemma. Recall that sub-task 1 also required the system to predict several inflections of the same lemma (when they appear as separate examples in test data). However, in sub-task 2, one of our test-time evaluation metrics ( \u00a72.3) is full-paradigm accuracy. Also, the sub-task 2 training data provides full paradigms, in contrast to sub-task 1 where it included only a few inflected forms per lemma. Finally, at test time, sub-task 2 presents each lemma along with some of its inflected forms, which is potentially helpful if the lemma had not appeared previously in training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 236,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sub-Task 2: Paradigm Completion",
"sec_num": "2.2"
},
{
"text": "Apart from the theoretical interest in this problem (Ackerman and Malouf, 2013), this sub-task is grounded in the practical problem of extrapolation of basic resources for a language, where only a few complete paradigms may be available from a native speaker informant (Sylak-Glassman et al., 2016) or a reference grammar. L2 classroom instruction also asks human students to memorize example paradigms and generalize from them.",
"cite_spans": [
{
"start": 269,
"end": 298,
"text": "(Sylak-Glassman et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 2: Paradigm Completion",
"sec_num": "2.2"
},
{
"text": "Key points:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 2: Paradigm Completion",
"sec_num": "2.2"
},
{
"text": "1. The training data consisted of complete paradigms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 2: Paradigm Completion",
"sec_num": "2.2"
},
{
"text": "2. Not all paradigms within a language have the same shape. A noun lemma will have a different set of cells than a verb lemma does, and verbs of different classes (e.g., lexically perfective vs. imperfective) may also have different sets of cells.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 2: Paradigm Completion",
"sec_num": "2.2"
},
{
"text": "3. The task was paradigm completion: given a sparsely populated paradigm, participants should generate the inflected forms (strings) for all missing cells.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 2: Paradigm Completion",
"sec_num": "2.2"
},
{
"text": "4. The task simulates learning from compiled grammatical resources and inflection tables, or learning from a limited time with a native-language informant in a fieldwork scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 2: Paradigm Completion",
"sec_num": "2.2"
},
{
"text": "5. Three training sets were given, building up in size from only a few complete paradigms to a large number (dozens).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Task 2: Paradigm Completion",
"sec_num": "2.2"
},
{
"text": "Each team participating in a given sub-task was asked to submit 156 versions of their system, where each version was trained using a different training set (3 training sizes \u00d7 52 languages) and its corresponding development set. We evaluated each submitted system on its corresponding test set, i.e., the test set for its language. We computed three evaluation metrics: (i) overall 1-best test-set accuracy, i.e., is the predicted paradigm cell correct? (ii) average Levenshtein distance, i.e., how badly does the predicted form disagree with the answer? (iii) full-paradigm accuracy, i.e., is the complete paradigm correct? This final metric only truly makes sense in sub-task 2, where full paradigms are given for evaluation. For each sub-task, the three data conditions (low, medium, and high) resulted in a learning curve. For each system in each condition, we report the average metrics across all 52 languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.3"
},
{
"text": "The data for the shared task was highly multilingual, comprising 52 unique languages. Data for 47 of the languages came from the English edition of Wiktionary, a large multi-lingual crowd-sourced dictionary containing morphological paradigms for many lemmata. 5 Data for Khaling, Kurmanji Kurdish, and Sorani Kurdish was created as part of the Alexina project (Walther et al., 2013, 2010; Walther and Sagot, 2010) . 6 Novel data for Haida, a severely endangered North American language isolate, was prepared by Jordan Lachler (University of Alberta). The Basque language data was extracted from a manually designed finite-state morphological analyzer (Alegria et al., 2009) .",
"cite_spans": [
{
"start": 360,
"end": 388,
"text": "(Walther et al., 2013, 2010;",
"ref_id": null
},
{
"start": 389,
"end": 413,
"text": "Walther and Sagot, 2010)",
"ref_id": null
},
{
"start": 416,
"end": 417,
"text": "6",
"ref_id": null
},
{
"start": 651,
"end": 673,
"text": "(Alegria et al., 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "3.1"
},
{
"text": "The shared task language set is genealogically diverse, including languages from 10 language stocks. Although the majority of the languages are Indo-European, we also include two language isolates (Haida and Basque) along with languages from Athabaskan (Navajo), Kartvelian (Georgian), Quechua, Semitic (Arabic, Hebrew), Sino-Tibetan (Khaling), Turkic (Turkish), and Uralic (Estonian, Finnish, Hungarian, and Northern Sami). The shared task language set is also diverse in terms of morphological structure: some languages are primarily prefixing (Navajo) or suffixing (Quechua and Turkish), while others use a mix, with Spanish exhibiting internal vowel variations along with suffixes and Georgian using both infixes and suffixes. The language set also exhibits features such as templatic morphology (Arabic, Hebrew), vowel harmony (Turkish, Finnish, Hungarian), and consonant harmony (Navajo), which require systems to learn non-local alternations. Finally, the resource level of the languages in the shared task set varies greatly, from major world languages (e.g. Arabic, English, French, Spanish, Russian) to languages with few speakers (e.g. Haida, Khaling).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "3.1"
},
{
"text": "For each language, the basic data consists of triples of the form (lemma, feature bundle, inflected form), as in Table 1 . The first feature in the bundle always specifies the core part of speech (e.g., verb). All features in the bundle are coded according to the UniMorph Schema, a cross-linguistically consistent universal morphological feature set (Sylak-Glassman et al., 2015a,b) .",
"cite_spans": [
{
"start": 351,
"end": 383,
"text": "(Sylak-Glassman et al., 2015a,b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Format",
"sec_num": "3.2"
},
{
"text": "For each of the 47 Wiktionary languages, Wiktionary provides a number of tables, each of which specifies the full inflectional paradigm for a particular lemma. These tables were initially extracted via a multi-dimensional table parsing strategy (Kirov et al., 2016; Sylak-Glassman et al., 2015a) .",
"cite_spans": [
{
"start": 245,
"end": 265,
"text": "(Kirov et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 266,
"end": 295,
"text": "Sylak-Glassman et al., 2015a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction from Wiktionary",
"sec_num": "3.3"
},
{
"text": "As noted in \u00a72.2, different paradigms may have different shapes. To prepare the shared task data, each language's parsed tables from Wiktionary were grouped according to their tabular structure and number of cells. Each group represents a different type of paradigm (e.g., verb). We used only groups with a large number of lemmata, relative to the number of lemmata available for the language as a whole. For each group, we associated a feature bundle with each cell position in the table, by manually replacing the prose labels describing grammatical features (e.g. \"accusative case\") with UniMorph features (e.g. acc). This allowed us to extract triples as described in the previous section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction from Wiktionary",
"sec_num": "3.3"
},
{
"text": "By applying this process across the 47 languages, we constructed a large multilingual dataset that refines the parsed tables from previous work. This dataset was sampled to create appropriately-sized data for the shared task, as described in \u00a73.4. 7 Full and sampled dataset sizes by language are given in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 306,
"end": 313,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Extraction from Wiktionary",
"sec_num": "3.3"
},
{
"text": "Systematic syncretism is collapsed in Wiktionary. For example, in English, feature bundles do not distinguish between different person/number forms of past tense verbs, because they are identical. 8 Thus, the past-tense form went appears only once in the table for go, not six times, and gives rise to only one triple, whose feature bundle specifies past tense but not person and number.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction from Wiktionary",
"sec_num": "3.3"
},
{
"text": "From each language's collection of paradigms, we sampled the training, development, and test sets as follows. These datasets can be obtained from http://www.sigmorphon.org/conll2017. Our first step was to construct probability distributions over the (lemma, feature bundle, inflected form) triples in our full dataset. For each triple, we counted how many tokens the inflected form has in the February 2017 dump of Wikipedia for that language. Note that this simple \"string match\" heuristic overestimates the count, since strings are ambiguous: not all of the counted tokens actually render that feature bundle. 9 From these counts, we estimated a unigram distribution over triples, using Laplace smoothing (add-1 smoothing). We then sampled 12000 triples without replacement from this distribution. The first 100 were taken as the low-resource training set for sub-task 1, the first 1000 as the medium-resource training set, and the first 10000 as the high-resource training set. Note that these training sets are nested, and that the highest-count triples tend to appear in the smaller training sets.",
"cite_spans": [
{
"start": 612,
"end": 613,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling the Train-Dev-Test Splits",
"sec_num": "3.4"
},
{
"text": "The final 2000 triples were randomly shuffled and then split in half to obtain development and test sets of 1000 forms each. The final shuffling was performed to ensure that the development set is similar to the test set. By contrast, the development and test sets tend to contain lower-count triples than the training set. 10 In those languages where we have fewer than 12000 total forms, we omit the high-resource training set (all languages have at least 3000 forms).",
"cite_spans": [
{
"start": 324,
"end": 326,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling the Train-Dev-Test Splits",
"sec_num": "3.4"
},
{
"text": "To sample the data for sub-task 2, we perform a similar procedure. For each paradigm in our full dataset, we counted the number of tokens in Wikipedia that matched any of the inflected forms in the paradigm. From these counts, we estimated a unigram distribution over paradigms, using Laplace smoothing. We sampled 300 paradigms without replacement from this distribution. 9 For example, in English, any token of the string walked will be double-counted as both the past tense and the past participle of the lemma walk. This problem holds for all regular English verbs. Similarly, when we are counting the present-tense tokens lay of the lemma lay, we will also include tokens of the string lay that are actually the past tense of lie, or are actually the adjective or noun senses of lay. The alternative to double-counting each ambiguous token would have been to use EM to split the token's count of 1 unequally among its possible analyses, in proportion to their estimated prior probabilities (Cotterell et al., 2015) .",
"cite_spans": [
{
"start": 995,
"end": 1019,
"text": "(Cotterell et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling the Train-Dev-Test Splits",
"sec_num": "3.4"
},
{
"text": "10 This is a realistic setting, since supervised training is usually employed to generalize from frequent words that appear in annotated resources to less frequent words that do not. Unsupervised learning methods also tend to generalize from more frequent words (which can be analyzed more easily by combining information from many contexts) to less frequent ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling the Train-Dev-Test Splits",
"sec_num": "3.4"
},
{
"text": "The low-resource training set contains the first 10 paradigms, the medium-resource training set contains the first 50, and the high-resource training set contains the first 200. Again, these training sets are nested. Note that since different languages have paradigms of different sizes, the actual number of training exemplars may differ drastically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling the Train-Dev-Test Splits",
"sec_num": "3.4"
},
{
"text": "With the same motivation as before, we shuffled the remaining 100 forms and took the first 50 as development and the next 50 as test. (In those languages with fewer than 300 forms, we again omitted the high-resource training setting.) For each development or test paradigm, we chose about 1/5 of the slots to provide to the system as input along with the lemma, asking the system to predict the remaining 4/5. We determined which cells to keep by independently flipping a biased coin with probability 0.2 for each cell.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling the Train-Dev-Test Splits",
"sec_num": "3.4"
},
{
"text": "Because of the count overestimates mentioned above, our sub-task 1 dataset overrepresents triples where the inflected form (the answer) is ambiguous, and our sub-task 2 dataset overrepresents paradigms that contain ambiguous inflected forms. The degree of ambiguity varied among languages: the average number of triples per inflected form string ranged from 1.00 in Sorani to 2.89 in Khaling, with an average of 1.43 across all languages. Despite this distortion of true unigram counts, we believe that our datasets captured a sufficiently broad sample of the feature combinations for every language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling the Train-Dev-Test Splits",
"sec_num": "3.4"
},
{
"text": "Most recent work in inflection generation has focused on sub-task 1, i.e., generating inflected forms from the lemma. Numerous, methodologically diverse approaches have been published; we highlight a representative sample of recent work. Durrett and DeNero (2013) heuristically extracted transformation rules and trained a semi-Markov model (Sarawagi and Cohen, 2004) to learn when to apply them to the input. Nicolai et al. (2015) trained a discriminative string-to-string monotonic transduction tool, DIRECTL+ (Jiampojamarn et al., 2008), to generate inflections. Ahlberg et al. (2014) reduced the problem to multi-class classification, using finite-state techniques to first generalize inflectional patterns and then training a feature-rich classifier to choose the optimal such pattern to inflect unseen words (Ahlberg et al., 2015). Finally, Malouf (2016), Faruqui et al. (2016), and Kann and Sch\u00fctze (2016) proposed neural sequence-to-sequence models (Sutskever et al., 2014), with Kann and Sch\u00fctze making use of an attention mechanism (Bahdanau et al., 2015). Overall, the neural approaches have generally been found to be the most successful.",
"cite_spans": [
{
"start": 341,
"end": 367,
"text": "(Sarawagi and Cohen, 2004)",
"ref_id": null
},
{
"start": 510,
"end": 537,
"text": "(Jiampojamarn et al., 2008)",
"ref_id": "BIBREF10"
},
{
"start": 821,
"end": 843,
"text": "(Ahlberg et al., 2015)",
"ref_id": null
},
{
"start": 870,
"end": 891,
"text": "Faruqui et al. (2016)",
"ref_id": null
},
{
"start": 896,
"end": 919,
"text": "Kann and Sch\u00fctze (2016)",
"ref_id": "BIBREF13"
},
{
"start": 971,
"end": 995,
"text": "(Sutskever et al., 2014)",
"ref_id": null
},
{
"start": 1057,
"end": 1080,
"text": "(Bahdanau et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "4"
},
{
"text": "Some work has also focused on scenarios similar to sub-task 2. For example, Dreyer and Eisner (2009) modeled the distribution over the paradigms of a language as a Markov Random Field (MRF), where each cell is represented as a string-valued random variable. The MRF's factors are specified as weighted finite-state machines of the form given by Dreyer et al. (2008). Building upon this, Cotterell et al. (2015) proposed using a Bayesian network where both lemmata (repeated within a paradigm) and affixes (repeated across paradigms) were encoded as string-valued random variables. That work required its finite-state transducers to take a more restricted form (Cotterell et al., 2014) for computational reasons. Finally, Kann et al. (2017a) proposed a multi-source sequence-to-sequence network, allowing a neural transducer to exploit multiple source forms simultaneously.",
"cite_spans": [
{
"start": 76,
"end": 100,
"text": "Dreyer and Eisner (2009)",
"ref_id": null
},
{
"start": 345,
"end": 365,
"text": "Dreyer et al. (2008)",
"ref_id": null
},
{
"start": 718,
"end": 737,
"text": "Kann et al. (2017a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "4"
},
{
"text": "SIGMORPHON 2016 Shared Task. Last year, the SIGMORPHON 2016 shared task (http://sigmorphon.org/sharedtask) focused on 10 languages (including 2 surprise languages). As with the present 2017 task, most of the 2016 data was derived from Wiktionary. The 2016 shared task had submissions from 9 competing teams with members from 11 universities. As mentioned in \u00a72.1, our sub-task 1 is an extension of sub-task 1 from 2016. The other sub-tasks in 2016 focused on the more general reinflection problem, where systems had to learn to map from any inflected form to any other with varying degrees of annotation. See Cotterell et al. (2016a) for details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "4"
},
{
"text": "The shared task provided a baseline system to participants that addressed both tasks and all languages. The system was designed for speed of application and also for adequate accuracy with little training data, in particular in the low and medium data conditions. The design of the baseline was inspired by the University of Colorado's submission (Liu and Mao, 2016) Table 4 : Quantity of data available in sub-task 2. For each possible part of speech in each language, we present the range in the number of forms that comprise a paradigm as an indication of the difficulty of the task of forming a full paradigm. These ranges were computed using the data in the Train Medium condition.",
"cite_spans": [
{
"start": 347,
"end": 366,
"text": "(Liu and Mao, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 367,
"end": 374,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Baseline System",
"sec_num": "5"
},
{
"text": "shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Baseline System",
"sec_num": "5"
},
{
"text": "For each (lemma, feature bundle, inflected form) triple in training data, the system initially aligns the lemma with the inflected form by finding the minimum-cost edit path. Costs are computed with a weighted scheme such that substitutions have a slightly higher cost (1.1) than insertions or deletions (1.0). For example, the German training data pair schielen-geschielt 'to squint' (going from the lemma to the past participle) is aligned as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment",
"sec_num": "5.1"
},
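The weighted edit-distance alignment described above can be sketched as follows (a minimal re-implementation under the stated costs, not the released baseline code; tie-breaking in the backtrace may differ, so the exact placement of gaps near the word's end can vary from the printed example):

```python
def align(lemma, form, sub=1.1, indel=1.0):
    """Minimum-cost edit alignment with substitution cost 1.1 and
    insertion/deletion cost 1.0; returns (cost, aligned_lemma,
    aligned_form) where '-' marks a gap."""
    n, m = len(lemma), len(form)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * indel
    for j in range(1, m + 1):
        D[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = D[i - 1][j - 1] + (0.0 if lemma[i - 1] == form[j - 1] else sub)
            D[i][j] = min(diag, D[i - 1][j] + indel, D[i][j - 1] + indel)
    # Backtrace the cheapest path to recover the aligned strings.
    i, j, a, b = n, m, [], []
    while i > 0 or j > 0:
        if i > 0 and j > 0 and abs(
            D[i][j] - (D[i - 1][j - 1] + (0.0 if lemma[i - 1] == form[j - 1] else sub))
        ) < 1e-9:
            a.append(lemma[i - 1]); b.append(form[j - 1]); i -= 1; j -= 1
        elif j > 0 and abs(D[i][j] - (D[i][j - 1] + indel)) < 1e-9:
            a.append("-"); b.append(form[j - 1]); j -= 1
        else:
            a.append(lemma[i - 1]); b.append("-"); i -= 1
    return D[n][m], "".join(reversed(a)), "".join(reversed(b))

# Two insertions (ge-), one substitution (e/t), one deletion (n): cost 4.1.
cost, al, af = align("schielen", "geschielt")
```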
{
"text": "--schielen geschielt-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment",
"sec_num": "5.1"
},
{
"text": "The system now assumes that each aligned pair can be broken up into a prefix, a stem, and a suffix: initial or trailing blanks in either the input or the output of the alignment are taken to mark the boundary between prefix and stem, or between stem and suffix. This divides each training example into three parts. For the example above, after padding the edges with $-symbols, the pair is segmented as prefix $ / $ge, stem schiele / schielt, and suffix n$ / $.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment",
"sec_num": "5.1"
},
{
"text": "From this alignment, the system extracts a prefix-changing rule based on the prefix pairing, as well as a set of suffix-changing rules based on suffixes of the stem+suffix pairing. The example alignment above yields the eight extracted suffix-modifying rules",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inflection Rules",
"sec_num": "5.2"
},
{
"text": "n$ \u2192 $, en$ \u2192 t$, len$ \u2192 lt$, elen$ \u2192 elt$, ielen$ \u2192 ielt$, hielen$ \u2192 hielt$, chielen$ \u2192 chielt$, schielen$ \u2192 schielt$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inflection Rules",
"sec_num": "5.2"
},
{
"text": "as well as the prefix-modifying rule $ \u2192 $ge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inflection Rules",
"sec_num": "5.2"
},
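The extraction of these nested suffix rules can be sketched from the aligned stem+suffix region of the running example (schielen vs. schielt-, where the final gap reflects the deleted n). This is a hypothetical re-implementation; the function name and right-to-left emission order are illustrative:

```python
def suffix_rules(aligned_in, aligned_out):
    """Walk an aligned (lemma, form) pair right-to-left ('-' marks a
    gap) and emit one suffix-rewrite rule per input character consumed,
    each anchored with a '$' end-of-word marker."""
    rules, src, tgt = [], "$", "$"
    for a, b in zip(reversed(aligned_in), reversed(aligned_out)):
        if b != "-":
            tgt = b + tgt
        if a != "-":
            src = a + src
            rules.append((src, tgt))
    return rules

# Aligned stem+suffix region of the example: schiele|n vs. schielt|(gap)
rules = suffix_rules("schielen", "schielt-")  # eight rules, n$ -> $ first
```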
{
"text": "Since these rules were obtained from the triple (schielen, V;V.PTCP;PST, geschielt), they are associated with a token of the feature bundle V;V.PTCP;PST.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inflection Rules",
"sec_num": "5.2"
},
{
"text": "At test time, to inflect a lemma with features, the baseline system applies rules associated with Table 5 : The teams' abbreviations as well as their members' institutes and the accompanying system description papers are listed here. Note that in the main text the abbreviations are used with an integer index, indicating the specific submission. One team (marked *) did not submit a system description.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 105,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generation",
"sec_num": "5.3"
},
{
"text": "training tokens of the precise feature bundle. There is no generalization across bundles that share features. Specifically, the longest-matching suffix rule associated with the feature bundle is consulted and applied to the input form. Ties are broken by frequency, in favor of the rule that has occurred most often with this feature bundle. After this, the prefix rule that occurred most often with the bundle is likewise applied. That is, the prefix-matching rule has no longest-match preference, while the suffix-matching rule does.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "5.3"
},
{
"text": "For example, to inflect kaufen 'to buy' with the features V;V.PTCP;PST, using the single example above as training data, we would find that the longest matching stored suffix rule is en$ \u2192 t$, which would transform kaufen into an intermediate form kauft, after which the most frequent prefix rule, $ \u2192 $ge, would produce the final output gekauft. If no rules have been associated with a particular feature bundle (as often happens in the low data condition), the inflected form is simply taken to be a copy of the lemma.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "5.3"
},
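The generation step, longest suffix match followed by the prefix rule, can be sketched as follows (a simplified re-implementation: frequency-based tie-breaking is omitted, and suffix_rules is assumed to be a plain dict mapping source suffix to target suffix):

```python
def inflect(lemma, suffix_rules, prefix_rule=None):
    """Apply the longest matching suffix rule, then the prefix rule.
    If no suffix rule matches, the lemma is returned unchanged (copy)."""
    word = "$" + lemma + "$"
    for k in range(len(word), 0, -1):       # longest match wins
        src = word[-k:]
        if src in suffix_rules:
            word = word[:-k] + suffix_rules[src]
            break
    if prefix_rule is not None:
        old, new = prefix_rule              # e.g. ("$", "$ge")
        if word.startswith(old):
            word = new + word[len(old):]
    return word.strip("$")

# Reproduces the kaufen example: en$ -> t$, then $ -> $ge.
out = inflect("kaufen", {"n$": "$", "en$": "t$"}, ("$", "$ge"))  # -> "gekauft"
```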
{
"text": "In sub-task 2, paradigm completion, the baseline system simply repeats the sub-task 1 method and generates all the missing forms independently from the lemma. It does not take advantage of the other forms that are presented in the partially filled paradigm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "5.3"
},
{
"text": "In addition to the above, the baseline system uses a heuristic to place a language into one of two categories: largely prefixing or largely suffixing. Some languages, such as Navajo, are largely prefixing and have more complex changes at the left periphery of the input than at the right. However, in the method described above, the operation of the prefix rules is more restricted than that of the suffix rules: prefix rules tend to perform no change at all, or to insert or delete a prefix. For largely prefixing languages, the method performs better when operating on reversed strings. Classifying a language as prefixing or suffixing is done by simply counting how often there is a prefix change vs. a suffix change in going from the lemma to the inflected form in the training data. Whenever a language is found to be largely prefixing, the system works with reversed strings throughout, allowing more expressive changes at the left edge of the input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "5.3"
},
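The prefixing-vs-suffixing decision can be approximated by comparing how much material the lemma and form share at each edge (a rough sketch of the heuristic, not the baseline's exact counting; the function name is illustrative):

```python
import os.path

def is_prefixing(pairs):
    """Classify a language as largely prefixing if, across training
    pairs, the lemma-to-form change more often sits at the left edge
    of the word than at the right."""
    prefix_changes = suffix_changes = 0
    for lemma, form in pairs:
        # Length of shared material at each edge.
        p = len(os.path.commonprefix([lemma, form]))
        s = len(os.path.commonprefix([lemma[::-1], form[::-1]]))
        if s > p:
            prefix_changes += 1   # change concentrated at the left edge
        elif p > s:
            suffix_changes += 1   # change concentrated at the right edge
    return prefix_changes > suffix_changes

# A prefixing-looking toy language: if True, the baseline reverses all strings.
print(is_prefixing([("run", "gerun"), ("go", "gego")]))
```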
{
"text": "The CoNLL-SIGMORPHON 2017 shared task received submissions from 11 teams with members from 15 universities and institutes (Table 5). Many of the teams submitted more than one system, yielding a total of 25 unique systems entered, including the baseline system. In contrast to the 2016 shared task, all but one of the submitted systems included a neural component. Despite the relative uniformity of the submitted architectures, we still observed large differences in individual performance. Rather than architecture, a major differentiator this year was the method used for supplying the neural network with auxiliary training data. For ease of presentation, we break down the systems by their features (see Table 6) and discuss the systems that had those features. In all cases, further details of the methods can be found in the system description papers, which are cited in Table 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 5",
"ref_id": null
},
{
"start": 747,
"end": 754,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 917,
"end": 924,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "6"
},
{
"text": "Neural Parameterization. All systems except for the EHU team employed some form of neural network. Moreover, all teams except SU-RUG, which employed a convolutional neural network, made use of some form of gated recurrent network, either a gated recurrent unit (GRU) network (Chung et al., 2014) or a long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997). In these neural models, a common strategy was to feed the morphological tag of the form to be predicted into the network along with the input, where each subtag was its own symbol.",
"cite_spans": [
{
"start": 329,
"end": 363,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "6"
},
{
"text": "Hard Alignment versus Soft Attention. Another axis along which the systems differ is the use of hard alignment versus soft attention. The neural attention mechanism was introduced in Bahdanau et al. (2015) for neural machine translation (NMT). In short, such mechanisms avoid the need to encode the input word into a fixed-length vector by allowing the decoder to attend to different parts of the input. Just as in NMT, the attention mechanism has led to large gains in morphological inflection. The CMU, CU, IIT (BHU), LMU, UE-LMU, UF and UTNII systems all employed such mechanisms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "6"
},
{
"text": "An alternative to soft attention is hard, monotonic alignment, i.e., a neural parameterization of a traditional finite-state transduction system. These systems enforce a monotonic alignment between source and target forms. In the 2016 shared task (see Cotterell et al., 2016a, Table 6) such a system placed second (Aharoni et al., 2016), and this year's winning system, CLUZH, was an extension of that one. (See also Aharoni and Goldberg (2017) for a further explication of the technique, and Rastogi et al. (2016) for discussion of a related neural parameterization of a weighted finite-state machine.) Their system allows for explicit biasing towards a copy action, which appears useful in the low-resource setting. Despite its neural parameterization, the CLUZH system is most closely related to the systems of UA and EHU, which train weighted finite-state transducers, albeit with a log-linear parameterization.",
"cite_spans": [
{
"start": 315,
"end": 337,
"text": "(Aharoni et al., 2016)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "6"
},
{
"text": "Reranking. Reranking the output of a weaker system was the tack taken by two systems: ISI and UA. The ISI system started with a heuristically induced candidate set, using the edit tree approach described by Chrupa\u0142a et al. (2008), and then chose the best edit tree. This approach is effectively a neuralized version of the lemmatizer proposed in M\u00fcller et al. (2015) and, indeed, was originally intended for that task (Chakrabarty et al., 2017). The UA team, following their 2016 submission, proposed a linear reranker on top of the k-best output of their transduction system. Data Augmentation. Many teams made use of auxiliary training data: unlabeled or synthetic forms.",
"cite_spans": [
{
"start": 205,
"end": 227,
"text": "Chrupa\u0142a et al. (2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "6"
},
{
"text": "Some teams leveraged the provided Wikipedia corpora (see \u00a73). The UE-LMU team used these unlabeled corpora to bias their methods towards copying, by transducing an unlabeled word to itself. The same team also explored a similar setup that instead learned to transduce random strings to themselves, and found that using random strings worked almost as well as words that appeared in the unlabeled corpora. CMU used a variational autoencoder and treated the tags of unannotated words in the Wikipedia corpus as latent variables (see Zhou and Neubig (2017b) for more details). Other teams attempted to get silver-standard labels for the unlabeled corpora. For example, the UA team trained a tagger on the given training examples and then tagged the corpus to obtain additional instances, while the UE-LMU team used a series of unsupervised heuristics. The CU team, which did not make use of external resources, hallucinated more training data by identifying suffix and prefix changes in the given training pairs and then using that information to create new artificial training pairs. The LMU submission also experimented with handwritten rules to artificially generate more data. It seems likely that the primary difference in the performance of the various neural systems lay in these strategies for the creation of new training data, rather than in the neural architectures themselves. The main results appear in Table 9 , which indicates the best per-form accuracy achieved by a submitted system. Full results can be found in Appendix A, including full-paradigm accuracy. Three teams exploited external resources in some form: UA, CMU, and UE-LMU. In general, any relative performance gained was minimal. The CMU system was outranked by several systems that avoided external resources in the High and Medium conditions in which it competed. UE-LMU only submitted a system that used additional resources in the Medium condition, and saw gains of \u223c1% compared to their basic system, while still being outranked overall by CLUZH. In the Low condition, UA saw gains of \u223c3% using external data. However, all UA submissions were limited to a small handful of languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 1409,
"end": 1416,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "6"
},
{
"text": "All but one of the systems submitted were neural. As expected given the results from SIGMORPHON 2016, these systems perform very well in the High training condition, where data is relatively plentiful. In the Low and Medium conditions, however, standard encoder-decoder architectures perform worse than the baseline using only the training data provided. Teams that beat the baseline succeeded by biasing networks towards the correct solutions through pre-training on synthetic data designed to capture the overall inflectional patterns of a language. As seen in Table 9, these techniques worked better for some languages than for others. Languages with smaller, more regular paradigms were handled well (e.g., English sub-task 1 low-resource accuracy was at 90%). Languages with more complex systems, like Latin, proved more challenging (the best system achieved only 19% accuracy in the low condition). For these languages, it is possible that the relevant variation required to learn the best-performing inflectional pattern was simply not present in the limited training data, and that a language-specific learning bias was required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "6"
},
{
"text": "Even though the top-ranked systems do well on their own, different systems may contain some amount of complementary information, so that an ensemble over multiple approaches has a chance to improve accuracy. We present an upper bound on the possible performance of such an ensemble. Table 7 and Table 8 include an \"Ensemble Oracle\" system (oracle-e) that gives the correct answer if any of the submitted systems is correct. The oracle performs significantly better than any one system in both the Medium (\u223c10%) and Low (\u223c15%) conditions. This suggests that the different strategies used by teams to \"bias\" their systems in an effort to make up for sparse data lead to substantially different generalization patterns.",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 302,
"text": "Table 7 and Table 8",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "6"
},
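The ensemble-oracle upper bound is straightforward to compute: a test item counts as correct if any submitted system predicted it correctly (a sketch; the function and variable names are illustrative):

```python
def oracle_accuracy(gold, system_outputs):
    """Upper bound for an ensemble: a test item is scored correct
    if ANY system's prediction matches the gold form."""
    correct = sum(
        any(preds[i] == g for preds in system_outputs)
        for i, g in enumerate(gold)
    )
    return correct / len(gold)

# Two complementary systems, each right on a different half of the data,
# yield a perfect oracle even though each alone scores only 50%.
acc = oracle_accuracy(["a", "b"], [["a", "x"], ["y", "b"]])  # -> 1.0
```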
{
"text": "For sub-task 1, we also present a second \"Feature Combination\" Oracle (oracle-fc) that gives the correct answer for a given test triple iff its feature bundle appeared in training (with any lemma). Thus, oracle-fc provides an upper bound on the performance of systems that treat a feature bundle such as V;SBJV;FUT;3;PL as atomic. In the low-data condition, this upper bound was only 71%, meaning that 29% of the test bundles had never been seen in training data. Nonetheless, systems should be able to make some accurate predictions on this 29% by decomposing each test bundle into individual morphological features such as FUT (future) and PL (plural), and generalizing from training examples that involve those features. For example, a particular feature or sub-bundle might be realized as a particular affix. Several of the systems treated each individual feature as a separate input to the recurrent network, in order to enable this type of generalization. In the medium-data condition, these systems sometimes far surpassed oracle-fc for some languages. The most notable example of this is Basque, where oracle-fc produced a 47% accuracy while six of the submitted systems produced an accuracy of 85% or above. Basque is an extreme example, with very large paradigms for the verbs that inflect in the language (only a few dozen common ones do). This result demonstrates the ability of the neural systems to generalize and correctly inflect according to unseen feature combinations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "6"
},
{
"text": "As regards morphological inflection, there is a plethora of future directions to consider. First, one might consider morphological transductions over pronunciations, rather than spellings. This is more challenging in the many languages (including English) where the orthography does not reflect the phonological changes that accompany morphological processes such as affixation. Orthography usually also does not reflect predictable allophonic distinctions in pronunciation (Sampson, 1985), which one might attempt to predict, such as the difference in aspiration of /t/ in English [t\u02b0Ap] (top) vs.",
"cite_spans": [
{
"start": 474,
"end": 489,
"text": "(Sampson, 1985)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Directions",
"sec_num": "8"
},
{
"text": "[stAp] (stop).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Directions",
"sec_num": "8"
},
{
"text": "A second future direction involves the effective incorporation of external unannotated monolingual corpora into state-of-the-art inflection or reinflection systems. The best systems in our competition did not make use of external data, and those that did make heavy use of such data, e.g., the CMU team, did not see much gain. The best way to use external corpora remains an open question; we surmise that they can be useful, especially in lower-resource cases. A related line of inquiry is the incorporation of cross-lingual information, which Kann et al. (2017b) did find to be helpful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Directions",
"sec_num": "8"
},
{
"text": "A third direction revolves around the efficient elicitation of morphological information (i.e., active learning). In the low-resource condition, we asked our participants to find the best approach to generating new forms given existing morphological annotation. However, it remains an open question which of the cells in a paradigm are best to collect annotation for in the first place. Likely, it is better to collect diagnostic forms that are close to principal parts of the paradigm (Finkel and Stump, 2007; Ackerman et al., 2009; Montermini and Bonami, 2013; Cotterell et al., 2017), as these will contain enough information that the remaining transformations are largely deterministic. Experimental studies, however, suggest that speakers also strongly rely on pattern frequencies for inferring unknown forms (Seyfarth et al., 2014). Another interesting direction would therefore be to organize the data according to plausible real frequency distributions (especially in spoken data) and to explore possibly varying learning strategies associated with lexical items of various frequencies.",
"cite_spans": [
{
"start": 484,
"end": 508,
"text": "(Finkel and Stump, 2007;",
"ref_id": null
},
{
"start": 509,
"end": 531,
"text": "Ackerman et al., 2009;",
"ref_id": null
},
{
"start": 532,
"end": 560,
"text": "Montermini and Bonami, 2013;",
"ref_id": null
},
{
"start": 561,
"end": 583,
"text": "Cotterell et al., 2017",
"ref_id": null
},
{
"start": 815,
"end": 838,
"text": "(Seyfarth et al., 2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Directions",
"sec_num": "8"
},
{
"text": "Finally, there is a wide variety of other tasks involving morphology. While some of these have had a shared task, e.g., the parsing of morphologically rich languages (Tsarfaty et al., 2010) and unsupervised morphological segmentation (Kurimo et al., 2010), many have not, e.g., supervised morphological segmentation and morphological tagging. A key purpose of shared tasks in the NLP community is the preparation and release of standardized datasets for fair comparison among methods. Future shared tasks in other areas of computational morphology would seem in order, given the overall effectiveness of shared tasks in unifying research objectives in subfields of NLP, and as a starting point for possible cross-over with cognitively grounded theoretical and quantitative linguistics.",
"cite_spans": [
{
"start": 234,
"end": 254,
"text": "(Kurimo et al., 2010",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Directions",
"sec_num": "8"
},
{
"text": "The CoNLL-SIGMORPHON shared task provided an evaluation on 52 languages, with large and small datasets, of systems for inflection and paradigm completion-two core tasks in computational morphological learning. On sub-task 1 (inflection), 24 systems were submitted, while on sub-task 2 (paradigm completion), 3 systems were submitted. All but one of the systems used rather similar neural network models, popularized by the SIGMORPHON shared task in 2016.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "The results reinforce the conclusions of the 2016 shared task that encoder-decoder architectures perform strongly when training data is plentiful, with exact-match accuracy on held-out forms surpassing 90% on many languages; we note there was a shortage of non-neural systems this year to compare with. In addition, and contrary to common expectation, many participants showed that neural systems can do reasonably well even with small training datasets. A baseline sequence-to-sequence model achieves close to zero accuracy: e.g., Silfverberg et al. (2017) reported that all the team's neural models in the low data condition delivered accuracies in the 0-1% range without data augmentation, and other teams reported similar findings. However, with judicious application of biasing and data augmentation techniques, the best neural systems achieved over 50% exact-match prediction of inflected form strings on 100 examples, and 80% on 1,000 examples, as compared to 38% for a baseline system that learns simple inflectional rules. It is hard to say whether these are \"good\" results in an absolute sense. An interesting experiment would be to pit the small-data systems against human linguists who do not know the languages, to see whether the systems are able to identify the predictive patterns that humans discover (or miss).",
"cite_spans": [
{
"start": 531,
"end": 556,
"text": "Silfverberg et al. (2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "An oracle ensembling of all systems shows that there is still much room for improvement, in particular in low-resource settings. We have released the training, development, and test sets, and expect these datasets to provide a useful benchmark for future research into learning of inflectional morphology and string-to-string transduction. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "This section contains detailed results for each submitted system on each language. Systems are ordered by average per-form accuracy for each sub-task and data condition. Three metrics are presented for each system/language combination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Results",
"sec_num": null
},
{
"text": "1. Per-Form Accuracy: Percentage of test forms inflected correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Results",
"sec_num": null
},
{
"text": "2. Levenshtein Distance: Average Levenshtein distance of system-predicted form from gold inflected form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Results",
"sec_num": null
},
{
"text": "3. Per-Paradigm Accuracy: Percentage of unique lemmata (paradigms) for which every form was inflected correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Results",
"sec_num": null
},
{
"text": "Scores in bold include the highest scoring non-oracle system for each language as well as any other systems that did not differ significantly in terms of per-form accuracy according to a sign test (p >= 0.05).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Results",
"sec_num": null
},
{
"text": "Scores marked with a \u2020 indicate submissions that were significantly better than the feature combination oracle (p < 0.05), showing per-feature generalization. Scores marked with \u2021 did not differ significantly from the ensemble oracle, suggesting minimal complementary information across systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Detailed Results",
"sec_num": null
},
{
"text": "https://github.com/sigmorphon/conll2017 2 Thanks to: I\u00f1aki Alegria, Gerlof Bouma, Zygmunt Frajzyngier, Chris Harvey, Ghazaleh Kazeminejad, Jordan Lachler, Luciana Marques, and Ruben Urizar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Cotterell et al. (2016a) defined the term: \"Systems developed for the 2016 Shared Task had to carry out reinflection of an already inflected form. This involved analysis of an already inflected word form, together with synthesis of a different inflection of that form.\" In 2016, sub-task 1 involved only inflection while sub-tasks 2-3 required reinflection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://en.wiktionary.org/ (08-2016 snapshot) 6 https://gforge.inria.fr/projects/alexina/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Full, unsampled Wiktionary parses are made available at unimorph.org on a rolling basis. 8 In this example, Wiktionary omits the single exception: the lemma be distinguishes between past tenses was and were.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The first author would like to acknowledge the support of an NDSEG fellowship. Google provided support for the shared task in the form of an award. Several authors (CK, DY, JSG, MH) were supported in part by the Defense Advanced Research Projects Agency (DARPA) in the program Low Resource Languages for Emergent Incidents (LORELEI) under contract No. HR0011-15-C-0113. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sub-task 2 High Medium Low High Medium Low Albanian",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sub-task 2 High Medium Low High Medium Low Albanian 99.00(UE-LMU) 89.40(CU-1) 31.00(CU-1) 98.35(LMU-2) 88.81(LMU-1) 66.63(LMU-2)",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "LMU-2) 57.10(CU-1) 85.93(LMU-2) 55.95(LMU-2) 49.58(LMU-2) Catalan 98.40(CLUZH-1) 92.60(CLUZH-7) 66.40(CU-1) 99.35(LMU-2)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armenian 97.50(UE-LMU) 91.50(LMU-2) 58.70(CLUZH-7) 98.78(LMU-2) 97.77(LMU-2) 93.92(LMU-2) Basque 100.00(UTNII-1) 89.00(UE-LMU) 20.00(LMU-2) - 94.14(LMU-2) 93.02(LMU-2) Bengali 100.00(UE-LMU) 99.00(CLUZH-1) 68.00(CLUZH-3) 92.61(LMU-1) 91.72(LMU-2) 90.19(LMU-2) Bulgarian 98.10(UE-LMU) 82.50(LMU-2) 57.10(CU-1) 85.93(LMU-2) 55.95(LMU-2) 49.58(LMU-2) Catalan 98.40(CLUZH-1) 92.60(CLUZH-7) 66.40(CU-1) 99.35(LMU-2) 97.06(LMU-2) 94.16(baseline) Czech 94.10(UE-LMU) 86.30(CU-1) 44.00(CLUZH-7) 86.00(LMU-1) 58.61(LMU-2) 34.96(LMU-2) Danish 94.50(UE-LMU) 83.60(LMU-2) 75.50(CLUZH-7) 75.74(LMU-2) 71.15(baseline) 53.11(CU-1) Dutch 96.90(UE-LMU) 86.50(LMU-2) 53.60(baseline) 89.30(LMU-2) 86.53(LMU-2) 56.64(LMU-2) English 97.20(UE-LMU) 94.70(LMU-2) 90.60(UA-1) 91.60(baseline) 84.00(baseline) 84.40(CU-1) Estonian 98.90(UE-LMU) 82.40(UE-LMU) 32.90(CLUZH-7) 97.90(LMU-2) 92.43(LMU-2) 77.42(LMU-2) Faroese 87.80(CLUZH-7) 68.10(CLUZH-7) 42.40(CLUZH-7) 71.90(LMU-2) 68.31(LMU-2) 57.55(LMU-2) Finnish 95.10(UE-LMU) 78.40(UE-LMU) 19.70(CLUZH-7) 93.67(LMU-2) 89.48(LMU-2) 76.30(LMU-2) French 89.50(UE-LMU) 80.30(CLUZH-7) 66.00(CLUZH-7) 98.83(LMU-2) 95.38(LMU-2) 87.45(LMU-2) Georgian 99.40(LMU-2) 93.40(CLUZH-7) 85.60(LMU-2) 96.20(LMU-2) 89.67(LMU-2) 86.82(LMU-2)",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "LMU-2) 84.63(LMU-2) 51.98(LMU-2) Latvian",
"authors": [],
"year": null,
"venue": "",
"volume": "87",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Latin 81.30(UE-LMU) 51.80(CLUZH-7) 19.30(CU-1) 87.70(LMU-2) 84.63(LMU-2) 51.98(LMU-2) Latvian 97.30(UE-LMU) 88.60(CLUZH-7) 68.10(CLUZH-4) 96.69(LMU-2) 89.19(LMU-2) 75.79(LMU-2) Lithuanian 95.80(UE-LMU) 62.60(UE-LMU) 23.30(baseline) 85.82(LMU-2) 82.87(LMU-2) 49.51(LMU-2)",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "LMU-2) 60.23(LMU-2) Navajo",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Macedonian 97.30(UE-LMU) 91.80(CLUZH-1) 65.50(CLUZH-7) 97.14(LMU-2) 88.98(LMU-2) 60.23(LMU-2) Navajo 92.30(UE-LMU) 50.80(CLUZH-7) 20.40(CLUZH-7) 58.22(LMU-2) 47.12(LMU-2) 35.48(LMU-2) Northern Sami 98.60(UE-LMU) 74.00(UE-LMU) 18.70(CU-1) 91.56(LMU-2) 83.51(LMU-2) 39.86(LMU-2)",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "LMU-2) 99.56(LMU-2) 99.20(LMU-2) Polish 92",
"authors": [],
"year": null,
"venue": "",
"volume": "80",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Norwegian Nynorsk 92.80(CLUZH-1) 65.60(LMU-2) 54.60(CLUZH-7) 64.42(baseline) 60.74(baseline) 42.33(baseline) Persian 99.90(LMU-2) 91.90(UE-LMU) 51.00(CLUZH-7) 100.00(LMU-2) 99.56(LMU-2) 99.20(LMU-2) Polish 92.80(UE-LMU) 79.90(CLUZH-7) 47.90(CLUZH-7) 90.27(baseline) 82.71(LMU-2) 64.53(LMU-2) Portuguese 99.30(LMU-2) 95.00(LMU-2) 73.30(CLUZH-7) 98.84(LMU-1) 98.58(LMU-2) 96.94(LMU-2) Quechua 100.00(CLUZH-4) 98.30(CLUZH-7) 61.10(CLUZH-7) 99.84(LMU-2) 99.60(LMU-2) 99.98(LMU-2) Romanian 89.10(UE-LMU) 77.40(CU-1) 46.30(CLUZH-7) 78.99(baseline) 76.63(LMU-2) 25.00(LMU-2) Russian 92.80(CLUZH-2) 84.10(CLUZH-2) 52.30(CLUZH-7) 87.42(CU-1) 85.74(LMU-2) 46.17(LMU-2)",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "LMU-2) Spanish 97.50(CLUZH-7) 91.70(UE-LMU) 66.40(CLUZH-7) 98",
"authors": [],
"year": null,
"venue": "",
"volume": "53",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slovene 97.10(CLUZH-5) 88.80(LMU-2) 63.00(CLUZH-7) 93.71(LMU-1) 85.10(LMU-2) 79.28(LMU-2) Sorani 89.40(CLUZH-7) 82.90(LMU-2) 27.10(CU-1) 86.39(LMU-2) 86.05(LMU-2) 57.65(LMU-2) Spanish 97.50(CLUZH-7) 91.70(UE-LMU) 66.40(CLUZH-7) 98.53(LMU-2) 97.89(LMU-2) 91.05(LMU-2) Swedish 93.10(UE-LMU) 79.70(UE-LMU) 64.20(CLUZH-3) 84.71(LMU-2) 70.88(LMU-2) 51.18(LMU-2) Turkish 98.40(UE-LMU) 89.70(UE-LMU) 42.00(CLUZH-7) 99.41(LMU-2) 98.65(LMU-2) 87.65(LMU-2) Ukrainian 95.00(UE-LMU) 82.50(CLUZH-7) 50.40(CU-1) 74.76(LMU-1) 67.14(baseline) 49.21(LMU-2) Urdu 99.70(UE-LMU) 98.00(CLUZH-4) 74.10(CLUZH-7) 98.44(LMU-1) 94.29(LMU-2) 88.53(LMU-2) Welsh 99.00(CLUZH-1) 93.00(LMU-2) 56.00(CLUZH-7) 97.96(LMU-2) 97.80(LMU-2) 89.89(LMU-2)",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Best per-form accuracy (and corresponding system) by language",
"authors": [],
"year": null,
"venue": "",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Table 9: Best per-form accuracy (and corresponding system) by language.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Joint processing and discriminative training for letter-to-phoneme conversion",
"authors": [
{
"first": "Sittichai",
"middle": [],
"last": "Jiampojamarn",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "905--913",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2008. Joint processing and discriminative training for letter-to-phoneme conversion. In Pro- ceedings of the 46th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 905- 913, Columbus, Ohio. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural multi-source morphological reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "514--524",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann, Ryan Cotterell, and Hinrich Sch\u00fctze. 2017a. Neural multi-source morphological reinflec- tion. In Proceedings of the 15th Conference of the European Chapter of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 514-524, Valencia, Spain. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "One-shot neural cross-lingual transfer for paradigm completion",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann, Ryan Cotterell, and Hinrich Sch\u00fctze. 2017b. One-shot neural cross-lingual transfer for paradigm completion. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), Van- couver, Canada. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Singlemodel encoder-decoder with explicit morphological representation for reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "555--560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016. Single- model encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 555-560, Berlin, Germany. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The LMU system for the CoNLL-SIGMORPHON 2017 shared task on universal morphological reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection",
"volume": "",
"issue": "",
"pages": "40--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2017. The LMU system for the CoNLL-SIGMORPHON 2017 shared task on universal morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 40-48, Vancouver. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Very-large scale parsing and normalization of Wiktionary morphological paradigms",
"authors": [
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Que",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "3121--3126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christo Kirov, John Sylak-Glassman, Roger Que, and David Yarowsky. 2016. Very-large scale pars- ing and normalization of Wiktionary morphological paradigms. In Proceedings of the Tenth Interna- tional Conference on Language Resources and Eval- uation (LREC 2016), pages 3121-3126. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Morpho challenge competition 2005-2010: Evaluations and results",
"authors": [
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Ville",
"middle": [],
"last": "Turunen",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 11th Meeting of the ACL Special Interest Group on Computational Morphology and Phonology",
"volume": "",
"issue": "",
"pages": "87--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikko Kurimo, Sami Virpioja, Ville Turunen, and Krista Lagus. 2010. Morpho challenge competition 2005-2010: Evaluations and results. In Proceed- ings of the 11th Meeting of the ACL Special Interest Group on Computational Morphology and Phonol- ogy, pages 87-95. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Morphological reinflection with conditional random fields and unsupervised features",
"authors": [
{
"first": "Ling",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lingshuang Jack",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Meeting of SIGMORPHON",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ling Liu and Lingshuang Jack Mao. 2016. Morpholog- ical reinflection with conditional random fields and unsupervised features. In Proceedings of the 2016 Meeting of SIGMORPHON, Berlin, Germany. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": ""
},
{
"first": "Tatiana",
"middle": [],
"last": "Ruzsics",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection",
"volume": "",
"issue": "",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Makarov, Tatiana Ruzsics, and Simon Clematide. 2017. Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 49-57, Vancouver. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sub-task 1 Low Condition Part 2",
"authors": [],
"year": null,
"venue": "",
"volume": "17",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Table 17: Sub-task 1 Low Condition Part 2.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sub-task 2 Low Condition Part",
"authors": [],
"year": null,
"venue": "",
"volume": "21",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Table 21: Sub-task 2 Low Condition Part 1.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"text": "IND;FUT;2;SG liberar\u00e1s descomponer V;NEG;IMP;2;PL no descompong\u00e1is de aufbauen V;IND;PRS;2;SG baust auf Arztin N;DAT;PL\u00c4rztinnen",
"content": "<table><tr><td colspan=\"2\">Lang Lemma</td><td>Inflection</td><td>Inflected form</td></tr><tr><td>en</td><td>hug spark</td><td>V;PST V;V.PTCP;PRS</td><td>hugged sparking</td></tr><tr><td>es</td><td>liberar</td><td>V;</td><td/></tr></table>"
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"text": "IND;PST;1;PL;IPFV afront\u00e1bamos afrontar V;SBJV;PST;3;PL;LGSPEC1 afrontaran afrontar V;NEG;IMP;2;PL no afront\u00e9is afrontar V;NEG;IMP;3;SG no afronte afrontar V;COND;2;SG afrontar\u00edas afrontar V;IND;FUT;3;SG afrontar\u00e1 afrontar V;SBJV;FUT;3;PL afrontaren . . .",
"content": "<table><tr><td>Lemma</td><td>Inflections</td><td>Inflected forms</td></tr><tr><td/><td>Train</td><td/></tr><tr><td colspan=\"2\">afrontar V;Test</td><td/></tr><tr><td>revocar</td><td>V;IND;PST;1;PL;IPFV</td><td>revoc\u00e1bamos</td></tr><tr><td>revocar</td><td>V;SBJV;PST;3;PL;LGSPEC1</td><td>-</td></tr><tr><td>revocar</td><td>V;NEG;IMP;2;PL</td><td>no revoqu\u00e9is</td></tr><tr><td>revocar</td><td>V;NEG;IMP;3;SG</td><td>-</td></tr><tr><td>revocar</td><td>V;COND;2;SG</td><td>revocar\u00edas</td></tr><tr><td>revocar</td><td>V;IND;FUT;3;SG</td><td>-</td></tr><tr><td>revocar</td><td>V;SBJV;FUT;3;PL</td><td>-</td></tr><tr><td/><td>. . .</td><td/></tr></table>"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"text": "to the SIGMORPHON 2016",
"content": "<table><tr><td>Language</td><td>Family</td><td>Lemmata / Forms</td><td colspan=\"5\">High Medium Low Dev Test</td><td>Pr</td><td>Su</td><td>Ap</td></tr><tr><td>Albanian</td><td>Indo-European</td><td>589 / 33483</td><td>587</td><td>379</td><td colspan=\"5\">82 384 369 56.24 95.14</td><td>1.09</td></tr><tr><td>Arabic</td><td>Semitic</td><td>4134 / 140003</td><td>3181</td><td>811</td><td colspan=\"6\">96 809 831 54.64 90.89 31.61</td></tr><tr><td>Armenian</td><td>Indo-European</td><td>7033 / 338461</td><td>4657</td><td>907</td><td colspan=\"5\">99 875 902 22.81 94.27</td><td>1.78</td></tr><tr><td>Basque</td><td>Isolate</td><td>26 / 11889</td><td>26</td><td>26</td><td>22</td><td>26</td><td colspan=\"4\">22 97.63 92.07 12.87</td></tr><tr><td>Bengali \u2020</td><td>Indo-Aryan</td><td>136 / 4443</td><td>136</td><td>134</td><td>65</td><td>65</td><td>68</td><td colspan=\"3\">0.04 94.98 17.59</td></tr><tr><td>Bulgarian</td><td>Slavic</td><td>2468 / 55730</td><td>2133</td><td>716</td><td colspan=\"5\">98 742 744 15.65 92.09</td><td>4.28</td></tr><tr><td>Catalan</td><td>Romance</td><td>1547 / 81576</td><td>1545</td><td>742</td><td colspan=\"3\">96 744 733</td><td colspan=\"2\">0.41 98.04</td><td>6.89</td></tr><tr><td>Czech</td><td>Slavic</td><td>5125 / 134527</td><td>3862</td><td>836</td><td colspan=\"3\">98 852 850</td><td colspan=\"2\">8.73 87.07</td><td>0.99</td></tr><tr><td>Danish</td><td>Germanic</td><td>3193 / 25503</td><td>3148</td><td colspan=\"4\">875 100 869 865</td><td colspan=\"2\">0.17 81.52</td><td>1.28</td></tr><tr><td>Dutch</td><td>Germanic</td><td>4993 / 55467</td><td>4146</td><td>895</td><td colspan=\"3\">99 899 899</td><td colspan=\"2\">3.06 80.61</td><td>4.30</td></tr><tr><td>English</td><td>Germanic</td><td>22765 / 115523</td><td>8377</td><td colspan=\"4\">989 100 985 983</td><td colspan=\"2\">0.06 79.00</td><td>0.79</td></tr><tr><td>Estonian</td><td>Uralic</td><td>886 / 38215</td><td>886</td><td>587</td><td colspan=\"6\">94 553 577 25.94 95.70 
10.18</td></tr><tr><td>Faroese</td><td>Germanic</td><td>3077 / 45474</td><td>2967</td><td colspan=\"4\">842 100 839 880</td><td colspan=\"3\">0.66 80.52 12.93</td></tr><tr><td>Finnish</td><td>Uralic</td><td colspan=\"2\">57642 / 2490377 8668</td><td colspan=\"7\">981 100 984 986 31.47 94.47 10.57</td></tr><tr><td>French</td><td>Romance</td><td>7535 / 367732</td><td>5588</td><td>941</td><td colspan=\"3\">98 940 943</td><td colspan=\"2\">2.79 97.78</td><td>3.95</td></tr><tr><td>Georgian</td><td>Kartvelian</td><td>3782 / 74412</td><td>3537</td><td colspan=\"4\">861 100 872 874</td><td colspan=\"2\">3.28 94.70</td><td>0.42</td></tr><tr><td>German</td><td>Germanic</td><td>15060 / 179339</td><td>6767</td><td colspan=\"4\">959 100 964 964</td><td colspan=\"2\">5.03 65.83</td><td>5.01</td></tr><tr><td>Haida \u2020</td><td>Isolate</td><td>41 / 7040</td><td>41</td><td>41</td><td>40</td><td>34</td><td>38</td><td colspan=\"2\">0.26 98.96</td><td>0.49</td></tr><tr><td>Hebrew</td><td>Semitic</td><td>510 / 13818</td><td>510</td><td>470</td><td colspan=\"5\">95 431 453 43.58 78.96</td><td>2.40</td></tr><tr><td>Hindi</td><td>Indo-Aryan</td><td>258 / 54438</td><td>258</td><td>252</td><td colspan=\"3\">85 254 255</td><td colspan=\"3\">8.16 98.65 11.14</td></tr><tr><td>Hungarian</td><td>Uralic</td><td>13989 / 490394</td><td>7097</td><td colspan=\"4\">966 100 967 964</td><td colspan=\"2\">0.52 97.00</td><td>0.52</td></tr><tr><td>Icelandic</td><td>Germanic</td><td>4775 / 76915</td><td>4108</td><td colspan=\"4\">899 100 906 899</td><td colspan=\"2\">0.56 84.54</td><td>9.28</td></tr><tr><td>Irish</td><td>Celtic</td><td>7464 / 107298</td><td>5040</td><td>906</td><td colspan=\"5\">99 913 893 55.09 61.60</td><td>4.47</td></tr><tr><td>Italian</td><td>Romance</td><td>10009 / 509574</td><td>6365</td><td colspan=\"7\">953 100 940 936 18.81 92.38 20.92</td></tr><tr><td>Khaling</td><td>Sino-Tibetan</td><td>591 / 156097</td><td>584</td><td>426</td><td colspan=\"6\">92 411 422 76.39 99.04 
24.87</td></tr><tr><td>Kurmanji Kurdish</td><td>Iranian</td><td>15083 / 216370</td><td>7046</td><td colspan=\"4\">945 100 949 958</td><td colspan=\"2\">9.62 91.43</td><td>0.90</td></tr><tr><td>Latin</td><td>Romance</td><td>17214 / 509182</td><td>6517</td><td colspan=\"4\">943 100 939 945</td><td colspan=\"3\">4.12 90.04 47.74</td></tr><tr><td>Latvian</td><td>Baltic</td><td>7548 / 136998</td><td>5293</td><td colspan=\"4\">923 100 920 924</td><td colspan=\"2\">3.69 91.50</td><td>2.91</td></tr><tr><td>Lithuanian</td><td>Baltic</td><td>1458 / 34130</td><td>1443</td><td>632</td><td colspan=\"3\">96 664 639</td><td colspan=\"3\">3.64 90.58 35.32</td></tr><tr><td>Lower Sorbian</td><td>Germanic</td><td>994 / 20121</td><td>994</td><td>626</td><td colspan=\"3\">96 625 630</td><td colspan=\"2\">0.24 93.33</td><td>0.48</td></tr><tr><td>Macedonian</td><td>Slavic</td><td>10313 / 168057</td><td>6079</td><td colspan=\"4\">958 100 939 946</td><td colspan=\"2\">1.15 90.56</td><td>0.53</td></tr><tr><td>Navajo</td><td>Athabaskan</td><td>674 / 12354</td><td>674</td><td>496</td><td colspan=\"6\">91 491 491 79.03 35.08 21.49</td></tr><tr><td>Northern Sami</td><td>Uralic</td><td>2103 / 62677</td><td>1964</td><td>745</td><td colspan=\"3\">93 738 744</td><td colspan=\"3\">4.62 90.39 18.12</td></tr><tr><td colspan=\"2\">Norwegian Bokm\u00e5l Germanic</td><td>5527 / 19238</td><td>5041</td><td colspan=\"4\">925 100 928 930</td><td colspan=\"2\">0.19 92.77</td><td>2.08</td></tr><tr><td colspan=\"2\">Norwegian Nynorsk Germanic</td><td>4689 / 15319</td><td>4413</td><td>915</td><td colspan=\"3\">98 914 919</td><td colspan=\"2\">0.35 88.59</td><td>1.98</td></tr><tr><td>Persian</td><td>Iranian</td><td>273 / 37128</td><td>273</td><td>269</td><td colspan=\"3\">82 268 267</td><td colspan=\"3\">27.1 95.28 15.70</td></tr><tr><td>Polish</td><td>Slavic</td><td>10185 / 201024</td><td>5926</td><td>929</td><td colspan=\"3\">99 934 942</td><td colspan=\"2\">5.24 
91.68</td><td>1.79</td></tr><tr><td>Portuguese</td><td>Romance</td><td>4001 / 303996</td><td>3668</td><td colspan=\"4\">902 100 872 865</td><td colspan=\"2\">0.01 93.26</td><td>3.19</td></tr><tr><td>Quechua</td><td>Quechuan</td><td>1006 / 180004</td><td>963</td><td>521</td><td colspan=\"3\">93 495 526</td><td colspan=\"2\">1.25 98.92</td><td>0.05</td></tr><tr><td>Romanian</td><td>Romance</td><td>4405 / 80266</td><td>3351</td><td>858</td><td colspan=\"5\">99 854 828 22.40 87.65</td><td>4.78</td></tr><tr><td>Russian</td><td>Slavic</td><td>28068 / 473481</td><td>8186</td><td colspan=\"4\">974 100 980 980</td><td colspan=\"3\">5.20 79.88 11.33</td></tr><tr><td>Scottish Gaelic \u2020</td><td>Celtic</td><td>73 / 781</td><td>-</td><td>73</td><td>58</td><td>36</td><td colspan=\"3\">40 38.03 42.73</td><td>4.85</td></tr><tr><td>Serbo-Croatian</td><td>Slavic</td><td>24419 / 840799</td><td>6746</td><td colspan=\"6\">964 100 971 954 16.75 89.84</td><td>9.64</td></tr><tr><td>Slovak</td><td>Slavic</td><td>1046 / 14796</td><td>1046</td><td>631</td><td colspan=\"3\">93 622 622</td><td colspan=\"2\">0.48 88.21</td><td>1.55</td></tr><tr><td>Slovene</td><td>Slavic</td><td>2535 / 60110</td><td>2007</td><td colspan=\"4\">769 100 746 762</td><td colspan=\"2\">1.19 88.90</td><td>4.95</td></tr><tr><td>Sorani Kurdish</td><td>Iranian</td><td>274 / 22990</td><td>263</td><td>197</td><td colspan=\"6\">74 198 199 67.89 94.76 15.21</td></tr><tr><td>Spanish</td><td>Romance</td><td>5460 / 382955</td><td>4621</td><td>906</td><td colspan=\"5\">99 902 922 11.34 98.43</td><td>5.13</td></tr><tr><td>Swedish</td><td>Germanic</td><td>10553 / 78411</td><td>6511</td><td>962</td><td colspan=\"3\">99 956 962</td><td colspan=\"2\">0.36 81.82</td><td>0.79</td></tr><tr><td>Turkish</td><td>Turkic</td><td>3579 / 275460</td><td>2934</td><td>834</td><td colspan=\"3\">99 852 840</td><td colspan=\"2\">0.22 98.30</td><td>0.99</td></tr><tr><td>Ukrainian</td><td>Slavic</td><td>1493 / 20904</td><td>1490</td><td>722</td><td 
colspan=\"3\">98 744 729</td><td colspan=\"2\">1.89 84.75</td><td>5.19</td></tr><tr><td>Urdu</td><td>Indo-Aryan</td><td>182 / 12572</td><td>182</td><td>111</td><td colspan=\"3\">55 101 106</td><td colspan=\"2\">8.01 95.93</td><td>8.10</td></tr><tr><td>Welsh</td><td>Celtic</td><td>183 / 10641</td><td>183</td><td>183</td><td>76</td><td>80</td><td>78</td><td colspan=\"2\">1.98 96.90</td><td>7.31</td></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"text": "Total number of lemmata and forms available for sampling, and number of distinct lemmata present in each data condition in Task 1. For almost all languages, these were spread across 10000,1000, and 100 forms in the High, Medium, and Low conditions, respectively, and 1000 forms in each Dev and Test set. For \u2020-marked languages, there was not enough total data to support these numbers. Bengali had 4423 forms in the High condition, and Dev and Test sets of 100 forms each. Haida had 6840 forms in the High condition and Dev and Test sets of 100 forms. Scottish Gaelic had no High condition, a Medium condition of 681 forms, and Dev and Test sets of 50 forms each. The three last columns indicate how many inflected forms have undergone changes in a prefix (Pr), a change in a suffix (Su), or a stem-internal change (Ap) versus the given lemma form.",
"content": "<table><tr><td>Language Name</td><td>ADJ</td><td>N</td><td>V</td></tr><tr><td>Albanian</td><td>-</td><td>10-20</td><td>123</td></tr><tr><td>Arabic</td><td colspan=\"2\">40-48 12-36</td><td>61-115</td></tr><tr><td>Armenian</td><td colspan=\"2\">17-34 17-34</td><td>154-155</td></tr><tr><td>Basque</td><td>-</td><td>-</td><td>112-810</td></tr><tr><td>Bengali</td><td>-</td><td>9-12</td><td>51</td></tr><tr><td>Bulgarian</td><td>30</td><td>4-8</td><td>-</td></tr><tr><td>Catalan</td><td>-</td><td>-</td><td>50-53</td></tr><tr><td>Czech</td><td colspan=\"2\">25-35 14</td><td>30</td></tr><tr><td>Danish</td><td>-</td><td>6</td><td>8</td></tr><tr><td>Dutch</td><td>3-9</td><td>-</td><td>16</td></tr><tr><td>English</td><td>-</td><td>-</td><td>7</td></tr><tr><td>Estonian</td><td>-</td><td>30</td><td>79</td></tr><tr><td>Faroese</td><td>17</td><td>8-16</td><td>12</td></tr><tr><td>Finnish</td><td>28</td><td>13-28</td><td>141</td></tr><tr><td>French</td><td>-</td><td>-</td><td>49</td></tr><tr><td>Georgian</td><td>19</td><td>19</td><td>-</td></tr><tr><td>German</td><td>-</td><td>4-8</td><td>29</td></tr><tr><td>Haida</td><td>-</td><td>-</td><td>41-176</td></tr><tr><td>Hebrew</td><td>-</td><td>30</td><td>23-28</td></tr><tr><td>Hindi</td><td>-</td><td>-</td><td>219</td></tr><tr><td>Hungarian</td><td>-</td><td>17-34</td><td>-</td></tr><tr><td>Icelandic</td><td>-</td><td>8-16</td><td>28</td></tr><tr><td>Irish</td><td>13</td><td>7-13</td><td>65</td></tr><tr><td>Italian</td><td>-</td><td>-</td><td>47-51</td></tr><tr><td>Khaling</td><td>-</td><td>-</td><td>45-382</td></tr><tr><td>Kurmanji Kurdish</td><td>1-2</td><td>1-14</td><td>83</td></tr><tr><td>Latin</td><td colspan=\"2\">18-31 8-12</td><td>99</td></tr><tr><td>Latvian</td><td colspan=\"2\">20-24 7-14</td><td>49-50</td></tr><tr><td>Lithuanian</td><td colspan=\"2\">28-76 7-14</td><td>63</td></tr><tr><td>Lower 
Sorbian</td><td>33</td><td>18</td><td>21</td></tr><tr><td>Macedonian</td><td>16</td><td>5-11</td><td>20-29</td></tr><tr><td>Navajo</td><td>-</td><td>8</td><td>6-50</td></tr><tr><td>Northern Sami</td><td>13</td><td>13</td><td>45-54</td></tr><tr><td colspan=\"2\">Norwegian Bokm\u00e5l 2-5</td><td>1-3</td><td>3-9</td></tr><tr><td colspan=\"2\">Norwegian Nynorsk 1-5</td><td>1-3</td><td>8</td></tr><tr><td>Persian</td><td>-</td><td>-</td><td>140</td></tr><tr><td>Polish</td><td>28</td><td>7-14</td><td>47</td></tr><tr><td>Portuguese</td><td>-</td><td>-</td><td>74-76</td></tr><tr><td>Quechua</td><td>256</td><td>256</td><td>41</td></tr><tr><td>Romanian</td><td>8-16</td><td>5-6</td><td>37</td></tr><tr><td>Russian</td><td colspan=\"2\">26-30 6-14</td><td>15-16</td></tr><tr><td>Scottish Gaelic</td><td>12</td><td>-</td><td>8</td></tr><tr><td>Serbo-Croatian</td><td>1-43</td><td>2-14</td><td>63</td></tr><tr><td>Slovak</td><td>27</td><td>6-12</td><td>-</td></tr><tr><td>Slovene</td><td>53</td><td>6-18</td><td>22</td></tr><tr><td>Sorani Kurdish</td><td>1-15</td><td>1-28</td><td>95-186</td></tr><tr><td>Spanish</td><td>-</td><td>-</td><td>70</td></tr><tr><td>Swedish</td><td>5-15</td><td>4-8</td><td>11</td></tr><tr><td>Turkish</td><td>72</td><td colspan=\"2\">12-108 120</td></tr><tr><td>Ukrainian</td><td>26</td><td>7-14</td><td>17-24</td></tr><tr><td>Urdu</td><td>-</td><td>6</td><td>219</td></tr><tr><td>Welsh</td><td>-</td><td>-</td><td>20-65</td></tr></table>"
},
"TABREF6": {
"html": null,
"num": null,
"type_str": "table",
"text": "Features of the various submitted systems.",
"content": "<table/>"
},
"TABREF7": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td/><td>High</td><td>Medium</td><td>Low</td></tr><tr><td>UE-LMU-1 CLUZH-7 CLUZH-6 CLUZH-2</td><td>95.32/0.10 95.12/0.10 95.12/0.10 94.95/0.10</td><td>81.02/0.41 82.80/0.34 82.80/0.34 81.80/0.37</td><td>-/-50.61/1.29 50.61/1.29 46.82/1.38</td></tr><tr><td>LMU-2</td><td>94.70/0.11</td><td>82.64/0.35</td><td>46.59/1.56</td></tr><tr><td>LMU-1</td><td>94.70/0.11</td><td>82.64/0.35</td><td>45.29/1.62</td></tr><tr><td>CLUZH-5</td><td>94.69/0.11</td><td>81.00/0.39</td><td>48.24/1.48</td></tr><tr><td>CLUZH-1</td><td>94.47/0.12</td><td>80.88/0.39</td><td>45.99/1.43</td></tr><tr><td>SU-RUG-1</td><td>93.56/0.14</td><td>-/-</td><td>-/-</td></tr><tr><td>CU-1</td><td>92.97/0.17</td><td>77.60/0.50</td><td>45.74/1.62</td></tr><tr><td>UTNII-1</td><td>91.46/0.17</td><td>65.06/0.73</td><td>1.28/5.71</td></tr><tr><td>CLUZH-4</td><td>89.53/0.23</td><td>80.33/0.41</td><td>48.53/1.52</td></tr><tr><td colspan=\"2\">IIT(BHU)-1 89.38/0.22</td><td>50.73/1.69</td><td>13.88/4.54</td></tr><tr><td>CLUZH-3</td><td>89.10/0.24</td><td>79.57/0.44</td><td>47.95/1.55</td></tr><tr><td>UF-1</td><td>87.33/0.27</td><td>68.82/0.78</td><td>27.46/2.70</td></tr><tr><td>CMU-1 \u2020</td><td>86.56/0.28</td><td>68.00/0.86 \u2021</td><td>-/-</td></tr><tr><td>ISI-1</td><td>74.01/0.78</td><td>54.47/1.39</td><td>26.00/2.43</td></tr><tr><td>EHU-1</td><td colspan=\"3\">64.38/0.72 \u2021 38.50/1.70 \u2021 3.50/3.23 \u2021</td></tr><tr><td>UE-LMU-2 \u2020</td><td>-/-</td><td>82.37/0.39</td><td>-/-</td></tr><tr><td>IIT(BHU)-2</td><td>-/-</td><td>55.46/1.78</td><td>14.27/4.33</td></tr><tr><td>UA-3 \u2020</td><td>-/-</td><td>-/-</td><td>57.70/1.34 \u2021</td></tr><tr><td>UA-4 \u2020</td><td>-/-</td><td>-/-</td><td>57.52/1.36 \u2021</td></tr><tr><td>UA-1</td><td>-/-</td><td>-/-</td><td>54.22/1.66 \u2021</td></tr><tr><td>UA-2</td><td>-/-</td><td>-/-</td><td>42.85/2.23 
\u2021</td></tr><tr><td>baseline</td><td>77.81/0.50</td><td>64.70/0.90</td><td>37.90/2.15</td></tr><tr><td>oracle-fc</td><td>99.99/*</td><td>97.76/*</td><td>70.84/*</td></tr><tr><td>oracle-e</td><td>98.25/*</td><td>92.10/*</td><td>64.56/*</td></tr></table>"
},
"TABREF8": {
"html": null,
"num": null,
"type_str": "table",
"text": "88.52/0.22 82.02/0.38 67.76/0.75 LMU-1 87.40/0.24 77.02/0.47 54.74/1.22 CU-1 67.77/0.75 60.94/1.03 47.89/1.67 baseline 76.87/0.51 65.84/0.83 50.14/1.28",
"content": "<table><tr><td>High</td><td>Medium</td><td>Low</td></tr><tr><td>LMU-2 oracle-e 94.11/*</td><td>88.70/*</td><td>75.84/*</td></tr><tr><td colspan=\"3\">Table 8: Sub-task 2 results: Per-form accuracy (in %age points) and average Levenshtein distance from the correct form (in characters).</td></tr><tr><td colspan=\"3\">accuracy of each system by resource condition,</td></tr><tr><td colspan=\"3\">for each of the sub-tasks. The table reflects the</td></tr><tr><td colspan=\"3\">fact that some teams submitted more than one sys-</td></tr><tr><td colspan=\"3\">tem (e.g. LMU-1 &amp; LMU-2 in the table). Learn-</td></tr><tr><td colspan=\"3\">ing curves for each language across conditions are</td></tr><tr><td>shown in</td><td/><td/></tr></table>"
},
"TABREF9": {
"html": null,
"num": null,
"type_str": "table",
"text": "Chakrabarty and Utpal Garain. 2017. ISI at the SIGMORPHON 2017 shared task on morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 66-70, Vancouver. Association for Computational Linguistics. Murat Saraclar, and Andreas Stolcke. 2007. Analysis of morph-based speech recognition and the modeling of out-ofvocabulary words across languages. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 380-387. Association for Computational Linguistics. Harald Baayen. 2009. Words and paradigms bit by bit: An information-theoretic approach to the processing of inflection and derivation. pages 214-253. Oxford University Press, Oxford. Fabio Montermini and Olivier Bonami. 2013. Stem spaces and predictibility in verbal inflection. Lingue e Linguaggio, 12:171-190. Association for Computational Linguistics. Beno\u00eet Sagot and G\u00e9raldine Walther. 2011. Noncanonical inflection : data, formalisation and complexity measures. In Systems and Frameworks in Computational Morphology, volume 100, pages 23-45, Zurich, Switzerland. Springer. Christo Kirov, and David Yarowsky. 2016. Remote elicitation of inflectional paradigms to seed morphological analysis in lowresource languages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association (ELRA).",
"content": "<table><tr><td>task for morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computa-tional Research in Phonetics, Phonology, and Mor-phology, pages 41-48, Berlin, Germany. Associa-tion for Computational Linguistics. Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2014. Semi-supervised learning of morphological paradigms and lexicons. In Proceedings of the 14th Conference of the European Chapter of the Asso-ciation for Computational Linguistics, pages 569-578, Gothenburg, Sweden. Association for Compu-tational Linguistics. Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2015. Paradigm classification in supervised learning of morphology. In Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL, pages 1024-1029, Denver, CO. Association for Computational Linguistics. I\u00f1aki Alegria, Izaskun Etxeberria, Mans Hulden, and Montserrat Maritxalar. 2009. Porting Basque mor-phological grammars to foma, an open-source tool. In International Workshop on Finite-State Methods and Natural Language Processing, pages 105-113. Springer. I\u00f1aki Alegria and Izaskun Etxeberria. 2016. EHU at the SIGMORPHON 2016 shared task. A sim-ple proposal: Grapheme-to-phoneme for inflection. In Proceedings of the 2016 Meeting of SIGMOR-PHON, Berlin, Germany. Association for Computa-tional Linguistics. Inbal Arnon and Michael Ramscar. 2012. Granular-ity and the acquisition of grammatical gender: How order-of-acquisition affects what gets learned. Cog-nition, 122:292-305. R. Harald Baayen, Peter Hendrix, and Michael Ram-scar. 2013. Sidestepping the combinatorial explo-sion: Towards a processing model based on discrim-inative learning. Language and Speech, 56(3):329-34. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben-gio. 2015. Neural machine translation by jointly learning to align and translate. In Internationoal Conference on Learning Representations, volume abs/1409.0473. 
Toms Bergmanis, Katharina Kann, Hinrich Sch\u00fctze, and Sharon Goldwater. 2017. Training data aug-mentation for low-resource morphological inflec-tion. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Rein-flection, pages 31-39, Vancouver. Association for Computational Linguistics. Varjokallio, Ebru Arisoy, Markus Dreyer and Jason Eisner. 2009. Graphical models over multiple strings. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 101-110, Singapore. Association for Computational Linguistics. Markus Dreyer, Jason Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1080-1089, Honolulu, Hawaii. Association for Computational Linguistics. Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computa-tional Linguistics: Human Language Technologies, pages 1185-1195, Atlanta, Georgia. Association for Computational Linguistics. Christopher Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing word lattice transla-tion. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 1012-1020, Columbus, Ohio. Association for Computational Linguistics. Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection genera-tion using character sequence to sequence learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa-tional Linguistics: Human Language Technologies, pages 634-643, San Diego, California. Association for Computational Linguistics. Raphael Finkel and Gregory Stump. 2007. Princi-pal parts and morphological typology. Morphology, 17(1):39-75. 
Martin Haspelmath, Matthew Dryer, David Gil, and Bernard Comrie. 2005. The world atlas of language structures (WALS). Petar Milin, Victor Kuperman, Aleksandar Kosti\u0107, and R. Thomas M\u00fcller, Ryan Cotterell, Alexander Fraser, and Hinrich Sch\u00fctze. 2015. Joint lemmatization and morphological tagging with Lemming. In Proceed-ings of the 2015 Conference on Empirical Meth-ods in Natural Language Processing, pages 2268-2274, Lisbon, Portugal. Association for Computa-tional Linguistics. Karthik Narasimhan, Damianos Karakos, Richard Schwartz, Stavros Tsakalidis, and Regina Barzilay. 2014. Morphological segmentation for keyword spotting. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 880-885, Doha, Qatar. Association for Computational Linguistics. Garrett Nicolai, Colin Cherry, and Grzegorz Kondrak. 2015. Inflection generation as discriminative string transduction. In Proceedings of the 2015 Confer-ence of the North American Chapter of the Associ-ation for Computational Linguistics: Human Lan-guage Technologies, pages 922-931, Denver, Col-orado. Association for Computational Linguistics. Garrett Nicolai, Bradley Hauer, Mohammad Motallebi, Saeed Najafi, and Grzegorz Kondrak. 2017. If you can't beat them, join them: The university of Alberta system description. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Mor-phological Reinflection, pages 79-84, Vancouver. Association for Computational Linguistics. Robert\u00d6stling and Johannes Bjerva. 2017. SU-RUG at the CoNLL-SIGMORPHON 2017 shared task: Morphological inflection with attentional sequence-to-sequence models. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Mor-phological Reinflection, pages 110-113, Vancouver. Association for Computational Linguistics. Vito Pirrelli, Marcello Ferro, and Claudia Marzi. 2015. Computational complexity of abstractive morphol-ogy. pages 141-166. Oxford University Press, Ox-ford. 
Fermin Moscoso del Prado Mart\u00edn, Aleksandar Kosti\u0107, and R. Harald Baayen. 2004. Putting the bits to-gether: An information-theoretical perspective on morphological processing. Cognition, 94:1-18. Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neu-ral context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language tion for universal morphological reinflection. In ter sequence-to-sequence model with global atten-Qile Zhu, Yanjun Li, and Xiaolin Li. 2017. Charac-supervised labeled sequence transduction. for Computational Linguistics. 1: Long Papers), Vancouver, Canada. Association Association for Computational Linguistics (Volume Proceedings of the 55th Annual Meeting of the In space variational encoder-decoders for semi-Chunting Zhou and Graham Neubig. 2017b. Multi-couver. Association for Computational Linguistics. sal Morphological Reinflection, pages 58-65, Van-CoNLL SIGMORPHON 2017 Shared Task: Univer-ational encoder-decoders. In Proceedings of the logical inflection generation with multi-space vari-Chunting Zhou and Graham Neubig. 2017a. Morpho-Lexis and Grammar, Belgrade. Proceedings of the 29th International Conference on icon and a POS tagger for Kurmanji Kurdish. In Fast development of basic NLP tools: Towards a lex-G\u00e9raldine Walther, Beno\u00eet Sagot, and Kar\u00ebn Fort. 2010. sources Association (ELRA). (at LREC), Valetta, Malta. European Language Re-Lexical Resources for Less-Resourced Languages SaLTMiL Workshop on Creation and Use of Basic periments on Sorani Kurdish. In Proceedings of the guage: General methodology and preliminary ex-ing a large-scale lexicon for a less-resourced lan-G\u00e9raldine Walther and Beno\u00eet Sagot. 2010. Develop-Paris, September 2013. Workshop on Sino-Tibetan Languages of Sichuan, Khaling verbal morphology. Presentation at the 3rd Sagot. 2013. 
Uncovering the inner architecture of G\u00e9raldine Walther, Guillaume Jacques, and Beno\u00eet sociation for Computational Linguistics. Morphologically-Rich Languages, pages 1-12. As-HLT 2010 First Workshop on Statistical Parsing of how and whither. In Proceedings of the NAACL of morphologically rich languages (SPMRL) what, hbein, and Lamia Tounsi. 2010. Statistical parsing Candito, Jennifer Foster, Yannick Versley, Ines Re-Reut Tsarfaty, Djam\u00e9 Seddah, Yoav Goldberg, Marie Linguistics. 680, Beijing, China. Association for Computational Processing (Volume 2: Short Papers), pages 674-ternational Joint Conference on Natural Language ation for Computational Linguistics and the 7th In-ceedings of the 53rd Annual Meeting of the Associ-feature schema for inflectional morphology. In Pro-and Roger Que. 2015b. A language-independent John Sylak-Glassman, John Sylak-Glassman, Christo Kirov, David Yarowsky,</td><td>Farrell Ackerman, James P. Blevins, and Robert Mal-ouf. 2009. Parts and wholes: Patterns of related-ness in complex mophological systems and why they matter. In James P. Blevins and Juliette Blevins, ed-itors, Analogy in grammar: Form and acquisition, pages 54-82. Oxford University Press, Oxford. Farrell Ackerman and Robert Malouf. 2013. Morpho-logical organization: The low conditional entropy conjecture. Language, 89:429-464. Roee Aharoni and Yoav Goldberg. 2017. Morphologi-cal inflection generation with hard monotonic atten-tion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol-ume 1: Long Papers), Vancouver, Canada. Associa-tion for Computational Linguistics. Roee Aharoni, Yoav Goldberg, and Yonatan Belinkov. 2016. Improving sequence to sequence learning for morphological inflection generation: The BIU-MIT systems for the SIGMORPHON 2016 shared Abhisek Abhisek Chakrabarty, Arun Onkar Pandit, and Utpal Garain. 2017. Context sensitive lemmatization us-ing two successive bidirectional gated recurrent net-works. 
In Proceedings of the 55th Annual Meet-ing of the Association for Computational Linguis-tics (Volume 1: Long Papers), Vancouver, Canada. Association for Computational Linguistics. Grzegorz Chrupa\u0142a, Georgiana Dinu, and Josef van Genabith. 2008. Learning morphology with Mor-fette. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco. Junyoung Chung, \u00c7 aglar G\u00fcl\u00e7ehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model-ing. CoRR, abs/1412.3555. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016a. The SIGMORPHON 2016 shared task-morphological reinflection. In Proceedings of the 2016 Meeting of SIGMORPHON, Berlin, Germany. Association for Computational Linguistics. Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2014. Stochastic contextual edit distance and probabilistic FSTs. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Vol-ume 2: Short Papers), pages 625-630, Baltimore, Maryland. Association for Computational Linguis-tics. Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2015. Modeling word forms using latent underlying morphs and phonology. Transactions of the Associ-ation for Computational Linguistics, 3:433-447. Ryan Cotterell, Hinrich Sch\u00fctze, and Jason Eisner. 2016b. Morphological smoothing and extrapolation of word embeddings. In Proceedings of the 54th An-nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1651-1660, Berlin, Germany. Association for Computa-tional Linguistics. Ryan Cotterell, John Sylak-Glassman, and Christo Kirov. 2017. Neural graphical models over strings for principal parts morphological paradigm comple-tion. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computa-tional Linguistics (Volume 2: Short Papers), pages 759-765, Valencia, Spain. Association for Compu-tational Linguistics. Mathias Creutz, Teemu Hirsim\u00e4ki, Mikko Kurimo, Antti Puurula, Janne Pylkk\u00f6nen, Vesa Siivola, Matti Technologies, pages 623-633, San Diego, Califor-nia. Geoffrey Sampson. 1985. Writing Systems: A Linguis-tic Introduction. Stanford University Press. Sunita Sarawagi and William W. Cohen. 2004. Semi-Markov conditional random fields for information extraction. In Advances in Neural Information Pro-cessing Systems, pages 1185-1192. graph-based lattice dependency parser for joint morphological segmentation and syntactic analysis. Transactions of the Association for Computational Linguistics, 3:359-373. Hajime Senuma and Akiko Aizawa. 2017. Seq2seq for morphological reinflection: When deep learning fails. In Proceedings of the CoNLL SIGMORPHON flection, pages 100-109, Vancouver. Association for Computational Linguistics. Scott Seyfarth, Farrell Ackerman, and Robert Malouf. nual Meeting of the Berkeley Linguistics Society, pages 480-494. Miikka Silfverberg, Adam Wiemerslage, Ling Liu, and Lingshuang Jack Mao. 2017. Data augmentation for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Univer-Akhilesh Sudhakar and Anil Kumar Singh. 2017. Ex-periments on morphological reinflection: CoNLL-2017 shared task. In Proceedings of the CoNLL SIG-ciation for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net-works. In Advances in Neural Information Process-John Sylak-Glassman, Christo Kirov, Matt Post, Roger Que, and David Yarowsky. 2015a. A universal feature schema for rich morphological annotation and fine-grained cross-lingual part-of-speech tag-ging. 
In Cerstin Mahlow and Michael Piotrowski, tems and Frameworks for Computational Morphol-ogy (SFCM), Communications in Computer and In-formation Science, pages 72-93. Springer, Berlin. editors, Proceedings of the 4th Workshop on Sys-Canada. ing Systems, pages 3104-3112, Montreal, Quebec, logical Reinflection, pages 71-78, Vancouver. Asso-MORPHON 2017 Shared Task: Universal Morpho-couver. Association for Computational Linguistics. sal Morphological Reinflection, pages 90-99, Van-logical learning. In Proceedings of the Fortieth An-2014. Implicative organization facilitates morpho-2017 Shared Task: Universal Morphological Rein-Wolfgang Seeker and\u00d6zlem \u00c7 etino\u01e7lu. 2015. A Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 85-89, Vancouver. Association for Computa-tional Linguistics.</td></tr></table>"
},
"TABREF13": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Sub-task 1 High Condition Part 1.</td></tr></table>"
},
"TABREF14": {
"html": null,
"num": null,
"type_str": "table",
"text": "Sub-task 1 High Condition Part 2. .24/80.66 85.40/0.28/82.44 84.90/0.30/81.34 -Urdu 99.40/0.01/94.34 \u2021 97.90/0.03/81.13 96.50/0.05/84.91 98.20/0.03/84.91 -Welsh 98.00/0.04/97.44 \u2021 99.00/0.03/98.72 \u2021 69.00/0.52/65.38 62.00/0.88/57.69 58.00/0.66/52.56",
"content": "<table><tr><td/><td>UF-1</td><td>CMU-1</td><td>baseline</td><td>ISI-1</td><td>EHU-1</td></tr><tr><td>Albanian</td><td>92.90/0.17/87.53</td><td colspan=\"3\">91.30/0.13/78.59 78.90/0.66/68.83 79.10/0.95/70.46</td><td>-</td></tr><tr><td>Arabic</td><td>84.90/0.68/82.79</td><td colspan=\"3\">85.90/0.49/83.87 50.70/1.45/50.18 60.20/1.33/59.57</td><td>-</td></tr><tr><td>Armenian</td><td>93.50/0.11/92.90</td><td colspan=\"3\">82.30/0.34/80.93 87.20/0.27/86.14 89.50/0.25/88.91</td><td>-</td></tr><tr><td>Basque Bengali Bulgarian</td><td colspan=\"4\">5.00/3.38/4.55 96.00/0.08/94.12 \u2021 99.00/0.05/98.53 \u2021 81.00/0.26/83.82 76.00/0.63/76.47 0.00/6.13/0.00 96.00/0.12/86.36 \u2021 97.00/0.11/90.91 \u2021 92.70/0.12/90.46 86.70/0.24/83.20 88.80/0.18/86.83 80.90/0.40/78.49</td><td>31.00/1.66/9.09 --</td></tr><tr><td>Catalan</td><td>93.40/0.14/92.22</td><td colspan=\"3\">96.50/0.09/95.50 95.50/0.11/94.68 92.00/0.21/90.31</td><td>-</td></tr><tr><td>Czech</td><td>87.20/0.23/85.53</td><td colspan=\"3\">81.90/0.32/79.65 89.60/0.22/88.71 83.00/0.34/81.41</td><td>-</td></tr><tr><td>Danish</td><td>88.60/0.23/87.63</td><td colspan=\"3\">85.40/0.23/85.09 87.80/0.21/86.59 82.30/0.26/80.58</td><td>-</td></tr><tr><td>Dutch</td><td>90.80/0.20/90.10</td><td colspan=\"3\">88.90/0.22/88.21 87.00/0.22/86.10 90.20/0.21/89.43</td><td>-</td></tr><tr><td>English</td><td>93.90/0.10/93.79</td><td colspan=\"3\">94.60/0.10/94.51 94.70/0.09/94.61 93.30/0.11/93.18</td><td>-</td></tr><tr><td>Estonian</td><td>94.80/0.12/92.72</td><td colspan=\"3\">93.70/0.15/90.81 78.00/0.39/71.40 75.50/0.78/69.67</td><td>-</td></tr><tr><td>Faroese</td><td>77.00/0.48/75.45</td><td colspan=\"3\">74.50/0.51/72.84 74.10/0.57/73.30 64.30/0.74/62.39</td><td>-</td></tr><tr><td>Finnish</td><td>74.40/0.46/74.24</td><td colspan=\"3\">74.90/0.57/74.54 78.20/0.35/77.99 52.40/2.00/51.93</td><td>-</td></tr><tr><td>French</td><td>79.10/0.41/78.69</td><td colspan=\"3\">82.40/0.33/82.08 81.50/0.36/81.12 
80.80/0.43/80.49</td><td>-</td></tr><tr><td>Georgian</td><td>81.60/0.24/80.66</td><td colspan=\"3\">92.30/0.13/91.99 93.80/0.11/93.59 94.40/0.13/95.08</td><td>-</td></tr><tr><td>German</td><td>81.40/0.55/81.02</td><td colspan=\"3\">78.70/0.44/78.22 82.40/0.59/81.95 79.40/0.42/78.84</td><td>-</td></tr><tr><td>Haida Hebrew</td><td colspan=\"5\">-97.50/0.03/95.36 54.00/0.56/39.96 71.10/0.53/60.26 77.60/0.28/63.36 97.00/0.06/92.11 \u2021 97.00/0.12/92.11 \u2021 67.00/0.77/50.00 57.00/1.36/39.47 93.10/0.09/88.74</td></tr><tr><td>Hindi Hungarian</td><td colspan=\"4\">99.10/0.01/96.47 99.60/0.01/98.43 \u2021 93.50/0.09/82.35 99.10/0.01/98.43 78.70/0.41/78.42 73.60/0.58/73.34 68.50/0.66/67.95 77.50/0.46/77.07</td><td>--</td></tr><tr><td>Icelandic</td><td>78.00/0.42/76.53</td><td colspan=\"3\">68.10/0.63/66.41 76.30/0.50/75.19 78.20/0.45/77.20</td><td>-</td></tr><tr><td>Irish</td><td>73.10/0.87/70.66</td><td colspan=\"3\">71.90/0.80/69.88 53.00/1.13/52.97 60.60/1.60/60.69</td><td>-</td></tr><tr><td>Italian</td><td>92.40/0.19/92.09</td><td colspan=\"3\">92.60/0.14/92.09 76.90/0.72/76.39 91.10/0.23/90.60</td><td>-</td></tr><tr><td>Khaling</td><td>96.60/0.05/93.36</td><td colspan=\"3\">94.80/0.10/88.63 53.70/0.87/33.18 56.00/1.55/33.89</td><td>-</td></tr><tr><td>Kurmanji</td><td>93.60/0.08/93.32</td><td colspan=\"3\">83.80/0.34/83.40 93.00/0.08/93.01 93.00/0.09/93.11</td><td>-</td></tr><tr><td>Latin</td><td>54.70/0.72/54.39</td><td colspan=\"4\">66.20/0.60/65.71 47.60/0.81/48.25 21.10/2.49/21.90 70.10/0.69/69.42</td></tr><tr><td>Latvian</td><td>87.40/0.26/86.69</td><td colspan=\"3\">87.50/0.25/86.90 92.10/0.20/91.88 77.90/0.50/76.95</td><td>-</td></tr><tr><td>Lithuanian</td><td>79.20/0.35/70.42</td><td colspan=\"3\">81.60/0.33/73.40 64.20/0.48/53.83 43.50/1.33/31.14</td><td>-</td></tr><tr><td>Lower Sorbian</td><td>94.00/0.12/92.06</td><td colspan=\"3\">91.30/0.14/86.83 86.40/0.25/83.97 85.00/0.29/80.63</td><td>-</td></tr><tr><td>Macedonian</td><td>89.90/0.20/89.43</td><td 
colspan=\"3\">86.10/0.25/85.41 92.10/0.17/91.75 90.80/0.14/90.49</td><td>-</td></tr><tr><td>Navajo</td><td>68.50/0.85/57.64</td><td colspan=\"3\">84.20/0.34/78.00 37.80/2.12/31.16 23.30/2.68/18.33</td><td>-</td></tr><tr><td>Northern Sami</td><td>88.40/0.22/85.35</td><td colspan=\"4\">85.80/0.33/82.12 64.00/0.73/58.47 28.50/2.26/23.79 76.30/0.56/71.91</td></tr><tr><td colspan=\"2\">Norwegian Bokmal Norwegian Nynorsk 80.30/0.34/80.09 87.60/0.22/87.10</td><td colspan=\"4\">-73.80/0.49/73.34 76.90/0.41/76.50 66.40/0.57/65.51 73.30/0.48/72.03 82.00/0.29/81.40 91.00/0.17/90.54 83.60/0.26/83.01</td></tr><tr><td>Persian</td><td>96.30/0.08/89.14</td><td colspan=\"3\">98.70/0.02/95.13 79.00/0.56/61.80 71.30/0.96/51.69</td><td>-</td></tr><tr><td>Polish</td><td>79.50/0.50/78.77</td><td colspan=\"3\">78.10/0.50/77.28 88.00/0.28/87.47 83.40/0.43/82.80</td><td>-</td></tr><tr><td>Portuguese</td><td>92.50/0.11/91.56</td><td colspan=\"3\">96.40/0.06/96.07 98.10/0.04/97.92 96.10/0.07/95.84</td><td>-</td></tr><tr><td>Quechua</td><td>97.10/0.05/94.49</td><td colspan=\"3\">95.50/0.11/92.02 95.40/0.09/94.68 93.10/0.16/92.78</td><td>-</td></tr><tr><td>Romanian</td><td>77.30/0.63/73.91</td><td colspan=\"3\">78.60/0.51/75.48 79.80/0.54/77.17 77.30/0.71/75.00</td><td>-</td></tr><tr><td>Russian</td><td>79.90/0.62/79.59</td><td colspan=\"3\">76.40/0.65/76.33 85.70/0.47/85.51 86.10/0.43/85.92</td><td>-</td></tr><tr><td>Scottish Gaelic</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Serbo-Croatian</td><td>78.60/0.46/77.67</td><td colspan=\"3\">79.60/0.41/78.93 84.60/0.32/84.17 77.40/0.63/76.73</td><td>-</td></tr><tr><td>Slovak</td><td>90.60/0.15/86.33</td><td colspan=\"3\">87.90/0.18/81.83 83.30/0.30/78.14 82.40/0.34/76.53</td><td>-</td></tr><tr><td>Slovene</td><td>92.90/0.13/91.86</td><td colspan=\"3\">87.80/0.20/85.17 88.90/0.19/86.48 85.70/0.24/83.20</td><td>-</td></tr><tr><td>Sorani Spanish</td><td>85.80/0.20/54.77 86.80/0.26/86.01</td><td colspan=\"3\">87.80/0.16/59.80 
63.60/0.68/43.72 54.90/1.13/41.71 92.80/0.12/92.19 90.70/0.21/90.24 88.40/0.36/87.53</td><td>--</td></tr><tr><td>Swedish</td><td>85.50/0.25/85.14</td><td colspan=\"3\">80.60/0.39/80.25 85.40/0.24/84.93 82.70/0.30/82.02</td><td>-</td></tr><tr><td>Turkish</td><td>94.80/0.11/94.17</td><td colspan=\"3\">90.30/0.32/89.40 72.60/0.75/69.29 73.50/0.87/70.48</td><td>-</td></tr><tr><td>Ukrainian</td><td>88.80/0.20/86.15</td><td>84.00/0</td><td/><td/><td/></tr></table>"
},
"TABREF15": {
"html": null,
"num": null,
"type_str": "table",
"text": "Sub-task 1 High Condition Part 3. .21/55.29 87.40/0.42/60.78 80.60/0.54/50.20 85.20/0.56/57.25 --Hungarian 42.30/1.54/41.29 62.80/0.68/62.24 40.60/1.33/39.73 47.70/1.05/47.51 --Icelandic 60.40/0.85/58.29 41.60/1.16/39.71 53.90/1.01/51.72 0.00/8.09/0.00 --Irish 44.00/1.57/43.45 26.10/3.11/25.42 39.50/2.19/39.08 20.10/3.71/19.26 --Italian 71.60/0.84/70.94 70.30/0.80/69.44 75.70/0.57/75.00 58.50/1.26/57.48 .19/89.46 19.20/1.72/19.42 88.00/0.26/88.73 81.60/0.62/82.78 --Latin 37.60/1.17/37.99 22.10/1.72/21.90 14.50/2.90/14.92 22.10/1.72/21.90 -39.70/1.29/39.58 Latvian 85.70/0.25/85.17 60.20/0.88/58.01 62.70/0.83/61.47 57.70/0.95/56.06 Norwegian Bokmal 80.70/0.31/79.78 78.30/0.33/77.20 78.00/0.32/76.77 48.70/0.74/47.20 --Norwegian Nynorsk 61.10/0.67/59.74 56.40/0.71/55.06 59.60/0.70/58..48/68.59 61.50/0.70/55.83 65.20/0.69/59.40 30.80/1.47/23.73 --Urdu 87.50/0.21/36.79 88.00/0.47/33.02 80.20/0.46/27.36 88.00/0.47/33.02 --Welsh 56.00/1.11/50.00 83.00/0.29/80.77 32.00/1.97/29.49 74.00/0.40/69.23 -44.00/0.80/38.46",
"content": "<table><tr><td/><td>baseline</td><td>IIT(BHU)-2</td><td>ISI-1</td><td>IIT(BHU)-1</td><td>CMU-1</td><td>EHU-1</td></tr><tr><td>Albanian</td><td colspan=\"4\">66.30/1.12/55.01 41.50/1.88/26.29 53.90/2.29/35.23 32.20/2.44/18.16</td><td>-</td><td>-</td></tr><tr><td>Arabic</td><td colspan=\"4\">42.10/1.77/41.88 37.60/2.20/33.94 34.60/2.36/33.81 37.60/2.20/33.94</td><td>-</td><td>-</td></tr><tr><td>Armenian</td><td colspan=\"4\">72.70/0.53/70.95 68.10/0.78/67.96 58.20/1.16/57.54 58.40/1.14/58.43</td><td>-</td><td>-</td></tr><tr><td>Basque</td><td>2.00/5.57/0.00</td><td>66.00/0.75/31.82 \u2020</td><td>1.00/2.95/0.00</td><td>66.00/0.75/31.82 \u2020</td><td>-</td><td>6.00/5.10/0.00</td></tr><tr><td>Bengali</td><td colspan=\"4\">76.00/0.33/77.94 91.00/0.19/86.76 53.00/1.28/48.53 91.00/0.19/86.76</td><td>-</td><td>-</td></tr><tr><td>Bulgarian</td><td colspan=\"4\">72.80/0.47/69.22 55.50/0.98/52.02 62.30/0.86/58.06 54.80/1.13/50.27</td><td>-</td><td>-</td></tr><tr><td>Catalan</td><td colspan=\"4\">84.30/0.37/81.31 79.70/0.38/75.85 78.10/0.57/74.35 79.70/0.38/75.85</td><td>-</td><td>-</td></tr><tr><td>Czech</td><td colspan=\"4\">81.50/0.41/79.53 52.90/1.41/50.35 67.60/0.74/64.12 38.60/1.74/35.65</td><td>-</td><td>-</td></tr><tr><td>Danish</td><td>78.10/0.35/76.07</td><td>0.50/5.53/0.58</td><td colspan=\"2\">76.80/0.35/74.91 69.20/0.47/66.71</td><td>-</td><td>-</td></tr><tr><td>Dutch</td><td colspan=\"4\">73.20/0.41/71.30 74.60/0.44/73.30 60.50/0.64/57.95 66.50/0.62/64.74</td><td>-</td><td>-</td></tr><tr><td>English</td><td>90.90/0.16/90.74</td><td>0.00/8.74/0.00</td><td colspan=\"2\">91.10/0.13/90.95 87.90/0.20/87.69</td><td>-</td><td>-</td></tr><tr><td>Estonian</td><td colspan=\"4\">62.90/0.77/51.65 45.70/1.63/43.33 34.80/2.05/24.96 39.90/1.68/34.84</td><td>-</td><td>-</td></tr><tr><td>Faroese</td><td colspan=\"3\">60.60/0.85/58.86 40.40/1.20/37.61 52.00/1.00/49.77</td><td>0.00/8.13/0.00</td><td>-</td><td>-</td></tr><tr><td>Finnish</td><td 
colspan=\"4\">43.70/1.24/43.20 21.50/2.75/21.40 20.50/3.59/20.18 15.00/3.21/14.91</td><td>-</td><td>-</td></tr><tr><td>French</td><td colspan=\"4\">72.50/0.51/71.79 69.70/0.61/69.03 73.90/0.65/73.70 63.60/0.77/63.10</td><td>-</td><td>-</td></tr><tr><td>Georgian</td><td colspan=\"4\">92.00/0.20/92.11 87.70/0.32/87.41 91.10/0.24/91.30 87.70/0.32/87.41</td><td>-</td><td>-</td></tr><tr><td>German</td><td colspan=\"4\">72.10/0.72/71.37 57.30/0.93/56.74 62.50/0.81/61.83 30.10/1.49/29.88</td><td>-</td><td>-</td></tr><tr><td>Haida</td><td>56.00/1.31/26.32</td><td>0.00/17.48/0.00</td><td colspan=\"2\">28.00/4.09/13.16 83.00/0.47/63.16</td><td>-</td><td>-</td></tr><tr><td>Hebrew</td><td colspan=\"4\">37.50/0.98/23.62 65.80/0.51/50.33 46.60/1.13/31.35 55.90/0.69/38.19</td><td>-</td><td>48.40/0.76/35.76</td></tr><tr><td>Hindi</td><td colspan=\"5\">85.90/0-</td><td>-</td></tr><tr><td>Khaling</td><td>17.90/2.01/8.06</td><td>58.20/0.81/35.55</td><td>16.40/3.17/7.35</td><td>52.20/0.97/29.86</td><td>-</td><td>-</td></tr><tr><td>Kurmanji</td><td colspan=\"5\">89.10/0-</td><td>-</td></tr><tr><td>Lithuanian</td><td colspan=\"4\">52.20/0.70/40.06 37.60/1.34/25.20 20.70/2.24/13.15 33.70/1.34/22.54</td><td>-</td><td>-</td></tr><tr><td>Lower Sorbian</td><td colspan=\"3\">70.80/0.57/63.33 69.00/0.52/60.00 69.90/0.66/61.75</td><td>0.00/7.01/0.00</td><td>-</td><td>-</td></tr><tr><td>Macedonian</td><td colspan=\"4\">83.60/0.31/82.98 79.10/0.32/78.44 76.00/0.36/74.95 69.30/0.50/68.39</td><td>-</td><td>-</td></tr><tr><td>Navajo</td><td colspan=\"2\">33.50/2.37/25.46 19.30/2.78/11.81</td><td>14.40/3.60/9.98</td><td>19.90/2.82/12.63</td><td>-</td><td>-</td></tr><tr><td>Northern Sami</td><td colspan=\"2\">37.00/1.42/29.70 40.80/1.26/35.75</td><td>11.80/3.29/9.54</td><td>34.00/1.64/28.63</td><td>-</td><td>38.90/1.43/32.80</td></tr><tr><td/><td/><td/><td>22</td><td>0.00/8.68/0.00</td><td>-</td><td>54.00/0.80/52.67</td></tr><tr><td>Persian</td><td colspan=\"4\">62.30/1.18/36.33 57.00/1.46/25.84 
41.20/2.69/16.85 57.10/1.27/24.34</td><td>-</td><td>-</td></tr><tr><td>Polish</td><td colspan=\"4\">74.00/0.58/73.04 48.40/1.33/46.82 59.70/1.29/58.49 19.60/2.01/18.58</td><td>-</td><td>-</td></tr><tr><td>Portuguese</td><td colspan=\"4\">93.40/0.10/92.72 89.60/0.16/88.67 87.70/0.26/86.47 86.00/0.21/85.09</td><td>-</td><td>-</td></tr><tr><td>Quechua</td><td colspan=\"4\">70.30/1.52/59.89 93.00/0.28/87.83 \u2020 36.20/2.35/26.62 93.00/0.28/87.83 \u2020</td><td>-</td><td>-</td></tr><tr><td>Romanian</td><td colspan=\"4\">69.40/0.75/65.70 49.00/1.57/44.32 60.90/1.09/57.25 36.90/1.95/32.85</td><td>-</td><td>-</td></tr><tr><td>Russian</td><td colspan=\"4\">75.90/0.62/75.61 66.60/0.83/66.12 69.40/0.77/68.98 39.40/1.37/38.67</td><td>-</td><td>-</td></tr><tr><td>Scottish Gaelic</td><td colspan=\"5\">48.00/0.98/42.50 76.00/0.68/75.00 62.00/0.94/60.00 66.00/1.04/62.50 68.00/0.86/65.00</td><td>-</td></tr><tr><td>Serbo-Croatian</td><td colspan=\"4\">64.50/0.85/64.15 49.50/1.52/48.53 55.50/1.52/54.61 38.70/1.83/37.32</td><td>-</td><td>-</td></tr><tr><td>Slovak</td><td colspan=\"4\">72.30/0.50/63.83 63.70/0.60/52.41 69.90/0.57/59.00 52.80/0.82/40.68</td><td>-</td><td>-</td></tr><tr><td>Slovene</td><td colspan=\"4\">82.20/0.32/78.35 73.50/0.45/68.50 69.20/0.52/62.07 32.30/1.13/24.54</td><td>-</td><td>-</td></tr><tr><td>Sorani</td><td colspan=\"3\">51.70/1.06/31.66 57.50/0.95/18.09 23.00/2.38/16.58</td><td>46.50/1.31/8.54</td><td>-</td><td>-</td></tr><tr><td>Spanish</td><td colspan=\"4\">84.70/0.35/83.84 73.90/0.68/72.23 71.40/0.78/70.17 66.70/0.89/65.62</td><td>-</td><td>-</td></tr><tr><td>Swedish</td><td colspan=\"4\">75.70/0.43/74.95 70.00/0.49/69.13 73.00/0.44/71.93 47.70/1.00/46.78</td><td>-</td><td>-</td></tr><tr><td>Turkish</td><td>32.90/2.90/30.00</td><td>0.00/12.97/0.00</td><td colspan=\"2\">27.10/2.56/23.69 74.50/0.65/71.67</td><td>-</td><td>-</td></tr><tr><td>Ukrainian</td><td>72.80/0</td><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF16": {
"html": null,
"num": null,
"type_str": "table",
"text": "Sub-task 1 Medium Condition Part 3.",
"content": "<table><tr><td/><td>IIT(BHU)-1</td><td>UTNII-1</td><td>UA-3</td><td>UA-4</td><td>UA-1</td><td>UA-2</td><td>EHU-1</td></tr><tr><td>Albanian</td><td>0.00/10.23/0.00</td><td>0.30/7.21/0.54</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Arabic</td><td>0.80/5.66/0.96</td><td>0.20/6.59/0.12</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Armenian</td><td>0.00/9.17/0.00</td><td>0.00/6.39/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Basque</td><td>1.00/4.91/0.00</td><td>5.00/4.34/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Bengali</td><td>20.00/2.05/11.76</td><td>1.00/4.19/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Bulgarian</td><td>11.00/3.05/9.41</td><td>0.40/5.69/0.13</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Catalan</td><td>24.70/1.76/20.19</td><td>0.40/5.54/0.27</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Czech</td><td>15.60/3.27/14.24</td><td>0.00/7.07/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Danish</td><td>46.10/0.95/42.77</td><td>1.20/5.25/0.81</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Dutch</td><td>28.20/1.42/26.25</td><td>0.50/5.01/0.44</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>English Estonian</td><td>73.00/0.46/72.63 3.50/4.58/2.60</td><td>1.10/3.98/1.12 0.10/6.41/0.00</td><td colspan=\"3\">90.60/0.14/90.44 90.30/0.14/90.13 90.60/0.14/90.44 
---</td><td>--</td><td>--</td></tr><tr><td>Faroese</td><td>9.30/2.62/7.95</td><td>0.20/5.27/0.11</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Finnish</td><td>0.70/7.41/0.71</td><td>0.00/9.98/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>French</td><td>0.00/8.80/0.00</td><td>0.10/5.76/0.11</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Georgian</td><td>0.00/8.82/0.00</td><td>1.20/4.23/1.26</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>German Haida</td><td>25.60/1.75/25.52 7.00/4.89/0.00</td><td>0.30/6.42/0.31 1.00/6.76/0.00</td><td colspan=\"3\">66.80/0.73/65.87 66.20/0.75/65.46 66.00/0.72/64.94 ---</td><td>--</td><td>--</td></tr><tr><td>Hebrew</td><td>7.00/2.36/1.99</td><td>0.60/3.68/0.44</td><td>-</td><td>-</td><td>-</td><td>-</td><td>1.00/3.15/0.44</td></tr><tr><td>Hindi</td><td>33.40/2.34/9.02</td><td>1.20/5.26/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Hungarian</td><td>0.00/10.07/0.00</td><td>0.00/7.62/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Icelandic</td><td>13.00/2.53/12.01</td><td>0.60/5.52/0.67</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Irish</td><td>0.60/7.26/0.67</td><td>0.00/7.95/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Italian</td><td>0.00/10.02/0.00</td><td>0.00/7.06/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Khaling</td><td>0.00/7.32/0.00</td><td>0.90/4.38/0.24</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Kurmanji</td><td>50.20/1.27/50.84</td><td>0.20/5.23/0.10</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Latin</td><td>0.00/9.44/0.00</td><td>0.00/7.29/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Latvian</td><td>16.60/2.18/16.02</td><td>0.00/6.57/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Lithuanian</td><td>3.00/3.46/1.56</t
d><td>0.50/6.06/0.31</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Lower Sorbian</td><td>17.60/1.82/12.86</td><td>1.20/4.48/0.63</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Macedonian</td><td>0.00/8.68/0.00</td><td>0.20/4.98/0.21</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Navajo</td><td>0.40/5.61/0.20</td><td>0.70/5.82/0.41</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Northern Sami</td><td>2.50/4.12/2.15</td><td>0.50/5.86/0.27</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Norwegian Bokmal</td><td>52.60/0.71/51.08</td><td>0.30/4.91/0.22</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Norwegian Nynorsk</td><td>23.90/1.41/22.52</td><td>0.70/4.25/0.44</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Persian</td><td>10.50/4.17/1.50</td><td>0.50/5.84/0.00</td><td>29.40/3.59/8.24</td><td>29.00/3.63/7.49</td><td>4.70/5.26/1.12</td><td>28.90/3.61/7.87</td><td>-</td></tr><tr><td>Polish</td><td>12.10/2.59/11.68</td><td>0.20/6.47/0.21</td><td>45.30/1.42/44.27</td><td>45.90/1.42/44.90</td><td>45.20/1.44/44.16</td><td>-</td><td>-</td></tr><tr><td>Portuguese</td><td>0.00/9.31/0.00</td><td>1.00/4.53/0.92</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Quechua</td><td>0.70/5.34/0.38</td><td>0.60/5.56/0.19</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Romanian</td><td>5.80/3.85/4.71</td><td>0.00/6.37/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Russian</td><td>15.70/2.61/15.71</td><td>0.00/7.27/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Scottish Gaelic</td><td>48.00/1.54/42.50</td><td>32.00/2.38/25.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Serbo-Croatian</td><td>4.10/4.55/3.98</td><td>0.10/7.04/0.10</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Slovak</td><td>12.50/1.75/7.56</td><td>0.60/4.65/0.16</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Slovene</td><td>21.00/1.54/14.30</td><td>2.00/5.03/1.05</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Sorani</td><td>0.00/7.64/0.00</td><td>0.90/4.72/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Spanish</td><td>20.10/2.72/19.52</td><td>0.20/5.71/0.22</td><td>56.40/0.84/55.31</td><td>56.20/0.85/55.10</td><td>64.60/0.75/62.91</td><td>56.80/0.84/55.75</td><td>-</td></tr><tr><td>Swedish</td><td>40.60/1.08/39.19</td><td>0.20/6.13/0.21</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Turkish</td><td>0.00/11.45/0.00</td><td>0.10/8.44/0.00</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Ukrainian</td><td>12.20/2.04/8.23</td><td>0.70/4.99/0.27</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Urdu</td><td>31.20/2.48/2.83</td><td>4.70/4.38/0.94</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Welsh</td><td>0.00/8.77/0.00</td><td>2.00/4.48/1.28</td><td>-</td><td>-</td><td>-</td><td>-</td><td>6.00/3.32/5.13</td></tr></table>"
},
"TABREF17": {
"html": null,
"num": null,
"type_str": "table",
"text": "Sub-task 1 Low Condition Part 3.",
"content": "<table/>"
}
}
}
}