| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:31:39.479902Z" |
| }, |
| "title": "The IMS-CUBoulder System for the SIGMORPHON 2020 Shared Task on Unsupervised Morphological Paradigm Completion", |
| "authors": [ |
| { |
| "first": "Manuel", |
| "middle": [], |
| "last": "Mager", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Stuttgart", |
| "location": {} |
| }, |
| "email": "manuel.mager@ims.uni-stuttgart.de" |
| }, |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Colorado", |
| "location": { |
| "settlement": "Boulder" |
| } |
| }, |
| "email": "katharina.kann@colorado.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "In this paper, we present the systems of the University of Stuttgart IMS and the University of Colorado Boulder (IMS-CUBoulder) for SIGMORPHON 2020 Task 2 on unsupervised morphological paradigm completion (Kann et al., 2020). The task consists of generating the morphological paradigms of a set of lemmas, given only the lemmas themselves and unlabeled text. Our proposed system is a modified version of the baseline introduced together with the task. In particular, we experiment with substituting the inflection generation component with an LSTM sequence-to-sequence model and an LSTM pointer-generator network. Our pointer-generator system obtains the best score of all seven submitted systems on average over all languages, and outperforms the official baseline, which was best overall, on Bulgarian and Kannada.",
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "In this paper, we present the systems of the University of Stuttgart IMS and the University of Colorado Boulder (IMS-CUBoulder) for SIGMORPHON 2020 Task 2 on unsupervised morphological paradigm completion (Kann et al., 2020). The task consists of generating the morphological paradigms of a set of lemmas, given only the lemmas themselves and unlabeled text. Our proposed system is a modified version of the baseline introduced together with the task. In particular, we experiment with substituting the inflection generation component with an LSTM sequence-to-sequence model and an LSTM pointer-generator network. Our pointer-generator system obtains the best score of all seven submitted systems on average over all languages, and outperforms the official baseline, which was best overall, on Bulgarian and Kannada.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "In recent years, a lot of progress has been made on the task of morphological inflection, which consists of generating an inflected word, given a lemma and a list of morphological features (Kann and Sch\u00fctze, 2017; Makarov and Clematide, 2018; Cotterell et al., 2016, 2017, 2018; McCarthy et al., 2019). The systems developed for this task learn to model inflection in morphologically complex languages in a supervised fashion.",
| "cite_spans": [ |
| { |
| "start": 189, |
| "end": 213, |
| "text": "(Kann and Sch\u00fctze, 2017;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 214, |
| "end": 242, |
| "text": "Makarov and Clematide, 2018;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 243, |
| "end": 265, |
| "text": "Cotterell et al., 2016", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 266, |
| "end": 290, |
| "text": "Cotterell et al., , 2017", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 291, |
| "end": 315, |
| "text": "Cotterell et al., , 2018", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 316, |
| "end": 338, |
| "text": "McCarthy et al., 2019)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "However, not all languages have annotated data available. For the 2018 SIGMORPHON shared task (Cotterell et al., 2018), data for 103 unique languages was provided. Even this highly multilingual dataset covers just 1.61% of the 6359 languages 1 that exist in the world (Lewis, 2009). The task of unsupervised morphological paradigm completion (Jin et al., 2020) aims at generating inflections -more specifically, all inflected forms, i.e., the entire paradigms, of given lemmas -without any explicit morphological information during training. A system able to solve this problem could easily generate morphological resources for most of the world's languages. This motivates us to participate in the SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion.",
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 118, |
| "text": "(Cotterell et al., 2018)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 279, |
| "end": 292, |
| "text": "(Lewis, 2009)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 351, |
| "end": 369, |
| "text": "(Jin et al., 2020)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The task, however, is challenging: As the number of inflected forms per lemma is unknown a priori, an unsupervised morphological paradigm completion system needs to detect the paradigm size from raw text. Since the names of the morphological features expressed in a language are not known without supervision, a system should mark which inflections correspond to the same morphological features across lemmas, but needs to do so without using feature names, cf. Figure 1. For the shared task, no external resources such as pretrained models, annotated data, or even additional monolingual text can be used. The same holds true for multilingual models.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "We submit two systems, which are both modifications of the official shared task baseline. The latter is a pipeline system, which performs four steps: edit tree retrieval, additional lemma retrieval, paradigm size discovery, and inflection generation (Jin et al., 2020) . We experiment with substituting the original generation component, which is either a simple non-neural system (Cotterell et al., 2017) or a transducer-based hard-attention model (Makarov and Clematide, 2018) , with an LSTM encoder-decoder architecture with attention (Bahdanau et al., 2015) -IMS-CUB1 -and a pointer-generator network (See et al., 2017) -IMS-CUB2. IMS-CUB2 achieves the best results of all submitted systems, outperforming the second-best system by 2.07% macro-averaged best-match accuracy (BMAcc; Jin et al., 2020), averaged over all languages. However, we underperform the baseline system, which performs 1.03% BMAcc better than IMS-CUB2. Looking at individual languages, IMS-CUB2 obtains the best results overall for Bulgarian and Kannada.",
| "cite_spans": [ |
| { |
| "start": 250, |
| "end": 268, |
| "text": "(Jin et al., 2020)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 381, |
| "end": 405, |
| "text": "(Cotterell et al., 2017)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 449, |
| "end": 478, |
| "text": "(Makarov and Clematide, 2018)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 601, |
| "end": 619, |
| "text": "(See et al., 2017)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The findings from our work on the shared task are as follows: i) the copy capabilities of a pointergenerator network are useful in this setup; and ii) unsupervised morphological paradigm completion is a challenging task: no submitted system outperforms the baselines.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Unsupervised methods have been shown to be effective for morphological surface segmentation. LINGUISTICA (Goldsmith, 2001 ) and MORFESSOR (Creutz, 2003; Creutz and Lagus, 2007; Poon et al., 2009) are two unsupervised systems for the task.",
| "cite_spans": [ |
| { |
| "start": 101, |
| "end": 117, |
| "text": "(Goldsmith, 2001", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 134, |
| "end": 148, |
| "text": "(Creutz, 2003;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 149, |
| "end": 172, |
| "text": "Creutz and Lagus, 2007;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 173, |
| "end": 191, |
| "text": "Poon et al., 2009)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In the realm of morphological generation, Yarowsky and Wicentowski (2000) worked on a task which was similar to unsupervised morphological paradigm completion, but required additional knowledge (e.g., a list of morphemes). Dreyer and Eisner (2011) used a set of seed paradigms to train a paradigm completion model. Ahlberg et al. (2015) and Hulden et al. (2014) also relied on information about the paradigms in the language. Erdmann et al. (2020) proposed a system for a task similar to this shared task.", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 73, |
| "text": "Yarowsky and Wicentowski (2000)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 315, |
| "end": 336, |
| "text": "Ahlberg et al. (2015)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 341, |
| "end": 361, |
| "text": "Hulden et al. (2014)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Learning to generate morphological paradigms in a fully supervised way is the more common approach. Methods include Durrett and DeNero (2013) , Nicolai et al. (2015) , and Kann and Sch\u00fctze (2018) . Supervised morphological inflection has further gained popularity through previous SIGMORPHON and CoNLL-SIGMORPHON shared tasks on the topic (Cotterell et al., 2016, 2017, 2018; McCarthy et al., 2019). The systems proposed for these shared tasks have a special relevance for our work, as we investigate the performance of morphological inflection components based on Kann and Sch\u00fctze (2016a,b) and Sharma et al. (2018) within a pipeline for unsupervised morphological paradigm completion.",
| "cite_spans": [ |
| { |
| "start": 116, |
| "end": 141, |
| "text": "Durrett and DeNero (2013)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 144, |
| "end": 165, |
| "text": "Nicolai et al. (2015)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 172, |
| "end": 195, |
| "text": "Kann and Sch\u00fctze (2018)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 340, |
| "end": 363, |
| "text": "(Cotterell et al., 2016", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 364, |
| "end": 389, |
| "text": "(Cotterell et al., , 2017", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 390, |
| "end": 415, |
| "text": "(Cotterell et al., , 2018", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 416, |
| "end": 438, |
| "text": "McCarthy et al., 2019)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 606, |
| "end": 632, |
| "text": "Kann and Sch\u00fctze (2016a,b)", |
| "ref_id": null |
| }, |
| { |
| "start": 637, |
| "end": 657, |
| "text": "Sharma et al. (2018)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this section, we introduce our pipeline system for unsupervised morphological paradigm completion. First, we describe the baseline system, since we rely on some of its components. Then, we describe our morphological inflection models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For the initial steps of our pipeline, we employ the first three components of the baseline (Jin et al., 2020) , cf. Figure 2 , which we describe in this subsection. We use the official implementation. 2", |
| "cite_spans": [ |
| { |
| "start": 92, |
| "end": 110, |
| "text": "(Jin et al., 2020)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 117, |
| "end": 125, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Shared Task Baseline", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Retrieval of relevant edit trees. This component (cf. Figure 2 .1) identifies words in the monolingual corpus that could belong to a given lemma's paradigm by computing the longest common substring between the lemma and all words. Then, the transformation from a lemma to each word potentially from its paradigm is represented by edit trees (Chrupa\u0142a, 2008) . Edit trees whose frequencies are below a threshold are discarded.",
| "cite_spans": [ |
| { |
| "start": 341, |
| "end": 357, |
| "text": "(Chrupa\u0142a, 2008)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 54, |
| "end": 62, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Shared Task Baseline", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Retrieval of additional lemmas. To increase the confidence that retrieved edit trees represent valid inflections, more lemmas are needed (cf. Figure 2 .2). To find those, the second component of the system applies edit trees to potential lemmas in the corpus. If enough potential inflected forms are found in the corpus, a lemma is considered valid.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 142, |
| "end": 151, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Shared Task Baseline", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Paradigm size discovery. Now the system needs to find a mapping between edit trees and paradigms (cf. Figure 2 .3). This is done based on two assumptions: that for each lemma a maximum of one edit tree per paradigm slot can be found, and that each edit tree only realizes one paradigm slot for all lemmas. In addition, the similarity of potential slots is measured. With these elements, similar potential slots are merged until the final paradigm size for a language is determined.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 102, |
| "end": 110, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Shared Task Baseline", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Generation. Now that the system has a set of lemmas and corresponding potential inflected forms, the baseline employs a morphological inflection component, which learns to generate inflections from lemmas and a slot indicator, and generates missing forms (cf. Figure 2 .4). We experiment with substituting this final component.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 261, |
| "end": 269, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Shared Task Baseline", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In the remainder of this paper, we will refer to the original baselines with the non-neural system from Cotterell et al. (2017) and the inflection model from Makarov and Clematide (2018) as BL-1 and BL-2, respectively.", |
| "cite_spans": [ |
| { |
| "start": 104, |
| "end": 127, |
| "text": "Cotterell et al. (2017)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 158, |
| "end": 186, |
| "text": "Makarov and Clematide (2018)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Shared Task Baseline", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We use an LSTM encoder-decoder model with attention (Bahdanau et al., 2015 ) for our first system, IMS-CUB1, since it has been shown to obtain high performance on morphological inflection (Kann and Sch\u00fctze, 2016a) . This model takes two inputs: a sequence of characters and a sequence of morphological features. It then generates the sequence of characters of the inflected form. For the input, we simply concatenate the paradigm slot number and all characters.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 74, |
| "text": "(Bahdanau et al., 2015", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 188, |
| "end": 213, |
| "text": "(Kann and Sch\u00fctze, 2016a)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSTM Encoder-Decoder", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "For IMS-CUB2, we use a pointer-generator network (See et al., 2017) . 3 We expect this system to perform better than IMS-CUB1, given the pointergenerator's better performance on morphological inflection in the low-resource setting (Sharma et al., 2018) . A pointer-generator network is a hybrid between an attention-based sequence-to-sequence model (Bahdanau et al., 2015 ) and a pointer network (Vinyals et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 67, |
| "text": "(See et al., 2017)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 70, |
| "end": 71, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 231, |
| "end": 252, |
| "text": "(Sharma et al., 2018)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 349, |
| "end": 371, |
| "text": "(Bahdanau et al., 2015", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 396, |
| "end": 418, |
| "text": "(Vinyals et al., 2015)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pointer-Generator Network", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "The standard pointer-generator network consists of a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) encoder and a unidirectional LSTM decoder with a copy mechanism. Here, we follow Sharma et al. (2018) and use two separate encoders: one for the lemma and one for the morphological tags. We use the following implementation: https://github.com/abhishek0318/conll-sigmorphon-2018. The decoder then computes the probability distribution of the output at each time step as a weighted sum of the probability distribution over the output vocabulary and the attention distribution over the input characters. The weights can be seen as the probability to generate or copy, respectively, and are computed by a feedforward network, given the last decoder hidden state. For details, we refer the reader to Sharma et al. (2018) .",
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 106, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 188, |
| "end": 209, |
| "text": "(Sharma et al., 2018)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 595, |
| "end": 596, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 805, |
| "end": 825, |
| "text": "Sharma et al. (2018)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pointer-Generator Network", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "4 Experimental Setup", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pointer-Generator Network", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "The shared task organizers provide data for five development languages, for which development sets with gold solutions are given. Those languages -Maltese, Persian, Portuguese, Russian, and Swedish -are not taken into account for the final evaluation. The test languages, in contrast, are intended only for system evaluation and do not come with development sets. For those languages -Basque, Bulgarian, English, Finnish, German, Kannada, Navajo, Spanish, and Turkish -only a list of lemmas and a monolingual Bible (McCarthy et al., 2020) are given.",
| "cite_spans": [ |
| { |
| "start": 510, |
| "end": 539, |
| "text": "Bible (McCarthy et al., 2020)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Languages", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The official evaluation metric of the shared task is BMAcc (Jin et al., 2020) . Gold solutions are obtained from UniMorph (Kirov et al., 2018) . Two versions of BMAcc exist: micro-averaged BMAcc and macro-averaged BMAcc. In this paper, we only report macro-averaged BMAcc, the official shared task metric.", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 77, |
| "text": "(Jin et al., 2020)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 122, |
| "end": 142, |
| "text": "(Kirov et al., 2018)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metric", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "During the development of our morphological generation systems, we use regular accuracy, the standard evaluation metric for morphological inflection (Cotterell et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 173, |
| "text": "(Cotterell et al., 2016)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metric", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "Morphological inflection data. We use the first three components of the baseline model, i.e., the ones performing edit tree retrieval, additional lemma retrieval, and paradigm size discovery, to create training and development data for our inflection models. Those datasets consist of lemma-inflection pairs found in the raw text, together with a number indicating the (predicted) paradigm slot, and are described in Table 1 .",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 416, |
| "end": 423, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Morphological Inflection Component", |
| "sec_num": "4.3" |
| }, |
| { |
"text": "The test set for our morphological inflection systems consists of the lemma-paradigm slot pairs not found in the corpus.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Inflection Component", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Hyperparameters. For IMS-CUB1, we use an embedding size of 300, a hidden layer of size 100, a batch size of 20, Adadelta (Zeiler, 2012) for optimization, and a learning rate of 1. For each language, we train a system for 100 epochs, using early stopping with a patience of 10 epochs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Inflection Component", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "For IMS-CUB2, we follow two different approaches. The first is to use a single hyperparameter configuration for all languages (IMS-CUB2-S). The second consists of using a variable setup depending on the training set size (IMS-CUB2-V). For IMS-CUB2-S, we use an embedding size of 300, a hidden layer size of 100, a dropout rate of 0.3, and train for 60 epochs with an early-stopping patience of 10 epochs. We further use an Adam (Kingma and Ba, 2014) optimizer with an initial learning rate of 0.001.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Inflection Component", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "For IMS-CUB2-V, we use the following hyperparameters for training set size T :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Inflection Component", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 T < 101: an embedding size of 100, a dropout coefficient of 0.5, 300 epochs of training, and an early-stopping patience of 100;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Inflection Component", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 100 < T < 501: an embedding size of 100, a dropout coefficient of 0.5, 80 training epochs, and an early-stopping patience of 20;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Inflection Component", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 500 < T : the same hyperparameters as for IMS-CUB2-S.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Inflection Component", |
| "sec_num": "4.3" |
| }, |
| { |
"text": "For IMS-CUB2, we select the better-performing system (between IMS-CUB2-S and IMS-CUB2-V) as our final model. The models are evaluated on the morphological inflection task development set using accuracy. All scores are shown in Table 2 . Table 3 shows the official test set results for IMS-CUB1 and IMS-CUB2, compared to the official baselines and all other submitted systems.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 225, |
| "end": 232, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Morphological Inflection Component", |
| "sec_num": "4.3" |
| }, |
| { |
"text": "Our best system, IMS-CUB2, achieves the highest scores of all submitted systems (i.e., excluding the baselines), outperforming the second-best submission by 2.07% BMAcc. However, BL-1 and BL-2 outperform IMS-CUB2 by 1.03% and 0.3%, respectively. Looking at the results for individual languages, IMS-CUB2 obtains the highest performance overall for Bulgarian (0.42% above the second-best system) and Kannada (0.53% above the second-best system). Comparing our two submissions, IMS-CUB1 underperforms IMS-CUB2 by 3.6%, showing that vanilla sequence-to-sequence models are not optimally suited for the task. We hypothesize that this could be due to the amount or the diversity of the generated morphological inflection training data.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.4" |
| }, |
| { |
"text": "As our systems rely on the output of the first three steps of the baseline, only a few training examples were available for Basque and Navajo: 85 and 17, respectively. Probably at least partly because finding patterns in the raw text corpus is difficult for these languages, all systems obtain their lowest scores on them. However, even though Finnish has 2306 training instances for morphological inflection, our best system surprisingly only reaches 5.38% BMAcc. The same happens for Kannada and Turkish: the inflection training set is relatively large, but the overall performance on unsupervised morphological paradigm completion is low. In contrast, even though English has a relatively small training set (343 examples), the performance of IMS-CUB2 is highest for this language, with 66.20% BMAcc. We think that the quality of the generated inflection training set and the correctness of the predicted paradigm size are the main reasons behind these performance differences. Improving steps 1 to 3 of the overall pipeline thus seems important in order to achieve better results on unsupervised morphological paradigm completion in the future.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.4" |
| }, |
| { |
"text": "In this paper, we described the IMS-CUBoulder submission to the SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion. We explored two modifications of the official baseline system by substituting its inflection generation component with two alternative models. Thus, our final system performed four steps: edit tree retrieval, additional lemma retrieval, paradigm size discovery, and inflection generation. The last component was either an LSTM sequence-to-sequence model with attention (IMS-CUB1) or a pointer-generator network (IMS-CUB2). Although our systems could not outperform the official baselines on average, IMS-CUB2 was the best submitted system. It further obtained the overall highest performance for Bulgarian and Kannada.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The number of languages can vary depending on the classification schema used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
"text": "https://github.com/cai-lw/morpho-baseline",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Thanks to Arya McCarthy, Garrett Nicolai, and Mans Hulden for (co-)organizing this shared task, and to Huiming Jin, Liwei Cai, Chen Xia, and Yihui Peng for providing the baseline system! This project has benefited from financial support to MM by DAAD via a Doctoral Research Grant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Paradigm classification in supervised learning of morphology", |
| "authors": [ |
| { |
| "first": "Malin", |
| "middle": [], |
| "last": "Ahlberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Forsberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1024--1029", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/N15-1107" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
"raw_text": "Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2015. Paradigm classification in supervised learning of morphology. In NAACL-HLT, pages 1024-1029, Denver, Colorado. Association for Computational Linguistics.",
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "International Conference on Learning Representations (ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR).",
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Towards a machinelearning architecture for lexical functional grammar parsing", |
| "authors": [ |
| { |
| "first": "Grzegorz", |
| "middle": [], |
| "last": "Chrupa\u0142a", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Grzegorz Chrupa\u0142a. 2008. Towards a machine-learning architecture for lexical functional grammar parsing. Ph.D. thesis, Dublin City University.",
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The CoNLL-SIGMORPHON 2018 shared task: Universal morphological reinflection", |
| "authors": [ |
| { |
| "first": "Arya", |
| "middle": [ |
| "D" |
| ], |
| "last": "McCarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Mielke", |
| "suffix": "" |
| }, |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "Miikka", |
| "middle": [], |
| "last": "Silfverberg", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| }, |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection", |
| "volume": "", |
| "issue": "", |
| "pages": "1--27", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/K18-3001" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "McCarthy, Katharina Kann, Sebastian Mielke, Gar- rett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. The CoNLL- SIGMORPHON 2018 shared task: Universal mor- phological reinflection. In Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Univer- sal Morphological Reinflection, pages 1-27, Brus- sels. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Christo", |
| "middle": [], |
| "last": "Kirov", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Sylak-Glassman", |
| "suffix": "" |
| }, |
| { |
| "first": "G\u00e9raldine", |
| "middle": [], |
| "last": "Walther", |
| "suffix": "" |
| }, |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Vylomova", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Manaal", |
| "middle": [], |
| "last": "Faruqui", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the CoNLL SIGMORPHON 2017", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/K17-2001" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K\u00fcbler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLL-SIGMORPHON 2017 shared task: Univer- sal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Shared Task: Universal Morphological Reinflection, pages 1-30, Vancouver. Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shared Task: Universal Morphological Reinflection, pages 1-30, Vancouver. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The SIGMORPHON 2016 shared Task-Morphological reinflection", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Christo", |
| "middle": [], |
| "last": "Kirov", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Sylak-Glassman", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "10--22", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W16-2002" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared Task- Morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphol- ogy, pages 10-22, Berlin, Germany. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Unsupervised segmentation of words using prior distributions of morph length and frequency", |
| "authors": [ |
| { |
| "first": "Mathias", |
| "middle": [], |
| "last": "Creutz", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "280--287", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/1075096.1075132" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mathias Creutz. 2003. Unsupervised segmentation of words using prior distributions of morph length and frequency. In ACL, pages 280-287, Sapporo, Japan. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Unsupervised models for morpheme segmentation and morphology learning", |
| "authors": [ |
| { |
| "first": "Mathias", |
| "middle": [], |
| "last": "Creutz", |
| "suffix": "" |
| }, |
| { |
| "first": "Krista", |
| "middle": [], |
| "last": "Lagus", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "ACM TSLP", |
| "volume": "4", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphol- ogy learning. ACM TSLP, 4(1):3.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Discovering morphological paradigms from plain text using a Dirichlet process mixture model", |
| "authors": [ |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Dreyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "616--627", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Markus Dreyer and Jason Eisner. 2011. Discovering morphological paradigms from plain text using a Dirichlet process mixture model. In EMNLP, pages 616-627, Edinburgh, Scotland, UK. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Supervised learning of complete morphological paradigms", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Denero", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "1185--1195", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In NAACL-HLT, pages 1185-1195, Atlanta, Georgia. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The paradigm discovery problem", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Erdmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "" |
| }, |
| { |
| "first": "Shijie", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Nizar", |
| "middle": [], |
| "last": "Habash", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2005.01630" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander Erdmann, Micha Elsner, Shijie Wu, Ryan Cotterell, and Nizar Habash. 2020. The paradigm discovery problem. arXiv preprint arXiv:2005.01630.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Unsupervised learning of the morphology of a natural language", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Goldsmith", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Computational Linguistics", |
| "volume": "27", |
| "issue": "2", |
| "pages": "153--198", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27(2):153-198.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Semi-supervised learning of morphological paradigms and lexicons", |
| "authors": [ |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| }, |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Forsberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Malin", |
| "middle": [], |
| "last": "Ahlberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "569--578", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/E14-1060" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mans Hulden, Markus Forsberg, and Malin Ahlberg. 2014. Semi-supervised learning of morphological paradigms and lexicons. In EACL, pages 569- 578, Gothenburg, Sweden. Association for Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Unsupervised morphological paradigm completion", |
| "authors": [ |
| { |
| "first": "Huiming", |
| "middle": [], |
| "last": "Jin", |
| "suffix": "" |
| }, |
| { |
| "first": "Liwei", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Yihui", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Chen", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Arya", |
| "middle": [ |
| "D" |
| ], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Huiming Jin, Liwei Cai, Yihui Peng, Chen Xia, Arya D. McCarthy, and Katharina Kann. 2020. Unsuper- vised morphological paradigm completion. In Pro- ceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "The SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion", |
| "authors": [ |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| }, |
| { |
| "first": "Arya", |
| "middle": [ |
| "D" |
| ], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katharina Kann, Arya D. McCarthy, Garrett Nico- lai, and Mans Hulden. 2020. The SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion. In Proceedings of the 17th SIGMORPHON Workshop on Computational Re- search in Phonetics, Phonology, and Morphology. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection", |
| "authors": [ |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "14th SIGMORPHON Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "62--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016a. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In 14th SIGMORPHON Workshop, pages 62-70.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Singlemodel encoder-decoder with explicit morphological representation for reinflection", |
| "authors": [ |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "555--560", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P16-2090" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016b. Single- model encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 555-560, Berlin, Germany. Association for Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The LMU system for the CoNLL-SIGMORPHON 2017 shared task on universal morphological reinflection", |
| "authors": [ |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection", |
| "volume": "", |
| "issue": "", |
| "pages": "40--48", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/K17-2003" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2017. The LMU system for the CoNLL-SIGMORPHON 2017 shared task on universal morphological reinflection. In CoNLL SIGMORPHON 2017 Shared Task: Univer- sal Morphological Reinflection, pages 40-48, Van- couver. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Neural transductive learning and beyond: Morphological generation in the minimal-resource setting", |
| "authors": [ |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "3254--3264", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D18-1363" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2018. Neural transductive learning and beyond: Morphological generation in the minimal-resource setting. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 3254- 3264, Brussels, Belgium. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.6980" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "UniMorph 2.0: Universal morphology", |
| "authors": [ |
| { |
| "first": "Christo", |
| "middle": [], |
| "last": "Kirov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Sylak-Glassman", |
| "suffix": "" |
| }, |
| { |
| "first": "G\u00e9raldine", |
| "middle": [], |
| "last": "Walther", |
| "suffix": "" |
| }, |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Vylomova", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Manaal", |
| "middle": [], |
| "last": "Faruqui", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabrina", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mielke", |
| "suffix": "" |
| }, |
| { |
| "first": "Arya", |
| "middle": [], |
| "last": "Mc-Carthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christo Kirov, Ryan Cotterell, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sabrina J. Mielke, Arya Mc- Carthy, Sandra K\u00fcbler, David Yarowsky, Jason Eis- ner, and Mans Hulden. 2018. UniMorph 2.0: Uni- versal morphology. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. Eu- ropean Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Ethnologue: Languages of the world", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "Paul" |
| ], |
| "last": "Lewis", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M Paul Lewis. 2009. Ethnologue: Languages of the world. SIL international.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Imitation learning for neural morphological string transduction", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Makarov", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Clematide", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2877--2882", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D18-1314" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Makarov and Simon Clematide. 2018. Imita- tion learning for neural morphological string trans- duction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2877-2882, Brussels, Belgium. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection", |
| "authors": [ |
| { |
| "first": "Arya", |
| "middle": [ |
| "D" |
| ], |
| "last": "McCarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Vylomova", |
| "suffix": "" |
| }, |
| { |
| "first": "Shijie", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Chaitanya", |
| "middle": [], |
| "last": "Malaviya", |
| "suffix": "" |
| }, |
| { |
| "first": "Lawrence", |
| "middle": [], |
| "last": "Wolf-Sonkin", |
| "suffix": "" |
| }, |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "Christo", |
| "middle": [], |
| "last": "Kirov", |
| "suffix": "" |
| }, |
| { |
| "first": "Miikka", |
| "middle": [], |
| "last": "Silfverberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabrina", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mielke", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Heinz", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "229--244", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W19-4226" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Gar- rett Nicolai, Christo Kirov, Miikka Silfverberg, Sab- rina J. Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229- 244, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "The johns hopkins university bible corpus: 1600+ tongues for typological exploration", |
| "authors": [ |
| { |
| "first": "Arya", |
| "middle": [ |
| "D" |
| ], |
| "last": "McCarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Wicks", |
| "suffix": "" |
| }, |
| { |
| "first": "Dylan", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Mueller", |
| "suffix": "" |
| }, |
| { |
| "first": "Winston", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Oliver", |
| "middle": [], |
| "last": "Adams", |
| "suffix": "" |
| }, |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Post", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "2884--2892", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arya D. McCarthy, Rachel Wicks, Dylan Lewis, Aaron Mueller, Winston Wu, Oliver Adams, Gar- rett Nicolai, Matt Post, and David Yarowsky. 2020. The johns hopkins university bible corpus: 1600+ tongues for typological exploration. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 2884-2892, Marseille, France. European Language Resources Association.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Inflection generation as discriminative string transduction", |
| "authors": [ |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| }, |
| { |
| "first": "Grzegorz", |
| "middle": [], |
| "last": "Kondrak", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Denver, Colorado. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "922--931", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/N15-1093" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Garrett Nicolai, Colin Cherry, and Grzegorz Kondrak. 2015. Inflection generation as discriminative string transduction. In NAACL-HLT, pages 922-931, Den- ver, Colorado. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Unsupervised morphological segmentation with log-linear models", |
| "authors": [ |
| { |
| "first": "Hoifung", |
| "middle": [], |
| "last": "Poon", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "209--217", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. In NAACL-HLT, pages 209- 217. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Get to the point: Summarization with pointergenerator networks", |
| "authors": [ |
| { |
| "first": "Abigail", |
| "middle": [], |
| "last": "See", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "J" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1073--1083", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P17-1099" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In ACL, pages 1073-1083, Van- couver, Canada. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "IIT(BHU)-IIITH at CoNLL-SIGMORPHON 2018 shared task on universal morphological reinflection", |
| "authors": [ |
| { |
| "first": "Abhishek", |
| "middle": [], |
| "last": "Sharma", |
| "suffix": "" |
| }, |
| { |
| "first": "Ganesh", |
| "middle": [], |
| "last": "Katrapati", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipti Misra", |
| "middle": [], |
| "last": "Sharma", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection", |
| "volume": "", |
| "issue": "", |
| "pages": "105--111", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/K18-3013" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abhishek Sharma, Ganesh Katrapati, and Dipti Misra Sharma. 2018. IIT(BHU)-IIITH at CoNLL- SIGMORPHON 2018 shared task on universal mor- phological reinflection. In Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Uni- versal Morphological Reinflection, pages 105-111, Brussels. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Pointer networks", |
| "authors": [ |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Meire", |
| "middle": [], |
| "last": "Fortunato", |
| "suffix": "" |
| }, |
| { |
| "first": "Navdeep", |
| "middle": [], |
| "last": "Jaitly", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "2692--2700", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in neural in- formation processing systems, pages 2692-2700.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Minimally supervised morphological analysis by multimodal alignment", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Wicentowski", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "207--216", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/1075218.1075245" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Yarowsky and Richard Wicentowski. 2000. Min- imally supervised morphological analysis by multi- modal alignment. In ACL, pages 207-216, Hong Kong. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Adadelta: an adaptive learning rate method", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [ |
| "D" |
| ], |
| "last": "Zeiler", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1212.5701" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew D Zeiler. 2012. Adadelta: an adaptive learn- ing rate method. arXiv preprint arXiv:1212.5701.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Partial Portuguese development examples. The input is a list of lemmas, and the output is a list of all inflected forms of each lemma. In this example, unnamed paradigm slots correspond to the following UniMorph features: 1=V.PTCP;FEM;PL;PST, 2=V.PTCP;FEM;SG;PST, 3=V.PTCP;MASC;PL;PST, 4=V.PTCP;MASC;SG;PST.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "text": "The baseline system. This paper experiments with modifying the generation module. All components are described in \u00a73.1.", |
| "uris": null |
| }, |
| "TABREF1": { |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "text": "Number of instances retrieved by steps 1 to 3 in our pipeline, which are used for training and development of our inflection generation components. The test set contains the lemma and paradigm slot for forms that need to be generated.", |
| "content": "<table/>" |
| }, |
| "TABREF3": { |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "text": "", |
| "content": "<table><tr><td>: Accuracy of our morphological inflection com-</td></tr><tr><td>ponents on the development sets produced by the first</td></tr><tr><td>three steps in our pipeline. We list both development</td></tr><tr><td>and test languages.</td></tr></table>" |
| }, |
| "TABREF4": { |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "text": "Bulgarian 28.30 31.69 2.99 4.15 27.22 32.11 27.69 28.94 27.89 English 65.60 66.20 3.53 17.29 47.80 61.00 50.20 52.80 51.20", |
| "content": "<table><tr><td/><td>BL</td><td/><td colspan=\"2\">KU-CST</td><td>IMS-CUB</td><td/><td/><td>NYU-CUB</td><td/></tr><tr><td>Language</td><td>1</td><td>2</td><td>1</td><td>2</td><td>1</td><td>2</td><td>1</td><td>2</td><td>3</td></tr><tr><td>Basque</td><td>0.06</td><td colspan=\"2\">0.06 0.02</td><td>0.01</td><td colspan=\"2\">0.04 00.06</td><td>0.05</td><td>0.05</td><td>0.07</td></tr><tr><td>Finnish</td><td>05.33</td><td colspan=\"2\">5.50 0.39</td><td colspan=\"3\">2.08 04.90 05.38</td><td>5.36</td><td colspan=\"2\">5.47 05.35</td></tr><tr><td>German</td><td colspan=\"3\">28.35 29.00 0.70</td><td colspan=\"6\">4.98 24.60 28.35 27.30 27.35 27.35</td></tr><tr><td>Kannada</td><td colspan=\"3\">15.49 15.12 4.27</td><td colspan=\"6\">1.69 10.50 15.65 11.10 11.16 11.10</td></tr><tr><td>Navajo</td><td>3.23</td><td colspan=\"2\">3.27 0.13</td><td>0.20</td><td colspan=\"2\">0.33 01.17</td><td>0.40</td><td>0.43</td><td>0.43</td></tr><tr><td>Spanish</td><td colspan=\"9\">22.96 23.67 3.52 10.84 19.50 22.34 20.39 20.56 20.30</td></tr><tr><td>Turkish</td><td colspan=\"3\">14.21 15.53 0.11</td><td colspan=\"6\">0.71 13.54 14.73 14.88 15.39 15.13</td></tr><tr><td>Average</td><td colspan=\"9\">20.39 21.12 1.74 04.66 16.49 20.09 17.49 18.02 17.65</td></tr></table>" |
| }, |
| "TABREF5": { |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "text": "Final performance (macro-average BMAcc in percentages) of all systems on all test languages. Best scores overall are in bold, and best scores of submitted systems are underlined.", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |