{
"paper_id": "E17-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:53:38.213684Z"
},
"title": "Neural Multi-Source Morphological Reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CIS LMU Munich",
"location": {
"country": "Germany"
}
},
"email": "kann@cis.lmu.de"
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"country": "USA"
}
},
"email": "ryan.cotterell@jhu.edu"
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CIS LMU Munich",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We explore the task of multi-source morphological reinflection, which generalizes the standard, single-source version. The input consists of (i) a target tag and (ii) multiple pairs of source form and source tag for a lemma. The motivation is that it is beneficial to have access to more than one source form since different source forms can provide complementary information, e.g., different stems. We further present a novel extension to the encoder-decoder recurrent neural architecture, consisting of multiple encoders, to better solve the task. We show that our new architecture outperforms single-source reinflection models and publish our dataset for multi-source morphological reinflection to facilitate future research.",
"pdf_parse": {
"paper_id": "E17-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "We explore the task of multi-source morphological reinflection, which generalizes the standard, single-source version. The input consists of (i) a target tag and (ii) multiple pairs of source form and source tag for a lemma. The motivation is that it is beneficial to have access to more than one source form since different source forms can provide complementary information, e.g., different stems. We further present a novel extension to the encoder-decoder recurrent neural architecture, consisting of multiple encoders, to better solve the task. We show that our new architecture outperforms single-source reinflection models and publish our dataset for multi-source morphological reinflection to facilitate future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Morphologically rich languages still constitute a challenge for natural language processing (NLP). The increased data sparsity caused by highly inflected word forms in certain languages causes otherwise state-of-the-art systems to perform worse in standard tasks, e.g., parsing (Ballesteros et al., 2015) and machine translation (Bojar et al., 2016) . To create systems whose performance is not deterred by complex morphology, the development of NLP tools for the generation and analysis of morphological forms is crucial. Indeed, these considerations have motivated a great deal of recent work on the topic (Ahlberg et al., 2015; Dreyer, 2011; Nicolai et al., 2015) .",
"cite_spans": [
{
"start": 278,
"end": 304,
"text": "(Ballesteros et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 329,
"end": 349,
"text": "(Bojar et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 608,
"end": 630,
"text": "(Ahlberg et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 631,
"end": 644,
"text": "Dreyer, 2011;",
"ref_id": "BIBREF14"
},
{
"start": 645,
"end": 666,
"text": "Nicolai et al., 2015)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the area of generation, the most natural task is morphological inflection-finding an inflected form for a given target tag and lemma. An example for English is as follows: (trg:3rdSgPres, bring)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Past Ind Past Sbj Sg Pl Sg Pl Sg Pl 1 treffe treffen traf trafen tr\u00e4fe tr\u00e4fen 2 triffst trefft trafst traft tr\u00e4fest tr\u00e4fet 3 trifft treffen traf trafen tr\u00e4fe tr\u00e4fen Table 1 : The paradigm of the strong German verb TREFFEN, which exhibits an irregular ablaut pattern. Different parts of the paradigm make use of one of four bolded theme vowels: e, i, a or\u00e4. In a sense, the verbal paradigm is partitioned into subparadigms. To see why multi-source models could help in this case, starting only from the infinitive treffen makes it difficult to predict subjunctive form tr\u00e4fest, but the additional information of the fellow subjunctive form tr\u00e4fe makes the task easier.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 172,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Present Ind",
"sec_num": null
},
{
"text": "\u2192 brings. In this case, the 3rd person singular present tense of bring is generated. One generalization of inflection is morphological reinflection (MRI) (Cotterell et al., 2016a) , where we must produce an inflected form from a triple of target tag, source form and source tag. The inflection task is the special case where the source form is the lemma. As an example, we may again consider generating the English past tense form from the 3rd person singular present: (trg:3rdSgPres, brought, src:Past) \u2192 brings (where trg = \"target tag\" and src = \"source tag\"). As the starting point varies, MRI is more difficult than morphological inflection and exhibits more data sparsity. However, it is also more widely applicable since lexical resources are not always complete and, thus, the lemma is not always available. A more complex German example is given in Table 1 . In this work, we generalize the MRI task to a multi-source setup. Instead of using a single source form-tag pair, we use multiple source form-tag pairs. Our motivation is that (i) it is often beneficial to have access to more than one source form since different source forms can provide complementary information, e.g., different stems; and (ii) in many application scenarios, we will have encountered more than one form of a paradigm at the point when we want to generate a new form.",
"cite_spans": [
{
"start": 154,
"end": 179,
"text": "(Cotterell et al., 2016a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 858,
"end": 865,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Present Ind",
"sec_num": null
},
{
"text": "We will make the intuition that multiple source forms provide complementary information precise in the next section, but first return to the English verb bring. Generating the form brings from brought may be tricky-there is an irregular vowel shift. However, if we had a second form with the same theme vowel, e.g., bringing, the task would be much easier, i.e., (trg:3rdSgPres, form1:brought, src1:Past, form2:bringing, src2:Gerund). A multi-source approach clearly is advantageous for this case since mapping bringing to brings is regular even though the verb itself is irregular.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Present Ind",
"sec_num": null
},
{
"text": "The contributions of the paper are as follows. (i) We define the task of multi-source MRI, a generalization of single-source MRI. (ii) We show that a multi-source MRI system, implemented as a novel encoder-decoder, outperforms the top-performing system in the SIGMORPHON 2016 Shared Task on Morphological Reinflection on seven out of eight languages, when given additional source forms. (iii) We release our data to support the development of new systems for MRI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Present Ind",
"sec_num": null
},
{
"text": "Previous work on morphological reinflection has assumed a single source form, i.e., an input consisting of exactly one inflected source form (potentially the lemma) and the corresponding morphological tag. The output is generated from this input. In contrast, multi-source morphological reinflection, the task we introduce, is a generalization in which the model receives multiple form-tag pairs. In effect, this gives the model a partially annotated paradigm from which it predicts the rest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Task: Multi-Source Reinflection",
"sec_num": "2"
},
{
"text": "The multi-source variant is a more natural problem than single-source morphological reinflection since we often have access to more than just one form. 1 For example, corpora such as the universal dependency corpus (McDonald et al., 2013) that are annotated on the token level with inflectional features often contain several different inflected forms of a lemma. Such corpora would provide an ideal source of data for the multi-source MRI task.",
"cite_spans": [
{
"start": 215,
"end": 238,
"text": "(McDonald et al., 2013)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Task: Multi-Source Reinflection",
"sec_num": "2"
},
{
"text": "1 Scenarios where a single form is available and that form is the lemma are perhaps not infrequent. In high-resource languages, an electronic dictionary may have near-complete coverage of the lemmata of the language. However, paradigm completion is especially crucial for neologisms and lowresource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Task: Multi-Source Reinflection",
"sec_num": "2"
},
{
"text": "Formally, we can think of a morphological paradigm as follows. Let \u03a3 be a discrete alphabet for a given language and T be the set of morphological tags in the language. The inflectional table or morphological paradigm \u03c0 of a lemma w can be formalized as a set of pairs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Task: Multi-Source Reinflection",
"sec_num": "2"
},
{
"text": "\u03c0(w) = {(f 1 , t 1 ), (f 2 , t 2 ), . . . , (f N , t N )}, (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Task: Multi-Source Reinflection",
"sec_num": "2"
},
{
"text": "where f i \u2208 \u03a3 + is an inflected form of w, and t i \u2208 T is the morphological tag of the form f i . The integer N is the number of slots in the paradigm that have the syntactic category (POS) of w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Task: Multi-Source Reinflection",
"sec_num": "2"
},
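The set-of-pairs formalization of a paradigm above can be sketched in code. This is our own illustrative container for \u03c0(w) and a hypothetical lookup helper, not a format used by the paper or its released dataset:

```python
# A morphological paradigm pi(w): a set of (form, tag) pairs, as in Eq. (1).
# Here: a few slots of the English verbal paradigm of the lemma "bring".
paradigm = {
    ("bring", "Inf"),
    ("brings", "3rdSgPres"),
    ("brought", "Past"),
    ("bringing", "Gerund"),
}

def target_form(pi, tag):
    """Look up the inflected form that fills the slot with the given tag."""
    return next(f for f, t in pi if t == tag)
```

With this container, single-source MRI corresponds to predicting one pair of the set given another pair; multi-source MRI is given several pairs at once.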
{
"text": "Using this notation, single-source morphological reinflection (MRI) can be described as follows. Given a target tag and a pair of source form and source tag (t trg , (f src , t src )) as input, predict the target form f trg . There has been a substantial amount of prior work on this task, including systems that participated in Task 2 of the SIGMOR-PHON 2016 shared task (Cotterell et al., 2016a) . Thus, we may define the task of multi-source morphological reinflection as follows: Given a target tag and a set of k form-tag source pairs",
"cite_spans": [
{
"start": 372,
"end": 397,
"text": "(Cotterell et al., 2016a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Task: Multi-Source Reinflection",
"sec_num": "2"
},
{
"text": "(t trg , {(f 1 src , t 1 src ), . . . , (f k src , t k src )})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Task: Multi-Source Reinflection",
"sec_num": "2"
},
{
"text": "as input, predict the target form f trg . Note that single-source MRI is a special case of multi-source MRI for k = 1. Figure 1 gives examples for four different configurations that can occur in multi-source MRI. 2 We have colored the source forms green and drawn a dotted line to the target if they contain sufficient information for correct generation. If two source forms together are needed, the dotted line encloses both of them. Source forms that provide no information in the configuration are colored red (no arrow); note these forms could provide (and in most cases will provide) useful information for other combinations of source and target forms.",
"cite_spans": [
{
"start": 213,
"end": 214,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Task: Multi-Source Reinflection",
"sec_num": "2"
},
{
"text": "2 Figure 1 is not intended as a complete taxonomy of possible MRI configurations, e.g., there are hybrids of ANYFORM and NOFORM (some forms are informative, others are suppletive) and fuzzy variants (a single form gives pretty good evidence for how to generate the target form, but another single form gives better evidence). All of our examples make additional assumptions, e.g., that we have not seen other similar forms in training either of the same lemma (e.g., poner) or of a similar lemma (e.g., reponer). Hopefully, the examples are illustrative of the main conceptual distinction: several single forms each are sufficient by themselves (ANYFORM), a single, but carefully selected form is sufficient (SINGLEFORM), multiple forms are needed to generate the target (MULTIFORM) and the target form cannot be predicted (irregular) from the source forms (NOFORM). Figure 1 : Four possible input configurations in multi-source morphological reinflection (MRI). In each subfigure, the target form on the right is purple. The source forms are on the left and are green if they can be used to predict the target form (also connected with a dotted line) and red if they cannot. There are four possible configurations: (i) ANYFORM is the case where one can predict the target form from any of the source forms. (ii) SINGLEFORM is the case where only one form can be used to regularly predict the target form. (iii) MULTIFORM is the case where multiple forms are necessary to predict the target form. (iv) NOFORM is the case where the target form cannot be regularly derived from any of the source forms. Multi-source MRI is expected to perform better than single-source MRI for the configurations SINGLEFORM and MULTIFORM, but not for the configurations ANYFORM and NOFORM.",
"cite_spans": [],
"ref_spans": [
{
"start": 2,
"end": 10,
"text": "Figure 1",
"ref_id": null
},
{
"start": 867,
"end": 875,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivating Examples",
"sec_num": "2.1"
},
{
"text": "The first type of configuration is ANYFORM: each of the available source forms in the subset of the English paradigm (lift, lifts, lifted) contains enough information for a correct generation of the target form lifting. The second configuration is SINGLEFORM: there is a single form that contains enough information for correct generation, but it has to be carefully selected. Inflected forms of the German verb treffen 'to meet' have different stem vowels (see Table 1 ). In single-source reinflection, producing a target form with one stem vowel (a in trafe in the figure) from a source form with another stem vowel (e.g., e in treffe) is difficult. 3 In contrast, the learning problem for the SINGLE-FORM configuration is much easier in multi-source MRI. The multi-source model does not have to learn the possible vowel changes of this irregular verb; instead, it just needs to pick the correct vowel change from the alternatives offered in the input. This is a relatively easy task since the theme vowel is identical. So we only need to learn one general fact about German morphology (which suffix to add) and will then be able to produce the correct form with high accuracy. This type of regularity is typical of complex morphology: there are groups of forms in a paradigm that are similar and it is highly predictable which of these groups a particular target form for a new word will be a member of. As long as one representative of each group is part of the multi-source input, we can select it to generate the correct form.",
"cite_spans": [
{
"start": 652,
"end": 653,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 462,
"end": 469,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivating Examples",
"sec_num": "2.1"
},
{
"text": "In the MULTISOURCE configuration, we are able to use information from multiple forms if no single form is sufficient by itself. For example, to generate ponga, 3rdSgSubPres of poner 'to put' in Spanish, we need to know what the stem is (ponga, not pona) and which conjugation class (-ir, -er orar) it is part of (ponga, not pongue). The singlesource input pongo, 1stSgIndPres, does not reveal the conjugation class: it is compatible with both ponga and pongue. The single-source input poner, Inf, does not reveal the stem for the subjunctive: it is compatible with both ponga and pona-we need both source forms to generate the correct form ponga.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivating Examples",
"sec_num": "2.1"
},
{
"text": "Again, such configurations are frequent crosslinguistically, either in this \"discrete\" variant or in more fuzzy variants where taking several forms together increases our chances of producing the correct target form. Finally, we call configurations NOFORM if the target form is completely irregular and not related to any of the source forms. The suppletive form went is our example for this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivating Examples",
"sec_num": "2.1"
},
{
"text": "The intuition behind the MRI task draws inspiration from the theoretical linguistic notion of principle parts (Finkel and Stump, 2007; Stump and Finkel, 2013) . The notion is that a paradigm has a subset that allows for maximum predictability. In terms of language pedagogy, the principle parts would be a minimial set of forms a student has to learn in order to be able to generate any form in the paradigm. For instance for the partial German paradigm in Table 1 , the forms treffen, trifft, trafen, and tr\u00e4fen could form one potential set of principle parts.",
"cite_spans": [
{
"start": 110,
"end": 134,
"text": "(Finkel and Stump, 2007;",
"ref_id": "BIBREF18"
},
{
"start": 135,
"end": 158,
"text": "Stump and Finkel, 2013)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 457,
"end": 464,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Principle Parts",
"sec_num": "2.2"
},
{
"text": "From a computational learning point of view, maximizing predictability is always a boon-we want to make it as easy as possible for the system to learn the morphological regularities and subregularities of the language. Giving the system the principle parts as input is one way to achieve this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Principle Parts",
"sec_num": "2.2"
},
{
"text": "Our model is a multi-source extension of MED, Kann and Sch\u00fctze (2016b)'s encoder-decoder network for MRI. In MED, a single bidirectional recurrent neural network (RNN) encodes the input. In contrast, we use multiple encoders to be able to handle multiple source form-tag pairs. In MED, a decoder RNN produces the output from the hidden representation. We do not change this part of the architecture, so there is still a single decoder. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3"
},
{
"text": "For k source forms, our model takes k different inputs of parallel structure. Each of the 1 \u2264 i \u2264 k inputs consists of the target tag t trg and the source form f i and its corresponding source tag t i . The output is the target form. Each source form is represented as a sequence of characters; each character is represented as an embedding. Each tag-both the target tag and the source tags-is represented as a sequence of subtags; each subtag is represented as an embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input and Output Format",
"sec_num": "3.1"
},
{
"text": "More formally, we define the alphabet \u03a3 lang as the set of characters in the language and \u03a3 subtag as the set of subtags that occur as part of the set of morphological tags T of the language, e.g., if 1st-SgPres \u2208 T , then 1st, Sg and Pres \u2208 \u03a3 subtag . Each of the k inputs to our system is of the following format: S start \u03a3 + subtag \u03a3 + lang \u03a3 + subtag S end where the first subtag sequence is the source tag t i and the second subtag sequence is the target tag. The output format is: S start \u03a3 + lang S end , where the symbols S start and S end are predefined start and end symbols.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input and Output Format",
"sec_num": "3.1"
},
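The linearization just described can be sketched as follows. The split_tag helper and the <S>/</S> placeholder symbols are our own assumptions standing in for S_start and S_end; this is an illustration, not the authors' code:

```python
import re

def split_tag(tag):
    """Split a morphological tag like '3rdSgPres' into subtags ['3rd', 'Sg', 'Pres']."""
    return re.findall(r"\d+[a-z]*|[A-Z][a-z]*", tag)

def encode_input(src_form, src_tag, trg_tag):
    """Linearize one source as: S_start source-subtags characters target-subtags S_end.

    The source form is split into characters; both tags are split into subtags.
    '<S>' and '</S>' are placeholders for the predefined start/end symbols.
    """
    return ["<S>"] + split_tag(src_tag) + list(src_form) + split_tag(trg_tag) + ["</S>"]
```

Each of the k encoders then consumes one such sequence (every symbol mapped to its embedding).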
{
"text": "The encoder-decoder is based on the machine translation model of Bahdanau et al. (2015) and all specifics of our model are identical to the original presentation unless stated otherwise. 5 Whereas Bahdanau et al. (2015) 's model has only one encoder, our model consists of k \u2265 1 encoders and processes k sources simultaneously. The k sources have the form X m = (t trg , f m src , t m src ), represented as S start \u03a3 + subtag \u03a3 + lang \u03a3 + subtag S end as described above. Characters and subtags are embedded.",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "Bahdanau et al. (2015)",
"ref_id": "BIBREF2"
},
{
"start": 197,
"end": 219,
"text": "Bahdanau et al. (2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Source Encoder-Decoder",
"sec_num": "3.2"
},
{
"text": "The input to encoder m is X m . Each encoder consists of a bidirectional RNN that computes a hidden state h mi for each position, the concatenation of forward and backward hidden states. Decoding proceeds as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Source Encoder-Decoder",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y | X 1 , . . . , X k ) = |Y | t=1 p(y t | {y 1 , ..., y t\u22121 }, c t ) = |Y | t=1 g(y t\u22121 , s t , c t ),",
"eq_num": "(2)"
}
],
"section": "Multi-Source Encoder-Decoder",
"sec_num": "3.2"
},
{
"text": "where y = (y 1 , ..., y |Y | ) is the output sequence (a sequence of |Y | characters), g is a nonlinear function, s t is the hidden state of the decoder and c t is the sum of the encoder states h mi , weighted by attention weights \u03b1 mi (s t\u22121 ) that depend on the decoder state:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Source Encoder-Decoder",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c t = k m=1 |Xm| i=1 \u03b1 mi (s t\u22121 )h mi .",
"eq_num": "(3)"
}
],
"section": "Multi-Source Encoder-Decoder",
"sec_num": "3.2"
},
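The context-vector computation in Eq. (3) can be sketched with numpy: hidden states from all k encoders compete in one single softmax, and c_t is their weighted sum. The bilinear scoring function and the shapes below are illustrative assumptions, not the exact MED attention parameterization:

```python
import numpy as np

def context_vector(encoder_states, s_prev, W):
    """Compute c_t as in Eq. (3).

    encoder_states: list of k arrays, one per encoder, each of shape (|X_m|, d).
    s_prev: previous decoder state s_{t-1}, shape (d,).
    W: (d, d) scoring matrix (illustrative bilinear attention score).
    """
    H = np.concatenate(encoder_states, axis=0)  # pool all encoders' hidden states
    scores = H @ W @ s_prev                     # one score per hidden state
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                        # joint softmax: all weights sum to 1
    return alpha @ H                            # weighted sum over all positions of all encoders
```

Because the softmax is taken jointly over every position of every encoder, the attention weights satisfy the constraint \u2211_m \u2211_i \u03b1_mi = 1 stated in the text.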
{
"text": "A visual depiction of this model may be found in Figure 2 . A more complex hierarchical attention structure would be an alternative, but this simple model in which all hidden states contribute on the same level in a single attention layer (i.e., k m=1 |Xm| i=1 \u03b1 mi = 1) works well as our experiments show. The k encoders share their weights.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 57,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-Source Encoder-Decoder",
"sec_num": "3.2"
},
{
"text": "We evaluate the performance of our model in an experiment based on Task 2 of the SIGMORPHON Shared Task on Morphological Reinflection (Cotterell et al., 2016a) . This is a single-source MRI task as outlined in Section 1.",
"cite_spans": [
{
"start": 134,
"end": 159,
"text": "(Cotterell et al., 2016a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Source Reinflection Experiment",
"sec_num": "4"
},
{
"text": "Datasets. Our datasets are based on the data from the SIGMORPHON 2016 Shared Task on Morphological Reinflection (Cotterell et al., 2016a) . Our experiments cover eight languages: Arabic, Finnish, Georgian, German, Hungarian, Russian, Spanish and Turkish. The languages were chosen to represent different types of morphology. Finnish, Figure 2 : Visual depiction of our multi-source encoder-decoder RNN. We sketch a two encoder model, where the left encoder reads in the present form treffen and the right encoder reads in the past tense form trafen. They work together to predict the subjunctive form tr\u00e4fen. The shadowed red arcs indicate the strength of the attention weights-we see the network is focusing more on a because it helps the decoder better predict\u00e4 than e. We omit the source and target tags as input for conciseness.",
"cite_spans": [
{
"start": 112,
"end": 137,
"text": "(Cotterell et al., 2016a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 334,
"end": 342,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "Finnish, German, Hungarian, Russian, Turkish and Spanish are all suffixing. In addition to being suffixing, three of these languages employ vocalic (German, Spanish) and consonantal (Russian) stem changes for many inflections. The members of the remaining sub-group are agglutinative. Georgian makes use of prefixation as well as suffixation. Arabic morphology contains both concatenative and templatic elements. We build multi-source versions of the dataset for Task 2 of the SIGMORPHON shared task in the following way. We use data from the UNIMORPH project, 6 containing complete paradigms for all languages of the shared task. The shared task data was sampled from the same set of paradigms; our new dataset is a superset of the SIGMORPHON data. We create our new dataset by uniformly sampling three additional word forms from the paradigm of each source form in the original data. In combination with the source and target forms of the original dataset, this means that our dataset is a set of 5-tuples each consisting of four source forms and one target form. 7 Ideally, we would like to keep the experimental variable k, the number of sources we use in multi-source MRI, constant for a particular experiment or vary it systematically across other experimental conditions. Table 2 gives an overview of the number of different source forms per language in our dataset. Our dataset is available for download at http://cistern.cis.lmu.de.",
"cite_spans": [],
"ref_spans": [
{
"start": 1279,
"end": 1286,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "6 http://unimorph.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "7 One thing to note is that the original shared task data was sampled depending on word frequency in unlabeled corpora. We do not impose a similar condition, so the frequency distributions of our data and the shared task data are different. Also, we excluded Maltese and Navajo due to a lack of data to create the additional multi-source datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "Hyperparameters. We use embeddings of size 300. that performs best on the development data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "Baselines. For the single-source case, we apply MED, the top-scoring system in the SIGMOR-PHON 2016 Shared Task on Morphological Reinflection (Cotterell et al., 2016a; Kann and Sch\u00fctze, 2016b) . At the time of writing, MED constitutes the state of the art on the dataset. For Arabic, German and Turkish, we run an additional set of experiments to test two additional architectural configurations of multi-source encoder-decoders: (i) In addition to the default configuration in which all encoders share parameters, we also test the option of each encoder learning its own set of parameters (shared par's: yes vs. no in Table 4 ). (ii) Another way of realizing a multi-source system is to concatenate all sources and give this to an encoder-decoder with a single encoder as one input (encoders: k = 1 vs. k > 1 in Table 4 ).",
"cite_spans": [
{
"start": 142,
"end": 167,
"text": "(Cotterell et al., 2016a;",
"ref_id": null
},
{
"start": 168,
"end": 192,
"text": "Kann and Sch\u00fctze, 2016b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 619,
"end": 626,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 813,
"end": 820,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "Evaluation Metric. We evaluate on 1-best accuracy (exact match) against the gold form. We deviate from the shared task, which also evaluates under mean reciprocal rank and edit distance. We omit the later two since all these metrics were highly correlated (Cotterell et al., 2016a) . Table 3 shows the results of the MRI experiment on test data. We compare using a single source, the first two sources and all four sources. The first source (in column \"1\") is the original source from the SIGMORPHON shared task. Recall that we used uniform sampling to identify additional forms whereas the sampling procedure of the shared task took into account frequency. We suspect that this is the reason for the worse performance of the new sources compared to the original source; e.g., in bef\u00e4hle that are unlikely to help generate related forms that are more frequent. The main result of the experiment is that multisource MRI performs better than single-source MRI for all languages except for Hungarian and that, clearly, the more sources the better: using four sources is always better than using two sources. This result confirms our hypothesis, illustrated in Figure 1 , that for most languages, different source forms provide complementary information when generating a target form and thus performance of the multi-source model is better than of the singlesource model. Table 3 demonstrates that the two configurations we identified as promising for multisource MRI, SINGLEFORM and MULTIFORM, occur frequently enough to boost the performance for seven of the eight languages, with the largest gains observed for Arabic (7.3%) and Russian (3.5%) and the smallest for Spanish (0.9%) and Georgian (1.3%) (comparing using source form 1 with using source forms 1-4).",
"cite_spans": [
{
"start": 256,
"end": 281,
"text": "(Cotterell et al., 2016a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 284,
"end": 291,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1157,
"end": 1165,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1369,
"end": 1376,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
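The 1-best accuracy (exact match) metric described above amounts to the following check; the helper name is our own:

```python
def exact_match_accuracy(predictions, gold):
    """1-best accuracy: fraction of predicted forms identical to the gold form."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)
```

A prediction counts as correct only if every character matches the gold inflected form.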
{
"text": "Hungarian is the only language for which performance decreases, by a small amount (0.3%). We attribute this to overfitting: the multi-source model has a larger number of parameters, so it is more prone to overfitting. We would expect the performance to be the same in a comparison of two models that have the same size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Error Analysis. We compare errors of singlesource and multi-source models for German on development data. Most mistakes of the multi-source model are stem-related: versterbst for verstirbst, erwerben for erw\u00fcrben, Apfelsinenbaume for Apfelsinenb\u00e4ume, lungenkr\u00e4nkes for lungenkrankes and ubernehmte for\u00fcbern\u00e4hme. In most of these cases, the stem of the lemma was used, which is correct for some forms, but not for the form that had to be generated. In one case, the multi-source model did not use the correct inflection rule: braucht for gebraucht-the inflectional rule that the past participle is formed by ge-was not applied. Figure 3 : Learning curves for single-source and multi-source models for Arabic, German and Turkish. We observe that the multi-source model generalizes faster than the single soure case-this is to be expected since the multi-source model often faces an easier transduction problem.",
"cite_spans": [],
"ref_spans": [
{
"start": 627,
"end": 635,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Errors of the single-source model that were \"corrected\" by the multi-source model include empfahlt for empfiehl, Throne for Thron and befielen for befallen. These are all SINGLEFORM cases: the multi-source model will generate the correct form if it succeeds in selecting the most predictive source form. The single-source model is at a disadvantage if this most predictive source form is not part of its input. Table 4 compares different architectural configurations. All experiments use 4 sources. We see that sharing parameters is superior as expected. Using a single encoder on 4 sources performs as well as 4 encoders (and very slightly better on Turkish). Apparently, it has no difficulty learning to understand an unstructured (or rather lightly structured) concatenation of form-tag pairs; on the other hand, this parsing task, i.e., learning to parse the sequence of form-tag pairs, is easy, so this is not a surprising result. Figure 3 shows learning curves for Arabic, German and Turkish. We iteratively halve the training set and train models for each subset. In this analysis, we train all models for 90 epochs, but use the numbers from the main experiment for the full training set. For the single-source model, we use the SIGMORPHON source. The figure shows that the single-source model needs more individual paradigms in the training data to achieve the same performance as the multi-source model. The largest difference between single-source and multisource is > 20% for Arabic when only 1/8 of the training set is used. This suggests that multi-source MRI is an attractive option for low-resource languages since it exploits available data better than single-source. Figure 4 shows for one example, the generation of the German form w\u00f6gen, 3rdPlSubPst, the attention weights of the multi-source model at each time step of the decoder, i.e., for each character as it is being produced by the decoder. 
For characters that simply need to be copied, the main attention lies on the corresponding characters of the input sources. For example, the character g is produced when attention is on the characters g in w\u00f6gest, w\u00f6ge and wogen. This aspect of the multi-source model is not different from the single-source model, offering no advantage.",
"cite_spans": [],
"ref_spans": [
{
"start": 411,
"end": 418,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 936,
"end": 944,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1684,
"end": 1692,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "However, even for g, the source form that is least relevant for generating w\u00f6gen receives almost no weight: w\u00e4gst is an indicative singular form that does not provide helpful information for generating a plural form in the subjunctive; the model seems to have learned that this is the case. In contrast, wogen does receive some weight; this makes sense as it is a past indicative form and the past subjunctive is systematically related to the past indicative for many German verbs. These observations suggest that the network has learned to correctly predict (at least in this case) which forms provide potentially useful information. For the last two time steps (i.e., characters to be generated), attention is mainly focused on the tags. Again, this indicates that the model has learned the regularity in generating this part of the word form: the suffix, consisting of en, is predictable from the tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Visualization",
"sec_num": "4.5"
},
{
"text": "Recently, variants of the RNN encoder-decoder have seen widespread adoption in many areas of NLP due to their strong performance. Encoderdecoders with and without attention have been applied to tasks such as machine translation (Cho et al., 2014; Bahdanau et Figure 4 : Attention heatmap for the multi-source model. The example is for the German verb wiegen 'to weigh'. The model learns to focus most of its attention on forms that share the irregular subjunctive stem w\u00f6g in addition to the target subtags 3 and P that encode that the target form is 3rd person plural. We omit the tags from the diagram to which the model hardly attends.",
"cite_spans": [
{
"start": 228,
"end": 246,
"text": "(Cho et al., 2014;",
"ref_id": "BIBREF5"
},
{
"start": 247,
"end": 267,
"text": "Bahdanau et Figure 4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "al., 2015), parsing and automatic speech recognition (Graves and Schmidhuber, 2005; Graves et al., 2013) .",
"cite_spans": [
{
"start": 53,
"end": 83,
"text": "(Graves and Schmidhuber, 2005;",
"ref_id": "BIBREF21"
},
{
"start": 84,
"end": 104,
"text": "Graves et al., 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The first work on multi-source models was presented for machine translation. Zoph and Knight (2016) made simultaneous use of source sentences in multiple languages in order to find the best match possible in the target language. Unlike our model, they apply transformations to the hidden states of the encoders that are input to the decoder. Firat et al. (2016) 's neural architecture for MT translates from any of N source languages to any of M target languages, using language specific encoders and decoders, but sharing one single attention-mechanism. In contrast to our work, they obtain a single output for each input.",
"cite_spans": [
{
"start": 77,
"end": 99,
"text": "Zoph and Knight (2016)",
"ref_id": "BIBREF39"
},
{
"start": 342,
"end": 361,
"text": "Firat et al. (2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Much ink has been spilled on morphological reinflection over recent years. Dreyer et al. (2008) develop a high-performing weighted finite-state transducer for the task, which was later hybridized with an LSTM (Rastogi et al., 2016) . Durrett and DeNero (2013) apply a semi-CRF to heuristically extracted rules to generate inflected forms from lemmata using data scraped from Wiktionary. Improved systems for the Wiktionary data were subsequently developed by Hulden et al. (2014) , who used a semi-supervised approach, and Faruqui et al. (2016) , who used a character-level LSTM. All of the above work has focused on the single input case. Two important exceptions, however, have considered the multi-input case. Both Dreyer and Eisner (2009) and Cotterell et al. (2015b) define a string-valued graphical model over the paradigm and apply the missing values.",
"cite_spans": [
{
"start": 75,
"end": 95,
"text": "Dreyer et al. (2008)",
"ref_id": "BIBREF13"
},
{
"start": 209,
"end": 231,
"text": "(Rastogi et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 234,
"end": 259,
"text": "Durrett and DeNero (2013)",
"ref_id": "BIBREF15"
},
{
"start": 459,
"end": 479,
"text": "Hulden et al. (2014)",
"ref_id": "BIBREF23"
},
{
"start": 523,
"end": 544,
"text": "Faruqui et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 718,
"end": 742,
"text": "Dreyer and Eisner (2009)",
"ref_id": "BIBREF12"
},
{
"start": 747,
"end": 771,
"text": "Cotterell et al. (2015b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The SIGMORPHON 2016 Shared Task on Morphological Reinflection (Cotterell et al., 2016a) , based on the UNIMORPH (Sylak-Glassman et al., 2015) data, resulted in the development of numerous methods. RNN encoder-decoder models (Aharoni et al., 2016; Kann and Sch\u00fctze, 2016a; \u00d6stling, 2016) obtained the strongest performance and are the current state of the art on the task. The best-performing model made use of an attention mechanism (Kann and Sch\u00fctze, 2016a) , first popularized in machine translation (Bahdanau et al., 2015) . We generalize this architecture to the multi-source case in this paper for the reinflection task.",
"cite_spans": [
{
"start": 62,
"end": 87,
"text": "(Cotterell et al., 2016a)",
"ref_id": null
},
{
"start": 112,
"end": 141,
"text": "(Sylak-Glassman et al., 2015)",
"ref_id": "BIBREF36"
},
{
"start": 224,
"end": 246,
"text": "(Aharoni et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 247,
"end": 271,
"text": "Kann and Sch\u00fctze, 2016a;",
"ref_id": "BIBREF25"
},
{
"start": 272,
"end": 286,
"text": "\u00d6stling, 2016)",
"ref_id": null
},
{
"start": 433,
"end": 458,
"text": "(Kann and Sch\u00fctze, 2016a)",
"ref_id": "BIBREF25"
},
{
"start": 502,
"end": 525,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Besides generation, computational work on morphology has also focused on analysis. In this area, a common task-morphological segmentation-is to break up a word into its sequence of constituent morphs. The unsupervised MORFESSOR model (Creutz and Lagus, 2002) has achieved widespread adoption. Bayesian methods have also proven themselves successful in unsupervised morphological segmentation (Johnson et al., 2006; Goldwater et al., 2009) . When labeled training data for segmentation is available, supervised methods significantly outperform the unsupervised techniques (Ruokolainen et al., 2013; Cotterell et al., 2015a; Cotterell et al., 2016b) .",
"cite_spans": [
{
"start": 234,
"end": 258,
"text": "(Creutz and Lagus, 2002)",
"ref_id": "BIBREF11"
},
{
"start": 392,
"end": 414,
"text": "(Johnson et al., 2006;",
"ref_id": "BIBREF24"
},
{
"start": 415,
"end": 438,
"text": "Goldwater et al., 2009)",
"ref_id": "BIBREF20"
},
{
"start": 571,
"end": 597,
"text": "(Ruokolainen et al., 2013;",
"ref_id": "BIBREF33"
},
{
"start": 598,
"end": 622,
"text": "Cotterell et al., 2015a;",
"ref_id": "BIBREF7"
},
{
"start": 623,
"end": 647,
"text": "Cotterell et al., 2016b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "As we pointed out in Section 2, morphologically annotated corpora provide an ideal source of data for the multi-source MRI task: they are annotated on the token level with inflectional features and often contain several different inflected forms of a lemma. Eskander et al. (2013) develop an algorithm for automatic learning of inflectional classes and associated lemmas from morphologically annotated corpora, an approach that could be usefully combined with our multi-source MRI framework.",
"cite_spans": [
{
"start": 258,
"end": 280,
"text": "Eskander et al. (2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Generation of unknown inflections in morphologically rich languages is an important task that remains unsolved. We provide a new angle on the problem by considering systems that are allowed to have multiple inflected forms as input. To this end, we define the task of multi-source morphological reinflection as a generalization of singlesource MRI (Cotterell et al., 2016a ) and present a model that solves the task. We extend an attentionbased RNN encoder-decoder architecture from the single-source case to the multi-source case. Our new model consists of multiple encoders, each receiving one of the inputs. Our model improves over the state of the art for seven out of eight languages, demonstrating the promise of multi-source MRI. Additionally, we publically release our implementation. 8",
"cite_spans": [
{
"start": 348,
"end": 372,
"text": "(Cotterell et al., 2016a",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The new dataset for multi-source morphological reinflection that we release is a superset of the dataset of the SIGMORPHON 2016 Shared Task on Morphological Reinflection to facilitate research on morphological generation. One focus of future work should be the construction of more complex datasets, e.g., datasets that have better coverage of irregular words and datasets in which there is no overlap in lemmata between training and test sets. Further, for difficult inflections, it might be interesting to find an effective way to include unsupervised data into the setup. For example, we could define one of our k inputs to be a form mined from a corpus that is not guaranteed to have been correctly tagged morphologically, but likely to be helpful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "We show in this paper that multi-source MRI outperforms single-source MRI. This is an important contribution because-as we discussed in Section 2.1-multi-source MRI is only promising for paradigms with specific properties, which we referred to as SINGLEFORM and MULTIFORM configurations. Whether such configurations occur and whether these configurations have a strong effect on MRI performance was an open empirical question. Indeed, we found that for one of the languages we investigated, for Hungarian, singlesource MRI works at least as well as multi-source MRI-presumably because its paradigms almost exclusively contain SINGLEFORM configurations. Thus, single-source MRI is probably preferable for Hungarain since single-source is simpler than multi-source.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "There is another important question that we have not answered in this paper: in an experimental setting in which the amount of training information available is exactly the same for single-source and multi-source, does multi-source still outperform single-source and by how much? For example, the numbers we compare in Table 3 are matched with respect to the number of target forms, but not with respect to the number of source forms: multi-source has more source forms available for training than single-source. We leave investigation of this important issue for future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 319,
"end": 326,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "It is not impossible to learn, but treffen is an irregular verb, so we cannot easily leverage the morphology we have learned about other verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The edit tree(Chrupa\u0142a, 2008;M\u00fcller et al., 2015) augmentation discussed in Kann and Sch\u00fctze (2016b) was not employed here.5 We modify the implementation of the model freely available at https://github.com/mila-udem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://cistern.cis.lmu.de",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We gratefully acknowledge the financial support of Siemens and of DFG (SCHUE 2246/10-1) for this research. The second author was supported by a DAAD Long-Term Research Grant and an NDSEG fellowship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Improving sequence to sequence learning for morphological inflection generation: The BIU-MIT systems for the SIGMORPHON 2016 shared task for morphological reinflection",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni, Yoav Goldberg, and Yonatan Belinkov. 2016. Improving sequence to sequence learning for morphological inflection generation: The BIU- MIT systems for the SIGMORPHON 2016 shared task for morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computa- tional Research in Phonetics, Phonology, and Mor- phology, pages 41-48, Berlin, Germany, August. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Paradigm classification in supervised learning of morphology",
"authors": [
{
"first": "Malin",
"middle": [],
"last": "Ahlberg",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Forsberg",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1024--1029",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2015. Paradigm classification in supervised learn- ing of morphology. In Proceedings of the 2015 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 1024-1029, Denver, Col- orado, May-June. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Represen- tations, San Diego, California, USA, May.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Improved transition-based parsing by modeling characters instead of words with LSTMs",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "349--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by model- ing characters instead of words with LSTMs. In Pro- ceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, pages 349- 359, Lisbon, Portugal, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Findings of the 2016 conference on machine translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Jimeno"
],
"last": "Yepes",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Aurelie",
"middle": [],
"last": "Neveol",
"suffix": ""
},
{
"first": "Mariana",
"middle": [],
"last": "Neves",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "131--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aure- lie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Spe- cia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation, pages 131-198, Berlin, Germany, August. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "On the properties of neural machine translation: Encoder-decoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Proceedings of SSST-8, Eighth Work- shop on Syntax, Semantics and Structure in Statisti- cal Translation, pages 103-111, Doha, Qatar, Octo- ber. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards a machinelearning architecture for lexical functional grammar parsing",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Chrupa\u0142a. 2008. Towards a machine- learning architecture for lexical functional grammar parsing. Ph.D. thesis, Dublin City University.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Labeled morphological segmentation with semi-markov models",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "164--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Thomas M\u00fcller, Alexander Fraser, and Hinrich Sch\u00fctze. 2015a. Labeled morphological segmentation with semi-markov models. In Pro- ceedings of the Nineteenth Conference on Computa- tional Natural Language Learning, pages 164-174, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Modeling word forms using latent underlying morphs and phonology",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "433--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2015b. Modeling word forms using latent underly- ing morphs and phonology. Transactions of the As- sociation for Computational Linguistics, 3:433-447.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Jason Eisner, and Mans Hulden. 2016a. The SIGMORPHON 2016 shared taskmorphological reinflection",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "10--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016a. The SIGMORPHON 2016 shared task- morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10-22, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A joint model of orthography and morphological segmentation",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Vieira",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "664--669",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Tim Vieira, and Hinrich Sch\u00fctze. 2016b. A joint model of orthography and morpho- logical segmentation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 664-669, San Diego, California, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unsupervised discovery of morphemes",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 Workshop on Morphological and Phonological Learning",
"volume": "",
"issue": "",
"pages": "21--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2002. Unsupervised discovery of morphemes. In Proceedings of the ACL-02 Workshop on Morphological and Phonolog- ical Learning, pages 21-30. Association for Compu- tational Linguistics, July.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Graphical models over multiple strings",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "101--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer and Jason Eisner. 2009. Graphical models over multiple strings. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 101-110, Singapore, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Latent-variable modeling of string transductions with finite-state methods",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1080--1089",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer, Jason Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In Proceedings of the 2008 Conference on Empirical Methods in Natu- ral Language Processing, pages 1080-1089, Hon- olulu, Hawaii, October. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A non-parametric model for the discovery of inflectional paradigms from plain text using graphical models over strings",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer. 2011. A non-parametric model for the discovery of inflectional paradigms from plain text using graphical models over strings. Ph.D. thesis, Johns Hopkins University, Baltimore, MD.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Supervised learning of complete morphological paradigms",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1185--1195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1185-1195, Atlanta, Georgia, June. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic extraction of morphological lexicons from morphologically annotated corpora",
"authors": [
{
"first": "Ramy",
"middle": [],
"last": "Eskander",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1032--1043",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramy Eskander, Nizar Habash, and Owen Rambow. 2013. Automatic extraction of morphological lex- icons from morphologically annotated corpora. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1032-1043, Seattle, Washington, USA, October. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Morphological inflection generation using character sequence to sequence learning",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "634--643",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection gener- ation using character sequence to sequence learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 634-643, San Diego, California, June. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Principal parts and morphological typology",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Stump",
"suffix": ""
}
],
"year": 2007,
"venue": "Morphology",
"volume": "17",
"issue": "1",
"pages": "39--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Finkel and Gregory Stump. 2007. Princi- pal parts and morphological typology. Morphology, 17(1):39-75.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multi-way, multilingual neural machine translation with a shared attention mechanism",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "866--875",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine trans- lation with a shared attention mechanism. In Pro- ceedings of the 2016 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 866-875, San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A bayesian framework for word segmentation: Exploring the effects of context. Cognition",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "112",
"issue": "",
"pages": "21--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2009. A bayesian framework for word segmentation: Exploring the effects of context. Cog- nition, 112(1):21-54.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural Networks",
"volume": "18",
"issue": "",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Frame- wise phoneme classification with bidirectional LSTM and other neural network architectures. Neu- ral Networks, 18(5):602-610.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Speech recognition with deep recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Abdel-rahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "6645--6649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Abdel-rahman Mohamed, and Geof- frey E. Hinton. 2013. Speech recognition with deep recurrent neural networks. In IEEE International Conference on Acoustics, Speech and Signal Pro- cessing, pages 6645-6649, Vancouver, BC, Canada, May.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Semi-supervised learning of morphological paradigms and lexicons",
"authors": [
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Forsberg",
"suffix": ""
},
{
"first": "Malin",
"middle": [],
"last": "Ahlberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "569--578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mans Hulden, Markus Forsberg, and Malin Ahlberg. 2014. Semi-supervised learning of morphological paradigms and lexicons. In Proceedings of the 14th Conference of the European Chapter of the Associa- tion for Computational Linguistics, pages 569-578, Gothenburg, Sweden, April. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Adaptor grammars: A framework for specifying compositional nonparametric bayesian models",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in Neural Information Processing Systems",
"volume": "19",
"issue": "",
"pages": "641--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Gold- water. 2006. Adaptor grammars: A framework for specifying compositional nonparametric bayesian models. In Advances in Neural Information Pro- cessing Systems 19, pages 641-648, Vancouver, BC, Canada, December.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "62--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016a. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In Pro- ceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 62-70, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Singlemodel encoder-decoder with explicit morphological representation for reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "555--560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016b. Single- model encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 555-560, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A simple way to initialize recurrent networks of rectified linear units",
"authors": [
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Jaitly",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc V. Le, Navdeep Jaitly, and Geoffrey E. Hinton. 2015. A simple way to initialize recurrent networks of rectified linear units. CoRR, abs/1504.00941.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Universal dependency annotation for multilingual parsing",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yvonne",
"middle": [],
"last": "Quirmbach-Brundage",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Bedini",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Bertomeu Castell\u00f3",
"suffix": ""
},
{
"first": "Jungmee",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "92--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach- Brundage, Yoav Goldberg, Dipanjan Das, Kuz- man Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria Bertomeu Castell\u00f3, and Jungmee Lee. 2013. Uni- versal dependency annotation for multilingual pars- ing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 92-97, Sofia, Bulgaria, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Joint lemmatization and morphological tagging with lemming",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Fraser",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2268--2274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas M\u00fcller, Ryan Cotterell, Alexander M. Fraser, and Hinrich Sch\u00fctze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceed- ings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2268-2274, Lisbon, Portugal, September. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Inflection generation as discriminative string transduction",
"authors": [
{
"first": "Garrett",
"middle": [],
"last": "Nicolai",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "922--931",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garrett Nicolai, Colin Cherry, and Grzegorz Kondrak. 2015. Inflection generation as discriminative string transduction. In Proceedings of the 2015 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 922-931, Denver, Col- orado, May-June. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Morphological reinflection with convolutional neural networks",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "\u00d6stling",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "23--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert\u00d6stling. 2016. Morphological reinflection with convolutional neural networks. In Proceedings of the 14th SIGMORPHON Workshop on Computa- tional Research in Phonetics, Phonology, and Mor- phology, pages 23-26, Berlin, Germany, August. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Weighting finite-state transductions with neural context",
"authors": [
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "623--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neu- ral context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 623-633, San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Supervised morphological segmentation in a low-resource learning setting using conditional random fields",
"authors": [
{
"first": "Oskar",
"middle": [],
"last": "Teemu Ruokolainen",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Kohonen",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "29--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teemu Ruokolainen, Oskar Kohonen, Sami Virpioja, and Mikko Kurimo. 2013. Supervised morpholog- ical segmentation in a low-resource learning setting using conditional random fields. In Proceedings of the Seventeenth Conference on Computational Nat- ural Language Learning, pages 29-37, Sofia, Bul- garia, August. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Morphological typology: From word to paradigm",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Stump",
"suffix": ""
},
{
"first": "Raphael",
"middle": [
"A"
],
"last": "Finkel",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "138",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Stump and Raphael A. Finkel. 2013. Morpho- logical typology: From word to paradigm, volume 138. Cambridge University Press.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In Advances in Neural Information Process- ing Systems 27, pages 3104-3112, Montreal, Que- bec, Canada, December.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A language-independent feature schema for inflectional morphology",
"authors": [
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Que",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "674--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015. A language-independent feature schema for inflectional morphology. In Pro- ceedings of the 53rd Annual Meeting of the Associa- tion for Computational Linguistics and the 7th Inter- national Joint Conference on Natural Language Pro- cessing (Volume 2: Short Papers), pages 674-680, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Grammar as a foreign language",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2014. Grammar as a foreign language. CoRR, abs/1412.7449.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Adadelta: an adaptive learning rate method",
"authors": [
{
"first": "D",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1212.5701"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D Zeiler. 2012. Adadelta: an adaptive learn- ing rate method. arXiv preprint arXiv:1212.5701.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Multi-source neural translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "30--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In Proceedings of the 2016 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 30-34, San Diego, Cali- fornia, June. Association for Computational Linguis- tics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Number of target forms in the training set for which 1, 2, 3 or \u2265 4 source forms (in the training set) are available for prediction. The tables for the development and test splits show the same pattern and are omitted."
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td colspan=\"3\">source form(s) used</td></tr><tr><td/><td>1</td><td>2</td><td>3</td><td>4</td><td>1-2 1-4</td></tr><tr><td>ar</td><td>.871</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td>Our encoder and decoder GRUs have 100</td></tr><tr><td/><td/><td/><td/><td/><td>hidden units each. Following Le et al. (2015), we</td></tr><tr><td/><td/><td/><td/><td/><td>initialize all encoder and decoder weights as well</td></tr><tr><td/><td/><td/><td/><td/><td>as the embeddings with an identity matrix. All</td></tr><tr><td/><td/><td/><td/><td/><td>biases are initialized with zero. We use stochas-</td></tr><tr><td/><td/><td/><td/><td/><td>tic gradient descent, Adadelta (Zeiler, 2012) and a</td></tr><tr><td/><td/><td/><td/><td/><td>minibatch size of 20 for training. Training is done</td></tr><tr><td/><td/><td/><td/><td/><td>for a maximum number of 90 epochs. If no im-</td></tr><tr><td/><td/><td/><td/><td/><td>provement occurs for 20 epochs, we stop training</td></tr><tr><td/><td/><td/><td/><td/><td>early. The final model we run on test is the model</td></tr></table>",
"text": ".813 .796 .830 .905 .944 fi .956 .929 .941 .934 .965 .978 ka .967 .943 .942 .934 .969 .979 de .954 .922 .931 .912 .959 .980 hu .992 .962 .963 .963 .988 .989 ru .876 .795 .824 .817 .888 .911 es .975 .961 .963 .968 .977 .984 tu .967 .928 .947 .944 .970 .983"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Accuracy on MRI for single-source (1, 2, 3, 4) and</td></tr><tr><td>multi-source (1-2, 1-4) models. Best result in bold.</td></tr></table>",
"text": ""
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Accuracy of different architectures for the dataset with 4 source forms being available for prediction. The best result for each row is in bold."
}
}
}
}