{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:51.109626Z"
},
"title": "Exploring Looping Effects in RNN-based Architectures",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Shcherbakov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": "sandreas@unimelb.edu.au"
},
{
"first": "Saliha",
"middle": [],
"last": "Muradoglu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": "ekaterina.vylomova@unimelb.edu.au"
},
{
"first": "Jennifer",
"middle": [],
"last": "White",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Salesky",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Maria",
"middle": [],
"last": "Ponti",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Rowan",
"middle": [
"Hall"
],
"last": "Maudslay",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ran",
"middle": [],
"last": "Zmigrod",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Josef",
"middle": [],
"last": "Valvoda",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Toldova",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Elena",
"middle": [],
"last": "Klyachko",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Yegorov",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Krizhanovsky",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Paula",
"middle": [],
"last": "Czarnowska",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Irene",
"middle": [],
"last": "Nikkarinen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Krizhanovsky",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Pimentel",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lucas",
"middle": [
"Torroba"
],
"last": "Hennigen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Garrett",
"middle": [],
"last": "Nicolai",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hilaria",
"middle": [],
"last": "Cruz",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Eleanor",
"middle": [],
"last": "Chodroff",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sean",
"middle": [],
"last": "Welleck",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ilia",
"middle": [],
"last": "Kulikov",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The paper investigates repetitive loops, a common problem in contemporary text generation systems (such as machine translation, language modelling, and morphological inflection). We hypothesized that a model's failure to distinguish latent states for different positions in an output sequence may be the primary cause of looping. Therefore, we propose adding a position-aware discriminating factor to the model in an attempt to reduce this effect. We conduct a study on neural models with recurrent units by explicitly altering their decoder's internal state, using the task of morphological reinflection as a proxy to study the effects of the changes. Our results show that the probability of repetitive loops occurring is significantly reduced by introducing an extra neural decoder output, specifically trained to produce a gradually increasing value as each character of a given sequence is generated. We also explored variations of the technique and found that feeding the extra output back to the decoder amplifies the positive effects.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The paper investigates repetitive loops, a common problem in contemporary text generation systems (such as machine translation, language modelling, and morphological inflection). We hypothesized that a model's failure to distinguish latent states for different positions in an output sequence may be the primary cause of looping. Therefore, we propose adding a position-aware discriminating factor to the model in an attempt to reduce this effect. We conduct a study on neural models with recurrent units by explicitly altering their decoder's internal state, using the task of morphological reinflection as a proxy to study the effects of the changes. Our results show that the probability of repetitive loops occurring is significantly reduced by introducing an extra neural decoder output, specifically trained to produce a gradually increasing value as each character of a given sequence is generated. We also explored variations of the technique and found that feeding the extra output back to the decoder amplifies the positive effects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Over the last few years we have witnessed significant progress in the field of Natural Language Processing (NLP). Many state-of-the-art models are based on neural architectures with recurrent units. For instance, Sutskever et al. (2014) proposed one of the first neural machine translation models that achieved results comparable with statistical models. Similarly, Plank et al. (2016) introduced a neural POS tagging model as a new state of the art on the task. Recently, neural architectures have almost superseded non-neural (finite-state or rule-based) approaches in morphology modelling tasks such as morphological reinflection (Cotterell et al., 2016 (Cotterell et al., , 2017 , with average accuracy over 90% on high-resource languages. Error analysis conducted by Gorman et al. (2019) demonstrated that besides general misprediction errors such as syncretism, the models also produce certain \"silly\" errors that human learners do not make. One such error, the looping error, is particularly notable. This type of error is not specific to the task, and several other papers have reported a similar problem (Holtzman et al., 2019; Vakilipourtakalou and Mou, 2020). Still, the causes and the nature of the error remain under-studied. Here we provide some insights into the causes of the issue and a possible remedy for it. We consider the morphological reinflection task for our experiments since it has low time and space requirements and therefore allows us to reproduce cases of looping in sufficient quantities and analyse them relatively easily.",
"cite_spans": [
{
"start": 210,
"end": 233,
"text": "Sutskever et al. (2014)",
"ref_id": "BIBREF22"
},
{
"start": 363,
"end": 382,
"text": "Plank et al. (2016)",
"ref_id": "BIBREF19"
},
{
"start": 625,
"end": 648,
"text": "(Cotterell et al., 2016",
"ref_id": "BIBREF4"
},
{
"start": 649,
"end": 674,
"text": "(Cotterell et al., , 2017",
"ref_id": "BIBREF2"
},
{
"start": 768,
"end": 788,
"text": "Gorman et al. (2019)",
"ref_id": "BIBREF9"
},
{
"start": 1108,
"end": 1131,
"text": "(Holtzman et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 1132,
"end": 1164,
"text": "Vakilipourtakalou and Mou, 2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Morphological inflection is the task of generating a target word form (e.g., \"runs\") from its lemma (\"to run\") and a set of target morphosyntactic features (tags, \"Verb;Present Tense;Singular;3rd Person\"). The task is called morphological reinflection when the lemma form is replaced with any other form and, optionally, its morphosyntactic features. This is a type of string-to-string transduction problem that in many cases pre-supposes nearly monotonic alignment between the strings. Traditionally, researchers either hand-engineered (Koskenniemi, 1983; Kaplan and Kay, 1994) or used trainable (Mohri, 1997; Eisner, 2002) finite state transducers to solve the task. Most recently, neural models were shown to outperform most non-neural systems, especially in the case of high-resource languages (Cotterell et al., 2016; Vylomova et al., 2020) .",
"cite_spans": [
{
"start": 537,
"end": 556,
"text": "(Koskenniemi, 1983;",
"ref_id": "BIBREF14"
},
{
"start": 557,
"end": 578,
"text": "Kaplan and Kay, 1994)",
"ref_id": "BIBREF12"
},
{
"start": 597,
"end": 610,
"text": "(Mohri, 1997;",
"ref_id": "BIBREF17"
},
{
"start": 611,
"end": 624,
"text": "Eisner, 2002)",
"ref_id": "BIBREF5"
},
{
"start": 798,
"end": 822,
"text": "(Cotterell et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 823,
"end": 845,
"text": "Vylomova et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological reinflection task",
"sec_num": "2"
},
{
"text": "In this study we focus on two typologically diverse languages, Nen (Evans and Miller, 2016; Evans, 2017 Evans, , 2019 and Russian. Nen is a Papuan language of the Morehead-Maro (or Yam) family, spoken in the Western Province of Papua New Guinea by approximately 400 people. The language is highly under-resourced; Muradoglu et al. (2020) is the only computational work on it we are aware of, and in the current study we use data derived from their corpus.",
"cite_spans": [
{
"start": 75,
"end": 99,
"text": "(Evans and Miller, 2016;",
"ref_id": null
},
{
"start": 100,
"end": 111,
"text": "Evans, 2017",
"ref_id": "BIBREF6"
},
{
"start": 112,
"end": 125,
"text": "Evans, , 2019",
"ref_id": "BIBREF7"
},
{
"start": 326,
"end": 349,
"text": "Muradoglu et al. (2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological reinflection task",
"sec_num": "2"
},
{
"text": "Russian, on the other hand, a Slavic language of the Indo-European family, is considered high-resource.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological reinflection task",
"sec_num": "2"
},
{
"text": "We use the splits from the SIGMORPHON-CoNLL 2017 shared task on morphological reinflection (Cotterell et al., 2017) .",
"cite_spans": [
{
"start": 91,
"end": 115,
"text": "(Cotterell et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological reinflection task",
"sec_num": "2"
},
{
"text": "We used medium-sized training sets, which turned out to yield the highest rates of looped sequences in predicted word forms. The numbers of samples in the datasets are presented in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological reinflection task",
"sec_num": "2"
},
{
"text": "We reused the hard attention model specifically designed for the morphological reinflection task (Aharoni and Goldberg, 2017) for our explorations. The model uses an external aligner (Sudoh et al., 2013) to extract input-to-output character sequence transformation steps for a given morphological sample. Instances of a special character (STEP) are inserted into transformed words to represent alignment step advances. The resulting seq2seq model is trained to transform a given lemma into a target inflected form which contains STEP characters. The model consists of two modules:",
"cite_spans": [
{
"start": 183,
"end": 203,
"text": "(Sudoh et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "(1) an array of LSTM (Hochreiter and Schmidhuber, 1997) encoders and (2) an LSTM decoder. When a STEP character occurs in a target sequence (either learnt or predicted), the encoder array index advances to the next position. This corresponds to advancing the current pointer in the lemma by one character. In this way, a hard monotonic attention schema is implemented.",
"cite_spans": [
{
"start": 21,
"end": 55,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
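The STEP bookkeeping described above can be sketched in a few lines. This is an illustrative toy, not the authors' code: the symbol `STEP` and the function `decode_trace` are names we introduce here, and the real model is an LSTM seq2seq; the sketch only shows how the read pointer over the lemma advances on each STEP symbol and how the surface form is recovered by stripping them.

```python
# Illustrative sketch (names are ours): hard monotonic attention driven by
# a special STEP symbol. Each STEP advances the pointer over the encoder
# (lemma) positions; all other characters are emitted at the current pointer.

STEP = "^"  # stand-in for the special alignment-step character

def decode_trace(target_with_steps):
    """Return (surface_form, pointer_positions): the predicted word with
    STEP characters stripped off, plus the lemma index attended to when
    each surface character was produced."""
    pointer = 0
    surface, positions = [], []
    for ch in target_with_steps:
        if ch == STEP:
            pointer += 1            # hard attention moves one lemma character right
        else:
            surface.append(ch)      # emit a character at the current position
            positions.append(pointer)
    return "".join(surface), positions

# e.g. a target sequence that copies the lemma character by character:
form, attn = decode_trace("r^u^n^s")
```

Here `form` is the STEP-free word and `attn` records the monotonically non-decreasing attention positions, which is exactly the hard monotonic schema.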
{
"text": "In our experiments we computed counts of looped sequences in generated word forms during model evaluation rounds carried out after each epoch of model training. We classified a generated character sequence as looped if it satisfies both of the following conditions: (1) the sequence contains at least 3 repeated instances of some character subsequence at its very end, and (2) the total length of those repeated subsequences reaches at least 8 characters. While applying this criterion, we considered predicted sequences in their alphabetical form, with all STEP characters stripped off. 1 We hypothesized that the looping is primarily caused by merging of decoder states relevant to different word positions. Therefore, the introduction of variables that are guaranteed to be different at distinct stages of output word form production should reduce the looped prediction rate. The presence of such a variable would facilitate distinguishing states that correspond to different parts of the generated word, even if the closely surrounding character sequences are similar. To implement this idea, we introduced an extra decoder output that is trained to always increase while new output characters are produced. More specifically, we added an extra output r and an extra input r\u0302 to the decoder. To ensure that r increases gradually while target word characters are generated, we modified the calculation of the total loss in model training, adding an extra (hinge-like) term as follows:",
"cite_spans": [
{
"start": 602,
"end": 603,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
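The two-part looping criterion above (at least 3 trailing repeats of some block, jointly covering at least 8 characters) can be made concrete with a short sketch. The function name and default thresholds are ours, chosen to mirror the stated conditions; it assumes STEP characters have already been stripped.

```python
def is_looped(seq, min_repeats=3, min_total=8):
    """Detect a repetitive loop at the very end of a generated sequence:
    some block repeated >= min_repeats consecutive times at the end,
    with the repeats jointly covering >= min_total characters."""
    n = len(seq)
    for period in range(1, n // min_repeats + 1):
        block = seq[n - period:]
        # count how many consecutive copies of `block` terminate the sequence
        repeats, i = 0, n
        while i >= period and seq[i - period:i] == block:
            repeats += 1
            i -= period
        if repeats >= min_repeats and repeats * period >= min_total:
            return True
    return False
```

For example, "banananana" ends with the block "na" repeated 4 times (8 characters) and counts as looped, while "hahaha" has 3 repeats but only 6 trailing characters and does not.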
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = max(0, \u03b3 \u2022 (s \u2212 \u2206r))",
"eq_num": "(1)"
}
],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Here \u2206r is the difference between the current and previous r values. Initially, for every predicted word form, r is set to zero. Having observed the dynamics of the r value in preliminary training experiments, we chose \u03b3 = 50 and s = 0.05. To better explore the different factors, we tested combinations of the following setting variations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
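The extra hinge-like term of Equation 1 can be written out directly; this is a minimal sketch in plain Python (function names are ours), using the paper's reported \u03b3 = 50 and s = 0.05. The term is zero whenever r grows by at least s between steps and grows linearly in the shortfall otherwise.

```python
GAMMA, S = 50.0, 0.05  # values the authors chose from preliminary runs

def increase_penalty(r_prev, r_curr, gamma=GAMMA, s=S):
    """Hinge-like term L = max(0, gamma * (s - delta_r)) from Equation 1:
    penalizes the extra decoder output r when it fails to grow by >= s."""
    delta_r = r_curr - r_prev
    return max(0.0, gamma * (s - delta_r))

def sequence_penalty(r_values):
    """Sum the per-step penalties over a decoded sequence of r values;
    r starts at zero for every predicted word form."""
    total, r_prev = 0.0, 0.0
    for r in r_values:
        total += increase_penalty(r_prev, r)
        r_prev = r
    return total
```

In training, this sum would be added to the usual prediction loss, so gradients push the decoder to emit a gradually increasing r alongside each output character.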
{
"text": "\u2022 Feeding r back to r\u0302 vs. leaving it unused (letting r\u0302 = 0). We hypothesized that even when the increasing output itself isn't used, the computation of its value still affects the neural weights in the front layer of the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 Requiring r to increase vs. leaving it free.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 Scalar vs. vector r (in the latter case, a term according to Equation 1 is added for each component).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 Using an externally provided autoincremented value for r instead of an extra decoder output. Table 2 presents mode denotations we use in the paper.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
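The setting combinations above reduce to two decisions per mode: what the decoder receives as r\u0302 at each step, and whether r is trained to increase. A small sketch (mode letters as in Table 2; function names and the step-counter convention for mode 'i' are ours):

```python
# Per-mode wiring of the extra decoder input r_hat (mode letters per Table 2):
#   'n' none:      r unused,           r_hat = 0
#   'i' increment: r ablated,          r_hat = external auto-incremented counter
#   'f' feedback:  r unconstrained,    r_hat = previous r
#   'u' unused:    r trained to grow,  r_hat = 0
#   's' all set:   r trained to grow,  r_hat = previous r

def r_hat_input(mode, step, r_prev):
    """Value fed to the decoder as r_hat at decoding step `step`."""
    if mode == "i":
        return float(step)      # externally provided auto-incremented value
    if mode in ("f", "s"):
        return r_prev           # feed the decoder's own previous r back in
    return 0.0                  # modes 'n' and 'u' hold the input at zero

def wants_increase_loss(mode):
    """Only 'u' and 's' (and vector variants like '3s') add the Eq. 1 term."""
    return mode.lstrip("0123456789") in ("u", "s")
```

Vector modes ('3f', '3u', '3s') apply the same wiring per component, with the size prefix stripped when deciding whether the increase loss applies.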
{
"text": "We repeated experiments 15 times for each distinct setting. The result figures presented are normalized to a single experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The mode denotations, listed as denotation: goal for r, value fed back as r\u0302, are: n (\"none\"): none, zero; i (\"increment\"): r is ablated, incrementing; f (\"feedback\"): none, previous r; u (\"unused\"): increase, zero; s (\"all set\"): increase, previous r. Note: if r is a vector, its size is added before the mode symbol: '3f', '3u', '3s'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The plots given in Fig. 1 present counts of looped predictions at different epochs for the two datasets used (Nen and Russian). 2 It can be observed that 2 The curves shown at Fig. 1, 2 are generated by a polynomial smoothing procedure from a dataset with high variance. They may expose some irrelevant artifacts, for example, they fall to negative count values at some points. training a model with increasing r (modes 's', '3s') demonstrates significantly lower rates of looped word generation compared to the baseline mode ('n'). This is true for almost all considered epochs. One may also note that the 'u' mode yields results comparable to ones obtained with the 's' mode. This fact means that the presence of gradually incrementing decoder output is helpful for fighting looping even when the output isn't used. However, if the output is free of constraints and is fed back to the decoder (mode 'f'), the effect is mostly negative. Fig. 1 demonstrates the results of the same kind for the modes that occur to be less looping-prone than the baseline mode. When its components weren't trained to gradually increase (mode '3f'), a vector of 3 feedback values drastically increased looping rate at all epochs. If a vector of 3 increasing components was produced but wasn't fed back as input, the results were still negative. This is surprising because the result for a respective scalar mode ('u') is positive. Table 3 shows average looping counts for the 'later' epochs (15..34). Those epochs are more significant for the final quality assessment because maximum accuracy is usually achieved at one of them, so they have relatively high probability of producing the best model. Also, the table displays looping counts observed at epochs yielding best prediction accuracy as measured at a respective development set. The figures demonstrate that using modes with gradually increasing r ('s', '3s', 'u', 'i') yields significant reduction of looping rate. 
The only exception is mode '3u', which causes an increase of the rate. As for the 'f' and, especially, '3f' modes (feeding an output back without a requirement to grow), they may cause an unacceptably high frequency of looped sequence generation. Overall, the numbers are in line with the trends shown in the figures.",
"cite_spans": [
{
"start": 154,
"end": 155,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 19,
"end": 25,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 176,
"end": 185,
"text": "Fig. 1, 2",
"ref_id": "FIGREF0"
},
{
"start": 938,
"end": 944,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 1413,
"end": 1420,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Increasing the dimensionality of extra decoder output sometimes yields an improvement ('3s' mode) but generally the results suggest that vector size is a factor causing looping rate increase. Finally, scalar seems to be more preferable than vector. Table 4 shows prediction accuracy figures achieved in the experiments. For each training run, the epoch which produced the highest prediction accuracy against the development set was selected. Then, an average over repeated similar experiments was calculated. According to the figures, 's' mode yields a notable improvement of accuracy. In contrast, sticking to the 'f' mode causes a dramatic decline of accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We have found strong evidence that the presence of a decoder output which is trained to progressively increase reduces the average rate of looping sequences several-fold. In most cases the positive effect is more significant if this output is fed back to the decoder, although there are exceptions of minor magnitude. Attempts to scale the effect further by increasing the dimensionality of progressively increasing variables are only sometimes successful. Considering the average explored case, the 's' mode seems to be the most effective and consistent in fighting looping. We also observed that the presence of an auto-incremented decoder input (mode 'i') leads to a looping rate reduction, but the effect is stronger if the decoder itself is trained to produce a gradually increasing value. Thus, the practical recommendations arising from our research are (1) adding an extra scalar output to the decoder, (2) encouraging it to increase by including a corresponding term in the training loss, and (3) feeding it back as a decoder input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Conceptually, it isn't surprising that the presence of an increasing variable helps the decoder to distinguish states related to different phases of output word production and in this way reduces the probability of falling into a loop. Still, the details of this mechanism need further exploration. In our current work we made no attempt to enforce the usage of the new variable in any way; we only made such usage potentially possible. A detailed exploration of its effect on the learning process is a subject of further research. And, what is even more practically important, we still need to find out how the system design may be changed to incorporate progressive variables in a more explicit, controllable and efficient way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The introduction of feedback variables adds elements of RNN architecture to the decoder. We observed highly negative results when such variable values weren't constrained (modes 'f' and, especially, '3f'). This indirectly suggests that an RNN schema may not be a good solution for a decoder in terms of looping prevention. Holtzman et al. (2019) associated the problem with a more general degeneration issue that also includes the production of blank and incoherent text. The authors observed that the issue appears in maximization-based decoding methods such as beam search. As a remedy, they proposed a nucleus sampling technique that truncates the unreliable tail of the probability distribution in the decoder part. Kulikov et al. (2019) also compared two search strategies, greedy and beam, proposing a novel iterative beam search strategy that increases the diversity of the candidate responses. Contrary to that, Welleck et al. (2019) suggest that the problem cannot be solved by making beam search predictions more diverse. Instead, they propose focusing on the likelihood loss, and introduce \"unlikelihood training\" that assigns lower probability to unlikely generations. Finally, following earlier observations on chaotic states w.r.t. model parameters in Bertschinger and Natschl\u00e4ger (2004) and Laurent and von Brecht (2016) , Vakilipourtakalou and Mou (2020) study chaotic behavior (Kathleen et al., 1996) in RNNs that are defined as iterative maps (Strogatz, 1994) .",
"cite_spans": [
{
"start": 171,
"end": 204,
"text": "(modes 'f' and, especially, '3f')",
"ref_id": null
},
{
"start": 321,
"end": 343,
"text": "Holtzman et al. (2019)",
"ref_id": "BIBREF11"
},
{
"start": 1256,
"end": 1291,
"text": "Bertschinger and Natschl\u00e4ger (2004)",
"ref_id": "BIBREF1"
},
{
"start": 1296,
"end": 1325,
"text": "Laurent and von Brecht (2016)",
"ref_id": "BIBREF16"
},
{
"start": 1384,
"end": 1407,
"text": "(Kathleen et al., 1996)",
"ref_id": "BIBREF13"
},
{
"start": 1451,
"end": 1467,
"text": "(Strogatz, 1994)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We proposed and explored a simple technique that reduces the rate of repetitive-loop occurrence in neural decoder output. Our work was inspired by the hypothesis that looping effects in a neural decoder are caused by its inability to distinguish states related to different positions in a generated word. We provide both a simple, universal practical solution and a promising direction for further research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We didn't consider possible irregular (chaotic) looping cases as they are extremely rare.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Morphological inflection generation with hard monotonic attention",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2004--2015",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1183"
]
},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Morphologi- cal inflection generation with hard monotonic atten- tion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2004-2015, Vancouver, Canada.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Real-time computation at the edge of chaos in recurrent neural networks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Bertschinger",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Natschl\u00e4ger",
"suffix": ""
}
],
"year": 2004,
"venue": "Neural computation",
"volume": "16",
"issue": "7",
"pages": "1413--1436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Bertschinger and Thomas Natschl\u00e4ger. 2004. Real-time computation at the edge of chaos in recurrent neural networks. Neural computation, 16(7):1413-1436.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "G\u00e9raldine",
"middle": [],
"last": "Walther",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL SIGMORPHON 2017",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/K17-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K\u00fcbler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLL-SIGMORPHON 2017 shared task: Univer- sal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Shared Task: Universal Morphological Reinflection, pages 1-30, Vancouver. Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shared Task: Universal Morphological Reinflection, pages 1-30, Vancouver. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The SIGMORPHON 2016 shared Task-Morphological reinflection",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "10--22",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared Task- Morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphol- ogy, pages 10-22, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parameter estimation for probabilistic finite-state transducers",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 2002. Parameter estimation for proba- bilistic finite-state transducers. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 1-8.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Quantification in nen",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Evans",
"suffix": ""
}
],
"year": 2017,
"venue": "Handbook of Quantifiers in Natural Language",
"volume": "II",
"issue": "",
"pages": "571--607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Evans. 2017. Quantification in nen. In Hand- book of Quantifiers in Natural Language: Volume II, pages 571-607. Springer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Waiting for the Word: Distributed Deponency and the Semantic Interpretation of Number in the Nen Verb",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Evans",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "100--123",
"other_ids": {
"DOI": [
"http://www.jstor.org/stable/10.3366/j.ctvggx4p0.8"
]
},
"num": null,
"urls": [],
"raw_text": "Nicholas Evans. 2019. Waiting for the Word: Dis- tributed Deponency and the Semantic Interpretation of Number in the Nen Verb, pages 100-123. Edin- burgh University Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Weird inflects but OK: Making sense of morphological generation errors",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "Arya",
"middle": [
"D"
],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Markowska",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "140--151",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1014"
]
},
"num": null,
"urls": [],
"raw_text": "Kyle Gorman, Arya D. McCarthy, Ryan Cotterell, Ekaterina Vylomova, Miikka Silfverberg, and Magdalena Markowska. 2019. Weird inflects but OK: Making sense of morphological generation errors. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 140-151, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09751"
]
},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Regular models of phonological rule systems",
"authors": [
{
"first": "Ronald",
"middle": [
"M"
],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational linguistics",
"volume": "20",
"issue": "3",
"pages": "331--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald M Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational linguistics, 20(3):331-378.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "CHAOS: an introduction to dynamical systems",
"authors": [
{
"first": "Kathleen",
"middle": [
"T"
],
"last": "Alligood",
"suffix": ""
},
{
"first": "Tim",
"middle": [
"D"
],
"last": "Sauer",
"suffix": ""
},
{
"first": "James",
"middle": [
"A"
],
"last": "Yorke",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen T. Alligood, Tim D. Sauer, and James A. Yorke. 1996. CHAOS: an introduction to dynamical systems. Springer, New York, NY, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Two-level morphology: A general computational model for word-form recognition and production",
"authors": [
{
"first": "Kimmo",
"middle": [],
"last": "Koskenniemi",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimmo Koskenniemi. 1983. Two-level morphology: A general computational model for word-form recognition and production, volume 11. University of Helsinki, Department of General Linguistics, Helsinki, Finland.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Importance of search and evaluation strategies in neural dialogue modeling",
"authors": [
{
"first": "Ilia",
"middle": [],
"last": "Kulikov",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "76--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilia Kulikov, Alexander Miller, Kyunghyun Cho, and Jason Weston. 2019. Importance of search and evaluation strategies in neural dialogue modeling. In Proceedings of the 12th International Conference on Natural Language Generation, pages 76-87.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A recurrent neural network without chaos",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Laurent",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "von Brecht",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.06212"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Laurent and James von Brecht. 2016. A recurrent neural network without chaos. arXiv preprint arXiv:1612.06212.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Finite-state transducers in language and speech processing",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational linguistics",
"volume": "23",
"issue": "2",
"pages": "269--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri. 1997. Finite-state transducers in language and speech processing. Computational linguistics, 23(2):269-311.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "To compress or not to compress? A finite-state approach to Nen verbal morphology",
"authors": [
{
"first": "Saliha",
"middle": [],
"last": "Muradoglu",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Suominen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "207--213",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-srw.28"
]
},
"num": null,
"urls": [],
"raw_text": "Saliha Muradoglu, Nicholas Evans, and Hanna Suominen. 2020. To compress or not to compress? A finite-state approach to Nen verbal morphology. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 207-213, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "412--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Anders S\u00f8gaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412-418.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Nonlinear dynamics and chaos: with applications to physics, biology, chemistry and engineering",
"authors": [
{
"first": "Steven",
"middle": [
"H"
],
"last": "Strogatz",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven H Strogatz. 1994. Nonlinear dynamics and chaos: with applications to physics, biology, chemistry and engineering.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Noise-aware character alignment for bootstrapping statistical machine transliteration from bilingual corpora",
"authors": [
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "204--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katsuhito Sudoh, Shinsuke Mori, and Masaaki Nagata. 2013. Noise-aware character alignment for bootstrapping statistical machine transliteration from bilingual corpora. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 204-209.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104-3112.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "How chaotic are recurrent neural networks",
"authors": [
{
"first": "Pourya",
"middle": [],
"last": "Vakilipourtakalou",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.13838"
]
},
"num": null,
"urls": [],
"raw_text": "Pourya Vakilipourtakalou and Lili Mou. 2020. How chaotic are recurrent neural networks? arXiv preprint arXiv:2004.13838.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Looping counts observed in training a hard attention model on morphological datasets for Nen (upper plot) and Russian (lower plot)",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Looping count increase observed at some modes on a Nen language morphological dataset (see Figure 1 for other modes)",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Nen Russian</td></tr><tr><td>Training samples</td><td>1589</td><td>1000</td></tr><tr><td colspan=\"2\">Development samples 227</td><td>1000</td></tr></table>",
"html": null,
"num": null
},
"TABREF1": {
"text": "Dataset sizes",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF2": {
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"4\">: A summary of explored modes</td></tr><tr><td colspan=\"2\">mode nen</td><td>ru</td><td colspan=\"2\">mode nen</td><td>ru</td></tr><tr><td>n</td><td colspan=\"2\">0.040 2.313</td><td>i</td><td>0.020 0.033</td></tr><tr><td/><td>0</td><td>1.267</td><td/><td>0</td><td>0</td></tr><tr><td>s</td><td colspan=\"2\">0.017 0.017</td><td>3s</td><td>0.030 0.003</td></tr><tr><td/><td>0</td><td>0</td><td/><td>0.066</td><td>0</td></tr><tr><td>u</td><td colspan=\"2\">0.010 0.027</td><td>3u</td><td>0.810 39.87</td></tr><tr><td/><td>0</td><td>0.133</td><td/><td>0</td><td>24.13</td></tr><tr><td>f</td><td colspan=\"2\">0.087 5.770</td><td>3f</td><td>5.823 107.2</td></tr><tr><td/><td colspan=\"2\">0.066 2.800</td><td/><td>2.667 114.7</td></tr></table>",
"html": null,
"num": null
},
"TABREF3": {
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"4\">: Average looping counts (per epoch) observed</td></tr><tr><td colspan=\"2\">at epochs 15..34</td><td/><td/></tr><tr><td colspan=\"2\">mode nen</td><td>ru</td><td colspan=\"2\">mode nen</td><td>ru</td></tr><tr><td>n</td><td colspan=\"2\">0.725 0.717</td><td>i</td><td>0.726 0.724</td></tr><tr><td>s</td><td colspan=\"2\">0.732 0.750</td><td>3s</td><td>0.716 0.753</td></tr><tr><td>u</td><td colspan=\"2\">0.727 0.728</td><td>3u</td><td>0.704 0.669</td></tr><tr><td>f</td><td colspan=\"2\">0.432 0.451</td><td>3f</td><td>0.668 0.574</td></tr></table>",
"html": null,
"num": null
},
"TABREF4": {
"text": "Development set accuracy achieved at different modes",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}