| { |
| "paper_id": "Q18-1045", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:10:03.400741Z" |
| }, |
| "title": "Recurrent Neural Networks in Linguistic Theory: Revisiting Pinker and Prince (1988) and the Past Tense Debate", |
| "authors": [ |
| { |
| "first": "Christo", |
| "middle": [], |
| "last": "Kirov", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University Baltimore", |
| "location": { |
| "region": "MD" |
| } |
| }, |
| "email": "ckirov1@jhu.edu" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University", |
| "location": { |
| "settlement": "Baltimore", |
| "region": "MD" |
| } |
| }, |
| "email": "ryan.cotterell@jhu.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Can advances in NLP help advance cognitive modeling? We examine the role of artificial neural networks, the current state of the art in many common NLP tasks, by returning to a classic case study. In 1986, Rumelhart and McClelland famously introduced a neural architecture that learned to transduce English verb stems to their past tense forms. Shortly thereafter, in 1988, Pinker and Prince presented a comprehensive rebuttal of many of Rumelhart and McClelland's claims. Much of the force of their attack centered on the empirical inadequacy of the Rumelhart and McClelland model. Today, however, that model is severely outmoded. We show that the Encoder-Decoder network architectures used in modern NLP systems obviate most of Pinker and Prince's criticisms without requiring any simplification of the past tense mapping problem. We suggest that the empirical performance of modern networks warrants a reexamination of their utility in linguistic and cognitive modeling.", |
| "pdf_parse": { |
| "paper_id": "Q18-1045", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Can advances in NLP help advance cognitive modeling? We examine the role of artificial neural networks, the current state of the art in many common NLP tasks, by returning to a classic case study. In 1986, Rumelhart and McClelland famously introduced a neural architecture that learned to transduce English verb stems to their past tense forms. Shortly thereafter, in 1988, Pinker and Prince presented a comprehensive rebuttal of many of Rumelhart and McClelland's claims. Much of the force of their attack centered on the empirical inadequacy of the Rumelhart and McClelland model. Today, however, that model is severely outmoded. We show that the Encoder-Decoder network architectures used in modern NLP systems obviate most of Pinker and Prince's criticisms without requiring any simplification of the past tense mapping problem. We suggest that the empirical performance of modern networks warrants a reexamination of their utility in linguistic and cognitive modeling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In their famous 1986 opus, Rumelhart and McClelland (R&M) describe a neural network capable of transducing English verb stems to their past tense. The strong cognitive claims in the article fomented a veritable brouhaha in the linguistics community and eventually led to the highly influential rebuttal of Pinker and Prince (1988) (P&P) . P&P highlighted the extremely poor empirical performance of the R&M model, and pointed out a number of theoretical issues with the model, which they suggested would apply to any neural network, contemporarily branded connectionist approaches. Their critique was so successful that many linguists and cognitive scientists to this day do not consider neural networks a viable approach to modeling linguistic data and human cognition.", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 40, |
| "text": "Rumelhart and", |
| "ref_id": null |
| }, |
| { |
| "start": 306, |
| "end": 330, |
| "text": "Pinker and Prince (1988)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 331, |
| "end": 336, |
| "text": "(P&P)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the field of natural language processing (NLP), however, neural networks have experienced a renaissance. With novel architectures, large new data sets available for training, and access to extensive computational resources, neural networks now constitute the state of the art in many NLP tasks. However, NLP as a discipline has a distinct practical bent and more often concerns itself with the large-scale engineering applications of language technologies. As such, the field's findings are not always considered relevant to the scientific study of language (i.e., the field of linguistics). Recent work, however, has indicated that this perception is changing, with researchers, for example, probing the ability of neural networks to learn syntactic dependencies like subject-verb agreement (Linzen et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 795, |
| "end": 816, |
| "text": "(Linzen et al., 2016)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Moreover, in the domains of morphology and phonology, both NLP practitioners and linguists have considered virtually identical problems, seemingly unbeknownst to each other. For example, both computational and theoretical morphologists are concerned with how different inflected forms in the lexicon are related and how one can learn to generate such inflections from data. Indeed, the original R&M network focuses on such a generation task, namely, generating English past tense forms from their stems. R&M's network, however, was severely limited and did not generalize correctly to held-out data. In contrast, state-of-the art morphological generation networks used in NLP, built from the modern evolution of recurrent neural networks (RNNs) explored by Elman (1990) and others, solve the same problem almost perfectly (Cotterell et al., 2016) . This level of performance on a cognitively relevant problem suggests that it is time to consider further incorporating network modeling into the study of linguistics and cognitive science.", |
| "cite_spans": [ |
| { |
| "start": 757, |
| "end": 769, |
| "text": "Elman (1990)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 822, |
| "end": 846, |
| "text": "(Cotterell et al., 2016)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Crucially, we wish to sidestep one of the issues that framed the original debate between P&P and R&M-whether or not neural models learn and use \"rules.\" From our perspective, any system that picks up systematic, predictable patterns in data may be referred to as rule-governed. We focus instead on an empirical assessment of the ability of a modern state-of-the-art neural architecture to learn linguistic patterns, asking the following questions: (i) Does the learner induce the full set of correct generalizations about the data? Given a range of novel inputs, to what extent does it apply the correct transformations to them? (ii) Does the behavior of the learner mimic humans? Are the errors human-like?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we run new experiments examining the ability of the Encoder-Decoder architecture developed for machine translation (Bahdanau et al., 2014; Sutskever et al., 2014) to learn the English past tense. The results suggest that modern nets absolutely meet the first criterion, and often meet the second. Furthermore, they do this given limited prior knowledge of linguistic structure: The networks we consider do not have phonological features built into them and must instead learn their own representations for input phonemes. The design and performance of these networks invalidate many of the criticisms in Pinker and Prince (1988) . We contend that, given the gains displayed in this case study, which is characteristic of problems in the morpho-phonological domain, researchers across linguistics and cognitive science should consider evaluating modern neural architectures as part of their modeling toolbox.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 152, |
| "text": "(Bahdanau et al., 2014;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 153, |
| "end": 176, |
| "text": "Sutskever et al., 2014)", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 618, |
| "end": 642, |
| "text": "Pinker and Prince (1988)", |
| "ref_id": "BIBREF45" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper is structured as follows. Section 2 describes the problem under consideration, the English past tense. Section 3 lays out the original Rumelhart and McClelland model from 1986 in modern machine-learning parlance, and compares it to a state-of-the-art Encoder-Decoder architecture. A historical perspective on alternative approaches to modeling, both neural and nonneural, is provided in Section 4. The empirical performance of the Encoder-Decoder architecture is evaluated in Section 5. Section 6 provides a summary of which of Pinker and Prince's original criticisms have effectively been resolved, and which ones still require further consideration. Concluding remarks follow.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Many languages mark words with syntacticosemantic distinctions. For instance, English marks the distinction between present and past tense verbs, for example, walk and walked. English verbal mor- phology is relatively impoverished, distinguishing maximally five forms for the copula to be and only three forms for most verbs. In this work, we consider learning to conjugate the English verb forms, rendered as phonological strings. As it is the focus of the original R&M study, we focus primarily on the English past tense formation. Both regular and irregular patterning exist in English. Orthographically, the canonical regular suffix is -ed, which, phonologically, may be rendered as one of three phonological strings:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The English Past Tense", |
| "sec_num": "2" |
| }, |
| { |
| "text": "[-Id], [-d], or [-t].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The English Past Tense", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The choice among the three is deterministic, depending only on the phonological properties of the previous segment. English selects [-Id] where the previous phoneme is a [t] (go \u2192went)), or exist in sub-regular islands defined by processes like ablaut (e.g., sing \u2192sang) that may contain several verbs (Nelson, 2010) ; see Table 1 .", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 137, |
| "text": "[-Id]", |
| "ref_id": null |
| }, |
| { |
| "start": 170, |
| "end": 173, |
| "text": "[t]", |
| "ref_id": null |
| }, |
| { |
| "start": 302, |
| "end": 316, |
| "text": "(Nelson, 2010)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 323, |
| "end": 330, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The English Past Tense", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Single vs. Dual Route. A frequently discussed cognitive aspect of past tense processing concerns whether or not irregular forms have their own processing pipeline in the brain. Pinker and Prince (1988) proposed separate modules for regular and irregular verbs; regular verbs go through a general, rule-governed transduction mechanism, and exceptional irregulars are produced via simple memory look-up. 1 While some studies (e.g., Marslen-Wilson and Tyler, 1997; Ullman et al., 1997) provide corroborating evidence from speakers with selective impairments to regular or irregular verb production, others have called these results into doubt (Stockall and Marantz, 2006) . From the perspective of this paper, a complete model of the English past tense should cover both regular and irregular transformations. The neural network approaches we advocate for achieve this goal, but do not clearly fall into either the single or dualroute category-internal computations performed by each network remain opaque, so we cannot at present make a claim whether two separable computation paths are present.", |
| "cite_spans": [ |
| { |
| "start": 177, |
| "end": 201, |
| "text": "Pinker and Prince (1988)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 430, |
| "end": 461, |
| "text": "Marslen-Wilson and Tyler, 1997;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 462, |
| "end": 482, |
| "text": "Ullman et al., 1997)", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 640, |
| "end": 668, |
| "text": "(Stockall and Marantz, 2006)", |
| "ref_id": "BIBREF54" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The English Past Tense", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The English past tense is of considerable theoretical interest because of the now well-studied acquisition patterns of children. As first shown by Berko (1958) in the so-called wug-test, knowledge of English morphology cannot be attributed solely to memorization. Indeed, both adults and children are fully capable of generalizing the patterns to novel words (e.g., [w2g] \u2192[w2gd] (wug \u2192wugged)). During acquisition, only a few types of errors are common; children rarely blend regular and irregular forms-for example, the past tense of come is either produced as comed or came, but rarely camed (Pinker, 1999) .", |
| "cite_spans": [ |
| { |
| "start": 147, |
| "end": 159, |
| "text": "Berko (1958)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 595, |
| "end": 609, |
| "text": "(Pinker, 1999)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acquisition of the Past Tense", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Acquisition Patterns for Irregular Verbs. It is widely claimed that children learning the past tense forms of irregular verbs exhibit a \"U-shaped\" learning curve. At first, they correctly conjugate irregular forms (e.g., come \u2192came), then they regress during a period of overregularization producing the past tense as comed as they acquire the general past tense formation. Finally, they learn to produce both the regular and irregular forms. Plunkett and Marchman, however, observed a more nuanced form of this behavior. Rather than a macro U-shaped learning process that applies globally and uniformly to all irregulars, they noted that many irregulars oscillate between correct and overregularized productions (Marchman, 1988) . These oscillations, which Plunkett and Marchman refer to as a micro U-shape, further apply at different rates for different verbs (Plunkett and Marchman, 1991) . Interestingly, although the exact pattern of irregular acquisition may be disputed, children rarely overirregularize, that is, misconjugate a regular verb as if it were irregular, such as ping \u2192pang.", |
| "cite_spans": [ |
| { |
| "start": 713, |
| "end": 729, |
| "text": "(Marchman, 1988)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 862, |
| "end": 891, |
| "text": "(Plunkett and Marchman, 1991)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acquisition of the Past Tense", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In this section, we compare the original R&M architecture from 1986 to today's state-of-the-art neural architecture for morphological transduction, the Encoder-Decoder model. Rumelhart and McClelland (1986) For many linguists, the face of neural networks to this day remains the work of R&M. Here, we describe in detail their original architecture, using modern machine learning parlance whenever possible. Fundamentally, R&M were interested in designing a sequence-to-sequence network for variable-length input using a small feed-forward network. From an NLP perspective, this work constitutes one of the first attempts to design a network for a task reminiscent of popular NLP tasks today that require variable-length input (e.g., partof-speech tagging, parsing, and generation). We can describe R&M's representations using the modern linear-algebraic notation standard among researchers in neural networks. First, we assume that the language under consideration contains a fixed set of phonemes \u03a3, plus an edge symbol # marking the beginning and end of words. Then, we construct the set of all Wickelphones \u03a6 and the set of all Wickelfeatures F by enumeration. The first layer of the R&M neural network consists of two deterministic functions: (i) \u03c6 : \u03a3 * \u2192 B |\u03a6| and (ii) f :", |
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 206, |
| "text": "Rumelhart and McClelland (1986)", |
| "ref_id": "BIBREF51" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1986 vs. Today", |
| "sec_num": "3" |
| }, |
| { |
| "text": "B |\u03a6| \u2192 B |F | ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.1", |
| "sec_num": null |
| }, |
| { |
| "text": "where we define B = {\u22121, 1}. The first function \u03c6 maps a phoneme string to the set of Wickelphones that fire, as it were, on that string; for example,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.1", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03c6 ([#kaet#]) = {[#kae], [kaet], [aet#]}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.1", |
| "sec_num": null |
| }, |
| { |
| "text": "The output subset of \u03a6 may be represented by a binary vector of length |\u03a6|, where a 1 means that the Wickelphone appears in the string and a \u22121 that it does not. 2 The second function f maps a set of Wickelphones to its corresponding set of Wickelfeatures.", |
| "cite_spans": [ |
| { |
| "start": 162, |
| "end": 163, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.1", |
| "sec_num": null |
| }, |
| { |
| "text": "Pattern Associator Network. Here we define the complete network of R&M. We denote strings of phonemes as x \u2208 \u03a3 * , where x i is the i th phoneme in a string. Given source and target phoneme strings x (i) , y (i) \u2208 \u03a3 * , R&M optimize the following objective, a sum over the individual losses for each of the i = 1, ..., N training items:", |
| "cite_spans": [ |
| { |
| "start": 200, |
| "end": 203, |
| "text": "(i)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.1", |
| "sec_num": null |
| }, |
| { |
| "text": "N i=1 max 0, \u2212\u03c0(y (i) ) W \u03c0(x (i) ) + b 1 (1) where max{\u2022} is taken point-wise, is point-wise multiplication, W \u2208 R |F |\u00d7|F | is a projection ma- trix, b \u2208 R |F|", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.1", |
| "sec_num": null |
| }, |
| { |
| "text": "is a bias term, and \u03c0 = \u03c6 \u2022 f is the composition of the Wickelphone and Wickelfeature encoding functions. Using modern terminology, the architecture is a linear model for a multi-label classification problem (Tsoumakas and Katakis, 2006) : The goal is to predict the set of Wickelfeatures in the target form y (i) given the input form x (i) using a point-wise perceptron loss (hinge loss without a margin); that is, a binary perceptron predicts each feature independently, but there is one set of parameters {W , b}. The total loss incurred is the sum of the per-feature loss, hence the use of the L 1 norm. The model is trained with stochastic sub-gradient descent (the perceptron update rule) (Rosenblatt, 1958; Bertsekas, 2015 ) with a fixed learning rate. 3 Later work augmented the architecture with multiple layers and nonlinearities (Marcus, 2001, Table 3 .3).", |
| "cite_spans": [ |
| { |
| "start": 223, |
| "end": 237, |
| "text": "Katakis, 2006)", |
| "ref_id": null |
| }, |
| { |
| "start": 310, |
| "end": 313, |
| "text": "(i)", |
| "ref_id": null |
| }, |
| { |
| "start": 695, |
| "end": 713, |
| "text": "(Rosenblatt, 1958;", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 714, |
| "end": 729, |
| "text": "Bertsekas, 2015", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 840, |
| "end": 862, |
| "text": "(Marcus, 2001, Table 3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.1", |
| "sec_num": null |
| }, |
| { |
| "text": "Decoding. Decoding the R&M network necessitates solving a tricky optimization problem. Given an input phoneme string x (i) , we then must find the corresponding y \u2208 \u03a3 * that minimizes", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.1", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03c0(y ) \u2212 threshold W \u03c0(x (i) ) + b 0 (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.1", |
| "sec_num": null |
| }, |
| { |
| "text": "where threshold is a step function that maps all non-positive reals to \u22121 and all positive reals to 1. In other words, we seek the phoneme string y that shares the most features with the maximum a posteriori decoded binary vector. This problem is intractable, and so R&M provide an approximation. For each test stem, they hand-selected a set of likely past-tense candidate forms, for example, good candidates for the past tense of break would be {break, broke, brake, braked}, and choose the form with Wickelfeatures closest to the network's output. This manual approximate decoding procedure is not intended to be cognitively plausible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.1", |
| "sec_num": null |
| }, |
| { |
| "text": "Architectural Limitations. R&M used Wickelphones and Wickelfeatures in order to help with generalization and limit their network to a tractable size. However, this came at a significant cost to the network's ability to represent unique strings-the encoding is lossy: Two words may have the same set of Wickelphones or features. The easiest way to see this shortcoming is to consider morphological reduplication, which is common in many of the world's languages. P&P provide an example from the Australian language of Oykangand, which distinguishes between algal 'straight' and algalgal 'ramrod straight'; both of these strings have the (Jesperson, 1942) .", |
| "cite_spans": [ |
| { |
| "start": 636, |
| "end": 653, |
| "text": "(Jesperson, 1942)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.1", |
| "sec_num": null |
| }, |
| { |
| "text": "The NLP community has recently developed an analogue to the past-tense generation task originally considered by R&M: morphological paradigm completion (Durrett and DeNero, 2013; Ahlberg et al., 2015; Cotterell et al., 2015; Nicolai et al., 2015; Faruqui et al., 2016) . The goal is to train a model capable of mapping the lemma (stem in the case of English) to each form in the paradigm. In the case of English, the goal would be to map a lemma, for example, walk, to its past-tense word walked as well as its gerund and third person present singular, walking and walks, respectively. This task generalizes the R&M setting in that it requires learning more mappings than simply lemma to past tense.", |
| "cite_spans": [ |
| { |
| "start": 151, |
| "end": 177, |
| "text": "(Durrett and DeNero, 2013;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 178, |
| "end": 199, |
| "text": "Ahlberg et al., 2015;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 200, |
| "end": 223, |
| "text": "Cotterell et al., 2015;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 224, |
| "end": 245, |
| "text": "Nicolai et al., 2015;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 246, |
| "end": 267, |
| "text": "Faruqui et al., 2016)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encoder-Decoder Architectures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "By definition, any system that solves the more general morphological paradigm completion task must also be able to solve the original R&M task. As we wish to highlight the strongest currently available alternative to R&M, we focus on the state of the art in morphological paradigm completion: the Encoder-Decoder network architecture (ED) (Cotterell et al., 2016) . This architecture consists of two RNNs coupled together by an attention mechanism. The encoder RNN reads each symbol in the input string one at a time, first assigning it a unique embedding, then processing that embedding to produce a representation of the phoneme given the rest of the phonemes in the string. The decoder RNN produces a sequence of output phonemes one at a time, using the attention mechanism to peek back at the encoder states as needed. Decoding ends when a halt symbol is output. Formally, the ED architecture encodes the probability distribution over forms", |
| "cite_spans": [ |
| { |
| "start": 339, |
| "end": 363, |
| "text": "(Cotterell et al., 2016)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encoder-Decoder Architectures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p(y | x) = N i=1 p(y i | y 1 , . . . , y i\u22121 , c i ) (3) = N i=1 g(y i\u22121 , s i , c i )", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Encoder-Decoder Architectures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where g is a non-linear function (in our case it is a multi-layer perceptron), s i is the hidden state of the decoder RNN, y = (y 1 , . . . , y N ) is the output sequence (a sequence of N = |y| characters), and finally c i is an attention-weighted sum of the the encoder RNN hidden states h i , using the attention weights \u03b1 k (s i\u22121 ) that are computed based on the previous decoder hidden state:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encoder-Decoder Architectures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "c i = |x| k=1 \u03b1 k (s i\u22121 )h k .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encoder-Decoder Architectures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In contrast to the R&M network, the ED network optimizes the log-likelihood of the training data, that is, M i=1 log p(y (i) | x (i) ) for i = 1, ..., M training items. We refer the reader to Bahdanau et al. (2014) for the complete architectural specification of the specific ED model we apply in this paper.", |
| "cite_spans": [ |
| { |
| "start": 192, |
| "end": 214, |
| "text": "Bahdanau et al. (2014)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encoder-Decoder Architectures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Theoretical Improvements. Although there are a number of possible variants of the ED architecture (Luong et al., 2015) , 4 they all share several critical features that make up for many of the theoretical shortcomings of the feed-forward R&M model. The encoder reads in each phoneme sequentially, preserving identity and order and allowing any string of arbitrary length to receive a unique representation. Despite this encoding, a flexible notion of string similarity is also maintained, as the ED model learns a fixed embedding for each phoneme that forms part of the representation of all strings that share the phoneme. When the network encodes [sIlt] and [slIt], it uses the same phoneme embeddings-only the order changes. Finally, the decoder permits sampling and scoring arbitrary length fully formed strings in polynomial time (forward sampling), so there is no need to determine which string a non-unique set of Wickelfeatures represents. However, we note that decoding the 1-best string from a sequence-to-sequence model is likely NP-hard (1-best string decoding is even hard for weighted finite-state transducers [Goodman, 1998] ).", |
| "cite_spans": [ |
| { |
| "start": 98, |
| "end": 118, |
| "text": "(Luong et al., 2015)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 1124, |
| "end": 1139, |
| "text": "[Goodman, 1998]", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encoder-Decoder Architectures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Multi-Task Capability. A single ED model is easily adapted to multi-task learning (Caruana, 1997; Collobert et al., 2011) , where each task is a single transduction (e.g., stem \u2192 past). Note that R&M would need a separate network for each transduction (e.g., stem \u2192 gerund and stem \u2192 past participle). In fact, the current state of the art in NLP is to learn all morphological transductions in a paradigm jointly. The key insight is to construct a single network p(y | x, t) to predict all inflections, marking the transformation in the input string-that is, we feed the network the string \"w a l k <gerund>\" as input, augmenting the alphabet \u03a3 to include morphological descriptors. We refer to reader to Kann and Sch\u00fctze (2016) ", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 97, |
| "text": "(Caruana, 1997;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 98, |
| "end": 121, |
| "text": "Collobert et al., 2011)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 705, |
| "end": 728, |
| "text": "Kann and Sch\u00fctze (2016)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encoder-Decoder Architectures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In this section, we first describe direct follow-ups to the original 1986 R&M model, using various neural architectures. Then we review competing nonneural systems of context-sensitive rewrite rules in the style of the Sound Pattern of English (SPE) (Halle and Chomsky, 1968) , as favored by Pinker and Prince.", |
| "cite_spans": [ |
| { |
| "start": 250, |
| "end": 275, |
| "text": "(Halle and Chomsky, 1968)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "4.1 Follow-ups to Rumelhart and McClelland (1986) Over the Years", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 49, |
| "text": "Rumelhart and McClelland (1986)", |
| "ref_id": "BIBREF51" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Following R&M, a cottage industry devoted to cognitively plausible connectionist models of inflection learning sprouted in the linguistics and cognitive science literature. We provide a summary listing of the various proposals, along with broadbrush comparisons, in Table 2 . Although many of the approaches apply more modern feed-forward architectures than R&M, introducing multiple layers connected by nonlinear transformations, most continue to use feed-forward architectures with limited ability to deal with variablelength inputs and outputs and remain unable to produce and assign probability to arbitrary output strings.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 266, |
| "end": 273, |
| "text": "Table 2", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "MacWhinney and Leinbach (1991), Marchman (1991, 1993) , and Plunkett and Juola (1999) map phonological strings to phonological strings using feed-forward networks, but rather than turning to Wickelphones to imprecisely represent strings of any length, they use fixed-size input and output templates, with units representing each possible symbol at each input and output position. For example, Marchman (1991, 1993 ) simplify the past-tense mapping problem by only considering a language of artificially generated words of exactly three syllables and a limited set of constructed past-tense formation patterns. MacWhinney and Leinbach (1991) and Plunkett and Juola (1999) additionally modify the input template to include extra units marking particular transformations (e.g., past or gerund), enabling their network to learn multiple mappings.", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 53, |
| "text": "Marchman (1991, 1993)", |
| "ref_id": null |
| }, |
| { |
| "start": 60, |
| "end": 85, |
| "text": "Plunkett and Juola (1999)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 393, |
| "end": 413, |
| "text": "Marchman (1991, 1993", |
| "ref_id": null |
| }, |
| { |
| "start": 610, |
| "end": 640, |
| "text": "MacWhinney and Leinbach (1991)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 645, |
| "end": 670, |
| "text": "Plunkett and Juola (1999)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
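The fixed-size template scheme described above can be made concrete with a short sketch. This is our own illustration, not code from any of the cited models; the alphabet, template width (`SLOTS`), and function name are invented for the example:

```python
# One unit per possible symbol at each template position, as in the
# fixed-template models above. A 15-slot template over 26 letters needs
# 15 * 26 = 390 input units, and longer words simply cannot be represented.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
SLOTS = 15  # hypothetical template width

def encode_template(word):
    """One-hot encode each character into its slot; unused slots stay zero."""
    if len(word) > SLOTS:
        raise ValueError("word does not fit the fixed template")
    vec = [0] * (SLOTS * len(ALPHABET))
    for i, ch in enumerate(word):
        vec[i * len(ALPHABET) + ALPHABET.index(ch)] = 1
    return vec
```

The `ValueError` branch is the point: unlike a recurrent encoder, a fixed-width template has no representation at all for inputs longer than the template.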
| { |
| "text": "Some proposals simplify the problem even further, mapping fixed-size inputs into a small finite set of categories, solving a classification problem rather than a transduction problem. (Nakisa and Hahn, 1996; Hahn and Nakisa, 2000) classify German noun stems into their appropriate plural inflection classes. Plunkett and Nakisa (1997) do the same for Arabic stems. Hoeffner (1992) , Hare and Elman (1995) , and Cottrell and Plunkett (1994) also solve an alternative problem-mapping semantic representations (usually one-hot vectors with one unit per possible word type, and one unit per possible inflection) to phonological outputs. As these networks use unstructured semantic inputs to represent words, they must act as memories-the phonological content of any word must be memorized. This precludes generalization to word types that were not seen during training.", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 207, |
| "text": "(Nakisa and Hahn, 1996;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 208, |
| "end": 230, |
| "text": "Hahn and Nakisa, 2000)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 308, |
| "end": 334, |
| "text": "Plunkett and Nakisa (1997)", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 365, |
| "end": 380, |
| "text": "Hoeffner (1992)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 383, |
| "end": 404, |
| "text": "Hare and Elman (1995)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
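The memorization point above can be illustrated with a toy one-hot coding (our own sketch; the two-word vocabulary is invented). Distinct word types share no input features, so nothing learned about one type can transfer to another, and an unseen type receives an all-zero, uninformative code:

```python
# Unstructured "semantic" inputs: one unit per word type, one active at a time.
def one_hot(word, vocab):
    return [1 if word == v else 0 for v in vocab]

VOCAB = ["walk", "jump"]

# Representations of any two known types are orthogonal (no shared features) ...
overlap = sum(a * b for a, b in zip(one_hot("walk", VOCAB), one_hot("jump", VOCAB)))
# ... and a type unseen in training has no representation at all.
unseen = one_hot("sing", VOCAB)
```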
| { |
| "text": "Of the proposals that map semantics to phonology, the architecture in Hoeffner (1992) is unique in that it uses an attractor network rather than a feed-forward network, with the main difference being training using Hebbian learning rather than the standard backpropagation algorithm. Cottrell and Plunkett (1994) present an early use of a simple recurrent network (Elman, 1990) to decode output strings, making their model capable of variable length output.", |
| "cite_spans": [ |
| { |
| "start": 70, |
| "end": 85, |
| "text": "Hoeffner (1992)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 297, |
| "end": 312, |
| "text": "Plunkett (1994)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 364, |
| "end": 377, |
| "text": "(Elman, 1990)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Bullinaria (1997) includes one of the few models proposed that can deal with variable length inputs. They use a derivative of the NETtalk pronunciation model (Sejnowski and Rosenberg, 1987 ) that would today be considered a convolutional network. Each input phoneme in a stem is read independently along with its left and right context phonemes within a limited context window (i.e., a convolutional kernel). Each kernel is then mapped to zero or more output phonemes within a fixed template. Because each output fragment is independently generated, the architecture is limited to learning only local constraints on output string structure. Similarly, the use of a fixed context window also means that inflectional patterns that depend on long-distance dependencies between input phonemes cannot be captured. Finally, the model of Westermann and Goebel (1995) is arguably the most similar to a modern ED architecture, relying on simple recurrent networks to both encode input strings and decode output strings. However, the model was intended to explicitly instantiate a dual route mechanism and contains an additional explicit memory component to memorize irregulars. Despite the addition of this memory, the model was unable to fully learn the mapping from German verb stems to their participle forms, failing to capture the correct form for strong training verbs, including the copular sein \u2192 gewesen. As the authors note, this may be due to the difficulty of training simple recurrent networks, which tend to converge to poor local minima. Modern RNN varieties, such as long short-term memory (LSTM) networks in the ED model, were specifically designed to overcome these training limitations (Hochreiter and Schmidhuber, 1997) .", |
| "cite_spans": [ |
| { |
| "start": 158, |
| "end": 188, |
| "text": "(Sejnowski and Rosenberg, 1987", |
| "ref_id": "BIBREF52" |
| }, |
| { |
| "start": 831, |
| "end": 859, |
| "text": "Westermann and Goebel (1995)", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 1696, |
| "end": 1730, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
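The fixed-window reading described above can be sketched as follows (our toy re-creation, not Bullinaria's code; the pseudo-phoneme symbols and window radius are arbitrary):

```python
# Each phoneme is read together with its left and right context, like a
# width-(2 * radius + 1) convolutional kernel slid over the input string.
def windows(phonemes, radius=1, pad="#"):
    """One (context..., phoneme, context...) tuple per input position."""
    padded = [pad] * radius + list(phonemes) + [pad] * radius
    return [tuple(padded[i:i + 2 * radius + 1]) for i in range(len(phonemes))]
```

Any dependency between phonemes farther apart than `radius` positions falls outside every window, which is exactly the long-distance limitation noted above.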
| { |
| "text": "P&P describe several basic ideas that underlie a traditional, symbolic rule learner. Such a learner produces SPE-style rewrite rules that may be applied to deterministically transform the input string into the target. Rule candidates are found by comparing the stem and the inflected form, treating the portion that changes as the rule that governs the transformation. This is typically a set of non-copy edit operations. If multiple stem-past pairs share similar changes, these may be collapsed into a single rule by generalizing over the shared phonological features involved in the changes. For example, if multiple stems are converted to the past tense by the addition of the suffix [-d] , the learner may create a collapsed rule that adds the suffix to all stems that end in a [+voice] sound. Different rules may be assigned weights (e.g., probabilities) derived from how many stem-past pairs exemplify the rules. These weights decide which rules to apply to produce the past tense.", |
| "cite_spans": [ |
| { |
| "start": 687, |
| "end": 691, |
| "text": "[-d]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-neural Learners", |
| "sec_num": "4.2" |
| }, |
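The comparison step described above can be sketched in a few lines (our illustration with toy pseudo-IPA pairs; collapsing rules over shared phonological features, the key generalization step, is omitted):

```python
# Compare stem and past form, treat the changed suffix as a candidate rule,
# and weight each rule by how many stem-past pairs exemplify it.
from collections import Counter

def extract_rule(stem, past):
    """Strip the longest shared prefix; return (material removed, material added)."""
    i = 0
    while i < min(len(stem), len(past)) and stem[i] == past[i]:
        i += 1
    return (stem[i:], past[i:])

pairs = [("wQk", "wQkt"), ("dZVmp", "dZVmpt"), ("sIN", "sVN")]
rule_weights = Counter(extract_rule(s, p) for s, p in pairs)
```

Here the suffixing rule ("", "t") is exemplified by two pairs and so outweighs the vowel-change rule, mirroring the weighting scheme in the text.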
| { |
| "text": "Several systems that follow this rule-based template have been developed in NLP. Although the SPE itself does not impose detailed restrictions on rule structure, these systems generate rules that can be compiled into finite-state transducers (Kaplan and Kay, 1994; Ahlberg et al., 2015) . These systems generalize well, but even the most successful variants have been shown to perform significantly worse than state-of-the-art neural networks at morphological inflection, often with a >10 percentage point differential in accuracy on held-out data (Cotterell et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 242, |
| "end": 264, |
| "text": "(Kaplan and Kay, 1994;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 265, |
| "end": 286, |
| "text": "Ahlberg et al., 2015)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 548, |
| "end": 572, |
| "text": "(Cotterell et al., 2016)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-neural Learners", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In the linguistics literature, the most straightforward, direct, machine-implemented instantiation of the P&P proposal is, arguably, the Minimal Generalization Learner (MGL) of Albright and Hayes (2003) (c.f., Allen and Becker, 2015; Taatgen and Anderson, 2002) . This model takes a mapping of phonemes to phonological features and makes feature-level generalizations like the post-voice [-d] rule described earlier. For a detailed technical description, see Albright and Hayes (2002) . We treat the MGL as a baseline in \u00a75.", |
| "cite_spans": [ |
| { |
| "start": 177, |
| "end": 202, |
| "text": "Albright and Hayes (2003)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 210, |
| "end": 233, |
| "text": "Allen and Becker, 2015;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 234, |
| "end": 261, |
| "text": "Taatgen and Anderson, 2002)", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 459, |
| "end": 484, |
| "text": "Albright and Hayes (2002)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-neural Learners", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Unlike Taatgen and Anderson (2002) , who explicitly account for dual route processing by including both memory retrieval and rule application submodules, Albright and Hayes (2003) and Allen and Becker (2015) rely on discovering and correctly weighting (using weights learned by log-linear regression) highly stem-specific rules to account for irregular transformations.", |
| "cite_spans": [ |
| { |
| "start": 7, |
| "end": 34, |
| "text": "Taatgen and Anderson (2002)", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 154, |
| "end": 179, |
| "text": "Albright and Hayes (2003)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-neural Learners", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Within the context of rule-based systems, several proposals focus on the question of rule generalization, rather than rule synthesis. That is, given a set of predefined rules, the systems implement metrics to decide whether rules should generalize to novel forms, depending on the number of exceptions in the data set. Yang (2016) defines the 'tolerance principle,' a threshold for exceptionality beyond which a rule will fail to generalize. O'Donnell (2011) treats the question of whether a rule will generalize as one of optimal Bayesian inference.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-neural Learners", |
| "sec_num": "4.2" |
| }, |
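Yang's tolerance principle has a closed form: a rule over N items generalizes only if its exception count e satisfies e &lt;= N / ln N. A one-line sketch (the counts used in the example are invented):

```python
# Tolerance principle threshold: a rule over n_items tolerates at most
# n_items / ln(n_items) exceptions before it fails to generalize.
import math

def tolerates(n_items, n_exceptions):
    return n_exceptions <= n_items / math.log(n_items)
```

For example, a rule covering 120 verbs tolerates at most 120 / ln 120, roughly 25, exceptions.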
| { |
| "text": "We evaluate the performance of the ED architecture in light of the criticisms P&P levied against the original R&M model. We show that, in most cases, these criticisms no longer apply. 5 The most potent line of attack P&P use against the R&M model is that it simply does not learn the English past tense very well. Although the nondeterministic, manual, and non-precise decoding procedure used by R&M makes it difficult to obtain exact accuracy numbers, P&P estimate that the model only prefers the correct past tense form for about 67% of English verb stems. Furthermore, many of the errors made by the R&M network are unattested in human performance. For example, the model produces blends of regular and irregular past-tense formation (e.g., eat \u2192 ated) that children do not produce unless they mistake ate for a present stem (Pinker, 1999) . Furthermore, the R&M model frequently produces irregular past tense forms when a regular formation is expected (e.g., ping \u2192 pang). Humans are more likely to overregularize. These behaviors suggest that the R&M model learns the wrong kind of generalizations. As shown subsequently, the ED architecture seems to avoid these pitfalls, while outperforming a P&P-style non-neural baseline.", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 185, |
| "text": "5", |
| "ref_id": null |
| }, |
| { |
| "start": 828, |
| "end": 842, |
| "text": "(Pinker, 1999)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of the ED Learner", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In the first experiment, we seek to show: (i) the ED model successfully learns to conjugate both regular and irregular verbs in the training data, and generalizes to held-out data at convergence and (ii) the pattern of errors the model exhibits is compatible with attested speech errors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "CELEX Data Set. Our base data set consists of 4,039 verb types in the CELEX database (Baayen et al., 1993) . Each verb is associated with a present tense form (stem) and past tense form, both in IPA. Each verb is also marked as regular or irregular (Albright and Hayes, 2003) . A total of 168 of the 4,039 verb types were marked as irregular. We assigned verbs to train, development, and test sets according to a random 80-10-10 split. Each verb appears in exactly one of these sets once. This corresponds to a uniform distribution over types because every verb has an effective frequency of 1.", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 106, |
| "text": "(Baayen et al., 1993)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 249, |
| "end": 275, |
| "text": "(Albright and Hayes, 2003)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
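The type-level split can be re-created as follows (our sketch, not the authors' script; the seed and the placeholder verb names are arbitrary):

```python
# Random 80-10-10 split over verb *types*: each type lands in exactly one set,
# exactly once, giving every verb an effective frequency of 1.
import random

def split_types(types, seed=0):
    types = list(types)
    random.Random(seed).shuffle(types)
    n = len(types)
    n_train, n_dev = int(0.8 * n), int(0.1 * n)
    return types[:n_train], types[n_train:n_train + n_dev], types[n_train + n_dev:]

train, dev, test = split_types(f"verb{i}" for i in range(4039))
```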
| { |
| "text": "In contrast, the original R&M model was trained and tested (data was not held out) on a set of 506 stem/past pairs derived from Ku\u010dera and Francis (1967) . A total of 98 of the 506 verb types were marked as irregular.", |
| "cite_spans": [ |
| { |
| "start": 128, |
| "end": 153, |
| "text": "Ku\u010dera and Francis (1967)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Types vs. Tokens. In real human communication, words follow a Zipfian distribution, with many irregular verbs being exponentially more common than regular verbs. Although this condition is more true to the external environment of language learning, it may not accurately represent the psychological reality of how that environment is processed. A body of psycholinguistic evidence (Bybee, 1995 (Bybee, , 2001 Pierrehumbert, 2001) suggests that human learners generalize phonological patterns based on the count of word types they appear in, ignoring the token frequency of those types. Thus, we chose to weigh all verb types equally for training, effecting a uniform distribution over types as described above.", |
| "cite_spans": [ |
| { |
| "start": 381, |
| "end": 393, |
| "text": "(Bybee, 1995", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 394, |
| "end": 408, |
| "text": "(Bybee, , 2001", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 409, |
| "end": 429, |
| "text": "Pierrehumbert, 2001)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Hyperparameters and Other Details. Our architecture is nearly identical to that used in Bahdanau et al. (2014) , with hyperparameters set following Kann and Sch\u00fctze (2016, \u00a74.1.1) . Each input character has an embedding size of 300 units. The encoder consists of a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) with two layers. There is a dropout value of 0.3 between the layers. The decoder is a unidirectional LSTM with two layers. Both the encoder and decoder have 100 hidden units. Training was done using the Adadelta procedure (Zeiler, 2012) with a learning rate of 1.0 and a minibatch size of 20. We train for 100 epochs to ensure that all verb forms in the training data are adequately learned. We decode the model with beam search (k = 12). The code for our experiments is derived from the OpenNMT package (Klein et al., 2017) . We use accuracy as our metric of performance. We train the MGL as a non-neural baseline, using the code distributed with Albright and Hayes (2003) with default settings.", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 110, |
| "text": "Bahdanau et al. (2014)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 148, |
| "end": 179, |
| "text": "Kann and Sch\u00fctze (2016, \u00a74.1.1)", |
| "ref_id": null |
| }, |
| { |
| "start": 284, |
| "end": 318, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 823, |
| "end": 843, |
| "text": "(Klein et al., 2017)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 967, |
| "end": 992, |
| "text": "Albright and Hayes (2003)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
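Beam search with width k keeps only the k highest-scoring partial outputs at each decoding step. A self-contained sketch, with a toy next-symbol distribution standing in for the trained LSTM decoder (the toy model, the `</s>` end marker, and all probabilities are our own assumptions):

```python
import math

def beam_search(step_probs, k, max_len):
    """Expand each live prefix, then keep the k best by total log-probability."""
    beams = [("", 0.0)]  # (prefix, log-probability)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix.endswith("</s>"):  # finished hypotheses carry over
                candidates.append((prefix, score))
                continue
            for sym, p in step_probs(prefix).items():
                candidates.append((prefix + sym, score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams[0][0]

def toy_decoder(prefix):
    """Stand-in for the decoder: emit one 'd' (a regular past suffix), then stop."""
    if prefix.endswith("d"):
        return {"</s>": 0.9, "d": 0.1}
    return {"d": 0.9, "</s>": 0.1}
```

Because every prefix carries an explicit log-probability, the same machinery also yields the exact probability the model assigns to any candidate output string.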
| { |
| "text": "Results and Discussion. The non-neural MGL baseline unsurprisingly learns the regular pasttense pattern nearly perfectly, given that it is imbued with knowledge of phonological features as well as a list of phonologically illegal phoneme sequences to avoid in its output. However, in our testing of the MGL, the preferred past-tense output for all verbs was never an irregular formulation. This was true even for irregular verbs that were observed by the learner in the training set. One might say that the MGL is only intended to account for the regular route of a dual route system. However, the intended scope of the MGL seems to be wider. The model is billed as accurately learning \"islands of subregularity\" within the past tense system, and Albright and Hayes use the model to make predictions about which irregular forms of novel verb stems are preferable to human speakers (see the subsequent discussion of wugs). Table 3 : Results on held-out data in English past tense prediction for single-and multi-task scenarios. The MGL achieves perfect accuracy on regular verbs, and 0 accuracy on irregular verbs. \u2020 indicates that a neural model's performance was found to be significantly different (p < 0.05) from the MGL.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 922, |
| "end": 929, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In contrast, the ED model, despite no builtin knowledge of phonology, successfully learns to conjugate nearly all the verbs in the training data, including irregulars-no reduction in scope is needed. This capacity to account for specific exceptions to the regular rule does not result in overfitting. We note similarly high accuracy on held-out regular data-98.9% to 99.2% at convergence depending on the condition. We report the full accuracy in all conditions in Table 3 . The \u2020 indicates when a neural model's performance was found to be significantly different (p < 0.05) from the MGL according to a \u03c7 2 test. The ED model achieves near-perfect accuracy on regular verbs, and irregular verbs seen during training, as well as substantial accuracy on irregular verbs in the dev and test sets. This behavior jointly results in better overall performance for the ED model when all verbs are considered. Figure 1 shows learning curves for regular and irregular verbs types in different conditions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 465, |
| "end": 472, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 903, |
| "end": 911, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
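The significance comparison above is a chi-squared test over correct/incorrect counts for two models on the same held-out verbs. A from-scratch 2x2 version (the counts in the comment are illustrative, not the paper's actual cells; 3.84 is the p &lt; 0.05 critical value at one degree of freedom):

```python
# Pearson chi-squared statistic for a 2x2 contingency table of
# (model A correct, model A wrong) vs. (model B correct, model B wrong).
def chi2_2x2(a_correct, a_wrong, b_correct, b_wrong):
    n = a_correct + a_wrong + b_correct + b_wrong
    row_a, row_b = a_correct + a_wrong, b_correct + b_wrong
    col_c, col_w = a_correct + b_correct, a_wrong + b_wrong
    stat = 0.0
    for obs, row, col in [(a_correct, row_a, col_c), (a_wrong, row_a, col_w),
                          (b_correct, row_b, col_c), (b_wrong, row_b, col_w)]:
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# e.g., chi2_2x2(30, 10, 20, 20) compares 75% vs. 50% accuracy on 40 items
# each; the statistic exceeds 3.84, so that gap is significant at p < 0.05.
```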
| { |
| "text": "An error analysis of held-out data shows that the errors made by this network do not show any of the problems of the R&M architecture. There are no blend errors of the eat \u2192 ated variety. Indeed, the only error the network makes on irregulars is overregularization (e.g., throw \u2192 throwed). In fact, the overregularization-caused lower accuracy that we observe for irregular verbs in development and test is expected and desirable; it matches the human tendency to treat novel words as regular, lacking knowledge of irregularity (Albright and Hayes, 2003) .", |
| "cite_spans": [ |
| { |
| "start": 528, |
| "end": 554, |
| "text": "(Albright and Hayes, 2003)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Although most held-out irregulars are regularized, as expected, the ED model does, perhaps surprisingly, correctly conjugate a handful of irregular forms it has not seen during training-five in the test set. However, three of these are prefixed versions of irregulars that exist in the training set (retell \u2192 retold, partake \u2192 partook, withdraw Figure 1 : Single-task vs. multi-task. Learning curves for the English past tense. The x-axis is the number of epochs (one complete pass over the training data) and the y-axis is the accuracy on the training data (not the metric of optimization). \u2192 withdrew). One (sling \u2192 slung) is an analogy to similar training words (fling, cling). The final conjugation, forsake \u2192 forsook, is an interesting combination, with the prefix \"for,\" but an unattested base form \"sake\" that is similar to \"take.\" 6 From the training data, the only regular verb with an error is compartmentalized, whose past tense is predicted to be \"compartmentalized,\" with a spurious vowel change that would likely be ironed out with additional training. Among the regular verbs in the development and test sets, the errors also consisted of single vowel changes (the full set of these errors was \"thin\" \u2192 \"thun,\" \"try\" \u2192 \"traud,\" \"institutionalize\" \u2192 \"instititionalized,\" and \"drawl\" \u2192 \"drooled\").", |
| "cite_spans": [ |
| { |
| "start": 839, |
| "end": 840, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 345, |
| "end": 353, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Overall then, the ED model performs extremely well, a far cry from the \u224867% accuracy of the R&M model. It exceeds any reasonable standard of empirical adequacy, and shows human-like error behavior.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Acquisition Patterns. R&M made several claims that their architecture modeled the detailed acquisition of the English past tense by children. The core claim was that their model exhibited a macro U-shaped learning curve as in \u00a72. Irregulars were initially produced correctly, followed by a period of overregularization preceding a final correct stage. However, P&P point out that R&M only achieve this pattern by manipulating the input distribution fed into their network. They trained only on irregulars for a number of epochs, before flooding the network with regular verb forms. R&M justify this by claiming that young children's vocabulary consists disproportionately of irregular verbs early on, but P&P cite contrary evidence. A survey of child-directed speech shows that the ratio of regular to irregular verbs a child hears is constant while they are learning their language (Slobin, 1971) . Furthermore, psycholinguistic results suggest that there is no early skew towards irregular verbs in the vocabulary children understand or use (Brown, 1973) . Although we do not wish to make a strong claim that the ED architecture accurately mirrors children's acquisition, only that it ultimately learns the correct generalizations, we wanted to see if it would display a child-like learning pattern without changing the training inputs fed into the network over time-that is, in all of our experiments, the data sets remained fixed for all epochs, unlike in R&M. We do not clearly see a macro U-shape, but we do observe Plukett and Marchman's predicted oscillations for irregular learning-the so-called micro U-shaped pattern. As shown in Table 4 , individual verbs oscillate between correct production and overregularization before they are fully mastered.", |
| "cite_spans": [ |
| { |
| "start": 883, |
| "end": 897, |
| "text": "(Slobin, 1971)", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 1043, |
| "end": 1056, |
| "text": "(Brown, 1973)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1641, |
| "end": 1648, |
| "text": "Table 4", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Wug Testing. As a further test of the MGL as a cognitive model, Albright and Hayes created a set of 74 nonce English verb stems with varying levels of similarity to both regular and irregular verbs. For each stem (e.g., rife), they picked one regular output form (rifed), and one irregular output form (rofe). They used these stems and potential past-tense variants to perform a wug test with human participants. For each stem, they had 24 participants freely attempt to produce a past", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Learning the Past Tense", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Network MG Regular (rife \u223c rifed, n=58) 0.48 0.35 Irregular (rife \u223c rofe, n=74) 0.45 0.36 Table 5 : Spearman's \u03c1 of human wug production probabilities with MG scores and ED probability estimates.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 90, |
| "end": 97, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "English", |
| "sec_num": null |
| }, |
| { |
| "text": "tense form. They then counted the percentage of participants who produced the pre-chosen regular and irregular forms (production probability). The production probabilities for each pre-chosen regular and irregular form could then be correlated with the predicted scores derived from the MGL. In Table 5 , we compare the correlations based on their model scores, with correlations comparing the human scores to the output probabilities given by an ED model. As the wug data provided with Albright and Hayes (2003) use a different phonetic transcription than the one we used, we trained a separate ED model for this comparison. Model architecture, training verbs, and hyperparameters remained the same. Only the transcription used to represent input and output strings was changed to match Albright and Hayes (2003) . Following the original paper, we correlate the probabilities for regular and irregular transformations separately. We apply Spearman's rank correlation, as we don't necessarily expect a linear relationship. We see that the ED model probabilities are slightly more correlated than the MGL's scores.", |
| "cite_spans": [ |
| { |
| "start": 487, |
| "end": 512, |
| "text": "Albright and Hayes (2003)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 788, |
| "end": 813, |
| "text": "Albright and Hayes (2003)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 295, |
| "end": 302, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "English", |
| "sec_num": null |
| }, |
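Spearman's ρ is Pearson correlation computed on ranks, which is why it rewards monotone rather than strictly linear agreement. A from-scratch sketch (the values in the tests are toy numbers, not the wug data):

```python
# Rank-transform both variables (average ranks for ties), then correlate.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1  # average rank over the tie group
        i = j + 1
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Any monotone relationship between production probabilities and model scores yields ρ = 1, even when the raw values are far from linearly related.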
| { |
| "text": "Another objection levied by P&P is R&M's focus on learning a single morphological transduction: stem to past tense. Many phonological patterns in a language, however, are not restricted to a single transduction-they make up a core part of the phonological system and take part in many different processes. For instance, the voicing assimilation patterns found in the past tense also apply to the third person singular: we see the affix -s rendered as [-s] after voiceless consonants and [-z] after voiced consonants and vowels. P&P argue that the R&M model would not be able to take advantage of these shared generalizations. Assuming a different network would need to be trained for each transduction (e.g., stem to gerund and stem to past participle), it would be impossible to learn that they have any patterns in common. However, as discussed in \u00a73.2, a single ED model can learn multiple types of mapping, simply by tagging each input-output pair in the training set with the transduction it represents. A network trained in such a way shares the same weights and phoneme embeddings across tasks, and thus has the capacity to generalize patterns across all transductions, naturally capturing the overall phonology of the language. Because different transductions mutually constrain each other (e.g., English in general does not allow sequences of identical vowels), we actually expect faster learning of each individual pattern, which we test in the following experiment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 2: Joint Multi-Task Learning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We trained a model with an architecture identical to that used in Experiment 1, but this time to jointly predict four mappings associated with English verbs (past, gerund, past participle, thirdperson singular).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 2: Joint Multi-Task Learning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Data. For each of the verb types in our base training set from Experiment 1, we added the three remaining mappings. The gerund, past-participle, and third-person singular forms were identified in CELEX according to their labels in Wiktionary (Sylak-Glassman et al., 2015) . The network was trained on all individual stem \u2192 inflection pairs in the new training set, with each input string modified with additional characters representing the current transduction (Kann and Sch\u00fctze, 2016) : take <PST> \u2192 took, but take <PTCP> \u2192 taken. 7", |
| "cite_spans": [ |
| { |
| "start": 242, |
| "end": 271, |
| "text": "(Sylak-Glassman et al., 2015)", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 462, |
| "end": 486, |
| "text": "(Kann and Sch\u00fctze, 2016)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 2: Joint Multi-Task Learning", |
| "sec_num": "5.2" |
| }, |
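The tagging scheme above can be sketched as follows (our toy re-creation in orthography rather than the paper's IPA; the `3SG` and `GER` tag names are our own inventions, extending the take &lt;PST&gt; → took example):

```python
# One tagged training pair per transduction: a single network trained on all
# such pairs shares weights and embeddings across the four mappings.
def tag(stem, transduction):
    """Mark the source string with the requested transduction."""
    return f"{stem} <{transduction}>"

PARADIGM = {"PST": "took", "PTCP": "taken", "3SG": "takes", "GER": "taking"}
training_pairs = [(tag("take", t), form) for t, form in PARADIGM.items()]
```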
| { |
| "text": "Results. Table 3 and Figure 1 show the results. Overall, accuracy is >99% after convergence on train. Although the difference in final performance is never statistically significant compared to singletask learning, the learning curves are much steeper, so this level of performance is achieved much more quickly. This provides evidence for our intuition that cross-task generalization facilitates individual task learning due to shared phonological patterning (i.e., jointly generating the gerund hastens pasttense learning).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 16, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 21, |
| "end": 29, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 2: Joint Multi-Task Learning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In this paper, we have argued that the Encoder-Decoder architecture obviates many of the criti- 7 Without input annotation to mark the different mappings the network must learn, it would treat all input/output pairs as belonging to the same mapping, with each inflected form of a single stem as an equally likely output variant associated with that mapping. It is not within the scope of this network architecture to solve problems other than morphological transduction, such as discovering the range of morphological paradigm slots. cisms P&P levied against R&M. Most importantly, the empirical performance of neural models is no longer an issue. The past tense transformation is learned nearly perfectly, compared to an approximate accuracy of 67% for R&M. Furthermore, the ED architecture solves the problem in a fully general setting. A single network can easily be trained on multiple mappings at once (and appears to generalize knowledge across them). No representational cludges such as Wickelphones are required-ED networks can map arbitrary length strings to arbitrary length strings. This permits training and evaluating the ED model on realistic data, including the ability to assign an exact probability to any arbitrary output string, rather than \"representative\" data designed to fit in a fixed-size neural architecture (e.g., fixed input and output templates). Evaluation shows that the ED model does not appear to display any of the degenerate error-types P&P note in the output of R&M (e.g., regular/irregular blends of the ate \u2192 ated variety).", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 97, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary of Resolved and Outstanding Criticisms", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Despite this litany of successes, some outstanding criticisms of R&M still remain to be addressed. On the trivial end, P&P correctly point out that the R&M model does not handle homophones: write \u2192 wrote, but right \u2192 righted. This is because it only takes the phonological make-up of the input string into account, without concern for its lexical identity. This issue affects the ED models we discuss in this paper as well-lexical disambiguation is outside of their intended scope. However, even the rule learner that P&P propose does not have such functionality. Furthermore, if lexical markings were available, we could incorporate them into the model just as with different transductions in the multi-task set-up (i.e., by adding the disambiguating markings to the input).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary of Resolved and Outstanding Criticisms", |
| "sec_num": "6" |
| }, |
| { |
| "text": "More importantly, we need to limit any claims regarding treating ED models as proxies for child language learners. P&P criticized such claims from R&M because they manipulated the input data distribution given to their network over time to effect a U-shaped learning curve, despite no evidence that the manipulation reflected children's perception or production capabilities. We avoid this criticism in our experiments, keeping the input distribution constant. We even show that the ED model captures at least one observed pattern of child language development-Plukett and Marchman's predicted oscillations for irregular learning, the micro U-shaped pattern. However, we did not observe a macro U-shape, nor was the micro effect consistent across all irregular verbs. More study is needed to determine the ways in which ED architectures do or do not reflect children's behavior. Even if nets do not match the development patterns of any individual, they may still be useful if they ultimately achieve a knowledge state that is comparable to that of an adult or, possibly, the aggregate usage statistics of a population of adults.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary of Resolved and Outstanding Criticisms", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Along this vein, P&P note that the R&M model is able to learn highly unnatural patterns that do not exist in any language. For example, it is trivial to map each Wickelphone to its reverse, effectively creating a mirror-image of the input, for example, brag \u2192garb. Although an ED model could likely learn linguistically unattested patterns as well, some patterns may be more difficult to learn than others-for example, they might require increased time-to-convergence. It remains an open question for future research to determine which patterns RNNs prefer, and which changes are needed to account for over-and underfitting. Indeed, any sufficiently complex learning system (including rule-based learners) would have learning biases that require further study.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary of Resolved and Outstanding Criticisms", |
| "sec_num": "6" |
| }, |
| { |
| "text": "There are promising directions from which to approach this study. Networks are in a way analogous to animal models (McCloskey, 1991) , in that they share interesting properties with human learners, as shown empirically, but are much easier and less costly to train and manipulate across multiple experiments. Initial experiments could focus on default architectures, as we do in this paper, effectively treating them as inductive baselines (Gildea and Jurafsky, 1996) and measuring their performance given limited domain knowledge. Our ED networks, for example, have no built-in knowledge of phonology or morphology. Failures of these baselines would then point the way towards the biases required to learn human language, and models modified to incorporate these biases could be tested.", |
| "cite_spans": [ |
| { |
| "start": 115, |
| "end": 132, |
| "text": "(McCloskey, 1991)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 440, |
| "end": 467, |
| "text": "(Gildea and Jurafsky, 1996)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary of Resolved and Outstanding Criticisms", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We have shown that the application of the ED architecture to the problem of learning the English past tense obviates many, though not all, of the objections levied by P&P against the first neural network proposed for the task, suggesting that the criticisms do not extend to all neural models, as P&P imply. Compared with a non-neural baseline, the ED model accounts for both regular and irregular past tense formation in observed training data and generalizes to held-out verbs, all without built-in knowledge of phonology. Although not necessarily intended to act as a proxy for a child learner, the ED model also shows one of the development patterns that has been observed in children, namely, a micro U-shaped (oscillating) learning curve for irregular verbs. The accurate and substantially human-like performance of the ED model warrants consideration of its use as a research tool in theoretical linguistics and cognitive science.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Note that irregular look-up can simply be recast as the application of a context-specific rule.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We have chosen \u22121 instead of the more traditional 0 so that the objective function that Rumelhart and McClelland optimize may be more concisely written.3 Follow-up work, e.g.,Plunkett and Marchman (1991), has speculated that the original experiments in R&M may not have converged. Indeed, convergence may not be guaranteed depending on the fixed learning rate chosen. As Equation(1)is jointly convex in its parameters {W, b}, there exist convex optimization algorithms that will guarantee convergence, albeit often with a decaying learning rate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For the experiments in this paper, we use the variant inBahdanau et al. (2014), which has explicitly been shown to be state of the art in morphological transduction(Cotterell et al., 2016).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Data sets and code for all experiments are available at https://github.com/ckirov/Revisit PinkerAndPrince.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "[s] and [t] are both coronal consonants, a fricative and a stop, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Paradigm classification in supervised learning of morphology", |
| "authors": [ |
| { |
| "first": "Malin", |
| "middle": [], |
| "last": "Ahlberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Forsberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1024--1029", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2015. Paradigm classification in super- vised learning of morphology. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1024-1029, Denver, Colorado. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Modeling English past tense intuitions with minimal generalization", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Albright", |
| "suffix": "" |
| }, |
| { |
| "first": "Bruce", |
| "middle": [], |
| "last": "Hayes", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the Workshop on Morphological and Phonological Learning", |
| "volume": "6", |
| "issue": "", |
| "pages": "58--69", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Albright and Bruce Hayes. 2002. Model- ing English past tense intuitions with minimal generalization. In Proceedings of the Workshop on Morphological and Phonological Learning, volume 6, pages 58-69. Association for Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Rules vs. analogy in English past tenses: A computational/experimental study", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Albright", |
| "suffix": "" |
| }, |
| { |
| "first": "Bruce", |
| "middle": [], |
| "last": "Hayes", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Cognition", |
| "volume": "90", |
| "issue": "", |
| "pages": "119--161", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Albright and Bruce Hayes. 2003. Rules vs. analogy in English past tenses: A computational/experimental study. Cognition, 90:119-161.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Learning alternations from surface forms with sublexical phonology", |
| "authors": [ |
| { |
| "first": "Blake", |
| "middle": [], |
| "last": "Allen", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Becker", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Blake Allen and Michael Becker. 2015. Learn- ing alternations from surface forms with sublexi- cal phonology. Technical report, University of British Columbia and Stony Brook University.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The CELEX lexical data base on CD-ROM", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Harald", |
| "middle": [], |
| "last": "Baayen", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Piepenbrock", |
| "suffix": "" |
| }, |
| { |
| "first": "Rijn", |
| "middle": [], |
| "last": "Van", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Harald Baayen, Richard Piepenbrock, and Rijn van H. 1993. The CELEX lexical data base on CD-ROM.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1409.0473" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The child's learning of English morphology", |
| "authors": [ |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Berko", |
| "suffix": "" |
| } |
| ], |
| "year": 1958, |
| "venue": "Word", |
| "volume": "14", |
| "issue": "2-3", |
| "pages": "150--177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean Berko. 1958. The child's learning of English morphology. Word, 14(2-3):150-177.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Convex Optimization Algorithms", |
| "authors": [ |
| { |
| "first": "Dimitri", |
| "middle": [ |
| "P" |
| ], |
| "last": "Bertsekas", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Athena Scientific", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dimitri P. Bertsekas. 2015. Convex Optimization Algorithms. Athena Scientific.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A First Language: The Early Stages", |
| "authors": [ |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Brown", |
| "suffix": "" |
| } |
| ], |
| "year": 1973, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roger Brown. 1973. A First Language: The Early Stages. Harvard University Press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Modeling reading, spelling, and past tense learning with artificial neural networks", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "John", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bullinaria", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Brain and Language", |
| "volume": "59", |
| "issue": "2", |
| "pages": "236--266", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John A Bullinaria. 1997. Modeling reading, spelling, and past tense learning with arti- ficial neural networks. Brain and Language, 59(2):236-266.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Regular morphology and the lexicon", |
| "authors": [ |
| { |
| "first": "Joan", |
| "middle": [], |
| "last": "Bybee", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Language and Cognitive Processes", |
| "volume": "10", |
| "issue": "", |
| "pages": "425--455", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joan Bybee. 1995. Regular morphology and the lexicon. Language and Cognitive Processes, 10:425-455.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Phonology and Language Use", |
| "authors": [ |
| { |
| "first": "Joan", |
| "middle": [], |
| "last": "Bybee", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joan Bybee. 2001. Phonology and Language Use. Cambridge University Press, Cambridge, UK.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Multitask learning", |
| "authors": [ |
| { |
| "first": "Rich", |
| "middle": [], |
| "last": "Caruana", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Machine Learning", |
| "volume": "28", |
| "issue": "", |
| "pages": "41--75", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rich Caruana. 1997. Multitask learning. Machine Learning, 28:41-75.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Natural language processing (almost) from scratch", |
| "authors": [ |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Karlen", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Kuksa", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2493--2537", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The SIGMORPHON 2016 shared task-morphological reinflection", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Christo", |
| "middle": [], |
| "last": "Kirov", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Sylak-Glassman", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "10--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Cotterell, Christo Kirov, John Sylak- Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task-morphological reinflection. In Proceedings of the 14th SIGMORPHON Work- shop on Computational Research in Phonet- ics, Phonology, and Morphology, pages 10-22, Berlin, Germany. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Modeling word forms using latent underlying morphs and phonology", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Nanyun", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "1", |
| "pages": "433--447", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2015. Modeling word forms using latent under- lying morphs and phonology. Transactions of the Association for Computational Linguistics, 3(1):433-447.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Acquiring the mapping from meaning to sounds", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Garrison", |
| "suffix": "" |
| }, |
| { |
| "first": "Kim", |
| "middle": [], |
| "last": "Cottrell", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Plunkett", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Connection Science", |
| "volume": "6", |
| "issue": "4", |
| "pages": "379--412", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Garrison W Cottrell and Kim Plunkett. 1994. Ac- quiring the mapping from meaning to sounds. Connection Science, 6(4):379-412.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Supervised learning of complete morphological paradigms", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Denero", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1185--1195", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1185-1195, Atlanta, Georgia. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Finding structure in time", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Jeffrey L Elman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Cognitive science", |
| "volume": "14", |
| "issue": "2", |
| "pages": "179--211", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179-211.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Morphological inflection generation using character sequence to sequence learning", |
| "authors": [ |
| { |
| "first": "Manaal", |
| "middle": [], |
| "last": "Faruqui", |
| "suffix": "" |
| }, |
| { |
| "first": "Yulia", |
| "middle": [], |
| "last": "Tsvetkov", |
| "suffix": "" |
| }, |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "634--643", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In Proceedings of the 2016 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Lan- guage Technologies, pages 634-643, San Diego, California. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Learning bias and phonological-rule induction", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computational Linguistics", |
| "volume": "22", |
| "issue": "4", |
| "pages": "497--530", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Gildea and Daniel Jurafsky. 1996. Learning bias and phonological-rule induction. Computa- tional Linguistics, 22(4):497-530.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Parsing inside-out", |
| "authors": [ |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joshua Goodman. 1998. Parsing inside-out. Har- vard Computer Science Group Technical Report TR-07-98.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "German inflection: Single route or dual route?", |
| "authors": [ |
| { |
| "first": "Ulrike", |
| "middle": [], |
| "last": "Hahn", |
| "suffix": "" |
| }, |
| { |
| "first": "Ramin", |
| "middle": [], |
| "last": "Charles Nakisa", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Cognitive Psychology", |
| "volume": "41", |
| "issue": "4", |
| "pages": "313--360", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ulrike Hahn and Ramin Charles Nakisa. 2000. German inflection: Single route or dual route? Cognitive Psychology, 41(4):313-360.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "The Sound Pattern of English", |
| "authors": [ |
| { |
| "first": "Morris", |
| "middle": [], |
| "last": "Halle", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Chomsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1968, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Morris Halle and Noam Chomsky. 1968. The Sound Pattern of English. Harper & Row.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Learning and morphological change", |
| "authors": [ |
| { |
| "first": "Mary", |
| "middle": [], |
| "last": "Hare", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Jeffrey L Elman", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Cognition", |
| "volume": "56", |
| "issue": "1", |
| "pages": "61--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mary Hare and Jeffrey L Elman. 1995. Learning and morphological change. Cognition, 56(1): 61-98.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Are rules a thing of the past? The acquisition of verbal morphology by an attractor network", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Hoeffner", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceddings of the 14th Annual Conference of the Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "861--866", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Hoeffner. 1992. Are rules a thing of the past? The acquisition of verbal morphology by an attractor network. In Proceddings of the 14th Annual Conference of the Cognitive Science Society, pages 861-866.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A Modern English Grammar on Historical Principles", |
| "authors": [ |
| { |
| "first": "Otto", |
| "middle": [], |
| "last": "Jesperson", |
| "suffix": "" |
| } |
| ], |
| "year": 1942, |
| "venue": "George Allen & Unwin Ltd", |
| "volume": "6", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Otto Jesperson. 1942. A Modern English Gram- mar on Historical Principles, volume 6. George Allen & Unwin Ltd., London, UK.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Med: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection", |
| "authors": [ |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "62--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016. Med: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In Proceedings of the 14th SIGMORPHON Work- shop on Computational Research in Phonet- ics, Phonology, and Morphology, pages 62-70, Berlin, Germany. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Regular models of phonological rule systems", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ronald", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Kaplan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kay", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Computational linguistics", |
| "volume": "20", |
| "issue": "3", |
| "pages": "331--378", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronald M. Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computa- tional linguistics, 20(3):331-378.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Opennmt: Open-source toolkit for neural machine translation", |
| "authors": [ |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuntian", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Senellart", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of ACL 2017, System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "67--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Computational Analysis of Present-day American English", |
| "authors": [ |
| { |
| "first": "Henry", |
| "middle": [], |
| "last": "Ku\u010dera", |
| "suffix": "" |
| }, |
| { |
| "first": "Winthrop", |
| "middle": [ |
| "Nelson" |
| ], |
| "last": "Francis", |
| "suffix": "" |
| } |
| ], |
| "year": 1967, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Henry Ku\u010dera and Winthrop Nelson Francis. 1967. Computational Analysis of Present-day American English. Dartmouth Publishing Group.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", |
| "authors": [ |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Linzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Transactions of the Association for Computational Linguistics (TACL)", |
| "volume": "4", |
| "issue": "", |
| "pages": "521--535", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics (TACL), 4:521-535.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Effective approaches to attention-based neural machine translation", |
| "authors": [ |
| { |
| "first": "Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1412--1421", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Implementations are not conceptualizations: Revising the verb learning model", |
| "authors": [ |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "MacWhinney", |
| "suffix": "" |
| }, |
| { |
| "first": "Jared", |
| "middle": [], |
| "last": "Leinbach", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Cognition", |
| "volume": "40", |
| "issue": "1", |
| "pages": "121--157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brian MacWhinney and Jared Leinbach. 1991. Implementations are not conceptualizations: Revising the verb learning model. Cognition, 40(1):121-157.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Rules and regularities in the acquisition of the English past tense", |
| "authors": [ |
| { |
| "first": "Virginia", |
| "middle": [], |
| "last": "Marchman", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Virginia Marchman. 1988. Rules and regularities in the acquisition of the English past tense. Center for Research in Language Newsletter, 2(4):04.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "The Algebraic Mind", |
| "authors": [ |
| { |
| "first": "Gary", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gary Marcus. 2001. The Algebraic Mind. MIT Press.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Dissociating types of mental computation", |
| "authors": [ |
| { |
| "first": "Lorraine", |
| "middle": [ |
| "K" |
| ], |
| "last": "William D Marslen-Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Tyler", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Nature", |
| "volume": "387", |
| "issue": "6633", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William D Marslen-Wilson and Lorraine K Tyler. 1997. Dissociating types of mental computation. Nature, 387(6633):592.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Networks and theories: The place of connectionism in cognitive science", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "McCloskey", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Psychological science", |
| "volume": "2", |
| "issue": "6", |
| "pages": "387--395", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael McCloskey. 1991. Networks and theories: The place of connectionism in cognitive science. Psychological science, 2(6):387-395.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Where defaults don't help: the case of the German plural system", |
| "authors": [ |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Ramin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulrike", |
| "middle": [], |
| "last": "Nakisa", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hahn", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 18th Annual Conference of the Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "177--182", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ramin Charles Nakisa and Ulrike Hahn. 1996. Where defaults don't help: the case of the German plural system. In Proceedings of the 18th Annual Conference of the Cognitive Science Society, pages 177-182. San Diego, CA.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "English: an Essential Grammar", |
| "authors": [ |
| { |
| "first": "Gerald", |
| "middle": [], |
| "last": "Nelson", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gerald Nelson. 2010. English: an Essential Grammar. Routledge.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Inflection generation as discriminative string transduction", |
| "authors": [ |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| }, |
| { |
| "first": "Grzegorz", |
| "middle": [], |
| "last": "Kondrak", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "922--931", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Garrett Nicolai, Colin Cherry, and Grzegorz Kondrak. 2015. Inflection generation as discriminative string transduction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 922-931, Denver, Colorado. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Productivity and Reuse in Language", |
| "authors": [ |
| { |
| "first": "Timothy", |
| "middle": [ |
| "J" |
| ], |
| "last": "O'Donnell", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Timothy J. O'Donnell. 2011. Productivity and Reuse in Language. Ph.D. thesis, Harvard University, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Words and Rules: The Ingredients of Language", |
| "authors": [ |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Pinker", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven Pinker. 1999. Words and Rules: The Ingredients of Language. Basic Books.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "On language and connectionism: Analysis of a parallel distributed processing model of language acquisition", |
| "authors": [ |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Pinker", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Prince", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Cognition", |
| "volume": "28", |
| "issue": "1", |
| "pages": "73--193", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven Pinker and Alan Prince. 1988. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28(1):73-193.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "A connectionist model of English past tense and plural morphology", |
| "authors": [ |
| { |
| "first": "Kim", |
| "middle": [], |
| "last": "Plunkett", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Juola", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Cognitive Science", |
| "volume": "23", |
| "issue": "4", |
| "pages": "463--490", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kim Plunkett and Patrick Juola. 1999. A connectionist model of English past tense and plural morphology. Cognitive Science, 23(4):463-490.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "U-shaped learning and frequency effects in a multi-layered perceptron: Implications for child language acquisition", |
| "authors": [ |
| { |
| "first": "Kim", |
| "middle": [], |
| "last": "Plunkett", |
| "suffix": "" |
| }, |
| { |
| "first": "Virginia", |
| "middle": [], |
| "last": "Marchman", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Cognition", |
| "volume": "38", |
| "issue": "1", |
| "pages": "43--102", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kim Plunkett and Virginia Marchman. 1991. U-shaped learning and frequency effects in a multi-layered perceptron: Implications for child language acquisition. Cognition, 38(1):43-102.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "From rote learning to system building: Acquiring verb morphology in children and connectionist nets", |
| "authors": [ |
| { |
| "first": "Kim", |
| "middle": [], |
| "last": "Plunkett", |
| "suffix": "" |
| }, |
| { |
| "first": "Virginia", |
| "middle": [], |
| "last": "Marchman", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Cognition", |
| "volume": "48", |
| "issue": "1", |
| "pages": "21--69", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kim Plunkett and Virginia Marchman. 1993. From rote learning to system building: Acquiring verb morphology in children and connectionist nets. Cognition, 48(1):21-69.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "A connectionist model of the Arabic plural system", |
| "authors": [ |
| { |
| "first": "Kim", |
| "middle": [], |
| "last": "Plunkett", |
| "suffix": "" |
| }, |
| { |
| "first": "Ramin", |
| "middle": [ |
| "Charles" |
| ], |
| "last": "Nakisa", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Language and Cognitive processes", |
| "volume": "12", |
| "issue": "5-6", |
| "pages": "807--836", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kim Plunkett and Ramin Charles Nakisa. 1997. A connectionist model of the Arabic plural system. Language and Cognitive processes, 12(5-6):807-836.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "The Perceptron: A probabilistic model for information storage and organization in the brain", |
| "authors": [ |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Rosenblatt", |
| "suffix": "" |
| } |
| ], |
| "year": 1958, |
| "venue": "Psychological review", |
| "volume": "65", |
| "issue": "6", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frank Rosenblatt. 1958. The Perceptron: A probabilistic model for information storage and organization in the brain. Psychological review, 65(6):386.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "On learning the past tenses of English verbs", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "E" |
| ], |
| "last": "Rumelhart", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "L" |
| ], |
| "last": "McClelland", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Parallel Distributed Processing: Explorations in the Microstructure of Cognition", |
| "volume": "2", |
| "issue": "", |
| "pages": "216--271", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David E. Rumelhart and James L. McClelland. 1986. On learning the past tenses of English verbs. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 2, pages 216-271. MIT Press.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "Parallel networks that learn to pronounce English text", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Terrence", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles R", |
| "middle": [], |
| "last": "Sejnowski", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Rosenberg", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Complex systems", |
| "volume": "1", |
| "issue": "1", |
| "pages": "145--168", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Terrence J Sejnowski and Charles R Rosenberg. 1987. Parallel networks that learn to pronounce English text. Complex systems, 1(1):145-168.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "On the learning of morphological rules: A reply to Palermo and Eberhart", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "I" |
| ], |
| "last": "Slobin", |
| "suffix": "" |
| } |
| ], |
| "year": 1971, |
| "venue": "The Ontogenesis of Grammar: A Theoretical Symposium", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D.I. Slobin. 1971. On the learning of morphological rules: A reply to Palermo and Eberhart. In The Ontogenesis of Grammar: A Theoretical Symposium. Academic Press, New York.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "A single route, full decomposition model of morphological complexity: MEG evidence", |
| "authors": [ |
| { |
| "first": "Linnaea", |
| "middle": [], |
| "last": "Stockall", |
| "suffix": "" |
| }, |
| { |
| "first": "Alec", |
| "middle": [], |
| "last": "Marantz", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "The mental lexicon", |
| "volume": "1", |
| "issue": "1", |
| "pages": "85--123", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Linnaea Stockall and Alec Marantz. 2006. A single route, full decomposition model of morphological complexity: MEG evidence. The mental lexicon, 1(1):85-123.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Sequence to sequence learning with neural networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Quoc", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3104--3112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "A language-independent feature schema for inflectional morphology", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Sylak-Glassman", |
| "suffix": "" |
| }, |
| { |
| "first": "Christo", |
| "middle": [], |
| "last": "Kirov", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Que", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "674--680", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015. A language-independent feature schema for inflectional morphology. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 674-680, Beijing, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Why do children learn to say broke? A model of learning the past tense without feedback", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Niels", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Taatgen", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "John R Anderson", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Cognition", |
| "volume": "86", |
| "issue": "2", |
| "pages": "123--155", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Niels A Taatgen and John R Anderson. 2002. Why do children learn to say broke? A model of learning the past tense without feedback. Cognition, 86(2):123-155.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Multi-label classification: An overview", |
| "authors": [ |
| { |
| "first": "Grigorios", |
| "middle": [], |
| "last": "Tsoumakas", |
| "suffix": "" |
| }, |
| { |
| "first": "Ioannis", |
| "middle": [], |
| "last": "Katakis", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "International Journal of Data Warehousing and Mining", |
| "volume": "3", |
| "issue": "3", |
| "pages": "1--13", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Grigorios Tsoumakas and Ioannis Katakis. 2006. Multi-label classification: An overview. International Journal of Data Warehousing and Mining, 3(3):1-13.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "A neural dissociation within language: Evidence that the mental dictionary is part of declarative memory, and that grammatical rules are processed by the procedural system", |
| "authors": [ |
| { |
| "first": "Suzanne", |
| "middle": [], |
| "last": "Michael T Ullman", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie", |
| "middle": [], |
| "last": "Corkin", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Coppola", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hickok", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "John", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Growdon", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Walter", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Koroshetz", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pinker", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Journal of cognitive neuroscience", |
| "volume": "9", |
| "issue": "2", |
| "pages": "266--276", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael T Ullman, Suzanne Corkin, Marie Coppola, Gregory Hickok, John H Growdon, Walter J Koroshetz, and Steven Pinker. 1997. A neural dissociation within language: Evidence that the mental dictionary is part of declarative memory, and that grammatical rules are processed by the procedural system. Journal of cognitive neuroscience, 9(2):266-276.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Connectionist rules of language", |
| "authors": [ |
| { |
| "first": "Gert", |
| "middle": [], |
| "last": "Westermann", |
| "suffix": "" |
| }, |
| { |
| "first": "Rainer", |
| "middle": [], |
| "last": "Goebel", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 17th annual conference of the Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "236--241", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gert Westermann and Rainer Goebel. 1995. Connectionist rules of language. In Proceedings of the 17th annual conference of the Cognitive Science Society, pages 236-241.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "Context-sensitive coding, associative memory, and serial order in (speech) behavior", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Wayne A Wickelgren", |
| "suffix": "" |
| } |
| ], |
| "year": 1969, |
| "venue": "Psychological Review", |
| "volume": "76", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wayne A Wickelgren. 1969. Context-sensitive coding, associative memory, and serial order in (speech) behavior. Psychological Review, 76(1):1.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "The price of linguistic productivity: How children learn to break the rules of language", |
| "authors": [ |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Charles Yang. 2016. The price of linguistic productivity: How children learn to break the rules of language. MIT Press.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "ADADELTA: An adaptive learning rate method", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Matthew", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zeiler", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew D Zeiler. 2012. ADADELTA: An adaptive learning rate method. CoRR, abs/1212.5701.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "text": "Examples of inflected English verbs.", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>" |
| }, |
| "TABREF4": { |
| "text": "identical Wickelphone set {[#al], [alg], [lga], [gal], [al#]}. Moreover, P&P point out that phonologically related words such as [slIt] and [sIlt] have disjoint sets of Wickelphones: {[#sl], [slI], [lIt], [It#]} and {[#sI], [sIl], [Ilt], [lt#]}, respectively. These two words differ only by an instance of metathesis, or swapping the order of nearby sounds. The use of Wickelphone representations imposes the strong claim that they have nothing in common phonologically, despite sharing all phonemes. P&P suggest this is unlikely to be the case. As one point of evidence, metathesis of the kind that differentiates [slIt] and [sIlt] is a common diachronic change. In English, for example, [horse] evolved from [hross], and [bird] from [brid].", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>" |
| }, |
| "TABREF5": { |
| "text": "for the encoding details. Thus, one network predicts all forms; for example, p(y | x=walk, t=past) yields a distribution over past tense forms for walk and p(y | x=walk, t=gerund) yields a distribution over gerunds.", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Type of Model</td><td>Reference</td><td>Input</td><td>Output</td></tr><tr><td>Feedforward Network</td><td colspan=\"2\">Rumelhart and McClelland (1986) Wickelphones</td><td>Wickelphones</td></tr><tr><td>Feedforward Network</td><td colspan=\"3\">MacWhinney and Leinbach (1991) Fixed Size Phonological Template Fixed Size Phonological Template</td></tr><tr><td>Feedforward Network</td><td>Plunkett and Marchman (1991)</td><td colspan=\"2\">Fixed Size Phonological Template Fixed Size Phonological Template</td></tr><tr><td>Attractor Network</td><td>Hoeffner (1992)</td><td>Semantics</td><td>Fixed Size Phonological Template</td></tr><tr><td>Feedforward Network</td><td>Plunkett & Marchman (1993)</td><td colspan=\"2\">Fixed Size Phonological Template Fixed Size Phonological Template</td></tr><tr><td>Recurrent Neural Network</td><td>Cottrell & Plunkett (1994)</td><td>Semantics</td><td>Phonological String</td></tr><tr><td>Feedforward Network</td><td colspan=\"3\">Hare, Elman, & Daugherty (1995) Fixed Size Phonological Template Inflection Class</td></tr><tr><td>Feedforward Neural Network</td><td>Hare & Elman (1995)</td><td>Semantics</td><td>Fixed Size Phonological Template</td></tr><tr><td>Recurrent Neural Network</td><td>Westermann & Goebel (1995)</td><td>Phonological String</td><td>Phonological String</td></tr><tr><td>Feedforward Neural Network</td><td>Nakisa & Hahn (1996)</td><td colspan=\"2\">Fixed Size Phonological Template Inflection Class</td></tr><tr><td colspan=\"2\">Convolutional Neural Network Bullinaria (1997)</td><td>Phonological String</td><td>Phonological String</td></tr><tr><td>Feedforward Neural Network</td><td>Plunkett & Nakisa (1997)</td><td colspan=\"2\">Fixed Size Phonological Template Inflection Class</td></tr><tr><td>Feedforward Neural Network</td><td>Plunkett & Juola (1999)</td><td colspan=\"2\">Fixed Size Phonological Template Fixed Size Phonological Template</td></tr><tr><td>Feedforward Neural Network</td><td>Hahn & Nakisa (2000)</td><td colspan=\"2\">Fixed Size Phonological Template Inflection Class</td></tr></table>" |
| }, |
| "TABREF6": { |
| "text": "A curated list of related work, categorized by aspects of the technique. Based on a similar list found inMarcus (2001, page 82).", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>" |
| }, |
| "TABREF9": { |
| "text": "Here we evince the oscillating development of single words in our corpus. For each stem, e.g., CLING, we show the past form produced at each change point, illustrating the diversity of alternation. Beyond the last epoch displayed, each verb was produced correctly.", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |