| { |
| "paper_id": "N01-1021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:48:21.304372Z" |
| }, |
| "title": "A Probabilistic Earley Parser as a Psycholinguistic Model", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Hale", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The Johns Hopkins University", |
| "location": { |
| "addrLine": "3400 North Charles Street", |
| "postCode": "21218-2685", |
| "settlement": "Baltimore", |
| "region": "MD" |
| } |
| }, |
| "email": "hale@cogsci.jhu.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In human sentence processing, cognitive load can be defined many ways. This report considers a definition of cognitive load in terms of the total probability of structural options that have been disconfirmed at some point in a sentence: the surprisal of word w i given its prefix w 0...i\u22121 on a phrase-structural language model. These loads can be efficiently calculated using a probabilistic Earley parser (Stolcke, 1995) which is interpreted as generating predictions about reading time on a word-byword basis. Under grammatical assumptions supported by corpusfrequency data, the operation of Stolcke's probabilistic Earley parser correctly predicts processing phenomena associated with garden path structural ambiguity and with the subject/object relative asymmetry.", |
| "pdf_parse": { |
| "paper_id": "N01-1021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In human sentence processing, cognitive load can be defined many ways. This report considers a definition of cognitive load in terms of the total probability of structural options that have been disconfirmed at some point in a sentence: the surprisal of word w i given its prefix w 0...i\u22121 on a phrase-structural language model. These loads can be efficiently calculated using a probabilistic Earley parser (Stolcke, 1995) which is interpreted as generating predictions about reading time on a word-byword basis. Under grammatical assumptions supported by corpusfrequency data, the operation of Stolcke's probabilistic Earley parser correctly predicts processing phenomena associated with garden path structural ambiguity and with the subject/object relative asymmetry.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "What is the relation between a person's knowledge of grammar and that same person's application of that knowledge in perceiving syntactic structure? The answer to be proposed here observes three principles.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Strong competence holds that the human sentence processing mechanism directly uses rules of grammar in its operation, and that a bare minimum of extragrammatical machinery is necessary. This hypothesis, originally proposed by Chomsky (Chomsky, 1965 , page 9) has been pursued by many researchers (Bresnan, 1982) (Stabler, 1991) (Steedman, 1992) (Shieber and Johnson, 1993) , and stands in contrast with an approach directed towards the discovery of autonomous principles unique to the processing mechanism.", |
| "cite_spans": [ |
| { |
| "start": 226, |
| "end": 248, |
| "text": "Chomsky (Chomsky, 1965", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 296, |
| "end": 311, |
| "text": "(Bresnan, 1982)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 312, |
| "end": 327, |
| "text": "(Stabler, 1991)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 328, |
| "end": 344, |
| "text": "(Steedman, 1992)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 345, |
| "end": 372, |
| "text": "(Shieber and Johnson, 1993)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Principle 1 The relation between the parser and grammar is one of strong competence.", |
| "sec_num": null |
| }, |
| { |
| "text": "The explanatory success of neural network and constraint-based lexicalist theories (McClelland and St. John, 1989) (MacDonald et al., 1994) (Tabor et al., 1997) suggests a statistical theory of language performance. The present work adopts a numerical view of competition in grammar that is grounded in probability.", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 114, |
| "text": "(McClelland and St. John, 1989)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 115, |
| "end": 139, |
| "text": "(MacDonald et al., 1994)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 140, |
| "end": 160, |
| "text": "(Tabor et al., 1997)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Principle 2 Frequency affects performance.", |
| "sec_num": null |
| }, |
| { |
| "text": "Principle 3 Sentence processing is eager.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Principle 2 Frequency affects performance.", |
| "sec_num": null |
| }, |
| { |
| "text": "\"Eager\" in this sense means the experimental situations to be modeled are ones like self-paced reading in which sentence comprehenders are unrushed and no information is ignored at a point at which it could be used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Principle 2 Frequency affects performance.", |
| "sec_num": null |
| }, |
| { |
| "text": "The proposal is that a person's difficulty perceiving syntactic structure be modeled by word-toword surprisal (Attneave, 1959, page 6 ) which can be directly computed from a probabilistic phrasestructure grammar. The approach taken here uses a parsing algorithm developed by Stolcke. In the course of explaining the algorithm at a very high level I will indicate how the algorithm, interpreted as a psycholinguistic model, observes each principle. After that will come some simulation results, and then a conclusion.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 133, |
| "text": "(Attneave, 1959, page 6", |
| "ref_id": null |
| }, |
| { |
| "start": 275, |
| "end": 283, |
| "text": "Stolcke.", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Principle 2 Frequency affects performance.", |
| "sec_num": null |
| }, |
| { |
| "text": "Stolcke's parsing algorithm was initially applied as a component of an automatic speech recognition system. In speech recognition, one is often interested in the probability that some word will follow, given that a sequence of words has been seen. Given some lexicon of all possible words, a language model assigns a probability to every string of words from the lexicon. This defines a probabilistic language (Grenander, 1967) (Booth and Thompson, 1973) (Soule, 1974) (Wetherell, 1980) . A language model helps a speech recognizer focus its attention on words that are likely continuations of what it has recognized so far. This is typically done using conditional probabilities of the form", |
| "cite_spans": [ |
| { |
| "start": 410, |
| "end": 427, |
| "text": "(Grenander, 1967)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 428, |
| "end": 454, |
| "text": "(Booth and Thompson, 1973)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 455, |
| "end": 468, |
| "text": "(Soule, 1974)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 469, |
| "end": 486, |
| "text": "(Wetherell, 1980)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language models", |
| "sec_num": "1" |
| }, |
| { |
| "text": "P (W n = w n |W 1 = w 1 , . . . W n\u22121 = w n\u22121 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language models", |
| "sec_num": "1" |
| }, |
| { |
| "text": "the probability that the nth word will actually be w n given that the words leading up to the nth have been w 1 , w 2 , . . . w n\u22121 . Given some finite lexicon, the probability of each possible outcome for W n can be estimated using that outcome's relative frequency in a sample.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language models", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Traditional language models used for speech are ngram models, in which n \u2212 1 words of history serve as the basis for predicting the nth word. Such models do not have any notion of hierarchical syntactic structure, except as might be visible through an nword window.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language models", |
| "sec_num": "1" |
| }, |
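The relative-frequency estimate of P(W_n = w_n | W_1 = w_1, . . . W_{n-1} = w_{n-1}) can be sketched for the bigram (n = 2) case. This is a toy illustration, not the paper's code; the sample, function name, and the unsmoothed estimate are illustrative assumptions:

```python
from collections import Counter

def bigram_prob(sample, context, word):
    """Estimate P(W_n = word | W_(n-1) = context) by relative frequency:
    how often `word` follows `context`, divided by how often `context`
    occurs with any successor in the sample."""
    pair_counts = Counter()
    context_counts = Counter()
    for sentence in sample:
        for prev, nxt in zip(sentence, sentence[1:]):
            pair_counts[(prev, nxt)] += 1
            context_counts[prev] += 1
    return pair_counts[(context, word)] / context_counts[context]

# Toy tokenized sample standing in for a corpus.
sample = [["the", "horse", "raced"],
          ["the", "horse", "fell"],
          ["the", "barn", "fell"]]
print(bigram_prob(sample, "the", "horse"))  # "horse" follows 2 of 3 "the"s
```

A real language model would add smoothing for unseen pairs; the point here is only the relative-frequency estimate the text describes.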
| { |
| "text": "Aware that the n-gram obscures many linguistically-significant distinctions (Chomsky, 1956, section 2. 3), many speech researchers (Jelinek and Lafferty, 1991) sought to incorporate hierarchical phrase structure into language modeling (see (Stolcke, 1997) ) although it was not until the late 1990s that such models were able to significantly improve on 3-grams (Chelba and Jelinek, 1998 ). Stolcke's probabilistic Earley parser is one way to use hierarchical phrase structure in a language model. The grammar it parses is a probabilistic context-free phrase structure grammar (PCFG), e.g.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 102, |
| "text": "(Chomsky, 1956, section 2.", |
| "ref_id": null |
| }, |
| { |
| "start": 131, |
| "end": 159, |
| "text": "(Jelinek and Lafferty, 1991)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 240, |
| "end": 255, |
| "text": "(Stolcke, 1997)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 362, |
| "end": 387, |
| "text": "(Chelba and Jelinek, 1998", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language models", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1.0 S \u2192 NP VP 0.5 NP \u2192 Det N 0.5 NP \u2192 NP VP . . . . . .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language models", |
| "sec_num": "1" |
| }, |
| { |
| "text": "see (Charniak, 1993, chapter 5) Such a grammar defines a probabilistic language in terms of a stochastic process that rewrites strings of grammar symbols according to the probabilities on the rules. Then each sentence in the language of the grammar has a probability equal to the product of the probabilities of all the rules used to generate it. This multiplication embodies the assumption that rule choices are independent. Sentences with more than one derivation accumulate the probability of all derivations that generate them. Through recursion, infinite languages can be specified; an important mathematical question in this context is whether or not such a grammar is consistent -whether it assigns some probability to infinite derivations, or whether all derivations are guaranteed to terminate. Even if a PCFG is consistent, it would appear to have another drawback: it only assigns probabilities to complete sentences of its language. This is as inconvenient for speech recognition as it is for modeling reading times.", |
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 31, |
| "text": "(Charniak, 1993, chapter 5)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language models", |
| "sec_num": "1" |
| }, |
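The product-of-rule-probabilities definition, and the independence assumption it embodies, can be sketched directly. The toy grammar and derivation below are invented for illustration; a real PCFG would be estimated from data:

```python
from math import prod

# Toy PCFG: (lhs, rhs) -> probability. Probabilities of rules sharing a
# left-hand side sum to 1, as in the grammar fragment above.
pcfg = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("Det", "N")): 0.5,
    ("NP", ("NP", "VP")): 0.5,
    ("VP", ("V",)): 1.0,
    ("Det", ("the",)): 1.0,
    ("N", ("horse",)): 1.0,
    ("V", ("fell",)): 1.0,
}

def derivation_prob(rules):
    """Probability of one derivation: the product of its rule
    probabilities, assuming rule choices are independent."""
    return prod(pcfg[r] for r in rules)

# One derivation of "the horse fell".
derivation = [
    ("S", ("NP", "VP")),
    ("NP", ("Det", "N")), ("Det", ("the",)), ("N", ("horse",)),
    ("VP", ("V",)), ("V", ("fell",)),
]
print(derivation_prob(derivation))  # 0.5
```

A sentence with several derivations would sum `derivation_prob` over all of them, as the text notes.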
| { |
| "text": "Stolcke's algorithm solves this problem by computing, at each word of an input string, the prefix probability. This is the sum of the probabilities of all derivations whose yield is compatible with the string seen so far. If the grammar is consistent (the probabilities of all derivations sum to 1.0) then subtracting the prefix probability from 1.0 gives the total probability of all the analyses the parser has disconfirmed. If the human parser is eager, then the \"work\" done during sentence processing is exactly this disconfirmation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language models", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The computation of prefix probabilities takes advantage of the design of the Earley parser (Earley, 1970) which by itself is not probabilistic. In this section I provide a brief overview of Stolcke's algorithm but the original paper should be consulted for full details (Stolcke, 1995) .", |
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 105, |
| "text": "(Earley, 1970)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 270, |
| "end": 285, |
| "text": "(Stolcke, 1995)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Earley parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Earley parsers work top-down, and propagate predictions confirmed by the input string back up through a set of states representing hypotheses the parser is entertaining about the structure of the sentence. The global state of the parser at any one time is completely defined by this collection of states, a chart, which defines a tree set. A state is a record that specifies", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Earley parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 the current input string position processed so far \u2022 a grammar rule", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Earley parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 a \"dot-position\" in the rule representing how much of the rule has already been recognized \u2022 the leftmost edge of the substring this rule generates An Earley parser has three main functions, predict, scan and complete, each of which can enter new states into the chart. Starting from a dummy start state in which the dot is just to the left of the grammar's start symbol, predict adds new states for rules which could expand the start symbol. In these new predicted states, the dot is at the far left-hand side of each rule. After prediction, scan checks the input string: if the symbol immediately following the dot matches the current word in the input, then the dot is moved rightward, across the symbol. The parser has \"scanned\" this word. Finally, complete propagates this change throughout the chart. If, as a result of scanning, any states are now present in which the dot is at the end of a rule, then the left hand side of that rule has been recognized, and any other states having a dot immediately in front of the newly-recognized left hand side symbol can now have their dots moved as well. This happens over and over until no new states are generated. Parsing finishes when the dot in the dummy start state is moved across the grammar's start symbol.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Earley parsing", |
| "sec_num": "2" |
| }, |
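The predict/scan/complete cycle can be made concrete with a minimal non-probabilistic Earley recognizer. This is a textbook-style sketch, not Stolcke's probabilistic parser; the grammar and names are illustrative, and nullable rules are not handled:

```python
# A state is (lhs, rhs, dot, origin): a rule, how much of it has been
# recognized, and the leftmost edge of the substring it generates.
def earley_recognize(grammar, start, words):
    n = len(words)
    chart = [set() for _ in range(n + 1)]
    # Dummy start state: dot just to the left of the start symbol.
    chart[0].add(("GAMMA", (start,), 0, 0))
    for i in range(n + 1):
        agenda = list(chart[i])
        while agenda:
            lhs, rhs, dot, origin = agenda.pop()
            if dot < len(rhs) and rhs[dot] in grammar:
                # predict: add states for rules expanding the symbol
                for expansion in grammar[rhs[dot]]:
                    state = (rhs[dot], tuple(expansion), 0, i)
                    if state not in chart[i]:
                        chart[i].add(state)
                        agenda.append(state)
            elif dot < len(rhs):
                # scan: move the dot across a matching input word
                if i < n and rhs[dot] == words[i]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
            else:
                # complete: lhs is recognized, so advance every state
                # whose dot sits immediately in front of it
                for state in list(chart[origin]):
                    l2, r2, d2, o2 = state
                    if d2 < len(r2) and r2[d2] == lhs:
                        advanced = (l2, r2, d2 + 1, o2)
                        if advanced not in chart[i]:
                            chart[i].add(advanced)
                            agenda.append(advanced)
    # Success iff the dummy state's dot moved across the start symbol.
    return ("GAMMA", (start,), 1, 0) in chart[n]

grammar = {
    "S": [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V"]],
    "Det": [["the"]],
    "N": [["horse"]],
    "V": [["fell"]],
}
print(earley_recognize(grammar, "S", ["the", "horse", "fell"]))  # True
```

Stolcke's extension decorates each of these states with the \u03b1 and \u03b3 probabilities discussed next.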
| { |
| "text": "Stolcke's innovation, as regards prefix probabilities is to add two additional pieces of information to each state: \u03b1, the forward, or prefix probability, and \u03b3 the \"inside\" probability. He notes that path An (unconstrained) Earley path, or simply path, is a sequence of Earley states linked by prediction, scanning, or completion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Earley parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "constrained A path is said to be constrained by, or generate a string x if the terminals immediately to the left of the dot in all scanned states, in sequence, form the string x. . . . The significance of Earley paths is that they are in a one-to-one correspondence with left-most derivations. This will allow us to talk about probabilities of derivations, strings and prefixes in terms of the actions performed by Earley's parser. (Stolcke, 1995, page 8) This correspondence between paths of parser operations and derivations enables the computation of the prefix probability -the sum of all derivations compatible with the prefix seen so far. By the correspondence between derivations and Earley paths, one would need only to compute the sum of all paths that are constrained by the observed prefix. But this can be done in the course of parsing by storing the current prefix probability in each state. Then, when a new state is added by some parser operation, the contribution from each antecedent stateeach previous state linked by some parser operation -is summed in the new state. Knowing the prefix probability at each state and then summing for all parser operations that result in the same new state efficiently counts all possible derivations.", |
| "cite_spans": [ |
| { |
| "start": 432, |
| "end": 455, |
| "text": "(Stolcke, 1995, page 8)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Earley parsing", |
| "sec_num": "2" |
| }, |
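The quantity being computed can be illustrated by brute force over a finite toy language; Stolcke's parser obtains the same number incrementally in the chart, without enumerating derivations. The sentences and probabilities below are invented for illustration:

```python
# Each complete sentence of a finite toy language with its probability
# (summing to 1.0, i.e. a consistent grammar).
language = {
    ("the", "horse", "fell"): 0.4,
    ("the", "horse", "raced"): 0.4,
    ("the", "barn", "fell"): 0.2,
}

def prefix_prob(prefix):
    """Sum of the probabilities of all analyses compatible with the
    words seen so far."""
    return sum(p for sentence, p in language.items()
               if sentence[:len(prefix)] == tuple(prefix))

print(prefix_prob(["the"]))           # 1.0
print(prefix_prob(["the", "horse"]))  # 0.8
# Subtracting from 1.0 gives the mass of disconfirmed analyses.
print(1.0 - prefix_prob(["the", "horse"]))
```

For an infinite language this enumeration is impossible, which is exactly why the chart-based computation matters.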
| { |
| "text": "Predicting a rule corresponds to multiplying by that rule's probability. Scanning does not alter any probabilities. Completion, though, requires knowing \u03b3, the inside probability, which records how probable was the inner structure of some recognized phrasal node. When a state is completed, a bottom-up confirmation is united with a top-down prediction, so the \u03b1 value of the complete-ee is multiplied by the \u03b3 value of the complete-er.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Earley parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Important technical problems involving leftrecursive and unit productions are examined and overcome in (Stolcke, 1995) . However, these complications do not add any further machinery to the parsing algorithm per se beyond the grammar rules and the dot-moving conventions: in particular, there are no heuristic parsing principles or intermediate structures that are later destroyed. In this respect the algorithm observes strong competence -principle 1. In virtue of being a probabilistic parser it observes principle 2. Finally, in the sense that predict and complete each apply exhaustively at each new input word, the algorithm is eager, satisfying principle 3.", |
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 118, |
| "text": "(Stolcke, 1995)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Earley parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Psycholinguistic theories vary regarding the amount bandwidth they attribute to the human sentence processing mechanism. Theories of initial parsing preferences (Fodor and Ferreira, 1998) suggest that the human parser is fundamentally serial: a function from a tree and new word to a new tree. These theories explain processing difficulty by appealing to \"garden pathing\" in which the current analysis is faced with words that cannot be reconciled with the structures built so far. A middle ground is held by bounded-parallelism theories (Narayanan and Jurafsky, 1998) (Roark and Johnson, 1999) . In these theories the human parser is modeled as a function from some subset of consistent trees and the new word, to a new tree subset. Garden paths arise in these theories when analyses fall out of the set of trees maintained from word to word, and have to be reanalyzed, as on strictly serial theories. Finally, there is the possibility of total parallelism, in which the entire set of trees compatible with the input is maintained somehow from word to word. On such a theory, garden-pathing cannot be explained by reanalysis.", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 187, |
| "text": "(Fodor and Ferreira, 1998)", |
| "ref_id": null |
| }, |
| { |
| "start": 538, |
| "end": 568, |
| "text": "(Narayanan and Jurafsky, 1998)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 569, |
| "end": 594, |
| "text": "(Roark and Johnson, 1999)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parallelism", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The probabilistic Earley parser computes all parses of its input, so as a psycholinguistic theory it is a total parallelism theory. The explanation for garden-pathing will turn on the reduction in the probability of the new tree set compared with the previous tree set -reanalysis plays no role. Before illustrating this kind of explanation with a specific example, it will be important to first clarify the nature of the linking hypothesis between the operation of the probabilistic Earley parser and the measured effects of the human parser.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parallelism", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The measure of cognitive effort mentioned earlier is defined over prefixes: for some observed prefix, the cognitive effort expended to parse that prefix is proportional to the total probability of all the structural analyses which cannot be compatible with the observed prefix. This is consistent with eagerness since, if the parser were to fail to infer the incompatibility of some incompatible analysis, it would be delaying a computation, and hence not be eager. This prefix-based linking hypothesis can be turned into one that generates predictions about word-byword reading times by comparing the total effort expended before some word to the total effort after: in particular, take the comparison to be a ratio. Making the further assumption that the probabilities on PCFG rules are statements about how difficult it is to disconfirm each rule 1 , then the ratio of the \u03b1 value for the previous word to the \u03b1 value for the current word measures the combined difficulty of disconfirming all disconfirmable structures at a given word -the definition of cognitive load. Scaling this number by taking its log gives the surprisal, and defines a word-based measure of cognitive effort in terms of the prefix-based one. Of course, if the language model is sensitive to hierarchical structure, then the measure of cognitive effort so defined will be structure-sensitive as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linking hypothesis", |
| "sec_num": "4" |
| }, |
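This ratio-then-log linking hypothesis reduces to a one-line computation over successive prefix probabilities. The \u03b1 values below are made up purely for illustration:

```python
from math import log2

def surprisals(alphas):
    """Word-by-word surprisal from prefix probabilities
    alpha_0, alpha_1, ...: surprisal_i = log2(alpha_(i-1) / alpha_i).
    A larger drop in prefix probability means more disconfirmed
    structure, hence higher predicted reading time."""
    return [log2(prev / cur) for prev, cur in zip(alphas, alphas[1:])]

# Hypothetical alpha values for a three-word sentence; the sharp drop
# at the last word models a garden-path effect.
print(surprisals([1.0, 0.5, 0.25, 0.015625]))  # [1.0, 1.0, 4.0]
```

Because only the ratio of successive \u03b1 values matters, the measure is well defined even though each individual prefix probability shrinks with sentence length.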
| { |
| "text": "The debate over the form grammar takes in the mind is clearly a fundamental one for cognitive science. Much recent psycholinguistic work has generated a wealth of evidence that frequency of exposure to linguistic elements can affect our processing (Mitchell et al., 1995 ) (MacDonald et al., 1994 . However, there is no clear consensus as to the size of the elements over which exposure has clearest effect. Gibson and Pearlmutter identify it as an \"outstanding question\" whether or not phrase structure statistics are necessary to explain performance effects in sentence comprehension:", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 270, |
| "text": "(Mitchell et al., 1995", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 271, |
| "end": 296, |
| "text": ") (MacDonald et al., 1994", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Plausibility of Probabilistic Context-Free Grammar", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Are phrase-level contingent frequency constraints necessary to explain comprehension performance, or are the remaining types of constraints sufficient. If phraselevel contingent frequency constraints are necessary, can they subsume the effects of other constraints (e.g. locality) ? (Gibson and Pearlmutter, 1998, page 13) Equally, formal work in linguistics has demonstrated the inadequacy of context-free grammars as an appropriate model for natural language in the general case (Shieber, 1985) . To address this criticism, the same prefix probabilities could be computing using tree-adjoining grammars (Nederhof et al., 1998) . With context-free grammars serving as the implicit backdrop for much work in human sentence processing, as well as linguistics 2 simplicity seems as good a guide as any in the selection of a grammar formalism.", |
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 322, |
| "text": "(Gibson and Pearlmutter, 1998, page 13)", |
| "ref_id": null |
| }, |
| { |
| "start": 481, |
| "end": 496, |
| "text": "(Shieber, 1985)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 605, |
| "end": 628, |
| "text": "(Nederhof et al., 1998)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Plausibility of Probabilistic Context-Free Grammar", |
| "sec_num": "5" |
| }, |
| { |
| "text": "6 Garden-pathing", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Plausibility of Probabilistic Context-Free Grammar", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Probabilistic context-free grammar (1) will help illustrate the way a phrase-structured language model 2 Some important work in computational psycholinguistics (Ford, 1989 ) assumes a Lexical-Functional Grammar where the c-structure rules are essentially context-free and have attached to them \"strengths\" which one might interpret as probabilities.", |
| "cite_spans": [ |
| { |
| "start": 160, |
| "end": 171, |
| "text": "(Ford, 1989", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A celebrated example", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "could account for garden path structural ambiguity. Grammar (1) generates the celebrated garden path sentence \"the horse raced past the barn fell\" (Bever, 1970) . English speakers hearing these words one by one are inclined to take \"the horse\" as the subject of \"raced,\" expecting the sentence to end at the word \"barn.\" This is the main verb reading in figure 1. The confusion between the main verb and the reduced relative readings, which is resolved upon hearing \"fell\" is the empirical phenomenon at issue.", |
| "cite_spans": [ |
| { |
| "start": 147, |
| "end": 160, |
| "text": "(Bever, 1970)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A celebrated example", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "As the parse trees indicate, grammar (1) analyzes reduced relative clauses as a VP adjoined to an NP 3 . In one sample of parsed text 4 such adjunctions are about 7 times less likely than simple NPs made up of a determiner followed by a noun. The probabilities of the other crucial rules are likewise estimated by their relative frequencies in the sample.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A celebrated example", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "(1) This simple grammar exhibits the essential character of the explanation: garden paths happen at points where the parser can disconfirm alternatives that together comprise a great amount of probability. Note the category ambiguity present with raced which can show up as both a past-tense verb (VBD) and a past participle (VBN). At \"fell,\" the parser garden-paths: up until that point, both the main-verb and reduced-relative structures are consistent with the input. The prefix probability before \"fell\" is scanned is more than 10 times greater than after, suggesting that the probability mass of the analyses disconfirmed at that point was indeed great. In fact, all of the probability assigned to the main-verb structure is now lost, and only parses that involve the low-probability NP rule survive -a rule introduced 5 words back.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A celebrated example", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "If this garden path effect is truly a result of both the main verb and the reduced relative structures being simultaneously available up until the final verb, then the effect should disappear when words intervene that cancel the reduced relative interpretation early on.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A comparison", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "To examine this possibility, consider now a different example sentence, this time from the language of grammar (2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A comparison", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "(2) The probabilities in grammar (2) are estimated from the same sample as before. It generates a sentence composed of words actually found in the sample, \"the banker told about the buy-back resigned.\" This sentence exhibits the same reduced relative clause structure as does \"the horse raced past the barn fell.\" ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A comparison", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "The words who was cancel the main verb reading, and should make that condition easier to process. This asymmetry is borne out in graphs 4 and 5. At \"resigned\" the probabilistic Earley parser predicts less reading time in the subject relative condition than in the reduced relative condition. This comparison verifies that the same sorts of phenomena treated in reanalysis and bounded parallelism parsing theories fall out as cases of the present, total parallelism theory.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RC only the banker who was told about the buyback resigned", |
| "sec_num": null |
| }, |
| { |
| "text": "Although they used frequency estimates provided by corpus data, the previous two grammars were partially hand-built. They used a subset of the rules found in the sample of parsed text. A grammar including all rules observed in the entire sample supports the same sort of reasoning. In this grammar, instead of just 2 NP rules there are 532, along with 120 S rules. Many of these generate analyses compatible with prefixes of the reduced relative clause at various points during parsing, so the expectation is that the parser will be disconfirming many more hypotheses at each word than in the simpler example. Figure 6 shows the reading time predictions derived from this much richer grammar.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 610, |
| "end": 618, |
| "text": "Figure 6", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "An entirely empirical grammar", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Because the terminal vocabulary of this richer grammar is so much larger, a comparatively large amount of information is conveyed by the nouns \"banker\" and \"buy-back\" leading to high surprisal the banker told about the buy-back resigned . values at those words. However, the garden path effect is still observable at \"resigned\" where the prefix probability ratio is nearly 10 times greater than at either of the nouns. Amid the lexical effects, the probabilistic Earley parser is affected by the same structural ambiguity that affects English speakers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An entirely empirical grammar", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "The same kind of explanation supports an account of the subject-object relative asymmetry (cf. references in (Gibson, 1998) ) in the processing of unreduced relative clauses. Since the Earley parser is designed to work with context-free grammars, the following example grammar adopts a GPSG-style analysis of relative clauses (Gazdar et al., 1985, page 155) . The estimates of the ratios for the two S[+R] rules are obtained by counting the proportion of subject relatives among all relatives in the Treebank's parsed Brown corpus 7 .", |
| "cite_spans": [ |
| { |
| "start": 109, |
| "end": 123, |
| "text": "(Gibson, 1998)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 326, |
| "end": 357, |
| "text": "(Gazdar et al., 1985, page 155)", |
| "ref_id": null |
| }, |
| { |
| "start": 400, |
| "end": 405, |
| "text": "S[+R]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subject/Object asymmetry", |
| "sec_num": "7" |
| }, |
| { |
| "text": "(3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subject/Object asymmetry", |
| "sec_num": "7" |
| }, |
| { |
| "text": "0.33 NP \u2192 SPECNP NBAR 0.33 NP \u2192 you 0.33 NP \u2192 me 1.0 SPECNP \u2192 DT 0.5 NBAR \u2192 NBAR S[+R] 0.5 NBAR \u2192 N 1.0 S \u2192 NP VP 0.86864638 S[+R] \u2192 NP[+R] VP 0.13135362 S[+R] \u2192 NP[+R] S/NP 1.0 S/NP \u2192 NP VP/NP 1.0 VP/NP \u2192 V NP/NP 1.0 VP \u2192 V NP 1.0 V \u2192 saw 1.0 NP[+R] \u2192 who 1.0 DT \u2192 the 1.0 N \u2192 man 1.0 NP/NP \u2192",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subject/Object asymmetry", |
| "sec_num": "7" |
| }, |
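Grammar (3) can be checked mechanically: within each left-hand side, the rule probabilities should sum to one (the three NP rules sum to 0.99 as printed, presumably a rounding artifact). A sketch, assuming a (lhs, rhs, probability) triple encoding of the grammar; the representation is an assumption, not the paper's:

```python
from collections import defaultdict

# Grammar (3), encoded as (lhs, rhs, probability) triples. The empty
# right-hand side for NP/NP is the null production; the two S[+R]
# probabilities are the Treebank-derived relative frequencies.
RULES = [
    ("NP", ("SPECNP", "NBAR"), 0.33), ("NP", ("you",), 0.33),
    ("NP", ("me",), 0.33),
    ("SPECNP", ("DT",), 1.0),
    ("NBAR", ("NBAR", "S[+R]"), 0.5), ("NBAR", ("N",), 0.5),
    ("S", ("NP", "VP"), 1.0),
    ("S[+R]", ("NP[+R]", "VP"), 0.86864638),
    ("S[+R]", ("NP[+R]", "S/NP"), 0.13135362),
    ("S/NP", ("NP", "VP/NP"), 1.0),
    ("VP/NP", ("V", "NP/NP"), 1.0),
    ("VP", ("V", "NP"), 1.0),
    ("V", ("saw",), 1.0),
    ("NP[+R]", ("who",), 1.0),
    ("DT", ("the",), 1.0),
    ("N", ("man",), 1.0),
    ("NP/NP", (), 1.0),  # null production (the trace site)
]

totals = defaultdict(float)
for lhs, _, p in RULES:
    totals[lhs] += p

# Each nonterminal's rules should (approximately) form a distribution:
for lhs, total in totals.items():
    assert abs(total - 1.0) < 0.02, (lhs, total)
```

This kind of per-nonterminal normalization check is what makes the grammar a proper probabilistic CFG, so that prefix probabilities are well-defined.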
| { |
| "text": "In particular, relative clauses in the Treebank are analyzed as NP \u2192 NP SBAR (rule 1) SBAR \u2192 WHNP S (rule 2) where the S contains a trace *T* coindexed with the WHNP. The total number of structures in which both rule 1 and rule 2 apply is 5489. The total number where the first child of S is null is 4768. This estimate puts the total number of object relatives at 721, the frequency of object relatives at 0.13135362, and the frequency of subject relatives at 0.86864638. One might expect there to be a greater processing load for object relatives as soon as enough lexical material is present to determine that the sentence is in fact an object relative. The same probabilistic Earley parser (modified to handle null productions) explains this asymmetry in the same way as it explains the garden path effect. Its predictions, under the same linking hypothesis as in the previous cases, are depicted in graphs 7 and 8. The mean surprisal for the object relative is about 5.0, whereas the mean surprisal for the subject relative is about 2.1.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 100, |
| "end": 108, |
| "text": "(rule 2)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Subject/Object asymmetry", |
| "sec_num": "7" |
| }, |
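The relative-frequency estimates above follow directly from the reported counts: 4768 of the 5489 relatives are subject relatives, leaving 721 object relatives. A quick reproduction (the helper function is illustrative):

```python
# Reproducing the footnote's estimates from the reported Treebank counts.
def relative_freqs(total, subject_count):
    """Relative frequencies of subject vs. object relatives."""
    object_count = total - subject_count
    return subject_count / total, object_count / total

subj, obj = relative_freqs(5489, 4768)
print(round(subj, 8), round(obj, 8))  # 0.86864638 0.13135362
```

These are exactly the probabilities attached to the two S[+R] rules in grammar (3).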
| { |
| "text": "These examples suggest that a \"total-parallelism\" parsing theory based on probabilistic grammar can characterize some important processing phenomena. In the domain of structural ambiguity in particular, the explanation is of a different kind than in traditional reanalysis models: the order of processing is not theoretically significant, but the estimate of its magnitude at each point in a sentence is. Results with empirically-derived grammars suggest an affirmative answer to Gibson and Pearlmutter's question: phrase-level contingent frequencies can do the work formerly done by other mechanisms. Pursuit of methodological principles 1, 2 and 3 has identified a model capable of describing some of the same phenomena that motivate psycholinguistic interest in other theoretical frameworks. Moreover, this recommends probabilistic grammars as an attractive possibility for psycholinguistics by providing clear, testable predictions and the potential for new mathematical insights.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": null |
| }, |
| { |
| "text": "This assumption is inevitable given principles 1 and 2. If there were separate processing costs distinct from the optimization costs postulated in the grammar, then strong competence would be violated. Defining all grammatical structures as equally easy to disconfirm or perceive likewise voids the gradedness of grammaticality of any content.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "See section 1.24 of the Treebank style guide. The sample starts at sentence 93 of section 16 of the Treebank and goes for 500 sentences (12924 words). For information about the Penn Treebank project see http://www.cis.upenn.edu/~treebank/",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Whether the quantitative values of the predicted reading times can be mapped onto a particular experiment involves taking some position on the oft-observed (Gibson and Sch\u00fctze, 1999) imperfect relationship between corpus frequency and psychological norms.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "This grammar also generates active and simple passive sentences, rating passive sentences as more probable than the actives. This is presumably a fact about the writing style favored by the Wall Street Journal.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The difference in probability between subject and object rules could be due to the work necessary to set up storage for the filler, effectively recapitulating the HOLD Hypothesis (Wanner and Maratsos, 1978, page 119).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Applications of Information Theory to Psychology: A summary of basic concepts, methods and results", |
| "authors": [ |
| { |
| "first": "Fred", |
| "middle": [], |
| "last": "Attneave", |
| "suffix": "" |
| } |
| ], |
| "year": 1959, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fred Attneave. 1959. Applications of Information Theory to Psychology: A summary of basic con- cepts, methods and results. Holt, Rinehart and Winston.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The cognitive basis for linguistic structures", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Thomas", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bever", |
| "suffix": "" |
| } |
| ], |
| "year": 1970, |
| "venue": "Cognition and the Development of Language", |
| "volume": "", |
| "issue": "", |
| "pages": "279--362", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas G. Bever. 1970. The cognitive basis for linguistic structures. In J.R. Hayes, editor, Cog- nition and the Development of Language, pages 279-362. Wiley, New York.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Applying probability measures to abstract languages", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Taylor", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [ |
| "A" |
| ], |
| "last": "Booth", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Thompson", |
| "suffix": "" |
| } |
| ], |
| "year": 1973, |
| "venue": "IEEE Transactions on Computers", |
| "volume": "", |
| "issue": "5", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taylor L. Booth and Richard A. Thompson. 1973. Applying probability measures to abstract lan- guages. IEEE Transactions on Computers, C- 22(5).", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The Mental Representation of Grammatical Relations, pages xvii,lii", |
| "authors": [ |
| { |
| "first": "Joan", |
| "middle": [], |
| "last": "Bresnan", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joan Bresnan. 1982. Introduction: Grammars as mental representations of language. In Joan Bres- nan, editor, The Mental Representation of Gram- matical Relations, pages xvii,lii. MIT Press, Cam- bridge, MA.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Statistical Language Learning", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak. 1993. Statistical Language Learn- ing. MIT Press.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Exploiting syntactic structure for language modelling", |
| "authors": [ |
| { |
| "first": "Ciprian", |
| "middle": [], |
| "last": "Chelba", |
| "suffix": "" |
| }, |
| { |
| "first": "Frederick", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of COLING-ACL '98", |
| "volume": "", |
| "issue": "", |
| "pages": "225--231", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ciprian Chelba and Frederick Jelinek. 1998. Ex- ploiting syntactic structure for language mod- elling. In Proceedings of COLING-ACL '98, pages 225-231, Montreal.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Three models for the description of language", |
| "authors": [ |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Chomsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1956, |
| "venue": "IRE Transactions on Information Theory", |
| "volume": "2", |
| "issue": "3", |
| "pages": "113--124", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noam Chomsky. 1956. Three models for the de- scription of language. IRE Transactions on In- formation Theory, 2(3):113-124.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Aspects of the Theory of Syntax", |
| "authors": [ |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Chomsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1965, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noam Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press, Cambridge MA.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "An efficient context-free parsing algorithm. Communications of the Association for Computing Machinery", |
| "authors": [ |
| { |
| "first": "Jay", |
| "middle": [], |
| "last": "Earley", |
| "suffix": "" |
| } |
| ], |
| "year": 1970, |
| "venue": "", |
| "volume": "13", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jay Earley. 1970. An efficient context-free pars- ing algorithm. Communications of the Associa- tion for Computing Machinery, 13(2), February.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Reanalysis in sentence processing", |
| "authors": [], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janet Dean Fodor and Fernanda Ferreira, editors. 1998. Reanalysis in sentence processing, vol- ume 21 of Studies in Theoretical Psycholingustics. Kluwer, Dordrecht.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Parsing complexity and a theory of parsing", |
| "authors": [ |
| { |
| "first": "Marilyn", |
| "middle": [], |
| "last": "Ford", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Linguistic Structure in Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "239--272", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marilyn Ford. 1989. Parsing complexity and a the- ory of parsing. In Greg N. Carlson and Michael K. Tanenhaus, editors, Linguistic Structure in Lan- guage Processing, pages 239-272. Kluwer.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Generalized Phrase Structure Grammar", |
| "authors": [ |
| { |
| "first": "Gerald", |
| "middle": [], |
| "last": "Gazdar", |
| "suffix": "" |
| }, |
| { |
| "first": "Ewan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Pullum", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Sag", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gerald Gazdar, Ewan Klein, Geoffrey Pullum, and Ivan Sag. 1985. Generalized Phrase Structure Grammar. Harvard University Press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Constraints on sentence processing", |
| "authors": [ |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Gibson", |
| "suffix": "" |
| }, |
| { |
| "first": "Neal", |
| "middle": [ |
| "J" |
| ], |
| "last": "Pearlmutter", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Trends in Cognitive Sciences", |
| "volume": "2", |
| "issue": "", |
| "pages": "262--268", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edward Gibson and Neal J. Pearlmutter. 1998. Constraints on sentence processing. Trends in Cognitive Sciences, 2:262-268.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Disambiguation preferences in noun phrase conjunction do not mirror corpus frequency", |
| "authors": [ |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Gibson", |
| "suffix": "" |
| }, |
| { |
| "first": "Carson", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Journal of Memory and Language", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edward Gibson and Carson Sch\u00fctze. 1999. Disam- biguation preferences in noun phrase conjunction do not mirror corpus frequency. Journal of Mem- ory and Language.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Linguistic complexity: locality of syntactic dependencies", |
| "authors": [ |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Gibson", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Cognition", |
| "volume": "68", |
| "issue": "", |
| "pages": "1--76", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edward Gibson. 1998. Linguistic complexity: local- ity of syntactic dependencies. Cognition, 68:1-76.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Syntax-controlled probabilities", |
| "authors": [ |
| { |
| "first": "Ulf", |
| "middle": [], |
| "last": "Grenander", |
| "suffix": "" |
| } |
| ], |
| "year": 1967, |
| "venue": "Brown University Division of Applied Mathematics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ulf Grenander. 1967. Syntax-controlled probabili- ties. Technical report, Brown University Division of Applied Mathematics, Providence, RI.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Computation of the probability of initial substring generation by stochastic context-free grammars", |
| "authors": [ |
| { |
| "first": "Frederick", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lafferty", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frederick Jelinek and John D. Lafferty. 1991. Com- putation of the probability of initial substring generation by stochastic context-free grammars. Computational Linguistics, 17(3).", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Lexical nature of syntactic ambiguity resolution", |
| "authors": [ |
| { |
| "first": "Maryellen", |
| "middle": [ |
| "C" |
| ], |
| "last": "Macdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Neal", |
| "middle": [ |
| "J" |
| ], |
| "last": "Pearlmutter", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [ |
| "S" |
| ], |
| "last": "Seidenberg", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Psychological Review", |
| "volume": "101", |
| "issue": "4", |
| "pages": "676--703", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maryellen C. MacDonald, Neal J. Pearlmutter, and Mark S. Seidenberg. 1994. Lexical nature of syn- tactic ambiguity resolution. Psychological Review, 101(4):676-703.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Sentence comprehension: A PDP approach. Language and Cognitive Processes", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Mcclelland", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "St", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "John", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "", |
| "volume": "4", |
| "issue": "", |
| "pages": "287--336", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James McClelland and Mark St. John. 1989. Sen- tence comprehension: A PDP approach. Lan- guage and Cognitive Processes, 4:287-336.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Exposurebased models of human parsing: Evidence for the use of coarse-grained (nonlexical) statistical records", |
| "authors": [ |
| { |
| "first": "Don", |
| "middle": [ |
| "C" |
| ], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Cuetos", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "B" |
| ], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Corley", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Brysbaert", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Journal of Psycholinguistic Research", |
| "volume": "24", |
| "issue": "6", |
| "pages": "469--488", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Don C. Mitchell, Fernando Cuetos, Martin M.B. Corley, and Marc Brysbaert. 1995. Exposure- based models of human parsing: Evidence for the use of coarse-grained (nonlexical) statisti- cal records. Journal of Psycholinguistic Research, 24(6):469-488.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Bayesian models of human sentence processing", |
| "authors": [ |
| { |
| "first": "Srini", |
| "middle": [], |
| "last": "Narayanan", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 19th Annual Conference of the Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Srini Narayanan and Daniel Jurafsky. 1998. Bayesian models of human sentence processing. In Proceedings of the 19th Annual Conference of the Cognitive Science Society, University of Wisconsin-Madson.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Prefix probabilities from stochastic tree adjoining grammars", |
| "authors": [ |
| { |
| "first": "Mark-Jan", |
| "middle": [], |
| "last": "Nederhof", |
| "suffix": "" |
| }, |
| { |
| "first": "Anoop", |
| "middle": [], |
| "last": "Sarkar", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of COLING-ACL '98", |
| "volume": "", |
| "issue": "", |
| "pages": "953--959", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark-Jan Nederhof, Anoop Sarkar, and Giorgio Satta. 1998. Prefix probabilities from stochas- tic tree adjoining grammars. In Proceedings of COLING-ACL '98, pages 953-959, Montreal.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Broad coverage predictive parsing. Presented at the 12th Annual CUNY Conference on Human Sentence Processing", |
| "authors": [ |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Roark", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brian Roark and Mark Johnson. 1999. Broad cover- age predictive parsing. Presented at the 12th An- nual CUNY Conference on Human Sentence Pro- cessing, March.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Variations on incremental interpretation", |
| "authors": [ |
| { |
| "first": "Stuart", |
| "middle": [], |
| "last": "Shieber", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Journal of Psycholinguistic Research", |
| "volume": "22", |
| "issue": "2", |
| "pages": "287--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stuart Shieber and Mark Johnson. 1993. Variations on incremental interpretation. Journal of Psy- cholinguistic Research, 22(2):287-318.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Evidence against the contextfreeness of natural language", |
| "authors": [ |
| { |
| "first": "Stuart", |
| "middle": [], |
| "last": "Shieber", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "Linguistics and Philosophy", |
| "volume": "8", |
| "issue": "", |
| "pages": "333--343", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stuart Shieber. 1985. Evidence against the context- freeness of natural language. Linguistics and Phi- losophy, 8:333-343.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Entropies of probabilistic grammars", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soule", |
| "suffix": "" |
| } |
| ], |
| "year": 1974, |
| "venue": "Information and Control", |
| "volume": "25", |
| "issue": "", |
| "pages": "57--74", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Soule. 1974. Entropies of probabilistic grammars. Information and Control, 25(57-74).", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Avoid the pedestrian's paradox", |
| "authors": [ |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Stabler", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Principle-Based Parsing: computation and psycholinguistics, Studies in Linguistics and Philosophy", |
| "volume": "", |
| "issue": "", |
| "pages": "199--237", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edward Stabler. 1991. Avoid the pedestrian's para- dox. In Robert C. Berwick, Steven P. Abney, and Carol Tenny, editors, Principle-Based Parsing: computation and psycholinguistics, Studies in Lin- guistics and Philosophy, pages 199-237. Kluwer, Dordrecht.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Grammars and processors", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Steedman. 1992. Grammars and processors. Technical Report TR MS-CIS-92-52, University of Pennsylvania CIS Department.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "An efficient probabilistic context-free parsing algorithm that computes prefix probabilities", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andreas Stolcke. 1995. An efficient probabilis- tic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2).", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Linguistic knowledge and empirical methods in speech recognition", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "AI Magazine", |
| "volume": "18", |
| "issue": "4", |
| "pages": "25--31", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andreas Stolcke. 1997. Linguistic knowledge and empirical methods in speech recognition. AI Mag- azine, 18(4):25-31.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Parsing in a dynamical system: An attractor-based account of the interaction of lexical and structural constraints in sentence processing", |
| "authors": [ |
| { |
| "first": "Whitney", |
| "middle": [], |
| "last": "Tabor", |
| "suffix": "" |
| }, |
| { |
| "first": "Cornell", |
| "middle": [], |
| "last": "Juliano", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Tanenhaus", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Language and Cognitive Processes", |
| "volume": "12", |
| "issue": "2/3", |
| "pages": "211--271", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Whitney Tabor, Cornell Juliano, and Michael Tanenhaus. 1997. Parsing in a dynamical sys- tem: An attractor-based account of the interac- tion of lexical and structural constraints in sen- tence processing. Language and Cognitive Pro- cesses, 12(2/3):211-271.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "An ATN approach to comprehension",
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Wanner", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Maratsos", |
| "suffix": "" |
| } |
| ], |
| "year": 1978, |
| "venue": "Linguistic Theory and Psychological Reality", |
| "volume": "3", |
| "issue": "", |
| "pages": "119--161", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Wanner and Michael Maratsos. 1978. An ATN approach to comprehension. In Morris Halle, Joan Bresnan, and George A. Miller, editors, Linguistic Theory and Psychological Reality, chapter 3, pages 119-161. MIT Press, Cambridge, Massachusetts. C.S. Wetherell. 1980. Probabilistic languages: A re- view and some open questions. Computing Sur- veys, 12(4).", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "Main verb readingThe human sentence processing mechanism is metaphorically led up the garden path by the main verb reading, when, upon hearing \"fell\" it is forced to accept the alternative reduced relative reading shown in figure 2.", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "text": "Figure 3 shows the reading time predictions derived via the linking hypothesis that reading time at word n is proportional to the surprisal log(\u03b1_{n\u22121}/\u03b1_n), where \u03b1_n is the prefix probability after word n. Predictions of probabilistic Earley parser on simple grammar",
| "uris": null, |
| "num": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "text": "also generates 6 the subject relative \"the banker who was told about the buy-back resigned.\" Now a comparison of two conditions is possible.MV and RC the banker told about the buy-back resigned", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "text": "Mean: 16.44", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "text": "Predictions of Earley parser on richer grammar", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF6": { |
| "type_str": "figure", |
| "text": "Object relative clause", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF3": { |
| "num": null, |
| "text": "Grammar (3) generates both subject and object relative clauses. S[+R] \u2192 NP[+R] VP is the rule that generates subject relatives and S[+R] \u2192 NP[+R] S/NP generates object relatives.",
| "html": null, |
| "type_str": "table", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |