| { |
| "paper_id": "C98-1022", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:28:52.723597Z" |
| }, |
| "title": "A Probabilistic Corpus-Driven Model for Lexical-Functional Analysis", |
| "authors": [ |
| { |
| "first": "Rens", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Amsterdam", |
| "location": { |
| "addrLine": "Spuistraat 134", |
| "postCode": "NL-1012 VB", |
| "settlement": "Amsterdam" |
| } |
| }, |
| "email": "rens.bod@let.uva.nl" |
| }, |
| { |
| "first": "Ronald", |
| "middle": [], |
| "last": "Kaplan", |
| "suffix": "", |
| "affiliation": {}, |
"email": "kaplan@parc.xerox.com"
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "We develop a Data-Oriented Parsing (DOP) model based on the syntactic representations of Lexical-Functional Grammar (LFG). We start by summarizing the original DOP model for tree representations and then show how it can be extended with corresponding functional structures. The resulting LFG-DOP model triggers a new, corpus-based notion of grammaticality, and its probability models exhibit interesting behavior with respect to specificity and the interpretation of ill-formed strings.",
| "pdf_parse": { |
| "paper_id": "C98-1022", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "We develop a Data-Oriented Parsing (DOP) model based on the syntactic representations of Lexical-Functional Grammar (LFG). We start by summarizing the original DOP model for tree representations and then show how it can be extended with corresponding functional structures. The resulting LFG-DOP model triggers a new, corpus-based notion of grammaticality, and its probability models exhibit interesting behavior with respect to specificity and the interpretation of ill-formed strings.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Data-Oriented Parsing (DOP) models of natural language embody the assumption that human language perception and production works with representations of past language experiences, rather than with abstract grammar rules (cf. Bod 1992, 95; Scha 1992; Sima'an 1995; Rajman 1995). DOP models therefore maintain large corpora of linguistic representations of previously occurring utterances. New utterances are analyzed by combining (arbitrarily large) fragments from the corpus; the occurrence-frequencies of the fragments are used to determine which analysis is the most probable one. In accordance with the general DOP architecture outlined by Bod (1995), a particular DOP model is described by specifying settings for the following four parameters: \u2022 a formal definition of a well-formed representation for utterance analyses,",
| "cite_spans": [ |
| { |
| "start": 225, |
| "end": 238, |
| "text": "Bod 1992, 95;", |
| "ref_id": null |
| }, |
| { |
| "start": 239, |
| "end": 249, |
| "text": "Scha 1992;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 250, |
| "end": 263, |
| "text": "Sima'an 1995;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 264, |
| "end": 276, |
| "text": "Rajman 1995)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 645, |
| "end": 655, |
| "text": "Bod (1995)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "\u2022 a set of decomposition operations that divide a given utterance analysis into a set of fragments, \u2022 a set of composition operations by which such fragments may be recombined to derive an analysis of a new utterance, and \u2022 a definition of a probability model that indicates how the probability of a new utterance analysis is computed on the basis of the probabilities of the fragments that combine to make it up.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "Previous instantiations of the DOP architecture were based on utterance-analyses represented as surface phrase-structure trees (\"Tree-DOP\", e.g. Bod 1993; Rajman 1995; Sima'an 1995; Goodman 1996; Bonnema et al. 1997). Tree-DOP uses two decomposition operations that produce connected subtrees of utterance representations: (1) the Root operation selects any node of a tree to be the root of the new subtree and erases all nodes except the selected node and the nodes it dominates; (2) the Frontier operation then chooses a set (possibly empty) of nodes in the new subtree different from its root and erases all subtrees dominated by the chosen nodes. The only composition operation used by Tree-DOP is a node-substitution operation that replaces the left-most nonterminal frontier node in a subtree with a fragment whose root category matches the category of the frontier node. Thus Tree-DOP provides tree representations for new utterances by combining fragments from a corpus of phrase structure trees.",
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 155, |
| "text": "Bod 1993;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 156, |
| "end": 170, |
"text": "Rajman 1995;",
| "ref_id": null |
| }, |
| { |
| "start": 171, |
| "end": 185, |
"text": "Sima'an 1995;",
| "ref_id": null |
| }, |
| { |
| "start": 186, |
| "end": 199, |
| "text": "Goodman 1996;", |
| "ref_id": null |
| }, |
| { |
| "start": 200, |
| "end": 221, |
"text": "Bonnema et al. 1997)",
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "A Tree-DOP representation R can typically be derived in many different ways. If each derivation D has a probability P(D), then the probability of deriving R is the sum of the individual derivation probabilities: P(R) = \u03a3D derives R P(D). A Tree-DOP derivation D = <t1, t2 ... tk> is produced by a stochastic branching process. It starts by randomly choosing a fragment t1 labeled with the initial category (e.g. S). At each subsequent step, a next fragment is chosen at random from among the set of competitors for composition into the current subtree. The process stops when a tree results with no nonterminal leaves. Let CP(t | CS) denote the probability of choosing a tree t from a competition set CS containing t. Then the probability of a derivation is",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "P(<t1, t2 ... tk>) = \u03a0i CP(ti | CSi)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "where the competition probability CP(t | CS) is given by CP(t | CS) = P(t) / \u03a3t' \u2208 CS P(t'). Here, P(t) is the fragment probability for t in a given corpus. Let Ti-1 = t1 \u2218 t2 \u2218 ... \u2218 ti-1 be the subanalysis just before the ith step of the process, let LNC(Ti-1) denote the category of the leftmost nonterminal of Ti-1, and let r(t) denote the root category of a fragment t. Then the competition set at the ith step is CSi = { t :",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "r(t) = LNC(Ti-1) }",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "That is, the competition sets for Tree-DOP are determined by the category of the leftmost nonterminal of the current subanalysis. This is not the only possible definition of competition set. As Manning and Carpenter (1997) have shown, the competition sets can be made dependent on the composition operation. Their left-corner language model would also apply to Tree-DOP, yielding a different definition for the competition sets. But the properties of such Tree-DOP models have not been investigated.",
| "cite_spans": [ |
| { |
| "start": 194, |
| "end": 222, |
| "text": "Manning and Carpenter (1997)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "Experiments with Tree-DOP on the Penn Treebank and the OVIS corpus show a consistent increase in parse accuracy when larger and more complex subtrees are taken into account (cf. Bod 1993, 95, 98; Bonnema et al. 1997; Sekine & Grishman 1995; Sima'an 1995). However, Tree-DOP is limited in that it cannot account for underlying syntactic (and semantic) dependencies that are not reflected directly in a surface tree. All modern linguistic theories propose more articulated representations and mechanisms in order to characterize such linguistic phenomena. DOP models for a number of richer representations have been explored (van den Berg et al. 1994; Tugwell 1995), but these approaches have remained context-free in their generative power. In contrast, Lexical-Functional Grammar (Kaplan & Bresnan 1982; Kaplan 1989), which assigns representations consisting of a surface constituent tree enriched with a corresponding functional structure, is known to be beyond context-free. In the current work, we develop a DOP model based on representations defined by LFG theory (\"LFG-DOP\"). That is, we provide a new instantiation for the four parameters of the DOP architecture. We will see that this basic LFG-DOP model triggers a new, corpus-based notion of grammaticality, and that it leads to a different class of probability models which exhibit interesting properties with respect to specificity and the interpretation of ill-formed strings.",
| "cite_spans": [ |
| { |
| "start": 178, |
| "end": 195, |
| "text": "Bod 1993, 95, 98;", |
| "ref_id": null |
| }, |
| { |
| "start": 196, |
| "end": 216, |
| "text": "Bonnema et al. 1997;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 217, |
| "end": 240, |
| "text": "Sekine & Grishman 1995;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 241, |
| "end": 254, |
| "text": "Sima'an 1995)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 634, |
| "end": 651, |
| "text": "Berg et al. 1994;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 652, |
| "end": 665, |
| "text": "Tugwell 1995)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 783, |
| "end": 806, |
| "text": "(Kaplan & Bresnan 1982;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 807, |
| "end": 819, |
| "text": "Kaplan 1989)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "The definition of a well-formed representation for utterance-analyses follows from LFG theory, that is, every utterance is annotated with a c-structure, an f-structure and a mapping \u03c6 between them. The c-structure is a tree that describes the surface constituent structure of an utterance; the f-structure is an attribute-value matrix marking the grammatical relations of subject, predicate and object, as well as providing agreement features and semantic forms; and \u03c6 is a correspondence function that maps nodes of the c-structure into units of the f-structure (Kaplan & Bresnan 1982; Kaplan 1989). The following figure shows a representation for the utterance Kim eats. (We leave out some features to keep the example simple.)",
| "cite_spans": [ |
| { |
| "start": 561, |
| "end": 584, |
| "text": "(Kaplan & Bresnan 1982;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 585, |
| "end": 596, |
| "text": "Kaplan 1989", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A DOP model based on Lexieal-Funetional representations Representations", |
| "sec_num": "2." |
| }, |
| { |
"text": "(1) [figure: c-structure for Kim eats with \u03c6 links into an f-structure containing SUBJ [ PRED 'Kim', NUM SG ], TENSE PRES, PRED 'eat(SUBJ)']",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A DOP model based on Lexieal-Funetional representations Representations", |
| "sec_num": "2." |
| }, |
| { |
"text": "Note that the \u03c6 correspondence function gives an explicit characterization of the relation between the superficial and underlying syntactic properties of an utterance, indicating how certain parts of the string carry information about particular units of underlying structure. As such, it will play a crucial role in our definition for the decomposition and composition operations of LFG-DOP. In (1) we see for instance that the NP node maps to the subject f-structure, and the S and VP nodes map to the outermost f-structure.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A DOP model based on Lexieal-Funetional representations Representations", |
| "sec_num": "2." |
| }, |
| { |
"text": "It is generally the case that the nodes in a subtree carry information only about the f-structure units that the subtree's root gives access to. The notion of accessibility is made precise in the following definition: An f-structure unit f is \u03c6-accessible from a node n iff either n is \u03c6-linked to f (that is, f = \u03c6(n)) or f is contained within \u03c6(n) (that is, there is a chain of attributes that leads from \u03c6(n) to f).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A DOP model based on Lexieal-Funetional representations Representations", |
| "sec_num": "2." |
| }, |
| { |
"text": "All the f-structure units in (1) are \u03c6-accessible from for instance the S node and the VP node, but the TENSE and top-level PRED are not \u03c6-accessible from the NP node.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A DOP model based on Lexieal-Funetional representations Representations", |
| "sec_num": "2." |
| }, |
| { |
"text": "According to LFG theory, c-structures and f-structures must satisfy certain formal well-formedness conditions. A c-structure/f-structure pair is a valid LFG representation only if it satisfies the Nonbranching Dominance, Uniqueness, Coherence and Completeness conditions (Kaplan & Bresnan 1982). Nonbranching Dominance demands that no c-structure category appears twice in a nonbranching dominance chain; Uniqueness asserts that there can be at most one value for any attribute in the f-structure; Coherence prohibits the appearance of grammatical functions that are not governed by the lexical predicate; and Completeness requires that all the functions that a predicate governs appear as attributes in the local f-structure.",
| "cite_spans": [ |
| { |
| "start": 270, |
| "end": 293, |
| "text": "(Kaplan & Bresnan 1982)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A DOP model based on Lexieal-Funetional representations Representations", |
| "sec_num": "2." |
| }, |
| { |
"text": "Many different DOP models are compatible with the system of LFG representations. In this paper we outline a basic LFG-DOP model which extends the operations of Tree-DOP to take correspondences and f-structure features into account. The decomposition operations for this model will produce fragments of the composite LFG representations. These will consist of connected subtrees whose nodes are in \u03c6-correspondence with sub-units of f-structures. We extend the Root and Frontier decomposition operations of Tree-DOP so that they also apply to the nodes of the c-structure while respecting the fundamental principles of c-structure/f-structure correspondence.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
"text": "When a node is selected by the Root operation, all nodes outside of that node's subtree are erased, just as in Tree-DOP. Further, for LFG-DOP, all \u03c6 links leaving the erased nodes are removed and all f-structure units that are not \u03c6-accessible from the remaining nodes are erased. Root thus maintains the intuitive correlation between nodes and the information in their corresponding f-structures. For example, if Root selects the NP in (1), then the f-structure corresponding to the S node is erased, giving (2) as a possible fragment:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
"text": "(2) [figure: NP-rooted fragment for Kim; f-structure [ PRED 'Kim', NUM SG ]]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
| "text": "In addition the Root operation deletes from the remaining f-structure all semantic forms that are local to f-structures that correspond to erased c-structure nodes, and it thereby also maintains the fundamental two-way connection between words and meanings. Thus, if Root selects the VP node so that the NP is erased, the subject semantic form \"Kim\" is also deleted:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
| "text": "(3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
"text": "[figure: VP-rooted fragment for eats; f-structure [ SUBJ [ NUM SG ], TENSE PRES, PRED 'eat(SUBJ)' ]]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
"text": "As with Tree-DOP, the Frontier operation then selects a set of frontier nodes and deletes all subtrees they dominate. Like Root, it also removes the \u03c6 links of the deleted nodes and erases any semantic form that corresponds to any of those nodes. Frontier does not delete any other f-structure features. This reflects the fact that all features are \u03c6-accessible from the fragment's root even when nodes below the frontier are erased. For instance, if the VP in (1) is selected as a frontier node, Frontier erases the predicate \"eat(SUBJ)\" from the fragment:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
"text": "(4) [figure: S-rooted fragment Kim with a VP frontier node; f-structure [ SUBJ [ PRED 'Kim', NUM SG ], TENSE PRES ]]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
"text": "Note that the Root and Frontier operations retain the subject's NUM feature in the VP-rooted fragment (3), even though the subject NP is not present. This reflects the fact, usually encoded in particular grammar rules or lexical entries, that verbs of English carry agreement features for their subjects. On the other hand, fragment (4) retains the predicate's TENSE feature, reflecting the possibility that English subjects might also carry information about their predicate's tense. Subject-tense agreement as encoded in (4) is a pattern seen in some languages (e.g. the split-ergativity pattern of languages like Hindi, Urdu and Georgian) and thus there is no universal principle by which fragments such as (4) can be ruled out. But in order to represent directly the possibility that subject-tense agreement is not a dependency of English, we also allow an S fragment in which the TENSE feature is deleted, as in (5).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
"text": "(5) [figure: fragment (4) with the TENSE feature deleted] Fragment (5) is produced by a third decomposition operation, Discard, defined to construct generalizations of the fragments supplied by Root and Frontier. Discard acts to delete combinations of attribute-value pairs subject to the following restriction: Discard does not delete pairs whose values \u03c6-correspond to remaining c-structure nodes. This condition maintains the essential correspondences of LFG representations: if a c-structure and an f-structure are paired in one fragment provided by Root and Frontier, then Discard also pairs that c-structure with all generalizations of that fragment's f-structure. Fragment (5) results from applying Discard to the TENSE feature in (4).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
| "text": "Discard also produces fragments such as (6), where the subject's number in (3) has been deleted:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
"text": "(6) [figure: VP-rooted fragment V eats; f-structure [ SUBJ [ ], TENSE PRES, PRED 'eat(SUBJ)' ]]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
"text": "Again, since we have no language-specific knowledge apart from the corpus, we have no basis for ruling out fragments like (6). Indeed, it is quite intuitive to omit the subject's number in fragments derived from sentences with past-tense verbs or modals. Thus the specification of Discard reflects the fact that LFG representations, unlike LFG grammars, do not indicate unambiguously the c-structure source (or sources) of their f-structure feature values.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposition operations", |
| "sec_num": null |
| }, |
| { |
"text": "In LFG-DOP the operation for combining fragments, again indicated by \u2218, is carried out in two steps. First the c-structures are combined by left-most substitution subject to the category-matching condition, just as in Tree-DOP. This is followed by the recursive unification of the f-structures corresponding to the matching nodes. The result retains the correspondences of the fragments being combined. A derivation for an LFG-DOP representation R is a sequence of fragments the first of which is labeled with S and for which the iterative application of the composition operation produces R.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The composition operation", |
| "sec_num": null |
| }, |
| { |
"text": "We show in (7) the effect of the LFG composition operation using two fragments from representations of an imaginary corpus containing the sentences Kim eats and People ate. The VP-rooted fragment is substituted for the VP in the first fragment, and the second f-structure unifies with the first f-structure, resulting in a representation for the new sentence Kim ate. This representation satisfies the well-formedness conditions and is therefore valid. Note that in LFG-DOP, as in Tree-DOP, the same representation may be produced by several derivations involving different fragments.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The composition operation", |
| "sec_num": null |
| }, |
| { |
| "text": "Another valid representation for the sentence Kim ate could be composed from a fragment for Kim that does not preserve the number feature, leading to a representation which is unmarked for number. The probability models we discuss below have the desirable property that they tend to assign higher probabilities to more specific representations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The composition operation", |
| "sec_num": null |
| }, |
| { |
"text": "The following derivation produces a valid representation for the intuitively ungrammatical sentence People eats: [figure omitted]. This system of fragments and composition thus provides a representational basis for a robust model of language comprehension in that it assigns at least some representations to many strings that would generally be regarded as ill-formed. A correlate of this advantage, however, is the fact that it does not offer a direct formal account of metalinguistic judgments of grammaticality. Nevertheless, we can reconstruct the notion of grammaticality by means of the following definition:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The composition operation", |
| "sec_num": null |
| }, |
| { |
| "text": "A sentence is grammatical with respect to a corpus if and only if it has at least one valid representation with at least one derivation whose fragments are produced only by Root and Frontier and not by Discard.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The composition operation", |
| "sec_num": null |
| }, |
| { |
"text": "Thus the system is robust in that it assigns three representations (singular, plural, and unmarked for the subject's number) to the string People eats, based on fragments for which the number feature of people, eats, or both has been discarded. But unless the corpus contains non-plural instances of people or non-singular instances of eats, there will be no Discard-free derivation and the string will be classified as ungrammatical (with respect to the corpus).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The composition operation", |
| "sec_num": null |
| }, |
| { |
"text": "As in Tree-DOP, an LFG-DOP representation R can typically be derived in many different ways. If each derivation D has a probability P(D), then the probability of deriving R is again the probability of producing it by any of its derivations. This is the sum of the individual derivation probabilities: (9) P(R) = \u03a3D derives R P(D). An LFG-DOP derivation is also produced by a stochastic branching process which at each step makes a random selection from a competition set of competing fragments. Let CP(f | CS) denote the probability of choosing a fragment f from a competition set CS containing f. Then the probability of a derivation D = <f1, f2 ... fk> is (10) P(<f1, f2 ... fk>) = \u03a0i CP(fi | CSi), where as in Tree-DOP, CP(f | CS) is expressed in terms of fragment probabilities P(f) by the formula (11) CP(f | CS) = P(f) / \u03a3f' \u2208 CS P(f')",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability models", |
| "sec_num": null |
| }, |
| { |
"text": "Tree-DOP is the special case where there are no conditions of validity other than the ones that are enforced at each step of the stochastic process by the composition operation. This is not generally the case and is certainly not the case for the Completeness Condition of LFG representations: Completeness is a property of a final representation that cannot be evaluated at any intermediate steps of the process. However, we can define probabilities for the valid representations by sampling only from such representations in the output of the stochastic process. The probability of sampling a particular valid representation R is given by (12) P(R | R is valid) = P(R) / \u03a3R' is valid P(R')",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability models", |
| "sec_num": null |
| }, |
| { |
"text": "This formula assigns probabilities to valid representations whether or not the stochastic process guarantees validity. The valid representations for a particular utterance u are obtained by a further sampling step and their probabilities are given by:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability models", |
| "sec_num": null |
| }, |
| { |
"text": "(13) P(R | R is valid and yields u) = P(R) / \u03a3R' is valid and yields u P(R'). The formulas (9) through (13) will be part of any LFG-DOP probability model. The models will differ only in how the competition sets are defined, and this in turn depends on which well-formedness conditions are enforced on-line during the stochastic branching process and which are evaluated by the off-line validity sampling process.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability models", |
| "sec_num": null |
| }, |
| { |
"text": "One model, which we call M1, is a straightforward extension of Tree-DOP's probability model. This computes the competition sets only on the basis of the category-matching condition, leaving all other well-formedness conditions for off-line sampling. Thus for M1 the competition sets are defined simply in terms of the categories of a fragment's c-structure root node. Suppose that Fi-1 = f1 \u2218 f2 \u2218 ... \u2218 fi-1 is the current subanalysis at the beginning of step i in the process, that LNC(Fi-1) denotes the category of the leftmost nonterminal node of the c-structure of Fi-1, and that r(f) is now interpreted as the root-node category of f's c-structure component. Then the competition set for the ith step is (14) CSi = { f : r(f) = LNC(Fi-1) }. Since these competition sets depend only on the category of the leftmost nonterminal of the current c-structure, the competition sets group together all fragments with the same root category, independent of any other properties they may have or that a particular derivation may have. The competition probability for a fragment can be expressed by the formula (15) CP(f) = P(f) / \u03a3f' : r(f')=r(f) P(f'). We see that the choice of a fragment at a particular step in the stochastic process depends only on the category of its root node; other well-formedness properties of the representation are not used in making fragment selections. Thus, with this model the stochastic process may produce many invalid representations; we rely on sampling of valid representations and the conditional probabilities given by (12) and (13) to take the Uniqueness, Coherence, and Completeness Conditions into account.",
| "cite_spans": [ |
| { |
| "start": 710, |
| "end": 714, |
| "text": "(14)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability models", |
| "sec_num": null |
| }, |
| { |
"text": "Another possible model (M2) defines the competition sets so that they take a second condition, Uniqueness, into account in addition to the root node category. For M2 the competing fragments at a particular step in the stochastic derivation process are those whose c-structures have the same root node category as LNC(Fi-1) and also whose f-structures are consistently unifiable with the f-structure of Fi-1. Thus the competition set for the ith step is (16) CSi = { f : r(f) = LNC(Fi-1) and f is unifiable with the f-structure of Fi-1 }",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability models", |
| "sec_num": null |
| }, |
| { |
| "text": "Although it is still the case that the categorymatching condition is independent of the derivation, the unifiability requirement means that the competition sets vary according to the representation produced by the sequence of previous steps in the stochastic process. Unifiability must be determined at each step in the process to produce a new competition set, and the competition probability reinains dependent on the particular step:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability models", |
| "sec_num": null |
| }, |
| { |
| "text": "(17) CP(fi [ CSi) = P(fi) / Yf: r0\u00b0)=r(fi)and flis unifiable with f{-1P0\")", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability models", |
| "sec_num": null |
| }, |
| { |
| "text": "On this model we again rely on sampling and the conditional probabilities (12) and (13) to take just the Coherence and Completeness Conditions into account.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability models", |
| "sec_num": null |
| }, |
| { |
"text": "In model M3 we define the stochastic process to enforce three conditions, Coherence, Uniqueness and category-matching, so that it only produces representations with well-formed c-structures that correspond to coherent and consistent f-structures. The competition probabilities for this model are given by the obvious extension of (17). It is not possible, however, to construct a model in which the Completeness Condition is enforced during the derivation process. This is because the satisfiability of the Completeness Condition depends not only on the results of previous steps of a derivation but also on the following steps (see Kaplan & Bresnan 1982). This nonmonotonic property means that the appropriate step-wise competition sets cannot be defined and that this condition can only be enforced at the final stage of validity sampling.",
| "cite_spans": [ |
| { |
| "start": 633, |
| "end": 655, |
| "text": "Kaplan & Bresnan 1982)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability models", |
| "sec_num": null |
| }, |
| { |
| "text": "In each of these three models the categorymatching condition is evaluated on-line during the derivation process while other conditions are either evaluated on-line or off-line by the after-the-fact sampling process. LFG-DOP is crucially different from Tree-DOP in that at least one validity requirement, the Completeness Condition, must always be left to the post-derivation process. Note that a number of other models are possible which enforce other combinations of these three conditions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability models", |
| "sec_num": null |
| }, |
| { |
| "text": "We illustrate LFG-DOP using a very small corpus consisting of the two simplified LFG representations shown in (18):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Illustration and properties of LFG-DOP", |
| "sec_num": "3." |
| }, |
| { |
| "text": "(18) ~UM s~ 1/ PRED 'people']] ..... J,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Illustration and properties of LFG-DOP", |
| "sec_num": "3." |
| }, |
| { |
| "text": "The fragments from this corpus can be composed to provide representations for the two observed sentences plus two new utterances, John walked and People fell. This is sufficient to demonstrate that the probability models M1 and M2 assign different probabilities to particular representations. We have omitted the TENSE feature and the lexical categories N and V to reduce the number of the fragments we have to deal with. Applying the Root and Frontier operators systematically to the first corpus representation produces the fragments in the first column of (19), while the second column shows the additional f-structure that is associated with each cstructure by the Discard operation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lle w J,&", |
| "sec_num": null |
| }, |
| { |
| "text": "A total of 12 fragments are produced froin this representation, and by analogy 12 fragments with either PL or unmarked NUM values will also result from People walked. Note that the [S NP VP] fragment with the unspecified NUM value is produced for both sentences and thus its corpus frequency is 2. There arc 14 other S-rooted fragments, 4 NP-rooted fragments, and 4 VP-rooted fragments; each of these occurs only once.", |
| "cite_spans": [ |
| { |
| "start": 181, |
| "end": 190, |
| "text": "[S NP VP]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lle w J,&", |
| "sec_num": null |
| }, |
| { |
| "text": "These fragments can be used to derive three different representations for John walked (singular, plural, and unmarked as the subject's number). To facilitate the presentation of our derivations and probability calculations, we denote each fragment by an abbreviated name that indicates its c-structure root-node category, the sequence of its frontier-node labels, and whether its subject's number is SG, PL, or unmarked (indicated by U). Thus the first fragment in (19) is referred to as S/John-fell/SG and the unmarked fragment that Discard produces from it is referred to as S/John-fell/U. Given this naming convention, we can specify one of the derivations for John walked by the expression S/NP-VP/U o NP/John/SG o VP/walked/U, corresponding to an analysis in which the subject's number is marked as SG. The fragrnent VP/walked/U of course comes from People walked, the second corpus sentence, and does not appear in (19) Model M1 evaluates only the Tree-DOP root-category condition during the stochastic branching process, and the competition sets are fixed independent of the derivation. The probability of choosing the fragment S/NP-VP/U, given that an S-rooted fragment is required, is always 2116, its frequency divided by the sum of the frequencies of all the S fragments. Similarly, the probability of then choosing NP/John/SG to substitute at the NP frontier node is 1/4, since the NP competition set contains 4 fragments each with frequency I. Thus, under model MI the probability of producing the complete derivation S/NP-VP/U o NP/John/SG o VP/walked/U is 2116xl/4\u00d71/4--2/256. This probability is small because it indicates the likelihood of this derivation compared to other derivations for John walked and for the three other analyzable strings. The computation of the other M1 derivation probabilities for John walked is left to the reader. 
There are 5 different derivations for the representation with SG number and 5 for the PL number, while there are only 3 ways of producing the unmarked number U. The conditional probabilities for the particular representations (SG, PL, U) can be calculated by (9) and (13), and are given below. P(NUM=SG [ valid and yield = John walked) = .353 P(NUM=PL I valid and yield = John walked) = .353 P(NUM=U I valid and yield = John walked) = .294", |
| "cite_spans": [ |
| { |
| "start": 921, |
| "end": 925, |
| "text": "(19)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lle w J,&", |
| "sec_num": null |
| }, |
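The M1 arithmetic for this derivation can be checked with exact fractions. The frequency counts (total S-rooted frequency 16, and 4 fragments each for NP and VP) are taken from the text; the script is only a sanity check, not part of the model:

```python
from fractions import Fraction

# Model M1: competition sets depend only on the root category, so each
# substitution probability is fragment frequency / total frequency of
# same-root fragments. Counts from the text: the S-rooted fragments have
# total frequency 16 (S/NP-VP/U occurs twice), NP and VP have 4 each.
p_s  = Fraction(2, 16)  # choose S/NP-VP/U (frequency 2)
p_np = Fraction(1, 4)   # choose NP/John/SG (frequency 1)
p_vp = Fraction(1, 4)   # choose VP/walked/U (frequency 1)

p_derivation = p_s * p_np * p_vp
print(p_derivation)  # 1/128, i.e. 2/256
```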
| { |
| "text": "We see that the two specific representations are equally likely and each of them is more probable than the representation with unmarked NUM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lle w J,&", |
| "sec_num": null |
| }, |
| { |
| "text": "Model M2 produces a slightly different distribution of probabilities. Under this model, the consistency reqmrement ~s used in addition to the root-category matching requirement to define the competition sets at each step of the branching process. This means that the first fragment that instantiates the NUM feature to either SG or PL constrains the competition sets for the following choices in a derivation. Thus, having chosen the NP/John/SG fragment in the derivation S/NP-VP/U o NP/John/SG o VP/walked/U, only 3 VP fragments instead of 4 remain in the competition set at the next step, since the VP/walked/PL fragment is no longer available. The probability for this derivation under model M2 is therefore 2/16xl/4xl/3=2/192, slightly higher than the probability assigned to it by M1. Table 1 shows the complete set of derivations and their M2 probabilities for John walked. For model M2 the unmarked representation is less likely than under M 1, and now there is a slight bias in favor of the value SG over PL. The SG value is favored because it is carried by substitutions for the left-most word of the utterance and thus reduces competition for subsequent choices. The value PL would be more probable for the sentence People fell.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 790, |
| "end": 797, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lle w J,&", |
| "sec_num": null |
| }, |
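For this single derivation the contrast between the two models reduces to one factor: under M2 the VP competition set shrinks from 4 to 3 once NUM=SG is instantiated. A minimal check with exact fractions (a sanity-check sketch, not the paper's code):

```python
from fractions import Fraction

# M1: competition sets fixed by root category; the VP set always has 4 members.
p_m1 = Fraction(2, 16) * Fraction(1, 4) * Fraction(1, 4)   # = 2/256

# M2: after NP/John/SG fixes NUM=SG, VP/walked/PL is no longer unifiable,
# leaving 3 of the 4 VP fragments in the competition set.
p_m2 = Fraction(2, 16) * Fraction(1, 4) * Fraction(1, 3)   # = 2/192

print(p_m2 > p_m1)  # True: M2 assigns this derivation a slightly higher probability
```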
| { |
| "text": "Thus both models give higher probability to the more specific representations. Moreover, M1 assigns the same probability to SG and PL, whereas M2 doesn't.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lle w J,&", |
| "sec_num": null |
| }, |
| { |
| "text": "M2 reflects a left-to-right bias (which might be psycholingt, istically interesting --a so-called primacy effect), whereas M I is, like Tree-DOP, order independent. It turns out that all LFG-DOP probability models (M1, M2 and M3) display a preference for the most specific representation. This preference partly depends on the number of derivations: specific representations tend to have more derivations than generalized (i.e., unmarked) representations, and consequently tend to get higher probabilities --other things being equal. However, this preference also depends on the number of feature values: the more feature values, the longer the minimal derivation length must be in order to get a preference for the most specific representation (Cormons, forthcoming) .", |
| "cite_spans": [ |
| { |
| "start": 745, |
| "end": 767, |
| "text": "(Cormons, forthcoming)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lle w J,&", |
| "sec_num": null |
| }, |
| { |
| "text": "The bias in favor of more specific representations, and consequently fewer Discard-produced feature generalizations, is especially interesting for the interpretation of ill-formed input strings. Bod & Kaplan (1997) show that in analyzing an intuitively ungrammatical string like These boys walks, there is a probabilistic accumulation of evidence for the plural interpretation over the singular and unmarked one (for all models M1, M2 and M3). This is because both These and boys carry the PL feature while only walks is a source for the SG feature, leading to more dcriwttions for the PL reading of These boys walks. In case of \"equal evidence\" as in the ill-formed string Boys walks, model M I assigns the same probability to t'1, and SG, while models M2 and M3 prefer the PL interpretation due to their left-to-right bias.", |
| "cite_spans": [ |
| { |
| "start": 195, |
| "end": 214, |
| "text": "Bod & Kaplan (1997)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lle w J,&", |
| "sec_num": null |
| }, |
| { |
| "text": "Previous DOP models were based on context-free tree representations that cannot adequately represent all linguistic phenomena. In tiffs paper, we gave a DOP model based on the more articulated representations provided by LFG theory. LFG-DOP combines the advantages of two approaches: the linguistic adequacy of LFG together with the robustness of DOP. LFG-DOP triggers a new, corpus-based notion of grammaticality, and its probability models exhibit a preference for the most specific analysis containing the fewest number of feature generalizations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and computational issues", |
| "sec_num": "4." |
| }, |
| { |
| "text": "The main goal of this paper was to provide the theoretical background of LFG-DOP. As to the computational aspects of LFG-DOP, the problem of finding the most probable representation of a sentence is NP-hard even for Tree-DOP. This problem may be tackled by Monte Carlo sampling techniques (as in Tree-DOP, cf. Bod 1995) or by computing the Viterbi n best derivations of a sentence. Other optimization heuristics may consist of restricting the fragment space, t'or example by putting an upper bound on the fragment depth, or by constraining the decomposition operations. To date, a couple of LFG-DOP iinplementations are either operational (Cormons, forthcoming) or under development, and corpora with LFG representations have recently been developed (at XRCE France and Xerox PARC). Experiments with these corpora will be presented in due time.", |
| "cite_spans": [ |
| { |
| "start": 310, |
| "end": 319, |
| "text": "Bod 1995)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 639, |
| "end": 661, |
| "text": "(Cormons, forthcoming)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and computational issues", |
| "sec_num": "4." |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank Joan Bresnan, Mary Dalrymple, Mark Johnson, Martin Kay, John Maxwell, Remko Scha, Khalil Sima'an, Andy Way and three anonymous reviewers for helpful comments. We are most grateful to Boris Cormons whose comments were particularly helpful. This research was supported by NWO, the Dutch Organization for Scientific Research. The initial stages of this work were carried out while the second author was a Fellow of the Netherlands Institute for Advanced Study (NIAS). Subsequent stages were also carried out while the first author was a Consultant at Xerox PARC.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A Computational Model of Language Performance: Data Oriented Parsing", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Berg", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Scha", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings Ninth Amsterdam Colloquium", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. vall den Berg, R. Bod and R. Scha 1994. \"A Corpus- Based Approach to Semantic Interpretation\", Proceedings Ninth Amsterdam Colloquium, Amsterdam, The Netherlands. R. Bod 1992. \"A Computational Model of Language Performance: Data Oriented Parsing\", Proceedings COLING-92, Nantes, France.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Using an Annotated Corpus as a Stochastic Grammar", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings EACL'93", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Bod 1993. \"Using an Annotated Corpus as a Stochastic Grammar\", Proceedings EACL'93, Utrecht, The Netherlands.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Enriching Linguistics with Statistics: Performance Models of Natural Language", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Bod 1995. Enriching Linguistics with Statistics: Performance Models of Natural Language, 1LLC", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Spoken Dialogue Interpretation with the DOP Model", |
| "authors": [], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dissertation Series 1995-14, University of Amsterdam R. Bod 1998. \"Spoken Dialogue Interpretation with the DOP Model\", this proceedings.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "On Performance models for Lexical-Functional Analysis", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Kaplan", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Bod and R. Kaplan 1997. \"On Performance models for Lexical-Functional Analysis\", Paper presented at the Computational Psycholinguistics Conference 1997, Berkeley (Ca).", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A DOP Model for Semantic Interpretation", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Bonnema", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Scha", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings ACL/EACL-97", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Bonnema, R. Bod and R. Scha 1997. \"A DOP Model for Semantic Interpretation\", Proceedings ACL/EACL-97, Madrid, Spain.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Analyse et desambiguisation: Une apt, vche purement ?~ base de colpus (Data-Oriented Parsing) pour le formalisme des Grammaires Lexicales Fonctionnelles", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Cormons", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Proceedings Empirical Methods in Natural Language Processing", |
| "volume": "5", |
| "issue": "", |
| "pages": "305--322", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Cormons, forthcoming. Analyse et desambiguisation: Une apt, vche purement ?~ base de colpus (Data-Oriented Parsing) pour le formalisme des Grammaires Lexicales Fonctionnelles, PhD thesis, Universit6 de Rennes, France. J. Goodman 1996. \"Efficient Algorithms for Parsing the DOP Model\", Proceedings Empirical Methods in Natural Language Processing, Philadelphia, Pennsylvania. R. Kaplan 1989. \"The Formal Architecture of I~exical- Functional Grammar\", Journal of lnformation Science and Engineering, vol. 5,305-322.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Lexical-Fnnctional Grammar: A Formal System for Grammatical Representatimf", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Kaplan", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Bresnan", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Kaplan and J. Bresnan 1982. \"Lexical-Fnnctional Grammar: A Formal System for Grammatical Representatimf', in J. Bresnan (ed.), The Mental Representation of Grammatical Relations, The MIT Press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Probabilistic parsing using left corner language models", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Carpenter", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings IWPT'97", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Manning and B. Carpenter 1997. \"Probabilistic parsing using left corner language models\", Proceedings IWPT'97, Boston (Mass.).", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Approche Probabiliste de I'Analyse Syntaxique", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Rajman", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Traitement Automatique des Lz~ngues", |
| "volume": "36", |
| "issue": "", |
| "pages": "1--2", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Rajman 1995. \"Approche Probabiliste de I'Analyse Syntaxique\", Traitement Automatique des Lz~ngues, vol. 36(1-2).", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Virtuele Grammatica's en Creatieve Algoritmen", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Scha", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Gramma/TIT", |
| "volume": "1", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Scha 1992. \"Virtuele Grammatica's en Creatieve Algoritmen\", Gramma/TIT 1(1).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A Corpus-based Probabilistic Grammar with Only Two Non-terminals", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Sekine", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings Fourth huernational Workshop on Parsing Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Sekine and R. Grishman 1995. \"A Corpus-based Probabilistic Grammar with Only Two Non-terminals\", Proceedings Fourth huernational Workshop on Parsing Technologies, Prague, Czech Republic.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "An optimized algorithm for Data Oriented Parsing", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Sima'an", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Recent Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Sima'an 1995. \"An optimized algorithm for Data Oriented Parsing\", in R. Mitkov and N. Nicolov (eds.), Recent Advances in Natural Language Processing 1995, John Benjamins, Amsterdam.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A State-Transition Grammar for Data-Oriented Parsing", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings European Chapter of the ACL'95", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Tugwell 1995. \"A State-Transition Grammar for Data- Oriented Parsing\", Proceedings European Chapter of the ACL'95, Dublin, Ireland.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "P(R) = ]~I) derives R P(D)" |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "S/NP-VP/U \u00b0 NP/Jolm/SG \u00b0 VP/walked/U SG 2/16 x 1/4 x 1/3 S/NP-VP/SG o NP/John/SG o VP/walked/U SG 1/16 x 1/3 x 1/3 S/NP-VP/SG \u00b0 NP/John/U \u00b0 VP/walked/and yield = John walked) = 35/576 = .061 P(NUM=SG I valid and yield = John walked) = 70/182 = .38S/NP-VP/U o NP/John/U o VP/walked/PL PL 2/16 x I/4 x 1/4 S/NP-VP/PL o NP/John/U o VP/walked/PL PL 1/16 x 1/3 x 1/3 S/NP-VP/PI, \u00b0 NP/John/U \u00b0 VP/walked/and yield = John walked) = 335/576 = .058 P(NUM=PL I valid and yield = John walked) = 67/182 = .37 S/NP-VP/U o NWJohn/U o VP/walked/and yield = John walked) = 22.5/576 = .039 P(NUM=U I valid and yield = John walked) = 45/182 = .25 \"Fable 1: Model M2 derivations, subject number features, and probabilities for John walkedThe total probability for the derivations that produce John walked is .158, and the conditional probabilities for the three representations are: P(NUM=SG I valid and yield = John walked) = .38 P(NUM=PL I valid and yield = John walked) = .37 P(NUM=U I valid and yield = John walked) = .25" |
| } |
| } |
| } |
| } |