{
"paper_id": "C00-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:30:05.730264Z"
},
"title": "An Empirical Evaluation of LFG-DOP",
"authors": [
{
"first": "Rens",
"middle": [],
"last": "Bod",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Leeds",
"location": {
"postCode": "LS2 9JT",
"settlement": "Leeds"
}
},
"email": "rens@scs.leeds.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents an empirical assessment of the LFG-DOP model introduced by Bod & Kaplan (1998). The parser we describe uses fragments from LFG-annotated sentences to parse new sentences and Monte Carlo techniques to compute the most probable parse. While our main goal is to test Bod & Kaplan's model, we will also test a version of LFG-DOP which treats generalized fragments as previously unseen events. Experiments with the Verbmobil and Homecentre corpora show that our version of LFG-DOP outperforms Bod & Kaplan's model, and that LFG's functional information improves the parse accuracy of tree structures.",
"pdf_parse": {
"paper_id": "C00-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents an empirical assessment of the LFG-DOP model introduced by Bod & Kaplan (1998). The parser we describe uses fragments from LFG-annotated sentences to parse new sentences and Monte Carlo techniques to compute the most probable parse. While our main goal is to test Bod & Kaplan's model, we will also test a version of LFG-DOP which treats generalized fragments as previously unseen events. Experiments with the Verbmobil and Homecentre corpora show that our version of LFG-DOP outperforms Bod & Kaplan's model, and that LFG's functional information improves the parse accuracy of tree structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We present an empirical evaluation of the LFG-DOP model introduced by Bod & Kaplan (1998) . LFG-DOP is a Data-Oriented Parsing (DOP) model (Bod 1993, 98) based on the syntactic representations of Lexical-Functional Grammar (Kaplan & Bresnan 1982) . A DOP model provides linguistic representations for an unlimited set of sentences by generalizing from a given corpus of annotated exemplars. It operates by decomposing the given representations into (arbitrarily large) fragments and recomposing those pieces to analyze new sentences. The occurrence-frequencies of the fragments are used to determine the most probable analysis of a sentence.",
"cite_spans": [
{
"start": 70,
"end": 89,
"text": "Bod & Kaplan (1998)",
"ref_id": "BIBREF5"
},
{
"start": 139,
"end": 153,
"text": "(Bod 1993, 98)",
"ref_id": null
},
{
"start": 223,
"end": 246,
"text": "(Kaplan & Bresnan 1982)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "So far, DOP models have been implemented for phrase-structure trees and logical-semantic representations (cf. Bod 1993, 98; Sima'an 1995, 99; Bonnema et al. 1997; Goodman 1998) . However, these DOP models are limited in that they cannot account for underlying syntactic and semantic dependencies that are not reflected directly in a surface tree. DOP models for a number of richer representations have been explored (van den Berg et al. 1994; Tugwell 1995) , but these approaches have remained context-free in their generative power. In contrast, Lexical-Functional Grammar (Kaplan & Bresnan 1982) is known to be beyond context-free. In Bod & Kaplan (1998) , a first DOP model was proposed based on representations defined by LFG theory (\"LFG-DOP\"). 1 This model was 1 DOP models have recently also been proposed for Tree -Adjoining Grammar and Head-driven Phrase Structure Grammar (cf. Neumann & Flickinger 1999) . studied from a mathematical perspective by Cormons (1999) who also accomplished a first simple experiment with LFG-DOP. Next, Way (1999) studied LFG-DOP as an architecture for machine translation. The current paper contains the first extensive empirical evaluation of LFG-DOP on the currently available LFG-annotated corpora: the Verbmobil corpus and the Homecentre corpus. Both corpora were annotated at Xerox PARC.",
"cite_spans": [
{
"start": 110,
"end": 123,
"text": "Bod 1993, 98;",
"ref_id": null
},
{
"start": 124,
"end": 141,
"text": "Sima'an 1995, 99;",
"ref_id": null
},
{
"start": 142,
"end": 162,
"text": "Bonnema et al. 1997;",
"ref_id": "BIBREF6"
},
{
"start": 163,
"end": 176,
"text": "Goodman 1998)",
"ref_id": null
},
{
"start": 416,
"end": 442,
"text": "(van den Berg et al. 1994;",
"ref_id": "BIBREF0"
},
{
"start": 443,
"end": 456,
"text": "Tugwell 1995)",
"ref_id": "BIBREF21"
},
{
"start": 574,
"end": 597,
"text": "(Kaplan & Bresnan 1982)",
"ref_id": "BIBREF13"
},
{
"start": 637,
"end": 656,
"text": "Bod & Kaplan (1998)",
"ref_id": "BIBREF5"
},
{
"start": 750,
"end": 751,
"text": "1",
"ref_id": null
},
{
"start": 887,
"end": 913,
"text": "Neumann & Flickinger 1999)",
"ref_id": "BIBREF16"
},
{
"start": 959,
"end": 973,
"text": "Cormons (1999)",
"ref_id": "BIBREF8"
},
{
"start": 1042,
"end": 1052,
"text": "Way (1999)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our parser uses fragments from LFG-annotated sentences to parse new sentences, and Monte Carlo techniques to compute the most probable parse. Although our main goal is to test Bod & Kaplan's LFG-DOP model, we will also test a modified version of LFG-DOP which uses a different model for computing fragment probabilities. While Bod & Kaplan treat all fragments probabilistically equal regardless whether they contain generalized features, we will propose a more fine-grained probability model which treats fragments with generalized features as previously unseen events and assigns probabilities to these fragments by means of discounting. The experiments indicate that our probability model outperforms Bod & Kaplan's probability model on the Verbmobil and Homecentre corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows: we first summarize the LFG-DOP model and go into our proposed extension. Next, we explain the Monte Carlo parsing technique for estimating the most probable LFGparse of a sentence. In section 3, we test our parser on sentences from the LFG-annotated corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In accordance with Bod (1998) , a particular DOP model is described by specifying settings for the following four parameters:",
"cite_spans": [
{
"start": 19,
"end": 29,
"text": "Bod (1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of LFG-DOP and an Extension",
"sec_num": "2"
},
{
"text": "\u2022 a formal definition of a well-formed representation for utterance analyses, \u2022 a set of decomposition operations that divide a given utterance analysis into a set of fragments, \u2022 a set of composition operations by which such fragments may be recombined to derive an analysis of a new utterance, and \u2022 a probability model that indicates how the probability of a new utterance analysis is computed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of LFG-DOP and an Extension",
"sec_num": "2"
},
{
"text": "In defining a DOP model for Lexical-Functional Grammar representations, Bod & Kaplan (1998) give the following settings for DOP's four parameters.",
"cite_spans": [
{
"start": 72,
"end": 91,
"text": "Bod & Kaplan (1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of LFG-DOP and an Extension",
"sec_num": "2"
},
{
"text": "The representations used by LFG-DOP are directly taken from LFG: they consist of a c-structure, an f-structure and a mapping \u03c6 between them (see Kaplan & Bresnan 1982) . The following figure shows an example representation for the utterance Kim eats. (We leave out some features to keep the example simple.) Bod & Kaplan also introduce the notion of accessibility which they later use for defining the decomposition operations of LFG-DOP:",
"cite_spans": [
{
"start": 145,
"end": 167,
"text": "Kaplan & Bresnan 1982)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "2.1"
},
{
"text": "An f-structure unit f is \u03c6-accessible from a node n iff either n is \u03c6 -linked to f (that is, f = \u03c6 (n) ) or f is contained within \u03c6(n) (that is, there is a chain of attributes that leads from \u03c6(n) to f).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "2.1"
},
{
"text": "According to the LFG representation theory, c-structures and f-structures must satisfy certain formal wellformedness conditions. A c-structure/f-structure pair is a valid LFG representation only if it satisfies the Nonbranching Dominance, Uniqueness, Coherence and Completeness conditions (see Kaplan & Bresnan 1982) .",
"cite_spans": [
{
"start": 294,
"end": 316,
"text": "Kaplan & Bresnan 1982)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "2.1"
},
{
"text": "The fragments for LFG-DOP consist of connected subtrees whose nodes are in \u03c6-correspondence with the correponding sub-units of f-structures. To give a precise definition of LFG-DOP fragments, it is convenient to recall the decomposition operations employed by the simpler \"Tree-DOP\" model which is based on phrasestructure trees only (Bod 1998 ):",
"cite_spans": [
{
"start": 334,
"end": 343,
"text": "(Bod 1998",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposition operations and Fragments",
"sec_num": "2.2"
},
{
"text": "(1) Root: the Root operation selects any node of a tree to be the root of the new subtree and erases all nodes except the selected node and the nodes it dominates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposition operations and Fragments",
"sec_num": "2.2"
},
{
"text": "(2) Frontier: the Frontier operation then chooses a set (possibly empty) of nodes in the new subtree different from its root and erases all subtrees dominated by the chosen nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposition operations and Fragments",
"sec_num": "2.2"
},
{
"text": "Bod & Kaplan extend Tree-DOP's Root and Frontier operations so that they also apply to the nodes of the c-structure in LFG, while respecting the fundamental principles of c-structure/f-structure correspondence. When a node is selected by the Root operation, all nodes outside of that node's subtree are erased, just as in Tree-DOP. Further, for LFG-DOP, all \u03c6 links leaving the erased nodes are removed and all f-structure units that are not \u03c6-accessible from the remaining nodes are erased. For example, if Root selects the NP in figure 1 , then the f-structure corresponding to the S node is erased, giving figure 2 as a possible fragment: In addition the R o o t operation deletes from the remaining f-structure all semantic forms that are local to f-structures that correspond to erased c-structure nodes, and it thereby also maintains the fundamental two-way connection between words and meanings. Thus, if Root selects the VP node so that the NP is erased, the subject semantic form \"Kim\" is also deleted: As with Tree-DOP, the Frontier operation then selects a set of frontier nodes and deletes all subtrees they dominate. Like Root, it also removes the \u03c6 links of the deleted nodes and erases any semantic form that corresponds to any of those nodes. Frontier does not delete any other f-structure features, however. For instance, if the NP in figure 1 is selected as a frontier node, Frontier erases the predicate \"Kim\" from the fragment: Figure 5 . A Discard -generated fragment",
"cite_spans": [],
"ref_spans": [
{
"start": 531,
"end": 540,
"text": "figure 1",
"ref_id": null
},
{
"start": 1449,
"end": 1457,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decomposition operations and Fragments",
"sec_num": "2.2"
},
{
"text": "In LFG-DOP the operation for combining fragments, indicated by \u00b0, is carried out in two steps. First the cstructures are combined by left-most substitution subject to the category-matching condition, just as in Tree-DOP (cf. Bod 1993, 98) . This is followed by the recursive unification of the f-structures corresponding to the matching nodes. A derivation for an LFG-DOP representation R is a sequence of fragments the first of which is labeled with S and for which the iterative application of the composition operation produces R.",
"cite_spans": [
{
"start": 225,
"end": 238,
"text": "Bod 1993, 98)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The composition operation",
"sec_num": "2.3"
},
{
"text": "The two-stage composition operation is illustrated by a simple example. We therefore assume a corpus containing the representation in figure 1 for the sentence Kim eats and the representation in figure 6 for the sentence John fell. This representation satisfies the well-formedness conditions and is therefore valid. Note that the sentence Kim fell can be parsed by fragments that are generated by the decomposition operations Root and Frontier only, without using generalized fragments (i.e. fragments generated by the Discard operation). Bod & Kaplan (1998) call a sentence \"grammatical with respect to a corpus\" if it can be parsed without generalized fragments. Generalized fragments are needed only to parse sentences that are \"ungrammatical with respect to the corpus\".",
"cite_spans": [
{
"start": 540,
"end": 559,
"text": "Bod & Kaplan (1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The composition operation",
"sec_num": "2.3"
},
{
"text": "As in Tree-DOP, an LFG-DOP representation R can typically be derived in many different ways. If each derivation D has a probability P(D), then the probability of deriving R is the sum of the individual derivation probabilities, as shown in (1):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "P(R) = \u03a3 D derives R P(D)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "An LFG-DOP derivation is produced by a stochastic process which starts by randomly choosing a fragment whose c-structure is labeled with the initial category (e.g. S). At each subsequent step, a next fragment is chosen at random from among the fragments that can be composed with the current subanalysis. The chosen fragment is composed with the current subanalysis to produce a new one; the process stops when an analysis results with no non-terminal leaves. We will call the set of composable fragments at a certain step in the stochastic process the competition set at that step. Let CP(f | CS) denote the probability of choosing a fragment f from a competition set CS containing f, then the probability of a derivation D = <f 1 , f 2 ... f k > is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "P(<f 1 , f 2 ... f k >) = \u03a0 i CP(f i | CS i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "where the competition probability CP(f | CS) is expressed in terms of fragment probabilities P(f):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "(3) CP(f | CS) = P(f) / \u03a3 f'\u2208CS P(f')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "Bod & Kaplan give three definitions of increasing complexity for the competition set: the first definition groups all fragments that only satisfy the Categorymatching condition of the composition operation (thus leaving out the Uniqueness, Coherence and Completeness conditions); the second definition groups all fragments which satisfy both Category-matching and Uniqueness; and the third definition groups all fragments which satisfy Category-matching, Uniqueness and Coherence. Bod & Kaplan point out that the Completeness condition cannot be enforced at each step of the stochastic derivation process. It is a property of the final representation which can only be enforced by sampling valid representations from the output of the stochastic process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "In this paper, we will only deal with the third definition of competition set, as it selects only those fragments at each derivation step that may finally result in a valid LFG representation, thus reducing the off-line validity checking just to the Completeness condition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "Notice that the computation of the competition probability in (3) still requires a definition for the fragment probability P(f). Bod & Kaplan define the probability of a fragment simply as its relative frequency in the bag of all fragments generated from the corpus. Thus Bod & Kaplan do not distinguish between Root/Frontier-generated fragments and Discardgenerated fragments, the latter being generalizations over Root/Frontier-generated fragments. Although Bod & Kaplan illustrate with a simple example that their probability model exhibits a preference for the most specific representation containing the fewest feature generalizations (mainly because specific representations tend to have more derivations than generalized representations), they do not perform an empirical evaluation of their model. We will assess their model on the LFG-annotated Verbmobil and Homecentre corpora in section 3 of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "However, we will also assess an alternative definition of fragment probability which is a refinement of Bod & Kaplan's model. This definition d o e s distinguish between fragments supplied by Root/Frontier and fragments supplied by Discard. We will treat the first type of fragments as seen events, and the second type of fragments as previously unseen events. We thus create two separate bags corresponding to two separate distributions: a bag with fragments generated by Root and Frontier, and a bag with fragments generated by Discard. We assign probability mass to the fragments of each bag by means of d i s c o u n t i n g : the relative frequencies of seen events are discounted and the gained probability mass is reserved for the bag of unseen events (cf. Ney et al. 1997) . We accomplish this by a very simple estimator: the Turing-Good estimator (Good 1953) which computes the probability mass of unseen events as n 1 /N where n 1 is the number of singleton events and N is the total number of seen events. This probability mass is assigned to the bag of Discardgenerated fragments. The remaining mass (1 \u2212 n 1 /N) is assigned to the bag of R o o t /F r o n t i e r -g e n e r a t e d fragments. Thus the total probability mass is redistributed over the seen and unseen fragments. The probability of each fragment is then computed as its relative frequency 2 in its bag multiplied by the probability mass assigned to this bag. Let | f | denote the frequency of a fragment f, then its probability is given by:",
"cite_spans": [
{
"start": 764,
"end": 780,
"text": "Ney et al. 1997)",
"ref_id": "BIBREF17"
},
{
"start": 856,
"end": 867,
"text": "(Good 1953)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "2 Bod (2000) discusses some alternative fragment probability estimators, e.g. based on maximum likelihood.",
"cite_spans": [
{
"start": 2,
"end": 12,
"text": "Bod (2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "| f | \u03a3 f': f' is generated by Root/ Frontier | f'| (1 \u2212 n 1 /N) (4) P(f | f is generated by Root /Frontier) = (5) P(f | f is generated by Discard) = (n 1 /N) | f | \u03a3 f': f' is generated by Discard | f'|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "Note that this probability model assigns less probability mass to Discard-generated fragments than Bod & Kaplan's model. For each Root/Frontier-generated fragment there are exponentially many Discardgenerated fragments (exponential in the number of features the fragment contains), which means that in Bod & Kaplan's model the Discard-generated fragments absorb a vast amount of probability mass. Our model, on the other hand, assigns a fixed probability mass to the distribution of Discard-generated fragments and therefore the exponential explosion of these fragments does not affect the probabilities of Root/Frontiergenerated fragments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability models",
"sec_num": "2.4"
},
{
"text": "In his PhD-thesis, Cormons (1999) describes a parsing algorithm for LFG-DOP which is based on the Tree-DOP parsing technique given in Bod (1998) The indexed trees are then fragmented by applying the Tree-DOP decomposition operations described in section 2. Next, the LFG-DOP decomposition operations Root, Frontier and Discard are applied to the f-structure units that correspond to the indices in the c-structure subtrees.",
"cite_spans": [
{
"start": 134,
"end": 144,
"text": "Bod (1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Testing the LFG-DOP model 3.1 Computing the most probable analysis",
"sec_num": "3"
},
{
"text": "Having obtained the set of LFG-DOP fragments in this way, each test sentence is parsed by a bottom-up chart parser using initially the indexed subtrees only. Thus only the Category-matching condition is enforced during the chart-parsing process. The Uniqueness and Coherence conditions of the corresponding f-structure units are enforced during the disambiguation (or chartdecoding) process. Disambiguation is accomplished by computing a large number of random derivations from the chart; this technique is known as \"Monte Carlo disambiguation\" and has been extensively described in the literature (e.g. Bod 1998; Chappelier & Rajman 1998; Goodman 1998) . Sampling a random derivation from the chart consists of choosing at random one of the fragments from the set of composable fragments at every labeled chart-entry (in a top-down, leftmost order so as to maintain the LFG-DOP derivation order). Thus the competition set of composable fragments is computed on the fly at each derivation step during the Monte Carlo sampling process by grouping the f-structure units that unify and that are coherent with the subderivation built so far. As mentioned in 2.4, the Completeness condition can only be checked after the derivation process. Incomplete derivations are simply removed from the sampling distribution. After sampling a large number of random derivations that satisfy the LFG validity requirements, the most probable analysis is estimated by the analysis which results most often from the sampled derivations. For our experiments in section 3.2, we used a sample size of N = 10,000 derivations which corresponds to a maximal standard error \u03c3 of 0.005 (\u03c3 \u2264 1/(2\u221a\u039d), see Bod 1998 ).",
"cite_spans": [
{
"start": 604,
"end": 613,
"text": "Bod 1998;",
"ref_id": "BIBREF3"
},
{
"start": 614,
"end": 639,
"text": "Chappelier & Rajman 1998;",
"ref_id": "BIBREF7"
},
{
"start": 640,
"end": 653,
"text": "Goodman 1998)",
"ref_id": null
},
{
"start": 1676,
"end": 1684,
"text": "Bod 1998",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Testing the LFG-DOP model 3.1 Computing the most probable analysis",
"sec_num": "3"
},
{
"text": "We tested LFG-DOP on two LFG-annotated corpora: the Verbmobil corpus, which contains appointment planning dialogues, and the Homecentre corpus, which contains Xerox printer documentation. Both corpora have been annotated by Xerox PARC. They contain packed LFGrepresentations (Maxwell & Kaplan 1991) of the grammatical parses of each sentence together with an indication which of these parses is the correct one. The parses are represented in a binary form and were debinarized using software provided to us by Xerox PARC. 3 For our experiments we only used the correct parses of each sentence resulting in 540 Verbmobil parses and 980 Homecentre parses. Each corpus was divided into a 90% training set and a 10% test set. This division was random except for one constraint: that all the words in the test set actually occurred in the training set. The sentences from the test set were parsed and disambiguated by means of the fragments from the training set. Due to memory limitations, we limited the depth of the indexed subtrees to 4. Because of the small size of the corpora we averaged our results on 10 different training/test set splits. Besides an exact match accuracy metric, we also used a more fine-grained metric based on the well-known PARSEVAL metrics that evaluate phrase-structure trees (Black et al. 1991) . The PARSEVAL metrics compare a proposed parse P with the corresponding correct treebank parse T as follows:",
"cite_spans": [
{
"start": 275,
"end": 298,
"text": "(Maxwell & Kaplan 1991)",
"ref_id": "BIBREF15"
},
{
"start": 522,
"end": 523,
"text": "3",
"ref_id": null
},
{
"start": 1302,
"end": 1321,
"text": "(Black et al. 1991)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with LFG-DOP",
"sec_num": "3.2"
},
{
"text": "Precision = # correct constituents in P # constituents in P # correct constituents in P # constituents in T Recall =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with LFG-DOP",
"sec_num": "3.2"
},
{
"text": "In order to apply these metrics to LFG analyses, we extend the PARSEVAL notion of \"correct constituent\" in the following way: a constituent in P is correct if there exists a constituent in T of the same label that spans the same words and that \u03c6 -corresponds to the same fstructure unit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with LFG-DOP",
"sec_num": "3.2"
},
{
"text": "We illustrate the evaluation metrics with a simple example. In the next figure, a proposed parse P is compared with the correct parse T for the test sentence Kim fell. The proposed parse is incorrect since it has the incorrect feature value for the TENSE attribute. Thus, if this were the only test sentence, the exact match would be 0%. The precision, on the other hand, is higher than 0% as it compares the parse on a constituent basis. Both the proposed parse and the correct parse contain three constituents: S, NP and VP. While all three constituents in P have the same label and span the same words as in T, only the NP constituent in P also maps to the same fstructure unit as in T. The precision is thus equal to 1/3. Note that in this example the recall is equal to the precision, but this need not always be the case. In our experiments we are first of all interested in comparing the performance of Bod & Kaplan's probability model against our probability model (as explained in section 2.4). Moreover, we also want to study the contribution of Discard-generated fragments to the parse accuracy. We therefore created for each training set two sets of fragments: one which contains all fragments (up to depth 4) and one which excludes the fragments generated by Discard. The exclusion of the D i s c a r d -generated fragments means that all probability mass goes to the fragments generated by Root and Frontier in which case our model is equivalent to Bod & Kaplan's. The following two tables present the results of our experiments where +Discard refers to the full set of fragments and \u2212Discard refers to the fragment set without Discard-generated fragments. Cormons (1999) has made a mathematical observation which also shows that generalized fragments can get too much probability mass.",
"cite_spans": [
{
"start": 1671,
"end": 1685,
"text": "Cormons (1999)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with LFG-DOP",
"sec_num": "3.2"
},
{
"text": "The tables also show that our way of assigning probabilities to Discard-generated fragments leads only to a slight accuracy increase (compared to the experiments in which Discard-generated fragments are excluded). According to paired t-testing none of these differences in accuracy were statistically significant. This suggests that Discard-generated fragments do not significantly contribute to the parse accuracy, or that perhaps these fragments are too numerous to be reliably estimated on the basis of our small corpora. We also varied the probability mass assigned to Discardgenerated fragments: except for very small (\u2264 0.01) or large values (\u2265 0.88), which led to an accuracy decrease, there was no significant change. 4 It is difficult to say how good or bad our results are with respect to other approaches. The only other published results on the LFG-annotated Verbmobil and Homecentre corpora are by Johnson et al. (1999) and Johnson & Riezler (2000) who use a log-linear model to estimate probabilities. But while we first parse the test sentences with fragments from the training set and subsequently compute the most probable parse, Johnson et al. directly use the packed LFG-representations from the test set to select the most probable parse, thereby completely skipping the parsing phase (Mark Johnson, p.c.) . Moreover, 42% of the Verbmobil sentences and 51% of the Homecentre sentences are unambiguous (i.e. their packed LFG-representations contain only one analysis), which makes Johnson et al's task completely trivial for these sentences. In our approach, all test sentences were ambiguous, resulting in a much more difficult task. A quantitative comparison between our model and Johnson et al.'s is therefore meaningless.",
"cite_spans": [
{
"start": 726,
"end": 727,
"text": "4",
"ref_id": null
},
{
"start": 911,
"end": 932,
"text": "Johnson et al. (1999)",
"ref_id": "BIBREF11"
},
{
"start": 937,
"end": 961,
"text": "Johnson & Riezler (2000)",
"ref_id": "BIBREF12"
},
{
"start": 1305,
"end": 1325,
"text": "(Mark Johnson, p.c.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with LFG-DOP",
"sec_num": "3.2"
},
{
"text": "Finally, we are interested in the impact of functional structures on predicting the correct constituent structures. We therefore removed all fstructure units from the fragments (thus yielding a Tree-DOP model) and compared the results against our version of LFG-DOP (which include the Discardgenerated fragments). We evaluated the parse accuracy on the tree-structures only, using exact match together with the PARSEVAL measures. We used the same training/test set splits as in the previous experiments and limited the maximum subtree depth again to 4. The following tables show the results. Table 3 . C-structure accuracy on the Verbmobil 4 Although generalized fragments thus seem statistically unimportant for these corpora, they remain important for parsing ungrammatical sentences (which was the original motivation for including them --see Bod & Kaplan 1998) . Table 4 . C-structure accuracy on the Homecentre",
"cite_spans": [
{
"start": 640,
"end": 641,
"text": "4",
"ref_id": null
},
{
"start": 846,
"end": 864,
"text": "Bod & Kaplan 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 592,
"end": 599,
"text": "Table 3",
"ref_id": null
},
{
"start": 867,
"end": 874,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments with LFG-DOP",
"sec_num": "3.2"
},
{
"text": "The results indicate that LFG-DOP's functional structures help to improve the parse accuracy of tree-structures. In other words, LFG-DOP outperforms Tree-DOP when evaluated on tree-structures only. According to paired t-tests, the differences in accuracy were statistically significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with LFG-DOP",
"sec_num": "3.2"
},
{
"text": "We have given an empirical assessment of the LFG-DOP model introduced by Bod & Kaplan (1998) . We developed a new probability model for LFG-DOP which treats fragments with generalized features as previously unseen events. The experiments showed that our probability model outperforms Bod & Kaplan's model on the Verbmobil and Homecentre corpora. Moreover, Bod & Kaplan's model turned out to be inadequate in dealing with generalized fragments. We also established that the contribution of generalized fragments to the parse accuracy in our model is minimal and statistically insignificant. Finally, we showed that LFG's functional structures contribute to significantly higher parse accuracy on tree structures. This suggests that our model may be successfully used to exploit the functional annotations in the Penn Treebank (Marcus et al. 1994) , provided that these annotations can be converted into LFG-style functional structures. As future research, we want to test LFG-DOP using log-linear models, as such models maximize the likelihood of the training corpus.",
"cite_spans": [
{
"start": 73,
"end": 92,
"text": "Bod & Kaplan (1998)",
"ref_id": "BIBREF5"
},
{
"start": 825,
"end": 845,
"text": "(Marcus et al. 1994)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Thanks to Hadar Shemtov for providing us with the relevant software.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Corpus-Based Approach to Semantic Interpretation",
"authors": [
{
"first": "M",
"middle": [
"van",
"den"
],
"last": "Berg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bod",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Scha",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings Ninth Amsterdam Colloquium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. van den Berg, R. Bod and R. Scha, 1994. \"A Corpus-Based Approach to Semantic Interpretation\", Proceedings Ninth Amsterdam Colloquium, Amsterdam, The Netherlands.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Procedure for Quantitatively Comparing the Syntactic Coverage of English",
"authors": [
{
"first": "E",
"middle": [],
"last": "Black",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Black et al., 1991. \"A Procedure for Quantitatively Comparing the Syntactic Coverage of English\", Proceedings DARPA Speech and Natural Language Workshop, Pacific Grove, Morgan Kaufmann.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using an Annotated Language Corpus as a Virtual Stochastic Grammar",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bod",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings AAAI'93",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Bod, 1993. \"Using an Annotated Language Corpus as a Virtual Stochastic Grammar\", Proceedings AAAI'93, Washington D.C.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Beyond Grammar",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bod",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Bod, 1998. Beyond Grammar, CSLI Publications, Cambridge University Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Parsing with the Shortest Derivation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bod",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings COLING-2000",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Bod, 2000. \"Parsing with the Shortest Derivation\", Proceedings COLING-2000, Saarbr\u00fccken, Germany.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Probabilistic Corpus-Driven Model for Lexical Functional Analysis",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bod",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kaplan",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings COLING-ACL'98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Bod and R. Kaplan, 1998. \"A Probabilistic Corpus-Driven Model for Lexical Functional Analysis\", Proceedings COLING-ACL'98, Montreal, Canada.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A DOP Model for Semantic Interpretation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bonnema",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bod",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Scha",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings ACL/EACL-97",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Bonnema, R. Bod and R. Scha, 1997. \"A DOP Model for Semantic Interpretation\", Proceedings ACL/EACL-97, Madrid, Spain.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Extraction stochastique d'arbres d'analyse pour le mod\u00e8le DOP",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chappelier",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rajman",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings TALN'98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Chappelier and M. Rajman, 1998. \"Extraction stochastique d'arbres d'analyse pour le mod\u00e8le DOP\", Proceedings TALN'98, Paris, France.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Analyse et d\u00e9sambiguisation: Une approche \u00e0 base de corpus (Data-Oriented Parsing) pour les repr\u00e9sentations lexicales fonctionnelles",
"authors": [
{
"first": "B",
"middle": [],
"last": "Cormons",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Cormons, 1999. Analyse et d\u00e9sambiguisation: Une approche \u00e0 base de corpus (Data-Oriented Parsing) pour les repr\u00e9sentations lexicales fonctionnelles. PhD thesis, Universit\u00e9 de Rennes, France.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Population Frequencies of Species and the Estimation of Population Parameters",
"authors": [
{
"first": "I",
"middle": [],
"last": "Good",
"suffix": ""
}
],
"year": 1953,
"venue": "Biometrika",
"volume": "40",
"issue": "",
"pages": "237--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Good, 1953. \"The Population Frequencies of Species and the Estimation of Population Parameters\", Biometrika 40, 237-264.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Estimators for Stochastic Unification-Based Grammars",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Canon",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings ACL'99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Johnson, S. Geman, S. Canon, Z. Chi and S. Riezler, 1999. \"Estimators for Stochastic Unification-Based Grammars\", Proceedings ACL'99, Maryland.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Exploiting Auxiliary Distributions in Stochastic Unification-Based Grammars",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings ANLP-NAACL-2000",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Johnson and S. Riezler, 2000. \"Exploiting Auxiliary Distributions in Stochastic Unification-Based Grammars\", Proceedings ANLP-NAACL-2000, Seattle, Washington.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Lexical-Functional Grammar: A Formal System for Grammatical Representation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bresnan",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Kaplan and J. Bresnan, 1982. \"Lexical-Functional Grammar: A Formal System for Grammatical Representation\", in J. Bresnan (ed.), The Mental Representation of Grammatical Relations, The MIT Press, Cambridge, Mass.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Penn Treebank: Annotating Predicate Argument Structure",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Macintyre",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ferguson",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Schasberger",
"suffix": ""
}
],
"year": 1994,
"venue": "ARPA Human Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "110--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Marcus, G. Kim, M. Marcinkiewicz, R. MacIntyre, A. Bies, M. Ferguson, K. Katz and B. Schasberger, 1994. \"The Penn Treebank: Annotating Predicate Argument Structure\". In: ARPA Human Language Technology Workshop, 110-115.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Method for Disjunctive Constraint Satisfaction",
"authors": [
{
"first": "J",
"middle": [],
"last": "Maxwell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kaplan",
"suffix": ""
}
],
"year": 1991,
"venue": "Current Issues in Parsing Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Maxwell and R. Kaplan, 1991. \"A Method for Disjunctive Constraint Satisfaction\", in M. Tomita (ed.), Current Issues in Parsing Technology, Kluwer Academic Publishers.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning Stochastic Lexicalized Tree Grammars from HPSG",
"authors": [
{
"first": "G",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 1999,
"venue": "DFKI Technical Report",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Neumann and D. Flickinger, 1999. \"Learning Stochastic Lexicalized Tree Grammars from HPSG\", DFKI Technical Report, Saarbr\u00fccken, Germany.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Statistical Language Modeling Using Leaving-One-Out",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Wessel",
"suffix": ""
}
],
"year": 1997,
"venue": "Corpus-Based Methods in Language and Speech Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Ney, S. Martin and F. Wessel, 1997. \"Statistical Language Modeling Using Leaving-One-Out\", in S. Young & G. Bloothooft (eds.), Corpus-Based Methods in Language and Speech Processing, Kluwer Academic Publishers.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An optimized algorithm for Data Oriented Parsing",
"authors": [
{
"first": "K",
"middle": [],
"last": "Sima'an",
"suffix": ""
}
],
"year": 1995,
"venue": "Recent Advances in Natural Language Processing",
"volume": "136",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Sima'an, 1995. \"An optimized algorithm for Data Oriented Parsing\", in R. Mitkov and N. Nicolov (eds.), Recent Advances in Natural Language Processing 1995, volume 136 of Current Issues in Linguistic Theory. John Benjamins, Amsterdam.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning Efficient Disambiguation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Sima'an",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Sima'an, 1999. Learning Efficient Disambiguation. PhD thesis, ILLC dissertation series number 1999-02.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A State-Transition Grammar for Data-Oriented Parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Tugwell",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings European Chapter of the ACL'95",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Tugwell, 1995. \"A State-Transition Grammar for Data-Oriented Parsing\", Proceedings European Chapter of the ACL'95, Dublin, Ireland.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Hybrid Architecture for Robust MT using LFG-DOP",
"authors": [
{
"first": "A",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Experimental and Theoretical Artificial Intelligence",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Way, 1999. \"A Hybrid Architecture for Robust MT using LFG-DOP\", Journal of Experimental and Theoretical Artificial Intelligence 11 (Special Issue on Memory-Based Language Processing).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "A representation for Kim eats"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "An LFG-DOP fragment obtained by Root"
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Another LFG-DOP fragment"
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "A Frontier-generated fragment Finally, Bod & Kaplan present a third decomposition operation, Discard, defined to construct generalizations of the fragments supplied by Root and Frontier. Discard acts to delete combinations of attribute-value pairs subject to the following condition: Discard does not delete pairs whose values \u03c6-correspond to remaining c-structure nodes. Discard produces fragments such as in figure 5, where the subject's number in figure 3"
},
"FIGREF7": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Corpus representation for John fell"
},
"FIGREF8": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "shows the effect of the LFG-DOP composition operation using two fragments from this corpus, resulting in a representation for the new sentence Kim fell. Illustration of the composition operation"
},
"FIGREF9": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Cormons first converts LFG-representations into more compact indexed trees: each node in the c-structure is assigned an index which refers to the \u03c6-corresponding f-structure unit. For example, the representation in figure 6"
}
}
}
}