| { |
| "paper_id": "W96-0214", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T04:58:57.821290Z" |
| }, |
| "title": "Efficient Algorithms for Parsing the DOP Model *", |
| "authors": [ |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Harvard University", |
| "location": { |
| "addrLine": "33 Oxford St", |
| "postCode": "02138", |
| "settlement": "Cambridge", |
| "region": "MA" |
| } |
| }, |
| "email": "goodman@das.harvard.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Excellent results have been reported for Data-Oriented Parsing (DOP) of natural language texts (Bod, 1993c). Unfortunately, existing algorithms are both computationally intensive and difficult to implement. Previous algorithms are expensive due to two factors: the exponential number of rules that must be generated and the use of a Monte Carlo parsing algorithm. In this paper we solve the first problem by a novel reduction of the DOP model to:a small, equivalent probabilistic context-free grammar. We solve the second problem by a novel deterministic parsing strategy that maximizes the expected number of correct constituents, rather than the probability of a correct parse tree. Using ithe optimizations, experiments yield a 97% crossing brackets rate and 88% zero crossing brackets rate. This differs significantly from the results reported by Bod, and is comparable to results from a duplication of Pereira and Schabes's (1992) experiment on the same data. We show that Bod's results are at least partially due to an extremely fortuitous choice of test data, and partially due to using cleaner data than other researchers.", |
| "pdf_parse": { |
| "paper_id": "W96-0214", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Excellent results have been reported for Data-Oriented Parsing (DOP) of natural language texts (Bod, 1993c). Unfortunately, existing algorithms are both computationally intensive and difficult to implement. Previous algorithms are expensive due to two factors: the exponential number of rules that must be generated and the use of a Monte Carlo parsing algorithm. In this paper we solve the first problem by a novel reduction of the DOP model to:a small, equivalent probabilistic context-free grammar. We solve the second problem by a novel deterministic parsing strategy that maximizes the expected number of correct constituents, rather than the probability of a correct parse tree. Using ithe optimizations, experiments yield a 97% crossing brackets rate and 88% zero crossing brackets rate. This differs significantly from the results reported by Bod, and is comparable to results from a duplication of Pereira and Schabes's (1992) experiment on the same data. We show that Bod's results are at least partially due to an extremely fortuitous choice of test data, and partially due to using cleaner data than other researchers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The Data-Oriented Parsing (DOP) model has a short, interesting, and controversial history. It was introduced by Remko Scha (1990) , and was then studied by Rens Bod. Unfortunately, Bod (1993c , 1992 was not able to find an efficient exact * I would like to acknowledge support from National Science Foundation Grant IRI-9350192 and a National Science Foundation Graduate Student Fellowship. I would also like to thank Rens Bod, Stan Chen, Andrew Kehler, David Magerman, Wheeler Rural, Stuart Shieber, and Khalil Sima'an for helpful discussions, and comments on earlier drafts, and the comments of the anonymous reviewers. algorithm for parsing using the model; however he did discover and implement Monte Carlo approximations. He tested these algorithms on a cleaned up version of the ATIS corpus, and achieved some very exciting results, reportedly getting 96% of his test set exactly correct, a huge improvement over previous results. For instance, Bod (1993b) compares these results to Schabes (1993) , in which, for short sentences, 30% of the sentences have no crossing brackets (a much easier measure than exact match). Thus, Bod achieves an extraordinary &fold error rate reduction.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 129, |
| "text": "Scha (1990)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 156, |
| "end": 191, |
| "text": "Rens Bod. Unfortunately, Bod (1993c", |
| "ref_id": null |
| }, |
| { |
| "start": 192, |
| "end": 198, |
| "text": ", 1992", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 951, |
| "end": 962, |
| "text": "Bod (1993b)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 989, |
| "end": 1003, |
| "text": "Schabes (1993)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Not surprisingly, other researchers attempted to duplicate these results, but due to a lack of details of the parsing algorithm in his publications, these other researchers were not able to confirm the results (Magerman, Lalferty, personal communication). Even Bod's thesis (Bod, 1995a) does not contain enough information to replicate his results.", |
| "cite_spans": [ |
| { |
| "start": 274, |
| "end": 286, |
| "text": "(Bod, 1995a)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Parsing using the DOP model is especially difficult. The model can be summarized as a special kind of Stochastic Tree Substitution Grammar (STSG): given a bracketed, labelled training corpus, let every subtree of that corpus be an elementary tree, with a probability proportional to the number of occurrences of that subtree in the training corpus. Unfortunately, the number of trees is in general exponential in the size of the training corpus trees, producing an unwieldy grammar.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we introduce a reduction of the DOP model to an exactly equivalent Probabilistic Context Free Grammar (PCFG) that is linear in the number of nodes in the training data. Next, we present an algorithm for parsing, which returns the parse that is expected to have the largest number of correct constituents. We use the reduction and algorithm to parse held out test data, comparing these results to a replication of Pereira and Schabes (1992) on the same data. These results are disappointing: the PCFG implementation of the DOP model performs about the same as the Pereira and Schabes method. We present an analysis of the runtime of our algorithm and Bod's. Finally, we analyze Bod's data, showing that some of the difference between our performance and his is due to a fortuitous choice of test data.", |
| "cite_spans": [ |
| { |
| "start": 440, |
| "end": 454, |
| "text": "Schabes (1992)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "This paper contains the first published replication of the full DOP model, i.e. using a parser which sums over derivations. It also contains algorithms implementing the model with significantly fewer resources than previously needed. Furthermore, for the first time, the DOP model is compared on the same data to a competing model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "The DOP model itself is extremely simple and can be described as follows: for every sentence in a parsed training corpus, extract every subtree. In general, the number of subtrees will be very large, typically exponential in sentence length. Now, use these trees to form a Stochastic Tree Substitution Grammar (STSG). There are two ways to define a STSG: either as a Stochastic Tree Adjoining Grammar (Schabes, 1992) restricted to substitution operations, or as an extended PCFG in which entire trees may occur on the right hand side, instead of just strings of terminals and nonterminals.", |
| "cite_spans": [ |
| { |
| "start": 401, |
| "end": 416, |
| "text": "(Schabes, 1992)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Research", |
| "sec_num": null |
| }, |
| { |
| "text": "Given the tree of Figure 1 , we can use the DOP model to convert it into the STSG of Figure 2 . The numbers in parentheses represent the probabilities. These trees can be combined in various ways to parse sentences.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 18, |
| "end": 26, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 85, |
| "end": 94, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Previous Research", |
| "sec_num": null |
| }, |
| { |
| "text": "In theory, the DOP model has several advantages over other models. Unlike a PCFG, the use of trees allows capturing large contexts, making the model more sensitive. Since every subtree is included, even trivial ones corresponding to rules in a PCFG, novel sentences with unseen contexts 144 Unfortunately, the number of subtrees is huge; therefore Bod randomly samples 5% of the subtrees, throwing away the rest. This significantly speeds up parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Research", |
| "sec_num": null |
| }, |
| { |
| "text": "s (3) s s = s (~)_ ~ ~ ~ B (1) A C A D E B E B I I I I I I I I X X X X ~r", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Research", |
| "sec_num": null |
| }, |
| { |
| "text": "There are two existing ways to parse using the DOP model. First, one can find the most probable derivation. That is, there can be many ways a given sentence could be derived from the STSG. Using the most probable derivation criterion, one simply finds the most probable way that a sentence could be produced. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Research", |
| "sec_num": null |
| }, |
| { |
| "text": "x x has probability ~ of being generated by the trivial derivation containing a single tree. This tree corresponds to the most probable derivation of xx.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "I I", |
| "sec_num": null |
| }, |
| { |
| "text": "One could try to find the most probable parse tree. For a given sentence and a given parse tree, there are many different derivations that could lead to that parse tree. The probability of the parse tree is the sum of the probabilities of the derivations. Given our example, there are two different ways to generate the parse tree S E B", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "I I", |
| "sec_num": null |
| }, |
| { |
| "text": "x x each with probability -~, so that the parse tree has probability -~. This parse tree is most probable. Bod (1993c) shows how to approximate this most probable parse using a Monte Carlo algorithm. The algorithm randomly samples possible derivations, then finds the tree with the most sampled derivations. Bod shows that the most probable parse yields better performance than the most probable derivation on the exact match criterion. 1996) implemented a version of the DOP model, which parses efficiently by limiting the number of trees used and by using an efficient most probable derivation model. His experiments differed from ours and Bod's in many ways, including his use of a ditferent version of the ATIS corpus; the use of word strings, rather than part of speech strings; and the fact that he did not parse sentences containing unknown words, effectively throwing out the most difficult sentences. Furthermore, Sim a'an limited the number of substitution sites for his trees, effectively using a subset of the DOP model.", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 118, |
| "text": "Bod (1993c)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 437, |
| "end": 442, |
| "text": "1996)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "I I", |
| "sec_num": null |
| }, |
| { |
| "text": "NP (\u00bd) DET N VP (\u00bd) V NP s NP VP V NP //'...... DET N vP (\u00bd) NP (\u00bd) V", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "I I", |
| "sec_num": null |
| }, |
| { |
| "text": "Unfortunately, Bod's reduction to a STSG is extremely expensive, even when throwing away 95% of the grammar. Fortunately, it is possible to find an equivalent PCFG that contains exactly eight PCFG rules for each node in the training data; thus it is O(n). Because this reduction is so much smaller, we do not discard any of the grammar when using it. The PCFG is equivalent in two senses: first it generates the same strings with the same probabilities; second, using an isomorphism defined below, it generates the same trees with the same probabilities, although one must sum over several PCFG trees for each STSG tree.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reduction of DOP to PCFG", |
| "sec_num": null |
| }, |
| { |
| "text": "To show this reduction and equivalence, we must first define some terminology. We assign every node in every tree a unique number, which we will call its address. Let A@k denote the node at address k, where A is the non-terminal labeling that node. We will need to create one new nonterminal for each node in the training data. We will call this non-terminal Ak. We will call nonterminals of this form \"interior\" non-terminals, and the original non-terminals in the parse trees \"exterior\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reduction of DOP to PCFG", |
| "sec_num": null |
| }, |
| { |
| "text": "Let aj represent the number of subtrees headed by the node A@j. Let a represent the number of subtrees headed by nodes with non-terminal A, that is a = ~j aj.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reduction of DOP to PCFG", |
| "sec_num": null |
| }, |
| { |
| "text": "A@j B@k C@l How many subtrees does it have? Consider first the possibilities on the left branch. There are bk non-trivial subtrees headed by B@k, and there is also the trivial case where the left node is simply B. Thus there are bk \u00f7 1 different possibilities on the left branch. Similarly, for the right branch there are cl + 1 possibilities. We can create a subtree by choosing any possible left subtree and any possible right subtree. Thus, there are aj = (bk + 1)(c~ + 1) possible subtrees headed by A@j. In our example tree of Figure 1 , both noun phrases have exactly one subtree: np4 --nl>z --1; the verb phrase has 2 subtrees: vp3 = 2; and the sentence has 6: sl = 6. These numbers correspond to the number of subtrees in Figure 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 532, |
| "end": 540, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 730, |
| "end": 738, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "We will call a PCFG subderivation isomorphic to a STSG tree if the subderivation begins with an external non-terminal, uses internal nonterminals for intermediate steps, and ends with external non-terminals. For instance, consider the tree", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "NP VP PN PN V NP", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "taken from Figure 2 . The following PCFG subderivation is isomorphic: S ~ NP@I VP@2 PN PN VP@2 =~ PN PN V NP. We say that a PCFG derivation is isomorphic to a STSG derivation if there is a corresponding PCFG subderivation for every step in the STSG derivation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 11, |
| "end": 19, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "We will give a simple small PCFG with the following surprising property: for every subtree in the training corpus headed by A, the grammar will generate an isomorphic subderivation with probability 1/a. In other words, rather than using the large, explicit STSG, we can use this small PCFG that generates isomorphic derivations, with identical probabilities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "The construction is as follows. For a node such as A@j B@k C@l we will generate the following eight PCFG rules, where the number in parentheses following a rule is its probability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "Aj --~ SC (1/aj) A ~ BC (l/a) Aj ~ BkC (bh/aj) A ~ BkC (bk/a) Aj ~ BCi (ci/aj) A ~ BCz (cJa) Aj ~ B~Ci (bkcl/aj) A ~ BkCl (bkcl/a) (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "We will show that subderivations headed by A with external non-terminals at the roots and leaves, internal non-terminals elsewhere have probability 1/a. Subderivations headed by Aj with external non-terminals only at the leaves, internal non-terminals elsewhere, have probability 1/aj. The proof is by induction on the depth of the trees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "For trees of depth 1, there are two cases:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "A A@j B C B C", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "Trivially, these trees have the required probabilities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "Now, assume that the theorem is true for trees of depth n or less. We show that it holds for trees of depth n + 1. There are eight cases, one for each of the eight rules. We show two of them. Let B@k the probability of the tree is ~ ~ ai ~. Similarly, for another case, trees headed by A B@k C the probability of the tree is b~ b~a = ~'1 The other six cases follow trivially with similar reasoning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "We call a PCFG derivation isomorphic to a STSG derivation if for every substitution in the STSG there is a corresponding subderivation in the PCFG. Figure 4 contains an example of isomorphic derivations, using two subtrees in the STSG and four productions in the PCFG.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 148, |
| "end": 156, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "We call a PCFG tree isomorphic to a STSG tree if they are identical when internal nonterminals are changed to external non-terminals. Our main theorem is that this construction produces PCFG trees isomorphic to the STSG trees with equal probability. If every subtree in the training corpus occurred exactly once, this would be trivial to prove. For every STSG subderivation, there would be an isomorphic PCFG sub-derivation, with equal probability. Thus for every STSG derivation, there would be an isomorphic PCFG derivation, with equal probability. Thus every STSG tree would be produced by the PCFG with equal probability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "However, it is extremely likely that some subtrees, especially trivial ones like S NP VP will occur repeatedly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "If the STSG formalism were modified slightly, so that trees could occur multiple times, then our relationship could be made one to one. Consider a modified form of the DOP model, in which when subtrees occurred multiple times in the training corpus, their counts were not merged: both identical trees are added to the grammar. Each of these trees will have a lower probability than if their counts were merged. This would change the probabilities of the derivations; however the probabilities of parse trees would not change, since there would be correspondingly more derivations for each tree. Now, the desired one to one relationship holds: for every derivation in the new STSG there is an isomorphic derivation in the PCFG with equal probability. Thus, summing over all derivations of a tree in the STSG yields the same probability as summing over all the isomorphic derivations in the PCFG. Thus, every STSG tree would be produced by the PCFG with equal probability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "It follows trivially from this that no extra trees are produced by the PCFG. Since the total probability of the trees produced by the STSG is 1, and the PCFG produces these trees with the same probability, no probability is \"left over\" for any other trees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consider a node A~j of the form:", |
| "sec_num": null |
| }, |
| { |
| "text": "There are several different evaluation metrics one could use for finding the best parse. In the section covering previous research, we considered the most probable derivation and the most probable parse tree. There is one more metric we could consider. If our performance evaluation were based on the number of constituents correct, using measures similar to the crossing brackets measure, we would want the parse tree that was most likely to have the largest number of correct constituents. With this criterion and the example grammar of Figure 3 , the best parse tree would be", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 539, |
| "end": 548, |
| "text": "Figure 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "147 S A A B I I x ~g", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "The probability that the S constituent is correct is 1.0, while the probability that the A constituent is correct is ~, and the probability that the B constituent is correct is }. Thus, this tree has on average 2 constituents correct. All other trees will have fewer constituents correct on average. We call the best parse tree under this criterion the Maximum Constituents Parse. Notice that this parse tree cannot even be produced by the grammar: each of its constituents is good, but it is not necessarily good when considered as a full tree. Bod (1993a Bod ( , 1995a shows that the most probable derivation does not perform as well as the most probable parse for the DOP model, getting 65% exact match for the most probable derivation, versus 96% correct for the most probable parse. This is not surprising, since each parse tree can be derived by many different derivations; the most probable parse criterion takes all possible derivations into account. Similarly, the Maximum Constituents Parse is also derived from the sum of many different derivations. Furthermore, although the Maximum Constituents Parse should not do as well on the exact match criterion, it should perform even better on the percent constituents correct criterion. We have previously performed a detailed comparison between the most likely parse, and the Maximum Constituents Parse for Probabilistic Context Free Grammars (Goodman, 1996) ; we showed that the two have very similax performance on a broad range of measures, with at most a 10% difference in error rate (i.e., a change from 10% error rate to 9% error rate.) We therefore think that it is reasonable to use a Maximum Constituents Parser to parse the DOP model.", |
| "cite_spans": [ |
| { |
| "start": 546, |
| "end": 556, |
| "text": "Bod (1993a", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 557, |
| "end": 570, |
| "text": "Bod ( , 1995a", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1400, |
| "end": 1415, |
| "text": "(Goodman, 1996)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "The parsing algorithm is a variation on the Inside-Outside algorithm, developed by Baker (1979) and discussed in detail by Lari and Young (1990) . However, while the Inside-Outside algorithm is a grammar re-estimation algorithm, the algorithm presented here is just a parsing algorithm. It is closely related to a similar algorithm used for Hidden Markov Models (Rabiner, 1989) for finding the most likely state at each time. However, unlike in the HMM case where the algorithm produces a simple state sequence, in the PCFG case a parse tree is produced, resulting in addi- A formal derivation of a very similar algorithm is given elsewhere (Goodman, 1996) ; only the intuition is given here. The algorithm can be summarized as follows. First, for each potential constituent, where a constituent is a non-terminal, a start position, and an end position, find the probability that that constituent is in the parse. After that, put the most likely constituents together to form a parse tree, using dynamic programming.", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 95, |
| "text": "Baker (1979)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 123, |
| "end": 144, |
| "text": "Lari and Young (1990)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 362, |
| "end": 377, |
| "text": "(Rabiner, 1989)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 641, |
| "end": 656, |
| "text": "(Goodman, 1996)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "The probability that a potential constituent occurs in the correct parse tree, P(X * ws...wtlS ~ wl...wn), will be called g(s,t,X).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "In words, it is the probability that, given the sentence wl...w,, a symbol X generates ws...wt.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "We can compute this probability using elements of the Inside-Outside algorithm. First, compute the inside probabilities, e(s, t, X) = P(X =~ w,...wt). Second, compute the outside probabilities, /(s,t,X) = P(S ~ wl...w~-lXwt+l...wn). Third, compute the matrix g(s, t, ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "Once the matrix g(s, t, X) is computed, a dynamic programming algorithm can be used to determine the best parse, in the sense of maximizing the number of constituents expected correct. For a grammar with g nonterminals and training data of size T, the run time of the algorithm is O(Tn 2 + gn 3 + n a) since there are two layers of outer loops, each with run time at most n, and inner loops, over addresses (training data), nonterminals and n. However, this is dominated by the computation of the Inside and Outside probabilities, which takes time O(rna), for a grammar with r rules. Since there are eight rules for every node in the training data, this is O(Tn3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ".wn) = f(s, t, X) x e(s, t, X)/e(1, n, S)", |
| "sec_num": null |
| }, |
| { |
| "text": "By modifying the algorithm slightly to record the actual split used at each node, we can recover the best parse. The entry maxc[1, n] contains the expected number of correct constituents, given the model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ".wn) = f(s, t, X) x e(s, t, X)/e(1, n, S)", |
| "sec_num": null |
| }, |
| { |
| "text": "We are grateful to Bod for supplying the data that he used for his experiments (Bod, 1995b , Bod, 1995a , Bod, 1993c . The original ATIS data from the Penn Tree Bank, version 0.5, is very noisy; it is difficult to even automatically read this data, due to inconsistencies between files. Researchers are thus left with the difficult decision as to how to clean the data. For this paper, we conducted two sets of experiments: one using a minimally cleaned set of data, 1 making our results comparable to previous results; the other using the ATIS data prepared by Bod, which contained much more significant revisions.", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 90, |
| "text": "(Bod, 1995b", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 91, |
| "end": 103, |
| "text": ", Bod, 1995a", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 104, |
| "end": 116, |
| "text": ", Bod, 1993c", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": null |
| }, |
| { |
| "text": "Ten data sets were constructed by randomly splitting minimally edited ATIS (Hemphill et al., 1990) sentences into a 700 sentence training set, and 88 sentence test set, then discarding sentences of length > 30. For each of the ten sets, both the DOP algorithm outlined here and the grammar induction experiment of Pereira and Schabes were run. Crossing brackets, zero crossing brackets, and the paired differences are presented in Table 1 . All sentences output by the parser were made binary branching (see the section covering analysis of Bod's data), since otherwise the crossing brackets measures are meaningless (Magerman, 1994) .", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 98, |
| "text": "(Hemphill et al., 1990)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 617, |
| "end": 633, |
| "text": "(Magerman, 1994)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 431, |
| "end": 438, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": null |
| }, |
| { |
| "text": "1A diff file between the original ATIS data and the cleaned up version, in a form usable by the \"eft' program, is available by anonymous FTP from ftp://ftp.das.harvard.edu/pub/goodman/atis-ed/ tLtb.par-ed and ti_tb.pos-ed. Note that the number of changes made was small. The diff files sum to 457 bytes, versus 269,339 bytes for the original files, or less than 0.2%. A few sentences were not parsable; these were assigned right branching period high structure, a good heuristic (Brill, 1993) .", |
| "cite_spans": [ |
| { |
| "start": 479, |
| "end": 492, |
| "text": "(Brill, 1993)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": null |
| }, |
| { |
| "text": "We also ran experiments using Bod's data, 75 sentence test sets, and no limit on sentence length. However, while Bod provided us with his data, he did not provide us with the split into test and training that he used; as before, we used ten random splits. The results are disappointing, as shown in Table 2. They are noticeably worse than those of Bod, and again very comparable to those of Pereira and Schabes. Whereas Bod reported 96% exact match, we got only 86% using the less restrictive zero crossing brackets criterion. It is not clear what exactly accounts for these differences. (Ideally, we would exactly reproduce these experiments using Bod's algorithm; unfortunately, it was not possible to get a full specification of the algorithm.) It is also noteworthy that the results are much better on Bod's data than on the minimally edited data: crossing brackets rates of 96% and 97% on Bod's data versus 90% on minimally edited data. Thus it appears that part of Bod's extraordinary performance can be explained by the fact that his data is much cleaner than the data used by other researchers. DOP does do slightly better on most measures. We performed a statistical analysis using a t-test on the paired differences between DOP and Pereira and Schabes performance on each run. On the minimally edited ATIS data, the differences were statistically insignificant, while on Bod's data the differences were statistically significant beyond the 98th percentile. Our technique for finding statistical significance is more strenuous than most: we assume that since all test sentences were parsed with the same training data, all results of a single run are correlated. Thus we compare paired differences of entire runs, rather than of sentences or constituents. This makes it harder to achieve statistical significance.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 298, |
| "end": 305, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Criteria", |
| "sec_num": null |
| }, |
| { |
| "text": "Notice also the minimum and maximum columns of the \"DOP-P&S\" lines, constructed by finding for each of the paired runs the difference between the DOP and the Pereira and Schabes algorithms. Notice that the minimum is usually negative, and the maximum is usually positive, meaning that on some tests DOP did worse than Pereira and Schabes and on some it did better. It is important to run multiple tests, especially with small test sets like these, in order to avoid misleading results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Criteria", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section, we examine the empirical runtime of our algorithm, and analyze Bod's. We also note that Bod's algorithm will probably be particularly inefficient on longer sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Timing Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "It takes about 6 seconds per sentence to run our algorithm on an HP 9000/715, versus 3.5 hours to run Bod's algorithm on a Sparc 2 (Bod, 1995b). Factoring in that the HP is roughly four times faster than the Sparc, the new algorithm is about 500 times faster. Of course, some of this difference may be due to differences in implementation, so this estimate is fairly rough.", |
| "cite_spans": [ |
| { |
| "start": 131, |
| "end": 143, |
| "text": "(Bod, 1995b)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Timing Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "Furthermore, we believe Bod's analysis of his parsing algorithm is flawed. Letting G represent grammar size, and e represent maximum estimation error, Bod correctly analyzes his runtime as O(Gn^3 e^-2). However, Bod then neglects analysis of this e^-2 term, assuming that it is constant. Thus he concludes that his algorithm runs in polynomial time. However, for his algorithm to have some reasonable chance of finding the most probable parse, the number of times he must sample his data is at least inversely proportional to the conditional probability of that parse. For instance, if the maximum probability parse had probability 1/50, then he would need to sample at least 50 times to be reasonably sure of finding that parse. Now, we note that the conditional probability of the most probable parse tree will in general decline exponentially with sentence length. We assume that the number of ambiguities in a sentence will increase linearly with sentence length; if a five word sentence has on average one ambiguity, then a ten word sentence will have two, etc. A linear increase in ambiguity will lead to an exponential decrease in probability of the most probable parse.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Timing Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "Since the probability of the most probable parse decreases exponentially in sentence length, the number of random samples needed to find this most probable parse increases exponentially in sentence length. Thus, when using the Monte Carlo algorithm, one is left with the uncomfortable choice of exponentially decreasing the probability of finding the most probable parse, or exponentially increasing the runtime.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Timing Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "We admit that this is a somewhat informal argument. Still, the Monte Carlo algorithm has never been tested on sentences longer than those in the ATIS corpus; there is good reason to believe the algorithm will not work as well on longer sentences. Note that our algorithm has true runtime O(Gn^3), as shown previously.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Timing Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "In the DOP model, a sentence cannot be given an exactly correct parse unless all productions in the correct parse occur in the training set. Thus, we can get an upper bound on performance by examining the test corpus and finding which parse trees could not be generated using only productions in the training corpus. Unfortunately, while Bod provided us with his data, he did not specify which sentences were test and which were training. We can however find an upper bound on average case performance, as well as an upper bound on the probability that any particular level of performance could be achieved.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Bod's Data", |
| "sec_num": null |
| }, |
| { |
| "text": "Bod randomly split his corpus into test and training. According to his thesis (Bod, 1995a, page 64), only one of his 75 test sentences had a correct parse which could not be generated from the training data. This turns out to be very surprising. An analysis of Bod's data shows that at least some of the difference in performance between his results and ours must be due to an extraordinarily fortuitous choice of test data. It would be very interesting to see how our algorithm performed on Bod's split into test and training, but he has not provided us with this split. Bod did examine versions of DOP that smoothed, allowing productions which did not occur in the training set; however his reference to coverage is with respect to a version which does no smoothing.", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 99, |
| "text": "(Bod, 1995a, page 64)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Bod's Data", |
| "sec_num": null |
| }, |
| { |
| "text": "In order to perform our analysis, we must determine certain details of Bod's parser which affect the probability of having most sentences correctly parsable. When using a chart parser, as Bod did, three problematic cases must be handled: ε productions, unary productions, and n-ary (n > 2) productions. The first two kinds of productions can be handled with a probabilistic chart parser, but large and difficult matrix manipulations are required (Stolcke, 1993); these manipulations would be especially difficult given the size of Bod's grammar. Examining Bod's data, we find he removed ε productions. We also assume that Bod made the same choice we did and eliminated unary productions, given the difficulty of correctly parsing them. Bod himself does not know which technique he used for n-ary productions, since the chart parser he used was written by a third party (Bod, personal communication).", |
| "cite_spans": [ |
| { |
| "start": 446, |
| "end": 461, |
| "text": "(Stolcke, 1993)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 870, |
| "end": 899, |
| "text": "(Bod, personal communication)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Bod's Data", |
| "sec_num": null |
| }, |
| { |
| "text": "The n-ary productions can be parsed in a straightforward manner, by converting them to binary branching form; however, there are at least three different ways to convert them, as illustrated in Table 3. In method \"Correct\", the n-ary branching productions are converted in such a way that no overgeneration is introduced. A set of special non-terminals is added, one for each partial right hand side. In method \"Continued\", a single new non-terminal is introduced for each original non-terminal. Because these non-terminals occur in multiple contexts, some overgeneration is introduced. However, this overgeneration is constrained, so that elements that tend to occur only at the beginning, middle, or end of the right hand side of a production cannot occur somewhere else. If the \"Simple\" method is used, then no new non-terminals are introduced; using this method, it is not possible to recover the n-ary branching structure from the resulting parse tree, and significant overgeneration occurs. Table 4 shows the undergeneration probabilities for each of these possible techniques for handling unary productions and n-ary productions. The first number in each column is the probability that a sentence in the training data will have a production that occurs nowhere else. The second number is the probability that a test set of 75 sentences drawn from this database will have one ungeneratable sentence: 75p^74(1-p).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 194, |
| "end": 201, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 996, |
| "end": 1003, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis of Bod's Data", |
| "sec_num": null |
| }, |
| { |
| "text": "The table is arranged from least generous to most generous: in the upper left hand corner is a technique Bod might reasonably have used; in that case, the probability of getting the test set he described is less than one in a million. (A perl script for analyzing Bod's data is available by anonymous FTP from ftp://ftp.das.harvard.edu/pub/goodman/analyze.perl. This formula is actually a slight overestimate for a few reasons, including the fact that the 75 sentences are drawn without replacement. Also, consider a sentence with a production that occurs only in one other sentence in the corpus; there is some probability that both sentences will end up in the test data, causing both to be ungeneratable.) In the", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Bod's Data", |
| "sec_num": null |
| }, |
| { |
| "text": "lower right corner we give Bod the absolute maximum benefit of the doubt: we assume he used a parser capable of parsing unary branching productions, that he used a very overgenerating grammar, and that he used a loose definition of \"Exact Match.\" Even in this case, there is only about a 1.5% chance of getting the test set Bod describes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "151", |
| "sec_num": null |
| }, |
| { |
| "text": "We have given efficient techniques for parsing the DOP model. These results are significant since the DOP model has perhaps the best reported parsing accuracy; previously the full DOP model had not been replicated due to the difficulty and computational complexity of the existing algorithms. We have also shown that previous results were partially due to an unlikely choice of test data, and partially due to the heavy cleaning of the data, which reduced the difficulty of the task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": null |
| }, |
| { |
| "text": "Of course, this research raises as many questions as it answers. Were previous results due only to the choice of test data, or are the differences in implementation partly responsible? In that case, there is significant future work required to understand which differences account for Bod's exceptional performance. This will be complicated by the fact that sufficient details of Bod's implementation are not available.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research also shows the importance of testing on more than one small test set, as well as the importance of not making cross-corpus comparisons; if a new corpus is required, then previous algorithms should be duplicated for comparison.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Trainable grammars for speech recognition", |
| "authors": [ |
| { |
| "first": ";", |
| "middle": [ |
| "J K" |
| ], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Baker", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "Proceedings of the Spring Conference of the", |
| "volume": "", |
| "issue": "", |
| "pages": "547--550", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baker, 1979] J.K. Baker. 1979. Trainable gram- mars for speech recognition. In Proceedings of the Spring Conference of the Acoustical Society of America, pages 547-550, Boston, MA, June. [Bod, 1992] Rens Bod. 1992. Mathematical prop- erties of the data oriented parsing model. Paper presented at the Third Meeting on Mathematics of Language (MOL3), Austin Texas.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Data-oriented parsing as a general framework for stochastic language processing", |
| "authors": [ |
| { |
| "first": "Rens", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings Third International Workshop on Parsing Technologies, Tilburg/Durbury", |
| "volume": "", |
| "issue": "", |
| "pages": "37--44", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Bod, 1993a] Rens Bod. 1993a. Data-oriented parsing as a general framework for stochas- tic language processing. In K. Sikkel and A. Nijholt, editors, Parsing Natural Language. Twente, The Netherlands. [Bod, 1993b] Rens Bod. 1993b. Monte Carlo parsing. In Proceedings Third Inter- national Workshop on Parsing Technologies, Tilburg/Durbury. [Bod, 1993c] Rens Bod. 1993c. Using an anno- tated corpus as a stochastic grammar. In Pro- ceedings of the Sixth Conference of the European Chapter of the ACL, pages 37-44.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Enriching Linguistics with Statistics: Performance Models of Natural Language. University of Amsterdam ILLC Dissertation Series 1995-14", |
| "authors": [ |
| { |
| "first": "Rens", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Bod, 1995a] Rens Bod. 1995a. Enriching Lin- guistics with Statistics: Performance Models of Natural Language. University of Amsterdam ILLC Dissertation Series 1995-14. Academische Pers, Amsterdam.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The problem of computing the most probable tree in dataoriented parsing and stochastic tree grammars", |
| "authors": [ |
| { |
| "first": "Rens", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the Seventh Conference of the European Chapter of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": ", 1995b] Rens Bod. 1995b. The problem of computing the most probable tree in data- oriented parsing and stochastic tree grammars. In Proceedings of the Seventh Conference of the European Chapter of the ACL.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A Corpus-Based Approach to Language Learning", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brill, 1993] Eric Brill. 1993. A Corpus-Based Ap- proach to Language Learning. Ph.D. thesis, Uni- versity of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Parsing algorithms and metrics", |
| "authors": [ |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 34th Annual Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Goodman, 1996] Joshua Goodman. 1996. Pars- ing algorithms and metrics. In Proceedings of the 34th Annual Meeting of the ACL. To ap- pear.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The ATIS spoken language systems pilot corpus", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Hemphill", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "DARPA Speech and Natural Language Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Hemphill et al., 1990] Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In DARPA Speech and Natural Lan- guage Workshop, Hidden Valley, Pennsylvania, June. Morgan Kaufmann.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The estimation of stochastic context-free grammars using the inside-outside algorithm", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Lari", |
| "suffix": "" |
| }, |
| { |
| "first": ";", |
| "middle": [ |
| "K" |
| ], |
| "last": "Young", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "J" |
| ], |
| "last": "Lari", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computer Speech and Language", |
| "volume": "4", |
| "issue": "", |
| "pages": "35--56", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Lari and Young, 1990] K. Lari and S.J. Young. 1990. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language, 4:35-56.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Pereira and Schabes, 1992] Fernando Pereira and Yves Schabes. 1992. Inside-Outside reestimation from partially bracketed corpora", |
| "authors": [ |
| { |
| "first": ";", |
| "middle": [ |
| "L R" |
| ], |
| "last": "David Magerman", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Rabiner", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Natural Language Parsing as Statistical Pattern Recognition", |
| "volume": "77", |
| "issue": "", |
| "pages": "128--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Magerman, 1994] David Magerman. 1994. Nat- ural Language Parsing as Statistical Pattern Recognition. Ph.D. thesis, Stanford University University, February. [Pereira and Schabes, 1992] Fernando Pereira and Yves Schabes. 1992. Inside-Outside rees- timation from partially bracketed corpora. In Proceedings of the 30th Annual Meeting of the ACL, pages 128-135, Newark, Delaware. [Rabiner, 1989] L.R. Rabiner. 1989. A tutorial on hidden Markov models and selected applica- tions in speech recognition. Proceedings of the IEEE, 77(2), February.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Language theory and language technology; competence and performance", |
| "authors": [ |
| { |
| "first": ";", |
| "middle": [ |
| "R" |
| ], |
| "last": "Scha", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Scha", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computertoepassingen in de Neerlandistiek. Landelijke Vereniging van Neerlandici (LVVN-jaarboek)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scha, 1990] R. Scha. 1990. Language theory and language technology; competence and perfor- mance. In Q.A.M. de Kort and G.L.J. Leerdam, editors, Computertoepassingen in de Neerlan- distiek. Landelijke Vereniging van Neerlandici (LVVN-jaarboek), Almere. In Dutch.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Parsing the Wall Street Journal with the Inside-Outside algorithm", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Schabes", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the Sixth Conference of the European Chapter of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "341--347", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Schabes et al., 1993] Yves Schabes, Michal Roth, and Randy Osborne. 1993. Parsing the Wall Street Journal with the Inside-Outside algo- rithm. In Proceedings of the Sixth Conference of the European Chapter of the ACL, pages 341- 347.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Stochastic lexicalized tree-adjoining grammars", |
| "authors": [], |
| "year": 1992, |
| "venue": "Proceedings of the 14th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Schabes, 1992] Y. Schabes. 1992. Stochastic lexi- calized tree-adjoining grammars. In Proceedings of the 14th International Conference on Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Efficient disambiguation by means of stochastic tree substitution grammars", |
| "authors": [], |
| "year": 1995, |
| "venue": "Current Issues in Linguistic Theory", |
| "volume": "136", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Sima'an, 1996] Khalil Sima'an. 1996. Efficient disambiguation by means of stochastic tree sub- stitution grammars. In R. Mitkov and N. Ni- colov, editors, Recent Advances in NLP 1995, volume 136 of Current Issues in Linguistic The- ory. John Benjamins, Amsterdam.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "An efficient probabilistic context-free parsing algorithm that computes prefix probabilities", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stolcke, 1993] Andreas Stolcke. 1993. An ef- ficient probabilistic context-free parsing algo- rithm that computes prefix probabilities. Tech- nical Report TR-93-065, International Com- puter Science Institute, Berkeley, CA.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "Training corpus tree for DOP example", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "Simple Example STSG can still be parsed.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "Figure 3shows a simple example STSG. For the string xx, what is the most probable derivation? The parse tree S A C", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "text": "] := g(s, t, X); loop over addresses k let X := non-terminal at k; let sum[X] := sum[X] + g(s,t,X_k); loop over non-terminals X let max_X := arg max of sum[X] loop over r such that s <= r < t let best_split", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "text": "P(S => w_1...w_{s-1} X w_{t+1}...w_n) P(X => w_s...w_t) / P(S => w_1...w_n)", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "text": "Figure 5 shows pseudocode for a simplified form of this algorithm.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF3": { |
| "html": null, |
| "content": "<table/>", |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF4": { |
| "html": null, |
| "content": "<table><tr><td/><td>Correct</td><td>Continued</td><td>Simple</td></tr><tr><td>no unary</td><td colspan=\"3\">0.78 0.0000002 0.88 0.0009484 0.90 0.0041096</td></tr><tr><td>unary</td><td colspan=\"3\">0.80 0.0000011 0.90 0.0037355 0.92 0.0150226</td></tr></table>", |
| "text": "Transformations from N-ary to Binary Branching Structures", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF5": { |
| "html": null, |
| "content": "<table/>", |
| "text": "", |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |