{ "paper_id": "P96-1024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:03:01.108793Z" }, "title": "Parsing Algorithms and Metrics", "authors": [ { "first": "Joshua", "middle": [], "last": "Goodman", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harvard University", "location": { "addrLine": "33 Oxford St", "postCode": "02138", "settlement": "Cambridge", "region": "MA" } }, "email": "goodman@das.harvard.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Many different metrics exist for evaluating parsing results, including Viterbi, Crossing Brackets Rate, Zero Crossing Brackets Rate, and several others. However, most parsing algorithms, including the Viterbi algorithm, attempt to optimize the same metric, namely the probability of getting the correct labelled tree. By choosing a parsing algorithm appropriate for the evaluation metric, better performance can be achieved. We present two new algorithms: the \"Labelled Recall Algorithm,\" which maximizes the expected Labelled Recall Rate, and the \"Bracketed Recall Algorithm,\" which maximizes the Bracketed Recall Rate. Experimental results are given, showing that the two new algorithms have improved performance over the Viterbi algorithm on many criteria, especially the ones that they optimize.", "pdf_parse": { "paper_id": "P96-1024", "_pdf_hash": "", "abstract": [ { "text": "Many different metrics exist for evaluating parsing results, including Viterbi, Crossing Brackets Rate, Zero Crossing Brackets Rate, and several others. However, most parsing algorithms, including the Viterbi algorithm, attempt to optimize the same metric, namely the probability of getting the correct labelled tree. By choosing a parsing algorithm appropriate for the evaluation metric, better performance can be achieved. 
We present two new algorithms: the \"Labelled Recall Algorithm,\" which maximizes the expected Labelled Recall Rate, and the \"Bracketed Recall Algorithm,\" which maximizes the Bracketed Recall Rate. Experimental results are given, showing that the two new algorithms have improved performance over the Viterbi algorithm on many criteria, especially the ones that they optimize.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In corpus-based approaches to parsing, one is given a treebank (a collection of text annotated with the \"correct\" parse tree) and attempts to find algorithms that, given unlabelled text from the treebank, produce as similar a parse as possible to the one in the treebank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Various methods can be used for finding these parses. Some of the most common involve inducing Probabilistic Context-Free Grammars (PCFGs), and then parsing with an algorithm such as the Labelled Tree (Viterbi) Algorithm, which maximizes the probability that the output of the parser (the \"guessed\" tree) is the one that the PCFG produced. This implicitly assumes that the induced PCFG does a good job modeling the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are many different ways to evaluate these parses. The most common include the Labelled Tree Rate (also called the Viterbi Criterion or Exact Match Rate), Consistent Brackets Recall Rate (also called the Crossing Brackets Rate), Consistent Brackets Tree Rate (also called the Zero Crossing Brackets Rate), and Precision and Recall. 
Despite the variety of evaluation metrics, nearly all researchers use algorithms that maximize performance on the Labelled Tree Rate, even in domains where they are evaluating using other criteria.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose that by creating algorithms that optimize the evaluation criterion, rather than some related criterion, improved performance can be achieved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Section 2, we define most of the evaluation metrics used in this paper and discuss previous approaches. Then, in Section 3, we discuss the Labelled Recall Algorithm, a new algorithm that maximizes performance on the Labelled Recall Rate. In Section 4, we discuss another new algorithm, the Bracketed Recall Algorithm, that maximizes performance on the Bracketed Recall Rate (closely related to the Consistent Brackets Recall Rate). Finally, we give experimental results in Section 5 using these two algorithms in appropriate domains, and compare them to the Labelled Tree (Viterbi) Algorithm, showing that each algorithm generally works best when evaluated on the criterion that it optimizes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we first define basic terms and symbols. Next, we define the different metrics used in evaluation. Finally, we discuss the relationship of these metrics to parsing algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "2" }, { "text": "Let w_a denote word a of the sentence under consideration. 
Let w_a^b denote w_a w_{a+1} ... w_{b-1} w_b; in particular let w_1^n denote the entire sequence of terminals (words) in the sentence under consideration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Definitions", "sec_num": "2.1" }, { "text": "In this paper we assume all guessed parse trees are binary branching. Let a parse tree T be defined as a set of triples (s, t, X)--where s denotes the position of the first symbol in a constituent, t denotes the position of the last symbol, and X represents a terminal or nonterminal symbol--meeting the following three requirements:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Definitions", "sec_num": "2.1" }, { "text": "\u2022 The sentence was generated by the start symbol, S. Formally, (1, n, S) \u2208 T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Definitions", "sec_num": "2.1" }, { "text": "\u2022 Every word in the sentence is in the parse tree. Formally, for every s between 1 and n, the triple (s, s, w_s) \u2208 T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Definitions", "sec_num": "2.1" }, { "text": "\u2022 The tree is binary branching and consistent. Formally, for every (s, t, X) in T with s \u2260 t, there is exactly one r, Y, and Z such that s \u2264 r < t and (s, r, Y) \u2208 T and (r+1, t, Z) \u2208 T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Definitions", "sec_num": "2.1" }, { "text": "Let Tc denote the \"correct\" parse (the one in the treebank) and let Ta denote the \"guessed\" parse (the one output by the parsing algorithm). 
Let Na denote |Ta|, the number of nonterminals in the guessed parse tree, and let Nc denote |Tc|, the number of nonterminals in the correct parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic Definitions", "sec_num": "2.1" }, { "text": "There are various levels of strictness for determining whether a constituent (element of Ta) is \"correct.\" The strictest of these is Labelled Match. A constituent (s, t, X) \u2208 Ta is correct according to Labelled Match if and only if (s, t, X) \u2208 Tc. In other words, a constituent in the guessed parse tree is correct if and only if it occurs in the correct parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "2.2" }, { "text": "The next level of strictness is Bracketed Match. Bracketed Match is like Labelled Match, except that the nonterminal label is ignored. Formally, a constituent (s, t, X) \u2208 Ta is correct according to Bracketed Match if and only if there exists a Y such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "2.2" }, { "text": "(s, t, Y) \u2208 Tc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "2.2" }, { "text": "The least strict level is Consistent Brackets (also called Crossing Brackets). Consistent Brackets is like Bracketed Match in that the label is ignored. It is even less strict in that the observed (s, t, X) need not be in Tc--it must simply not be ruled out by any (q, r, Y) \u2208 Tc. A particular triple (q, r, Y) rules out (s, t, X) if there is no way that (s, t, X) and (q, r, Y) could both be in the same parse tree. In particular, if the interval (s, t) crosses the interval (q, r), then (s, t, X) is ruled out and counted as an error. Formally, we say that (s, t) crosses (q, r) if and only if s < q \u2264 t < r or q < s \u2264 r < t. Trees with n-ary branching (n > 2) were converted to binary branching as in figure 3. The resulting trees were treated as the \"Correct\" trees in the evaluation. 
Only trees with forty or fewer symbols were used in this experiment. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Induction by Counting", "sec_num": "5.2.1" }, { "text": "A grammar was then induced in a straightforward way from these trees, simply by giving one count for each observed production. No smoothing was done. There were 1805 sentences and 38610 nonterminals in the test data. Table 2 shows the results of running all three algorithms, evaluating against five criteria. Notice that each algorithm is the best on the criterion that it optimizes. That is, the Labelled Tree Algorithm is the best for the Labelled Tree Rate, the Labelled Recall Algorithm is the best for the Labelled Recall Rate, and the Bracketed Recall Algorithm is the best for the Bracketed Recall Rate.", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 224, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "Matching parsing algorithms to evaluation criteria is a powerful technique that can be used to improve performance. In particular, the Labelled Recall Algorithm can improve performance versus the Labelled Tree Algorithm on the Consistent Brackets, Labelled Recall, and Bracketed Recall criteria. Similarly, the Bracketed Recall Algorithm improves performance (versus Labelled Tree) on Consistent Brackets and Bracketed Recall criteria. Thus, these algorithms improve performance not only on the measures that they were designed for, but also on related criteria.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.2.2" }, { "text": "Furthermore, in some cases these techniques can make parsing fast when it was previously impractical. 
We have used the technique outlined in this paper in other work (Goodman, 1996) to efficiently parse the DOP model; in that model, the only previously known algorithm which summed over all the possible derivations was a slow Monte Carlo algorithm (Bod, 1993). However, by maximizing the Labelled Recall criterion, rather than the Labelled Tree criterion, it was possible to use a much simpler algorithm, a variation on the Labelled Recall Algorithm. Using this technique, along with other optimizations, we achieved a 500 times speedup. In future work we will show the surprising result that the last element of Table 3, maximizing the Bracketed Tree criterion, which is equivalent to maximizing performance on the Consistent Brackets Tree (Zero Crossing Brackets) Rate in the binary branching case, is NP-complete. Furthermore, we will show that the two algorithms presented, the Labelled Recall Algorithm and the Bracketed Recall Algorithm, are both special cases of a more general algorithm, the General Recall Algorithm. Finally, we hope to extend this work to the n-ary branching case.", "cite_spans": [ { "start": 166, "end": 181, "text": "(Goodman, 1996)", "ref_id": "BIBREF3" }, { "start": 295, "end": 306, "text": "(Bod, 1993)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 661, "end": 668, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5.2.2" } ], "back_matter": [ { "text": "I would like to acknowledge support from National Science Foundation Grant IRI-9350192, National Science Foundation infrastructure grant CDA 94-01024, and a National Science Foundation Graduate Student Fellowship. I would also like to thank Stanley Chen, Andrew Kehler, Lillian Lee, and Stuart Shieber for helpful discussions and comments on earlier drafts, and the anonymous reviewers for their comments. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Trainable grammars for speech recognition", "authors": [ { "first": "J", "middle": [ "K" ], "last": "Baker", "suffix": "" } ], "year": 1979, "venue": "Proceedings of the Spring Conference of the", "volume": "", "issue": "", "pages": "547--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baker, J.K. 1979. Trainable grammars for speech recognition. In Proceedings of the Spring Confer- ence of the Acoustical Society of America, pages 547-550, Boston, MA, June.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Using an annotated corpus as a stochastic grammar", "authors": [ { "first": "Rens", "middle": [], "last": "Bod", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the Sixth Conference of the European Chapter of the ACL", "volume": "", "issue": "", "pages": "37--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bod, Rens. 1993. Using an annotated corpus as a stochastic grammar. In Proceedings of the Sixth Conference of the European Chapter of the ACL, pages 37-44.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Corpus-Based Approach to Language Learning", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brill, Eric. 1993. A Corpus-Based Approach to Lan- guage Learning. Ph.D. thesis, University of Penn- sylvania.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Efficient algorithms for parsing the DOP model", "authors": [ { "first": "Joshua", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goodman, Joshua. 1996. 
Efficient algorithms for parsing the DOP model. In Proceedings of the", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Given the matrix g(s, t, X) it is a simple matter of dynamic programming to determine the parse that maximizes the Labelled Recall criterion. Define MAXC(s, t) = max_X g(s, t, X) + max_{s \u2264 r < t} (MAXC(s, r) + MAXC(r+1, t))", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "text": "In the case where the parses are binary branching, the two metrics are the same. This criterion is also called the Zero Crossing Brackets Rate.", "html": null, "content": "
L = |Tc \u2229 Ta| : the number of constituents in Ta that are correct according to Labelled Match.
B = |{(s, t, X) : (s, t, X) \u2208 Ta and for some Y, (s, t, Y) \u2208 Tc}| : the number of constituents in Ta that are correct according to Bracketed Match.
C = |{(s, t, X) \u2208 Ta : there is no (v, w, Y) \u2208 Tc crossing (s, t)}| : the number of constituents in Ta that are correct according to Consistent Brackets.
Following are the definitions of the six metrics used in this paper for evaluating binary branching trees:
(1) Labelled Recall Rate = L/Nc.
(2) Labelled Tree Rate = 1 if L = Nc. It is also called the Viterbi Criterion.
(3) Bracketed Recall Rate = B/Nc.
(4) Bracketed Tree Rate = 1 if B = Nc.
(5) Consistent Brackets Recall Rate = C/Na. It is often called the Crossing Brackets Rate. In the case where the parses are binary branching, this criterion is the same as the Bracketed Recall Rate.
(6) Consistent Brackets Tree Rate = 1 if C = Na. This metric is closely related to the Bracketed Tree Rate.
The preceding six metrics each correspond to cells in the following table:
                    | Recall | Tree
Consistent Brackets |  (5)   | (6)
Brackets            |  (3)   | (4)
Labelled            |  (1)   | (2)
", "type_str": "table", "num": null }, "TABREF3": { "text": "", "html": null, "content": "
Metrics and Corresponding Algorithms
", "type_str": "table", "num": null }, "TABREF5": { "text": "Grammar Induced by Counting: Three Algorithms Evaluated on Five Criteria possible derivations was a slow Monte Carlo algorithm", "html": null, "content": "", "type_str": "table", "num": null } } } }