{
    "paper_id": "P96-1024",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:03:01.108793Z"
    },
    "title": "Parsing Algorithms and Metrics",
    "authors": [
        {
            "first": "Joshua",
            "middle": [],
            "last": "Goodman",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Harvard University",
                "location": {
                    "addrLine": "33 Oxford St",
                    "postCode": "02138",
                    "settlement": "Cambridge",
                    "region": "MA"
                }
            },
            "email": "goodman@das.harvard.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Many different metrics exist for evaluating parsing results, including Viterbi, Crossing Brackets Rate, Zero Crossing Brackets Rate, and several others. However, most parsing algorithms, including the Viterbi algorithm, attempt to optimize the same metric, namely the probability of getting the correct labelled tree. By choosing a parsing algorithm appropriate for the evaluation metric, better performance can be achieved. We present two new algorithms: the \"Labelled Recall Algorithm,\" which maximizes the expected Labelled Recall Rate, and the \"Bracketed Recall Algorithm,\" which maximizes the Bracketed Recall Rate. Experimental results are given, showing that the two new algorithms have improved performance over the Viterbi algorithm on many criteria, especially the ones that they optimize.",
    "pdf_parse": {
        "paper_id": "P96-1024",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Many different metrics exist for evaluating parsing results, including Viterbi, Crossing Brackets Rate, Zero Crossing Brackets Rate, and several others. However, most parsing algorithms, including the Viterbi algorithm, attempt to optimize the same metric, namely the probability of getting the correct labelled tree. By choosing a parsing algorithm appropriate for the evaluation metric, better performance can be achieved. We present two new algorithms: the \"Labelled Recall Algorithm,\" which maximizes the expected Labelled Recall Rate, and the \"Bracketed Recall Algorithm,\" which maximizes the Bracketed Recall Rate. Experimental results are given, showing that the two new algorithms have improved performance over the Viterbi algorithm on many criteria, especially the ones that they optimize.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "In corpus-based approaches to parsing, one is given a treebank (a collection of text annotated with the \"correct\" parse tree) and attempts to find algorithms that, given unlabelled text from the treebank, produce as similar a parse as possible to the one in the treebank.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Various methods can be used for finding these parses. Some of the most common involve inducing Probabilistic Context-Free Grammars (PCFGs), and then parsing with an algorithm such as the Labelled Tree (Viterbi) Algorithm, which maximizes the probability that the output of the parser (the \"guessed\" tree) is the one that the PCFG produced. This implicitly assumes that the induced PCFG does a good job modeling the corpus.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "There are many different ways to evaluate these parses. The most common include the Labelled Tree Rate (also called the Viterbi Criterion or Exact Match Rate), Consistent Brackets Recall Rate (also called the Crossing Brackets Rate), Consistent Brackets Tree Rate (also called the Zero Crossing Brackets Rate), and Precision and Recall. Despite the variety of evaluation metrics, nearly all researchers use algorithms that maximize performance on the Labelled Tree Rate, even in domains where they are evaluating using other criteria.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We propose that by creating algorithms that optimize the evaluation criterion, rather than some related criterion, improved performance can be achieved.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In Section 2, we define most of the evaluation metrics used in this paper and discuss previous approaches. Then, in Section 3, we discuss the Labelled Recall Algorithm, a new algorithm that maximizes performance on the Labelled Recall Rate. In Section 4, we discuss another new algorithm, the Bracketed Recall Algorithm, that maximizes performance on the Bracketed Recall Rate (closely related to the Consistent Brackets Recall Rate). Finally, we give experimental results in Section 5 using these two algorithms in appropriate domains, and compare them to the Labelled Tree (Viterbi) Algorithm, showing that each algorithm generally works best when evaluated on the criterion that it optimizes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this section, we first define basic terms and symbols. Next, we define the different metrics used in evaluation. Finally, we discuss the relationship of these metrics to parsing algorithms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Metrics",
                "sec_num": "2"
            },
            {
                "text": "Let Wa denote word a of the sentence under consideration. Let w b denote WaW~+l...Wb-lWb; in particular let w~ denote the entire sequence of terminals (words) in the sentence under consideration.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic Definitions",
                "sec_num": "2.1"
            },
            {
                "text": "In this paper we assume all guessed parse trees are binary branching. Let a parse tree T be defined as a set of triples (s, t, X)--where s denotes the position of the first symbol in a constituent, t denotes the position of the last symbol, and X represents a terminal or nonterminal symbol--meeting the following three requirements:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic Definitions",
                "sec_num": "2.1"
            },
            {
                "text": "\u2022 The sentence was generated by the start symbol, S. Formally, (1, n, S) E T.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic Definitions",
                "sec_num": "2.1"
            },
            {
                "text": "\u2022 Every word in the sentence is in the parse tree. Formally, for every s between 1 and n the triple (s,s, ws) E T.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic Definitions",
                "sec_num": "2.1"
            },
            {
                "text": "\u2022 The tree is binary branching and consistent. Formally, for every (s,t, X) in T, s \u00a2 t, there is exactly one r, Y, and Z such that s < r < t and (s,r,Y) E T and (r+ 1,t,Z) e T.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic Definitions",
                "sec_num": "2.1"
            },
            {
                "text": "Let Tc denote the \"correct\" parse (the one in the treebank) and let Ta denote the \"guessed\" parse (the one output by the parsing algorithm). Let Na denote [Tal, the number of nonterminals in the guessed parse tree, and let Nc denote [Tel, the number of nonterminals in the correct parse tree.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic Definitions",
                "sec_num": "2.1"
            },
            {
                "text": "There are various levels of strictness for determining whether a constituent (element of Ta) is \"correct.\" The strictest of these is Labelled Match. A constituent (s,t, X) E Te is correct according to Labelled Match if and only if (s, t, X) E To. In other words, a constituent in the guessed parse tree is correct if and only if it occurs in the correct parse tree.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Metrics",
                "sec_num": "2.2"
            },
            {
                "text": "The next level of strictness is Bracketed Match. Bracketed match is like labelled match, except that the nonterminal label is ignored. Formally, a constituent (s, t, X) ETa is correct according to Bracketed Match if and only if there exists a Y such that",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Metrics",
                "sec_num": "2.2"
            },
            {
                "text": "(s,t,Y) E To.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Metrics",
                "sec_num": "2.2"
            },
            {
                "text": "The least strict level is Consistent Brackets (also called Crossing Brackets). Consistent Brackets is like Bracketed Match in that the label is ignored. It is even less strict in that the observed (s,t,X) need not be in Tc--it must simply not be ruled out by any (q, r, Y) e To. A particular triple (q, r, Y) rules out (s,t, X) if there is no way that (s,t,X) and (q, r, Y) could both be in the same parse tree. In particular, if the interval (s, t) crosses the interval (q, r), then (s, t, X) is ruled out and counted as an error. Formally, we say that (s, t) crosses (q, r) if and only ifs<q<t <rorq<s<r<t. If Tc is binary branching, then Consistent Brackets and Bracketed Match are identical. The following symbols denote the number of constituents that match according to each of these criteria. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation Metrics",
                "sec_num": "2.2"
            },
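The three match criteria above can be made concrete with a short sketch (ours, not the paper's; the tree encoding and all function names are assumptions). It counts labelled matches L, bracketed matches B, and consistent brackets C for a guessed tree against a correct tree, each represented as a set of (s, t, X) triples per Section 2.1:

```python
def crosses(span_a, span_b):
    """True if interval (s, t) crosses interval (q, r)."""
    (s, t), (q, r) = span_a, span_b
    return s < q <= t < r or q < s <= r < t

def match_counts(t_g, t_c):
    """Return (L, B, C) for guessed tree t_g against correct tree t_c."""
    spans_c = {(s, t) for (s, t, _) in t_c}
    L = sum(1 for c in t_g if c in t_c)                   # Labelled Match
    B = sum(1 for (s, t, _) in t_g if (s, t) in spans_c)  # Bracketed Match
    C = sum(1 for (s, t, _) in t_g                        # Consistent Brackets
            if not any(crosses((s, t), qr) for qr in spans_c))
    return L, B, C

# One guessed constituent has a wrong label but a correct span:
t_c = {(1, 4, "S"), (1, 2, "A"), (3, 4, "C")}
t_g = {(1, 4, "S"), (1, 2, "A"), (3, 4, "B")}
print(match_counts(t_g, t_c))  # (2, 3, 3)
```

As the example shows, a label error costs a point under Labelled Match but not under the two weaker criteria, and since t_c is binary branching here, B and C coincide.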
            {
                "text": "C/NG 1 if C = Nc Brackets B/Nc 1 if B = Nc Labels L/Nc 1 if L = Arc",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L = ITc",
                "sec_num": null
            },
            {
                "text": "Despite this long list of possible metrics, there is only one metric most parsing algorithms attempt to maximize, namely the Labelled Tree Rate. That is, most parsing algorithms assume that the test corpus was generated by the model, and then attempt to evaluate the following expression, where E denotes the expected value operator:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximizing Metrics",
                "sec_num": "2.3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "Ta = argmTaXE ( 1 ifL = gc)",
                        "eq_num": "(1)"
                    }
                ],
                "section": "Maximizing Metrics",
                "sec_num": "2.3"
            },
            {
                "text": "This is true of the Labelled Tree Algorithm and stochastic versions of Earley's Algorithm (Stolcke, 1993) , and variations such as those used in Picky parsing (Magerman and Weir, 1992) . Even in probabilistic models not closely related to PCFGs, such as Spatter parsing (Magerman, 1994) , expression (1) is still computed. One notable exception is Brill's Transformation-Based Error Driven system (Brill, 1993) , which induces a set of transformations designed to maximize the Consistent Brackets Recall Rate. However, Brill's system is not probabilistic. Intuitively, if one were to match the parsing algorithm to the evaluation criterion, better performance should be achieved. Ideally, one might try to directly maximize the most commonly used evaluation criteria, such as Consistent Brackets Recall (Crossing Brackets) Rate. Unfortunately, this criterion is relatively difficult to maximize, since it is time-consuming to compute the probability that a particular constituent crosses some constituent in the correct parse. On the other hand, the Bracketed Recall and Bracketed Tree Rates are easier to handle, since computing the probability that a bracket matches one in the correct parse is inexpensive. It is plausible that algorithms which optimize these closely related criteria will do well on the analogous Consistent Brackets criteria.",
                "cite_spans": [
                    {
                        "start": 90,
                        "end": 105,
                        "text": "(Stolcke, 1993)",
                        "ref_id": null
                    },
                    {
                        "start": 159,
                        "end": 184,
                        "text": "(Magerman and Weir, 1992)",
                        "ref_id": null
                    },
                    {
                        "start": 270,
                        "end": 286,
                        "text": "(Magerman, 1994)",
                        "ref_id": null
                    },
                    {
                        "start": 397,
                        "end": 410,
                        "text": "(Brill, 1993)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximizing Metrics",
                "sec_num": "2.3"
            },
            {
                "text": "When building an actual system, one should use the metric most appropriate for the problem. For instance, if one were creating a database query system, such as an ATIS system, then the Labelled Tree (Viterbi) metric would be most appropriate. A single error in the syntactic representation of a query will likely result in an error in the semantic representation, and therefore in an incorrect database query, leading to an incorrect result. For instance, if the user request \"Find me all flights on Tuesday\" is misparsed with the prepositional phrase attached to the verb, then the system might wait until Tuesday before responding: a single error leads to completely incorrect behavior. Thus, the Labelled Tree criterion is appropriate.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Which Metrics to Use",
                "sec_num": "2.4"
            },
            {
                "text": "On the other hand, consider a machine assisted translation system, in which the system provides translations, and then a fluent human manually edits them. Imagine that the system is given the foreign language equivalent of \"His credentials are nothing which should be laughed at,\" and makes the single mistake of attaching the relative clause at the sentential level, translating the sentence as \"His credentials are nothing, which should make you laugh.\" While the human translator must make some changes, he certainly needs to do less editing than he would if the sentence were completely misparsed. The more errors there are, the more editing the human translator needs to do. Thus, a criterion such as the Labelled Recall criterion is appropriate for this task, where the number of incorrect constituents correlates to application performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Which Metrics to Use",
                "sec_num": "2.4"
            },
            {
                "text": "Consider writing a parser for a domain such as machine assisted translation. One could use the Labelled Tree Algorithm, which would maximize the expected number of exactly correct parses. However, since the number of correct constituents is a better measure of application performance for this domain than the number of correct trees, perhaps one should use an algorithm which maximizes the Labelled Recall criterion, rather than the Labelled Tree criterion.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Labelled Recall Parsing",
                "sec_num": "3"
            },
            {
                "text": "The Labelled Recall Algorithm finds that tree TG which has the highest expected value for the La-belled Recall Rate, L/Nc (where L is the number of correct labelled constituents, and Nc is the number of nodes in the correct parse). This can be written as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Labelled Recall Parsing",
                "sec_num": "3"
            },
            {
                "text": "Ta = arg n~xE(L/Nc) (2)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Labelled Recall Parsing",
                "sec_num": "3"
            },
            {
                "text": "It is not immediately obvious that the maximization of expression (2) is in fact different from the maximization of expression (1), but a simple example illustrates the difference. The following grammar generates four trees with equal probability:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Labelled Recall Parsing",
                "sec_num": "3"
            },
            {
                "text": "S ~ A C 0.25 S ~ A D 0.25 S --* EB 0.25 S --~ FB 0.25 A, B, C, D, E, F ~ xx 1.0 The four trees are S S X XX X X XX X (3) S S E B F B X XX X X XX X",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Labelled Recall Parsing",
                "sec_num": "3"
            },
            {
                "text": "For the first tree, the probabilities of being correct are S: 100%; A:50%; and C: 25%. Similar counting holds for the other three. Thus, the expected value of L for any of these trees is 1.75.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Labelled Recall Parsing",
                "sec_num": "3"
            },
            {
                "text": "On the other hand, the optimal Labelled Recall parse is S X XX X This tree has 0 probability according to the grammar, and thus is non-optimal according to the Labelled Tree Rate criterion. However, for this tree the probabilities of each node being correct are S: 100%; A: 50%; and B: 50%. The expected value of L is 2.0, the highest of any tree. This tree therefore optimizes the Labelled Recall Rate.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Labelled Recall Parsing",
                "sec_num": "3"
            },
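The 1.75 and 2.0 figures can be checked by brute force. A minimal sketch (ours; the tree encoding and names are assumptions) treats each tree as a set of labelled constituents and averages the overlap with the four equiprobable treebank trees:

```python
def tree(root_children):
    """Build the toy tree [S [X x x] [Y x x]] as a set of constituents."""
    x, y = root_children
    return {(1, 4, "S"), (1, 2, x), (3, 4, y)}

# The four trees generated by the grammar, each with probability 0.25.
corpus = [tree(c) for c in [("A", "C"), ("A", "D"), ("E", "B"), ("F", "B")]]

def expected_L(candidate):
    # E[L] = sum over treebank trees of P(tree) * |candidate ∩ tree|
    return sum(0.25 * len(candidate & t_c) for t_c in corpus)

print(expected_L(corpus[0]))         # 1.75 for any of the four grammar trees
print(expected_L(tree(("A", "B"))))  # 2.0 for the zero-probability S -> A B tree
```

This reproduces the text's point: the tree that maximizes expected labelled recall is not among the trees the grammar can generate.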
            {
                "text": "We now derive an algorithm for finding the parse that maximizes the expected Labelled Recall Rate. We do this by expanding expression (2) out into a probabilistic form, converting this into a recursive equation, and finally creating an equivalent dynamic programming algorithm. We begin by rewriting expression (2), expanding out the expected value operator, and removing the which is the same for all TG, and so plays no NC ' role in the maximization.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm",
                "sec_num": "3.1"
            },
            {
                "text": "This can be further expanded to (4)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ta = argmTaX~,P(Tc l w~) ITnTcl Tc",
                "sec_num": null
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "Ta = arg mTax E P(Tc I w~)E1 if (s,t,X) 6 Tc Tc (,,t,X)eT",
                        "eq_num": "(5)"
                    }
                ],
                "section": "Ta = argmTaX~,P(Tc l w~) ITnTcl Tc",
                "sec_num": null
            },
            {
                "text": "Now, given a PCFG with start symbol S, the following equality holds:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ta = argmTaX~,P(Tc l w~) ITnTcl Tc",
                "sec_num": null
            },
            {
                "text": "P(s . 1,4)= E P(Tc I ~7)( 1 if (s, t, X) 6 Tc) (6) Tc",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ta = argmTaX~,P(Tc l w~) ITnTcl Tc",
                "sec_num": null
            },
            {
                "text": "By rearranging the summation in expression (5) and then substituting this equality, we get",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ta = argmTaX~,P(Tc l w~) ITnTcl Tc",
                "sec_num": null
            },
            {
                "text": "Ta =argm~x E P(S =~ s-t... (,,t,X)eT (7)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ta = argmTaX~,P(Tc l w~) ITnTcl Tc",
                "sec_num": null
            },
            {
                "text": "At this point, it is useful to introduce the Inside and Outside probabilities, due to Baker (1979) , and explained by Lari and Young (1990) . The Inside probability is defined as e(s,t,X) = P(X =~ w~) and the Outside probability is f(s, t, X) = P(S =~ ",
                "cite_spans": [
                    {
                        "start": 86,
                        "end": 98,
                        "text": "Baker (1979)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 118,
                        "end": 139,
                        "text": "Lari and Young (1990)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ta = argmTaX~,P(Tc l w~) ITnTcl Tc",
                "sec_num": null
            },
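The Inside probability can be computed bottom-up with the usual CKY-style recursion. A minimal sketch (ours, not the paper's code; the toy grammar, rule tables, and names are assumptions), with the Outside probability f computed analogously top-down:

```python
from collections import defaultdict

# Toy CNF PCFG (an assumption for illustration): S -> A B, A -> x, B -> y.
binary = {("S", "A", "B"): 1.0}                # P(X -> Y Z)
lexical = {("A", "x"): 1.0, ("B", "y"): 1.0}   # P(X -> word)

def inside(words):
    """Return e[(s, t, X)] = P(X => w_s ... w_t), positions 1-based."""
    n = len(words)
    e = defaultdict(float)
    for s, w in enumerate(words, 1):           # width-1 spans from lexical rules
        for (X, word), p in lexical.items():
            if word == w:
                e[(s, s, X)] += p
    for span in range(2, n + 1):               # wider spans from binary rules
        for s in range(1, n - span + 2):
            t = s + span - 1
            for (X, Y, Z), p in binary.items():
                for r in range(s, t):          # split point, s <= r < t
                    e[(s, t, X)] += p * e[(s, r, Y)] * e[(r + 1, t, Z)]
    return e

e = inside(["x", "y"])
print(e[(1, 2, "S")])  # 1.0: probability that S derives "x y"
```

Note that e(1, n, S) is exactly the sentence probability used to normalize in the derivation above.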
            {
                "text": "Now, the definition of a Labelled Recall Parse can be rewritten as ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "P(S wE) = f(s, t, X) x e(s, t, X)/e(1, n, S)",
                "sec_num": null
            },
            {
                "text": "It is clear that MAXC(1, n) contains the score of the best parse according to the Labelled Recall criterion. This equation can be converted into the dynamic programming algorithm shown in Figure 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 188,
                        "end": 196,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Figure h Labelled Recall Algorithm",
                "sec_num": null
            },
            {
                "text": "For a grammar with r rules and k nonterminals, the run time of this algorithm is O(n 3 + kn 2) since there are two layers of outer loops, each with run time at most n, and an inner loop, over nonterminals and n. However, this is dominated by the computation of the Inside and Outside probabilities, which takes time O(rna).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Figure h Labelled Recall Algorithm",
                "sec_num": null
            },
            {
                "text": "By modifying the algorithm slightly to record the actual split used at each node, we can recover the best parse. The entry maxc[1, n] contains the expected number of correct constituents, given the model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Figure h Labelled Recall Algorithm",
                "sec_num": null
            },
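The recursion behind the dynamic program of Figure 1 can be sketched as follows (our code, not the paper's; we assume a precomputed table g[(s, t, X)] = f(s, t, X) * e(s, t, X) / e(1, n, S), the probability that (s, t, X) is a constituent of the correct parse, and all names are ours). Back pointers record the split and label so the best parse can be recovered:

```python
def labelled_recall_parse(n, g, symbols):
    """maxc[(s, t)] = best expected number of correct constituents over span (s, t)."""
    maxc = {}
    back = {}  # back[(s, t)] = (best label X, best split r; r is None for width 1)
    for span in range(1, n + 1):
        for s in range(1, n - span + 2):
            t = s + span - 1
            # Pick the label with the highest probability of being correct here.
            best_x = max(symbols, key=lambda X: g.get((s, t, X), 0.0))
            score = g.get((s, t, best_x), 0.0)
            best_r = None
            if s < t:
                # Add the best way to split the span into two subspans.
                best_r = max(range(s, t),
                             key=lambda r: maxc[(s, r)] + maxc[(r + 1, t)])
                score += maxc[(s, best_r)] + maxc[(best_r + 1, t)]
            maxc[(s, t)] = score
            back[(s, t)] = (best_x, best_r)
    return maxc[(1, n)], back

# Tiny illustrative g table (assumed): every constituent certain to be correct.
score, back = labelled_recall_parse(
    2, {(1, 1, "A"): 1.0, (2, 2, "B"): 1.0, (1, 2, "S"): 1.0}, ["S", "A", "B"])
print(score)  # 3.0: expected number of correct constituents for the best tree
```

The two nested loops over (span, s) plus the inner maximizations give the O(n^3 + kn^2) bound quoted above, with the Inside/Outside computation for g dominating overall.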
            {
                "text": "The Labelled Recall Algorithm maximizes the expected number of correct labelled constituents. However, many commonly used evaluation metrics, such as the Consistent Brackets Recall Rate, ignore labels.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bracketed Recall Parsing",
                "sec_num": "4"
            },
            {
                "text": "Similarly, some grammar induction algorithms, such as those used by Pereira and Schabes (1992) do not produce meaningful labels. In particular, the Pereira and Schabes method induces a grammar from the brackets in the treebank, ignoring the labels. While the induced grammar has labels, they are not related to those in the treebank. Thus, although the Labelled Recall Algorithm could be used in these domains, perhaps maximizing a criterion that is more closely tied to the domain will produce better results. Ideally, we would maximize the Consistent Brackets Recall Rate directly. However, since it is time-consuming to deal with Consistent Brackets, we instead use the closely related Bracketed Recall Rate.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bracketed Recall Parsing",
                "sec_num": "4"
            },
            {
                "text": "For the Bracketed Recall Algorithm, we find the parse that maximizes the expected Bracketed Recall Rate, B/Nc. (Remember that B is the number of brackets that are correct, and Nc is the number of constituents in the correct parse.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bracketed Recall Parsing",
                "sec_num": "4"
            },
            {
                "text": "180 TG = arg rn~x E(B/Nc) (9)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bracketed Recall Parsing",
                "sec_num": "4"
            },
            {
                "text": "Following a derivation similar to that used for the Labelled Recall Algorithm, we can rewrite equation 9as",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bracketed Recall Parsing",
                "sec_num": "4"
            },
            {
                "text": "Ta=argm~x ~ ~_P(S:~ ,-1.~ ,~ wl (s,t)ET X (I0)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bracketed Recall Parsing",
                "sec_num": "4"
            },
            {
                "text": "The algorithm for Bracketed Recall parsing is extremely similar to that for Labelled Recall parsing. The only required change is that we sum over the symbols X to calculate max_g, rather than maximize over them.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bracketed Recall Parsing",
                "sec_num": "4"
            },
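            {
                "text": "Schematically (a sketch of the one-line difference, not taken from the paper; p(s, t, X) stands for the precomputed probability that label X spans (s, t) given the sentence): for Labelled Recall, g(s, t) = max over X of p(s, t, X), while for Bracketed Recall, g(s, t) = sum over X of p(s, t, X). The dynamic program over maxc is otherwise unchanged.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bracketed Recall Parsing",
                "sec_num": "4"
            },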
            {
                "text": "We describe two experiments for testing these algorithms. The first uses a grammar without meaningful nonterminal symbols, and compares the Bracketed Recall Algorithm to the traditional Labelled Tree (Viterbi) Algorithm. The second uses a grammar with meaningful nonterminal symbols and performs a three-way comparison between the Labelled Recall, Bracketed Recall, and Labelled Tree Algorithms. These experiments show that use of an algorithm matched appropriately to the evaluation criterion can lead to as much as a 10% reduction in error rate.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Results",
                "sec_num": "5"
            },
            {
                "text": "In both experiments the grammars could not parse some sentences, 0.5% and 9%, respectively. The unparsable data were assigned a right branching structure with their rightmost element attached high. Since all three algorithms fail on the same sentences, all algorithms were affected equally.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Results",
                "sec_num": "5"
            },
            {
                "text": "The experiment of Pereira and Schabes (1992) was duplicated. In that experiment, a grammar was trained from a bracketed form of the TI section of the ATIS corpus 1 using a modified form of the Inside-Outside Algorithm. Pereira and Schabes then used the Labelled Tree Algorithm to select the best parse for sentences in held out test data. The experiment was repeated here, except that both the Labelled Tree and Labelled Recall Algorithm were run for each sentence. In contrast to previous research, we repeated the experiment ten times, with different training set, test set, and initial conditions each time. Table 1 shows the results of running this experiment, giving the minimum, maximum, mean, and standard deviation for three criteria, Consistent Brackets Recall, Consistent Brackets Tree, and 1For our experiments the corpus was slightly cleaned up. A diff file for \"ed\" between the original ATIS data and the cleaned-up version is available from ftp://ftp.das.harvard.edu/pub/goodman/atised/ ti_tb.par-ed and ti_tb.pos-ed.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 611,
                        "end": 618,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experiment with Grammar Induced by Pereira and Schabes Method",
                "sec_num": "5.1"
            },
            {
                "text": "The number of changes made was small, less than 0.2% Bracketed Recall. We also display these statistics for the paired differences between the algorithms. The only statistically significant difference is that for Consistent Brackets Recall Rate, which was significant to the 2% significance level (paired t-test). Thus, use of the Bracketed Recall Algorithm leads to a 10% reduction in error rate. In addition, the performance of the Bracketed Recall Algorithm was also qualitatively more appealing. Figure 2 shows typical results. Notice that the Bracketed Recall Algorithm's Consistent Brackets Rate (versus iteration) is smoother and more nearly monotonic than the Labelled Tree Algorithm's. The Bracketed Recall Algorithm also gets off to a much faster start, and is generally (although not always) above the Labelled Tree level. For the Labelled Tree Rate, the two are usually very comparable.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 500,
                        "end": 508,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experiment with Grammar Induced by Pereira and Schabes Method",
                "sec_num": "5.1"
            },
            {
                "text": "The replication of the Pereira and Schabes experiment was useful for testing the Bracketed Recall Algorithm. However, since that experiment induces a grammar with nonterminals not comparable to those in the training, a different experiment is needed to evaluate the Labelled Recall Algorithm, one in which the nonterminals in the induced grammar are the same as the nonterminals in the test set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment with Grammar Induced by Counting",
                "sec_num": "5.2"
            },
            {
                "text": "For this experiment, a very simple grammar was induced by counting, using a portion of the Penn Tree Bank, version 0.5. In particular, the trees were first made binary branching by removing epsilon productions, collapsing singleton productions, and converting n-ary productions (n > 2) as in figure 3. The resulting trees were treated as the \"Correct\" trees in the evaluation. Only trees with forty or fewer symbols were used in this experiment. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Grammar Induction by Counting",
                "sec_num": "5.2.1"
            },
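            {
                "text": "The n-ary conversion can be sketched abstractly (an illustrative right-binarization in the spirit of figure 3; the exact label convention used there may differ): a production such as A -> B C D is replaced by A -> B A' and A' -> C D, introducing a new continuation symbol A'. Applying this repeatedly leaves only unary and binary productions, and the transformation can be undone for evaluation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Grammar Induction by Counting",
                "sec_num": "5.2.1"
            },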
            {
                "text": "A grammar was then induced in a straightforward way from these trees, simply by giving one count for each observed production. No smoothing was done. There were 1805 sentences and 38610 nonterminals in the test data. Table 2 shows the results of running all three algorithms, evaluating against five criteria. Notice that for each algorithm, for the criterion that it optimizes it is the best algorithm. That is, the Labelled Tree Algorithm is the best for the Labelled Tree Rate, the Labelled Recall Algorithm is the best for the Labelled Recall Rate, and the Bracketed Recall Algorithm is the best for the Bracketed Recall Rate.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 217,
                        "end": 224,
                        "text": "Table 2",
                        "ref_id": "TABREF5"
                    }
                ],
                "eq_spans": [],
                "section": "Conclusions and Future Work",
                "sec_num": "6"
            },
            {
                "text": "Matching parsing algorithms to evaluation criteria is a powerful technique that can be used to improve performance. In particular, the Labelled Recall Algorithm can improve performance versus the Labelled Tree Algorithm on the Consistent Brackets, Labelled Recall, and Bracketed Recall criteria. Similarly, the Bracketed Recall Algorithm improves performance (versus Labelled Tree) on Consistent Brackets and Bracketed Recall criteria. Thus, these algorithms improve performance not only on the measures that they were designed for, but also on related criteria.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "5.2.2"
            },
            {
                "text": "Furthermore, in some cases these techniques can make parsing fast when it was previously impractical. We have used the technique outlined in this paper in other work (Goodman, 1996) to efficiently parse the DOP model; in that model, the only previously known algorithm which summed over all the (Bod, 1993) . However, by maximizing the Labelled Recall criterion, rather than the Labelled Tree criterion, it was possible to use a much simpler algorithm, a variation on the Labelled Recall Algorithm. Using this technique, along with other optimizations, we achieved a 500 times speedup. In future work we will show the surprising result that the last element of Table 3 , maximizing the Bracketed Tree criterion, equivalent to maximizing performance on Consistent Brackets Tree (Zero Crossing Brackets) Rate in the binary branching case, is NP-complete. Furthermore, we will show that the two algorithms presented, the Labelled Recall Algorithm and the Bracketed Recall Algorithm, are both special cases of a more general algorithm, the General Recall Algorithm. Finally, we hope to extend this work to the n-ary branching case.",
                "cite_spans": [
                    {
                        "start": 166,
                        "end": 181,
                        "text": "(Goodman, 1996)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 295,
                        "end": 306,
                        "text": "(Bod, 1993)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 661,
                        "end": 668,
                        "text": "Table 3",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "5.2.2"
            }
        ],
        "back_matter": [
            {
                "text": "I would like to acknowledge support from National Science Foundation Grant IRI-9350192, National Science Foundation infrastructure grant CDA 94-01024, and a National Science Foundation Graduate Student Fellowship. I would also like to thank Stanley Chen, Andrew Kehler, Lillian Lee, and Stuart Shieber for helpful discussions, and comments on earlier drafts, and the anonymous reviewers for their comments. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Trainable grammars for speech recognition",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "K"
                        ],
                        "last": "Baker",
                        "suffix": ""
                    }
                ],
                "year": 1979,
                "venue": "Proceedings of the Spring Conference of the",
                "volume": "",
                "issue": "",
                "pages": "547--550",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Baker, J.K. 1979. Trainable grammars for speech recognition. In Proceedings of the Spring Confer- ence of the Acoustical Society of America, pages 547-550, Boston, MA, June.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Using an annotated corpus as a stochastic grammar",
                "authors": [
                    {
                        "first": "Rens",
                        "middle": [],
                        "last": "Bod",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Proceedings of the Sixth Conference of the European Chapter of the ACL",
                "volume": "",
                "issue": "",
                "pages": "37--44",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bod, Rens. 1993. Using an annotated corpus as a stochastic grammar. In Proceedings of the Sixth Conference of the European Chapter of the ACL, pages 37-44.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "A Corpus-Based Approach to Language Learning",
                "authors": [
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Brill",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Brill, Eric. 1993. A Corpus-Based Approach to Lan- guage Learning. Ph.D. thesis, University of Penn- sylvania.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Efficient algorithms for parsing the DOP model",
                "authors": [
                    {
                        "first": "Joshua",
                        "middle": [],
                        "last": "Goodman",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Proceedings of the",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Goodman, Joshua. 1996. Efficient algorithms for parsing the DOP model. In Proceedings of the",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "Given the matrix g(s, t, X)  it is a simple matter of dynamic programming to determine the parse that maximizes the Labelled Recall criterion. Define MAXC(s, t) = n~xg(s, t, X)+ max (MAXC(s, r) + MAXC(r + 1,t))",
                "uris": null,
                "type_str": "figure",
                "num": null
            },
            "FIGREF1": {
                "text": "...... i-.......................... :---.............",
                "uris": null,
                "type_str": "figure",
                "num": null
            },
            "TABREF0": {
                "text": "In the case where the parses are binary branching, the two metrics are the same. This criterion is also called the Zero Crossing Brackets Rate.",
                "html": null,
                "content": "<table><tr><td colspan=\"3\">Following are the definitions of the six metrics</td></tr><tr><td colspan=\"3\">used in this paper for evaluating binary branching</td></tr><tr><td colspan=\"2\">trees:</td></tr><tr><td colspan=\"2\">(1) Labelled Recall Rate = L/Nc.</td></tr><tr><td colspan=\"3\">(2) Labelled Tree Rate = 1 if L = ATe. It is also</td></tr><tr><td/><td>called the Viterbi Criterion.</td></tr><tr><td colspan=\"2\">(3) Bracketed Recall Rate = B/Nc.</td></tr><tr><td colspan=\"2\">(4) Bracketed Tree Rate = 1 if B = Nc.</td></tr><tr><td colspan=\"3\">(5) Consistent Brackets Recall Rate = C/NG. It is</td></tr><tr><td/><td colspan=\"2\">often called the Crossing Brackets Rate. In the</td></tr><tr><td/><td colspan=\"2\">case where the parses are binary branching, this</td></tr><tr><td/><td colspan=\"2\">criterion is the same as the Bracketed Recall</td></tr><tr><td/><td>Rate.</td></tr><tr><td colspan=\"3\">(6) Consistent Brackets Tree Rate = 1 if C = No.</td></tr><tr><td/><td colspan=\"2\">This metric is closely related to the Bracketed</td></tr><tr><td>The</td><td colspan=\"2\">Tree Rate. preceding six metrics each correspond to cells</td></tr><tr><td colspan=\"2\">in the following table:</td></tr><tr><td/><td>II Recall I</td><td>Tree</td></tr><tr><td/><td>Consistent Brackets</td></tr><tr><td>n Tal : the number of constituents</td><td/></tr><tr><td>in Ta that are correct according to Labelled</td><td/></tr><tr><td>Match.</td><td/></tr><tr><td>B = I{(s,t,X) : (s,t,X) ETa and for some</td><td/></tr><tr><td>Y (s,t,Y) E Tc}]: the number of constituents</td><td/></tr><tr><td>in Ta that are correct according to Bracketed</td><td/></tr><tr><td>Match.</td><td/></tr><tr><td>C = I{(s, t, X) ETa : there is no (v, w, Y) E Tc</td><td/></tr><tr><td>crossing (s,t)}[ : the number of constituents in</td><td/></tr><tr><td>TG correct according to Consistent Brackets.</td><td/></tr></table>",
                "type_str": "table",
                "num": null
            },
            "TABREF3": {
                "text": "",
                "html": null,
                "content": "<table><tr><td>: Metrics and Corresponding Algorithms</td></tr></table>",
                "type_str": "table",
                "num": null
            },
            "TABREF5": {
                "text": "Grammar Induced by Counting: Three Algorithms Evaluated on Five Criteria possible derivations was a slow Monte Carlo algorithm",
                "html": null,
                "content": "<table/>",
                "type_str": "table",
                "num": null
            }
        }
    }
}