{
    "paper_id": "P85-1018",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:39:22.431411Z"
    },
    "title": "Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms",
    "authors": [
        {
            "first": "Stuart",
            "middle": [
                "M"
            ],
            "last": "Shieber",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Stanford University",
                "location": {}
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Advancml Research Projects Agency under C,mtraet NOOO39-g4-K-0n78 with the Naval Electronics Systems Ckm~mand. The views and ronchtsi~ms contained in this &Jcument should not be interpreted a.s representative of the official p~dicies, either expressed or implied, of the D~'fen~p Research Projects Agency or the United States governmont. The author is indebted to Fernando Pereira and Ray Perrault for their comments on ea, riier drafts o[ this paper.",
    "pdf_parse": {
        "paper_id": "P85-1018",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Advancml Research Projects Agency under C,mtraet NOOO39-g4-K-0n78 with the Naval Electronics Systems Ckm~mand. The views and ronchtsi~ms contained in this &Jcument should not be interpreted a.s representative of the official p~dicies, either expressed or implied, of the D~'fen~p Research Projects Agency or the United States governmont. The author is indebted to Fernando Pereira and Ray Perrault for their comments on ea, riier drafts o[ this paper.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Grammar formalisms based on the encoding of grammatical information in complex-valued feature systems enjoy some currency both in linguistics and natural-language-processing research. Such formalisms can be thought of by analogy to context-free grammars as generalizing the notion of nonterminal symbol from a finite domain of atomic elements to a possibly infinite domain of directed graph structures nf a certain sort. Unfortunately, in moving to an infinite nonterminal domain, standard methods of parsing may no longer be applicable to the formalism. Typically, the problem manifests itself ,as gross inefficiency or ew, n nonterminat icm of the alg~,rit hms. In this paper, we discuss a solution to the problem of extending parsing algorithms to formalisms with possibly infinite nonterminal domains, a solution based on a general technique we call restriction. As a particular example of such an extension, we present a complete, correct, terminating extension of Earley's algorithm that uses restriction to perform top-down filtering. Our implementation of this algorithm demonstrates the drastic elimination of chart edges that can be achieved by this technique. Fit,all.v, we describe further uses for the technique--including parsing other grammar formalisms, including definite.clause grammars; extending other parsing algorithms, including LR methods and syntactic preference modeling algorithms; anti efficient indexing.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Grammar formalisms ba.sed on the encircling of grantmalical information in complex-valued fealure systems enjoy some currency both in linguistics and natural-languageprocessing research. Such formalisms can be thought of by analogy to context-free grammars a.s generalizing the notion of nonterminai symbol from a finite domain of atomic elements to a possibly infinite domain of directed graph structures of a certain sort. Many of tile sm'fa,',,-bast,,I grammatical formalisms explicitly dvfin,,,I ,,r pr~\"~Ul~p,~'.,'.l in linguistics can be characterized in this way ,,.~.. It.xi ,Ifunctional grammar (I,F(;} [5] , generalizt,I I,hr:~,' ~l rlt,'l ur,. grammar (GPSG) [.1], even categorial systems such ,as M,,ntague grammar [81 and Ades/Steedman grammar Ill --,~s can several of the grammar formalisms being used in naturallanguage processing research--e.g., definite clause grammar (DCG) [9] , and PATR-II [13] .",
                "cite_spans": [
                    {
                        "start": 604,
                        "end": 615,
                        "text": "(I,F(;} [5]",
                        "ref_id": null
                    },
                    {
                        "start": 892,
                        "end": 895,
                        "text": "[9]",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 910,
                        "end": 914,
                        "text": "[13]",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Unfortunately, in moving to an infinite nonlermiual de,main, standard methods of parsing may no h,ngvr t~, applicable to the formalism. ~k~r instance, the application of techniques for preprocessing of grantmars in ,,rder t,, gain efficiency may fail to terminate, ~ in left-c,~rner and LR algorithms. Algorithms performing top-dc~wn prediction (e.g. top-down backtrack parsing, Earley's algorithm) may not terminate at parse time. Implementing backtracking regimens~useful for instance for generating parses in some particular order, say, in order of syntactic preference--is in general difficult when LR-style and top-down backtrack techniques are eliminated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "[n this paper, we discuss a s~dul.ion to the pr~,blem of extending parsing algorithms to formalisms with possibly infinite nonterminal domains, a solution based on an operation we call restriction. In Section 2, we summarize traditional proposals for solutions and problems inherent in them and propose an alternative approach to a solution using restriction. In Section 3, we present some technical background including a brief description of the PATR-II formalism~ which is used as the formalism interpreted by the parsing algorithms~and a formal definition of restriction for PATR-II's nonterminal domain. In Section 4, we develop a correct, complete and terminating extension of Earley's algorithm for the PATR-II formalism using the restriction notion. Readers uninterested in the technical details of the extensions may want to skip these latter two sections, referring instead to Section 4.1 for an informal overview of the algorithms. Finally, in Section 5, we discuss applications of the particular algorithm and the restriction technique in general.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Problems with efficiently parsing formalisms based on potentially infinite nonterminal domains have manifested themselves in many different ways. Traditional solutions have involved limiting in some way the class of grammars that can be parsed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Traditional Solutions and an Alternative Approach",
                "sec_num": "2"
            },
            {
                "text": "The limitations can be applied to the formalism by, for instance, adding a context-free \"backbone.\" If we require that a context-free subgrammar be implicit in every grammar, the subgrammar can be used for parsing and the rest of the grammar used az a filter during or aRer parsing. This solution has been recommended for functional unification grammars (FI,G) by Martin Kay [61; its legacy can be seen in the context-free skeleton of LFG, and the Hewlett-Packard GPSG system [31, and in the cat feature requirement in PATR-[I that is described below.",
                "cite_spans": [
                    {
                        "start": 375,
                        "end": 379,
                        "text": "[61;",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limiting the formalism",
                "sec_num": "2.1"
            },
            {
                "text": "However, several problems inhere in this solution of mandating a context-free backbone.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limiting the formalism",
                "sec_num": "2.1"
            },
            {
                "text": "First, the move from context-free to complex-feature-based formalisms wan motivated by the desire to structure the notion of nonterminal. Many analyses take advantage of this by eliminating mention of major category information from particular rules a or by structuring the major category itself (say into binary N and V features plus a bar-level feature as in ~-based theories). F.rcing the primacy and atomicity of major category defeats part of the purpose of structured category systems.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limiting the formalism",
                "sec_num": "2.1"
            },
            {
                "text": "Sec, m,l. and perhaps more critically, because only certain ,ff the information in a rule is used to guide the parse, say major category information, only such information can be used to filter spurious hypotheses by top-down filtering. Note that this problem occurs even if filtering by the rule information is used to eliminate at the earliest possible time constituents and partial constituents proposed during parsing {as is the case in the PATR-II implementation and the ~Se~'. [or instance, the coordination and copular \"be\" aaalyses from GPSG [4 I, the nested VP analysis used in some PATR-ll grammars 11.5 I, or almost all categorial analyse~, in which general roles of combination play the role o1' specific phlr~se-stroctur\u00a2 roles.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limiting the formalism",
                "sec_num": "2.1"
            },
            {
                "text": "Earley algorithm given below; cf. the Xerox LFG system}. Thus, if information about subcategorization is left out of the category information in the context-free skeleton, it cannot be used to eliminate prediction edges. For example, if we find a verb that subcategorizes for a noun phrase, but the grammar rules allow postverbal NPs, PPs, Ss, VPs, and so forth, the parser will have no way to eliminate the building of edges corresponding to these categories. Only when such edges attempt to join with the V will the inconsistency be found. Similarly, if information about filler-gap dependencies is kept extrinsic to the category information, as in a slash category in GPSG or an LFG annotation concerning a matching constituent for a I~ specification, there will be no way to keep from hypothesizing gaps at any given vertex. This \"gap-proliferation\" problem has plagued many attempts at building parsers for grammar formalisms in this style.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limiting the formalism",
                "sec_num": "2.1"
            },
            {
                "text": "In fact, by making these stringent requirements on what information is used to guide parsing, we have to a certain extent thrown the baby out with the bathwater. These formalisms were intended to free us from the tyranny of atomic nonterminal symbols, but for good performance, we are forced toward analyses putting more and more information in an atomic category feature. An example of this phenomenon can be seen in the author's paper on LR syntactic preference parsing [14] . Because the LALR table building algorithm does not in general terminate for complex-featurebased grammar formalisms, the grammar used in that paper was a simple context-free grammar with subcategorization and gap information placed in the atomic nonterminal symbol.",
                "cite_spans": [
                    {
                        "start": 472,
                        "end": 476,
                        "text": "[14]",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limiting the formalism",
                "sec_num": "2.1"
            },
            {
                "text": "On the other hand, the grammar formalism can be left unchanged, but particular grammars dew,loped that happen not to succumb to the problems inhere, at in the g,,neral parsing problem for the formalism. The solution mentioned above of placing more information in lilt, category symbol falls into this class. Unpublished work by Kent Witwnburg and by Robin Cooper has attempted to solve the gap proliferation problem using special grammars.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limiting grammars and parsers",
                "sec_num": "2.2"
            },
            {
                "text": "In building a general tool for grammar testing and debugging, however, we would like to commit as little ,as possible to a particular grammar or style of grammar.: Furthermore, the grammar designer should not be held down in building an analysis by limitations of the algorithms. Thus a solution requiring careful crMting of grammars is inadequate.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limiting grammars and parsers",
                "sec_num": "2.2"
            },
            {
                "text": "Finally, specialized parsing alg~withms can be designed that make use of information about the p;trtictd;tr grammar being parsed to eliminate spurious edges or h vpotheses. Rather than using a general parsing algorithm on a 'See [121 for further discl~sioa of thi~ matter. limited formalism, Ford, Bresnan, and Kaplan [21 chose a specialized algorithm working on grammars in the full LFG formalism to model syntactic preferences. Current work at Hewlett-Packard on parsing recent variants of GPSG seems to take this line as well.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limiting grammars and parsers",
                "sec_num": "2.2"
            },
            {
                "text": "Again, we feel that the separation of burden is inappropriate in such an attack, especially in a grammar-development context. Coupling the grammar design and parser design problems in this way leads to the linguistic and technological problems becoming inherently mixed, magnifying the difficulty of writing an adequate grammar/parser system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Limiting grammars and parsers",
                "sec_num": "2.2"
            },
            {
                "text": "An Alternative:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "2.3",
                "sec_num": null
            },
            {
                "text": "Using Restriction",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "2.3",
                "sec_num": null
            },
            {
                "text": "Instead, we would like a parsing algorithm that placed no restraints on the grammars it could handle as long as they could be expressed within the intended formalism. Still, the algorithm should take advantage of that part of the arbitrarily large amount of information in the complex-feature structures that is significant for guiding parsing with the particular grammar. One of the aforementioned solutions is to require the grammar writer to put all such significant information in a special atomic symbol--i.e., mandate a context-free backbone. Another is to use all of the feature structure information--but this method, as we shall see, inevitably leads to nonterminating algorithms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "2.3",
                "sec_num": null
            },
            {
                "text": "A compromise is to parameterize the parsing algorithm by a small amount of grammar-dependent information that tells the algorithm which of the information in the feature structures is significant for guiding the parse. That is, the parameter determines how to split up the infinite nonterminal domain into a finite set of equivalence classes that can be used for parsing. By doing so, we have an optimal compromise: Whatever part of the feature structure is significant we distinguish in the equivalence classes by setting the parameter appropriately, so the information is used in parsing. But because there are only a finite number of equivalence ciasses, parsing algorithms guided in this way will terminate.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "2.3",
                "sec_num": null
            },
            {
                "text": "The technique we use to form equivalence classes is restrietion, which involves taking a quotient of the domain with respect to a rcstrietor. The restrictor thus serves as the sole repository, of grammar-dependent information in the algorithm. By tuning the restrictor, the set of equivalence classes engendered can be changed, making the algorithm more or less efficient at guiding the parse. But independent of the restrictor, the algorithm will be correct, since it is still doing parsing over a finite domain of \"nonterminals,\" namely, the elements of the restricted domain. This idea can be applied to solve many of the problems engendered by infinite nonterminal domains, allowing preprocessing of grammars as required by LR and LC algorithms, allowing top-down filtering or prediction as in Earley and top-down backtrack parsing, guaranteeing termination, etc.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "2.3",
                "sec_num": null
            },
            {
                "text": "Before discussing the use of restriction in parsing algorithms, we present some technical details, including a brief introduction to the PATR-II grammar formalism, which will serve as the grammatical formalism that the presented algorithms will interpret. PATR-II is a simple grammar formalism that can serve as the least common denominator of many of the complex-feature-based and unification-based formalisms prevalent in linguistics and computational linguistics. As such it provides a good testbed for describing algorithms for complex-feature-based formalisms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Technical Preliminaries",
                "sec_num": "3"
            },
            {
                "text": "The PATR-II nonterminal domain is a lattice of directed, acyclic, graph structures (dags). s Dags can be thought of similar to the reentrant f-structures of LFG or functional structures of FUG, and we will use the bracketed notation associated with these formalisms for them. For example. the following is a dag {D0) in this notation, with reentrancy indicated with coindexing boxes: ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The PATR-II nonterminal domain",
                "sec_num": "3.1"
            },
            {
                "text": "Dags come in two varieties, complez (like the one above) and atomic (like the dags h and c in the example). Con~plex dags can be viewed a.s partial functions from labels to dag values, and the notation D(l) will therefore denote the value associated with the label l in the dag D. In the same spirit.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "hl]",
                "sec_num": null
            },
            {
                "text": "we can refer to the domain of a dag (dora(D)). A dag with an empty domain is often called an empty dag or variable. A path in a dag is a sequence of label names (notated, e.g..",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "hl]",
                "sec_num": null
            },
            {
                "text": "(d e ,f)), which can be used to pick out a particular subpart of the dag by repeated application {in this case. the dag [g : hi). We will extend the notation D(p) in the obvious way to include the subdag of D picked ~,tlt b.v a path p. We will also occasionally use the square brackets as l he dag c~mstructor function, so that [f : DI where D is an expression denoting a dag will denote the dag whose f feature has value D.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "hl]",
                "sec_num": null
            },
            {
                "text": "There is a natural lattice structure for dags based on The following examples illustrate the notion of unification:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Subsumption and Unification",
                "sec_num": "3.2"
            },
            {
                "text": "to tb:cllot :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Subsumption and Unification",
                "sec_num": "3.2"
            },
            {
                "text": ",lb:cl] [ a: {b:cl]u d - d",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Subsumption and Unification",
                "sec_num": "3.2"
            },
            {
                "text": "The unification of two dags is not always well-defined. In the rases where no unification exists, the unificati,,n is said to fail. For example the following pair of dags fail to unify with each other:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Subsumption and Unification",
                "sec_num": "3.2"
            },
            {
                "text": "d d: [b d] =fail",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Subsumption and Unification",
                "sec_num": "3.2"
            },
            {
                "text": "Now, consider the notion of restriction of a dag, using the term almost in its technical sense of restricting the domain of a function. By viewing dags as partial functions from labels to dag values, we can envision a process of restricting the domain of this function to a given set of labels. Extending this process recursively to every level of the dag, we have the concept of restriction used below. Given a finite specification \u03a6 (called a restrictor) of what the allowable domain at each node of a dag is, we can define a functional, \u21be, that yields the dag restricted by the given restrictor.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Restriction in the PATR-II nonterminal domain",
                "sec_num": "3.3"
            },
            {
                "text": "Formally, we define restriction as follows. Given a relation \u03a6 between paths and labels, and a dag D, we define D\u21be\u03a6 to be the most specific dag D' \u2291 D such that for every path p, either D'(p) is undefined, or D'(p) is atomic, or for every l \u2208 dom(D'(p)), p \u03a6 l. That is, every path in the restricted dag is either undefined, atomic, or specifically allowed by the restrictor.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Restriction in the PATR-II nonterminal domain",
                "sec_num": "3.3"
            },
            {
                "text": "The restriction process can be viewed as putting dags into equivalence classes, each equivalence class being the largest set of dags that are all restricted to the same dag (which we will call its canonical member). It follows from the definition that in general D\u21be\u03a6 \u2291 D. Finally, if we disallow infinite relations as restrictors (i.e., restrictors must not allow values for an infinite number of distinct paths), as we will do for the remainder of the discussion, we are guaranteed to have only a finite number of equivalence classes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Restriction in the PATR-II nonterminal domain",
                "sec_num": "3.3"
            },
            {
                "text": "Actually, in the sequel we will use a particularly simple subclass of restrictors that are generable from sets of paths.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Restriction in the PATR-II nonterminal domain",
                "sec_num": "3.3"
            },
            {
                "text": "Given a set of paths s, we can define \u03a6 such that p \u03a6 l if and only if p is a prefix of some p' \u2208 s. Such restrictors can be understood as \"throwing away\" all values not lying on one of the given paths. This subclass of restrictors is sufficient for most applications. However, the algorithms that we will present apply to the general class as well.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Restriction in the PATR-II nonterminal domain",
                "sec_num": "3.3"
            },
            {
                "text": "PATR-II rules describe how to combine a sequence of constituents, X1, ..., Xn, to form a constituent X0, stating mutual constraints on the dags associated with the n + 1 constituents as unifications of various parts of the dags. For instance, we might have the following rule: By notational convention, we can eliminate unifications for the special feature cat (the atomic major category feature), recording this information implicitly by using it in the \"name\" of the constituent, e.g.,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "PATR-II grammar rules",
                "sec_num": "3.4"
            },
            {
                "text": "If we require that this notational convention always be used (in so doing, guaranteeing that each constituent have an atomic major category associated with it), we have thereby mandated a context-free backbone to the grammar, and can then use standard context-free parsing algorithms to parse sentences relative to grammars in this formalism. Limiting to a context-free-based PATR-II is the solution that previous implementations have incorporated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "S--NP VP: (NP agreement) = (VP agreement).",
                "sec_num": null
            },
            {
                "text": "Before proceeding to describe parsing such a context-free-based PATR-II, we make one more purely notational change. Rather than associating with each grammar rule a set of unifications, we instead associate a dag that incorporates all of those unifications implicitly, i.e., a rule is associated with a dag D such that for all unifications of the form p = q in the rule, D(p) and D(q) are the same dag. The two notational conventions--using sets of unifications instead of dags, and putting the cat feature information implicitly in the names of the constituents--allow us to write rules in the more compact and familiar format above, rather than this final cumbersome way presupposed by the algorithm.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "S--NP VP: (NP agreement) = (VP agreement).",
                "sec_num": null
            },
            {
                "text": "We now develop a concrete example of the use of restriction in parsing by extending Earley's algorithm to parse grammars in the PATR-[I formalism just presented.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Using Restriction to Extend Earley's Algorithm for PATR-II",
                "sec_num": "4"
            },
            {
                "text": "Earley's algorithm is a bottom-up parsing algorithm that uses top-down prediction to hypothesize the starting points of possible constituents. Typically, the prediction step determines which categories of constituent can start at a given point in a sentence. But when most of the information is not in an atomic category symbol, such prediction is relatively useless, and many types of constituents are predicted that could never be involved in a completed parse. This standard Earley's algorithm is presented in Section 4.2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "An overview of the algorithms",
                "sec_num": "4.1"
            },
            {
                "text": "By extending the algorithm so that the prediction step determines which dags can start at a given point, we can use the information in the features to be more precise in the predictions and eliminate many hypotheses. However, because there is a potentially infinite number of such feature structures, the prediction step may never terminate. This extended Earley's algorithm is presented in Section 4.3.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "An overview of the algorithms",
                "sec_num": "4.1"
            },
            {
                "text": "We compromise by having the prediction step determine which restricted dags can start at a given point. If the restrictor is chosen appropriately, this can be as constraining as predicting on the basis of the whole feature structure, yet prediction is guaranteed to terminate because the domain of restricted feature structures is finite. This final extension of Earley's algorithm is presented in Section 4.4.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "An overview of the algorithms",
                "sec_num": "4.1"
            },
            {
                "text": "We start with the Earley algorithm for context-free-based PATR-II on which the other algorithms are based. The algorithm is described in a chart-parsing incarnation, with vertices numbered from 0 to n for an n-word sentence w1 ... wn. For each vertex i do the following steps until no more items can be added: Informally, this involves predicting top-down all rules whose left-hand-side category matches the category of some constituent being looked for. One edge subsumes another edge if and only if the first three elements of the edges are identical and the fourth element of the first edge subsumes that of the second edge.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parsing a context-free-based PATR-II",
                "sec_num": "4.2"
            },
            {
                "text": "Predictor",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parsing a context-free-based PATR-II",
                "sec_num": "4.2"
            },
            {
                "text": "Informally, this involves forming a new partial phrase whenever the category of a constituent needed by one partial phrase matches the category of a completed phrase and the dag associated with the completed phrase can be unified in appropriately. If i \u2260 0 and wi = a, then for all items [h, i-1, X0 \u2192 \u03b1.a\u03b2, D] add the item [h, i, X0 \u2192 \u03b1a.\u03b2, D].",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 249,
                        "end": 344,
                        "text": "If i \u2260 0 and wi = a, then for all items [h, i-1, X0 \u2192 \u03b1.a\u03b2, D] add the item [h, i, X0 \u2192 \u03b1a.\u03b2, D]",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Parsing a context-free-based PATR-II",
                "sec_num": "4.2"
            },
            {
                "text": "Informally, this involves allowing lexical items to be inserted into partial phrases.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Scanner step: If i",
                "sec_num": null
            },
            {
                "text": "Notice that the Predictor Step in particular assumes the availability of the cat feature for top-down prediction. Consequently, this algorithm applies only to PATR-II with a context-free base.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Scanner step: If i",
                "sec_num": null
            },
            {
                "text": "A first attempt at extending the algorithm to make use of more than just a single atomic-valued cat feature (or less, if no such feature is mandated) is to change the Predictor",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Removing the Context-Free Base: An Inadequate Extension",
                "sec_num": "4.3"
            },
            {
                "text": "Step so that instead of checking the predicted rule for a left-hand side that matches its cat feature with the predicting subphrase, we require that the whole left-hand-side subdag unify with the subphrase being predicted from. Formally, this step predicts top-down all rules whose left-hand side matches the dag of some constituent being looked for.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Removing the Context-Free Base: An Inadequate Extension",
                "sec_num": "4.3"
            },
            {
                "text": "Completer step: As before.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Removing the Context-Free Base: An Inadequate Extension",
                "sec_num": "4.3"
            },
            {
                "text": "Scanner step: As before.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Removing the Context-Free Base: An Inadequate Extension",
                "sec_num": "4.3"
            },
            {
                "text": "However, this extension does not preserve termination. Consider a \"counting\" grammar that records in the dag the number of terminals in the string: top-down prediction will hypothesize dags counting one terminal, two terminals, and so forth ad infinitum.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Removing the Context-Free Base: An Inadequate Extension",
                "sec_num": "4.3"
            },
            {
                "text": "What is needed is a way of \"forgetting\" some of the structure we are using for top-down prediction. But this is just what restriction gives us, since a restricted dag always subsumes the original, i.e., it has strictly less information. Taking advantage of this property, we can change the Predictor",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Removing the Context-free Base: An Adequate Extension",
                "sec_num": "4.4"
            },
            {
                "text": "Step to restrict the top-down information before unifying it into the rule's dag. Another round of prediction yields this same edge, so the process terminates immediately. Because the predicted edge is more general than (i.e., subsumes) all of the infinite number of edges it replaced that were predicted under the nonterminating extension, it preserves completeness. On the other hand, because the predicted edge is not more general than the rule itself, it permits no constituents that violate the constraints of the rule; therefore, it preserves correctness.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Removing the Context-free Base: An Adequate Extension",
                "sec_num": "4.4"
            },
            {
                "text": "Finally, because restriction has a finite range, the prediction step can only occur a finite number of times before building an edge identical to one already built; therefore, it preserves termination. The following table gives some data suggestive of the effect of the restrictor on parsing efficiency. It shows the total number of active and passive edges added to the chart for five sentences of up to eleven words using four different restrictors, the first of which allowed only category information to be used in prediction, thus generating the same behavior as the unextended algorithm. (Edge counts per sentence: 1: 33, 33, 20, 16, 52; 2: 85, 50, 29, 21, 75; 3: 219, 124, 72, 45, 79; 4: 319, 319, 98, 71, 78; 5: 812, 516, 157, 100, 88.) Several facts should be kept in mind about the data above. First, for sentences with no Wh-movement or relative clauses, no gaps were ever predicted. In other words, the top-down filtering is in some sense maximal with respect to gap hypothesis. Second, the subcategorization information used in top-down filtering removed all hypotheses of constituents except for those directly subcategorized for. Finally, the grammar used contained constructs that would cause nontermination in the unrestricted extension of Earley's algorithm.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 581,
                        "end": 708,
                        "text": "1  33  33  20  16 I  52  2  85  50  29  21 I  75  3  219  124  72  45  79  4  319  319  98  71  78  5  812  516  157  100 !i 88",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Removing the Context-free Base: An Adequate Extension",
                "sec_num": "4.4"
            },
            {
                "text": "This technique of restriction of complex-feature structures into a finite set of equivalence classes can be used for a wide variety of purposes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Applications of Restriction",
                "sec_num": "5.2"
            },
            {
                "text": "First, parsing algorithms such as the above can be modified for use by grammar formalisms other than PATR-II. In particular, definite-clause grammars are amenable to this technique, and it can be used to extend the Earley deduction of Pereira. Second, restriction can be used to enhance other parsing algorithms. For example, the ancillary function to compute LR closure--which, like the Earley algorithm, either does not use feature information or fails to terminate--can be modified in the same way as the Earley predictor step to terminate while still using significant feature information. LR parsing techniques can thereby be used for efficient parsing of complex-feature-based formalisms. More speculatively, restriction may also be applicable to algorithms for modeling syntactic preference. Finally, restriction can be used in areas of parsing other than top-down prediction and filtering. For instance, in many parsing schemes, edges are indexed by a category symbol for efficient retrieval. In the case of Earley's algorithm, active edges can be indexed by the category of the constituent following the dot in the dotted rule. However, this again forces the primacy and atomicity of major category information. Once again, restriction can be used to solve the problem. Indexing by the restriction of the dag associated with the need permits efficient retrieval that can be tuned to the particular grammar, yet does not affect the completeness or correctness of the algorithm. The indexing can be done by discrimination nets, or specialized hashing functions akin to the partial-match retrieval techniques designed for use in Prolog implementations [16].",
                "cite_spans": [
                    {
                        "start": 248,
                        "end": 255,
                        "text": "Pereira",
                        "ref_id": null
                    },
                    {
                        "start": 1680,
                        "end": 1684,
                        "text": "[16]",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Applications of Restriction",
                "sec_num": "5.2"
            },
            {
                "text": "We have presented a general technique of restriction with many applications in the area of manipulating complex-feature-based grammar formalisms. As a particular example, we presented a complete, correct, terminating extension of Earley's algorithm that uses restriction to perform top-down filtering. Our implementation demonstrates the drastic elimination of chart edges that can be achieved by this technique. Finally, we described further uses for the technique: parsing other grammar formalisms, including definite-clause grammars; extending other parsing algorithms, including LR methods and syntactic preference modeling algorithms; and efficient indexing.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            }
        ],
        "back_matter": [
            {
                "text": "This research has been made possible in part by a gift from the Systems Development Foundation, and was also supported by the Defense Advanced Research Projects Agency. We feel that the restriction technique has great potential to make increasingly powerful grammar formalisms computationally feasible.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "acknowledgement",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "On the order of words",
                "authors": [
                    {
                        "first": "A",
                        "middle": [
                            "E"
                        ],
                        "last": "Ades",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "J"
                        ],
                        "last": "Steedman",
                        "suffix": ""
                    }
                ],
                "year": 1982,
                "venue": "Linguistics and Philosophy",
                "volume": "4",
                "issue": "4",
                "pages": "517--558",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ades, A. E. and M. J. Steedman. On the order of words. Linguistics and Philosophy, 4(4):517-558, 1982.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "The Mental Representation of Grammatical Relations",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Ford",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Bresnan",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Kaplan",
                        "suffix": ""
                    }
                ],
                "year": 1982,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ford, M., J. Bresnan, and R. Kaplan. A competence-based theory of syntactic closure. In J. Bresnan, editor, The Mental Representation of Grammatical Relations, MIT Press, Cambridge, Massachusetts, 1982.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Processing English with a generalized phrase structure grammar",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "M"
                        ],
                        "last": "Gawron",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "King",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Lamping",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Loebner",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [
                            "A"
                        ],
                        "last": "Paulson",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [
                            "K"
                        ],
                        "last": "Pullum",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [
                            "A"
                        ],
                        "last": "Sag",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Wasow",
                        "suffix": ""
                    }
                ],
                "year": 1982,
                "venue": "Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "74--81",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gawron, J. M., J. King, J. Lamping, E. Loebner, E. A. Paulson, G. K. Pullum, I. A. Sag, and T. Wasow. Processing English with a generalized phrase structure grammar. In Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics, pages 74-81, University of Toronto, Toronto, Ontario, Canada, 16-18 June 1982.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Generalized Phrase Structure Grammar",
                "authors": [
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Gazdar",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Klein",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [
                            "K"
                        ],
                        "last": "Pullum",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [
                            "A"
                        ],
                        "last": "Sag",
                        "suffix": ""
                    }
                ],
                "year": 1985,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gazdar, G., E. Klein, G. K. Pullum, and I. A. Sag. Generalized Phrase Structure Grammar. Blackwell Publishing, Oxford, England, and Harvard University Press, Cambridge, Massachusetts, 1985.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Lexical-functional grammar: a formal system for grammatical representation",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Kaplan",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Bresnan",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kaplan, R. and J. Bresnan. Lexical-functional grammar: a formal system for grammatical representation. In J. Bresnan, editor, The Mental Representation of Grammatical Relations, MIT Press, Cambridge, Massachusetts, 1983.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "An algorithm for compiling parsing tables from a grammar",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Kay",
                        "suffix": ""
                    }
                ],
                "year": 1980,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kay, M. An algorithm for compiling parsing tables from a grammar. 1980. Xerox Palo Alto Research Center, Palo Alto, California.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "BUP: a bottom-up parser embedded in Prolog",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Matsumoto",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Tanaka",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Hirakawa",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Miyoshi",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Yasukawa",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "New Generation Computing",
                "volume": "1",
                "issue": "",
                "pages": "145--158",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Matsumoto, Y., H. Tanaka, H. Hirakawa, H. Miyoshi, and H. Yasukawa. BUP: a bottom-up parser embedded in Prolog. New Generation Computing, 1:145-158, 1983.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "The proper treatment of quantification in ordinary English",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Montague",
                        "suffix": ""
                    }
                ],
                "year": 1974,
                "venue": "Formal Philosophy",
                "volume": "",
                "issue": "",
                "pages": "188--221",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Montague, R. The proper treatment of quantification in ordinary English. In R. H. Thomason, editor, Formal Philosophy, pages 188-221, Yale University Press, New Haven, Connecticut, 1974.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Logic for natural language analysis",
                "authors": [
                    {
                        "first": "F",
                        "middle": [
                            "C N"
                        ],
                        "last": "Pereira",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "Artificial Intelligence Center, SRI International",
                "volume": "275",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pereira, F. C. N. Logic for natural language analysis. Technical Note 275, Artificial Intelligence Center, SRI International, Menlo Park, California, 1983.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "The semantics of grammar formalisms seen as computer languages",
                "authors": [
                    {
                        "first": "F",
                        "middle": [
                            "C N"
                        ],
                        "last": "Pereira",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "M"
                        ],
                        "last": "Shieber",
                        "suffix": ""
                    }
                ],
                "year": 1984,
                "venue": "Proceedings of the Tenth International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pereira, F. C. N. and S. M. Shieber. The semantics of grammar formalisms seen as computer languages. In Proceedings of the Tenth International Conference on Computational Linguistics, Stanford University, Stanford, California, 2-7 July 1984.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Parsing as deduction",
                "authors": [
                    {
                        "first": "F",
                        "middle": [
                            "C N"
                        ],
                        "last": "Pereira",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "H D"
                        ],
                        "last": "Warren",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "137--144",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pereira, F. C. N. and D. H. D. Warren. Parsing as deduction. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pages 137-144, Massachusetts Institute of Technology.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Criteria for designing computer facilities for linguistic analysis",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "M"
                        ],
                        "last": "Shieber",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shieber, S. M. Criteria for designing computer facilities for linguistic analysis. To appear in Linguistics.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "The design of a computer language for linguistic information",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "M"
                        ],
                        "last": "Shieber",
                        "suffix": ""
                    }
                ],
                "year": 1984,
                "venue": "Proceedings of the Tenth International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shieber, S. M. The design of a computer language for linguistic information. In Proceedings of the Tenth International Conference on Computational Linguistics, Stanford University, Stanford, California, 2-7 July 1984.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Sentence disambiguation by a shift-reduce parsing technique",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "M"
                        ],
                        "last": "Shieber",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "113--118",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shieber, S. M. Sentence disambiguation by a shift-reduce parsing technique. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pages 113-118, Massachusetts Institute of Technology, Cambridge, Massachusetts, 15-17 June 1983.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "The formalism and implementation of PATR-II",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "M"
                        ],
                        "last": "Shieber",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Uszkoreit",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [
                            "C N"
                        ],
                        "last": "Pereira",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "J"
                        ],
                        "last": "Robinson",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Tyson",
                        "suffix": ""
                    }
                ],
                "year": 1983,
                "venue": "Research on Interactive Acquisition and Use of Knowledge, SRI International, Menlo Park",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shieber, S. M., H. Uszkoreit, F. C. N. Pereira, J. J. Robinson, and M. Tyson. The formalism and implementation of PATR-II. In Research on Interactive Acquisition and Use of Knowledge, SRI International, Menlo Park, California, 1983.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Indexing Prolog clauses via superimposed code words and field encoded words",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "J"
                        ],
                        "last": "Wise",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "M",
                            "W"
                        ],
                        "last": "Powers",
                        "suffix": ""
                    }
                ],
                "year": 1984,
                "venue": "Proceedings of the 1984 International Symposium on Logic Programming",
                "volume": "",
                "issue": "",
                "pages": "203--210",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wise, M. J. and D. M. W. Powers. Indexing Prolog clauses via superimposed code words and field encoded words. In Proceedings of the 1984 International Symposium on Logic Programming, pages 203-210, IEEE Computer Society Press, Atlantic City, New Jersey, 6-9 February 1984.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF1": {
                "text": "contained in the dags. Intuitively viewed, a dag D subsumes a dag D' (notated D ⊑ D') if D contains a subset of the information in (i.e., is more general than) D'. Thus variables subsume all other dags, atomic or complex, because as the trivial case, they contain no information at all. A complex dag D subsumes a complex dag D' if and only if D(l) ⊑ D'(l) for all l ∈ dom(D) and D'(p) = D'(q) for all paths p and q such that D(p) = D(q). An atomic dag neither subsumes nor is subsumed by any different atomic dag. Finally, given two dags D' and D'', the unification of the dags is the most general dag D such that D' ⊑ D and D'' ⊑ D. We notate this D = D' ⊔ D''.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF2": {
                "text": "Using our previous example, consider a restrictor Φ0 generated from the set of paths {(a b), (d e f), (d i j f)}; that is, Φ0 contains all p in the listed paths and all their prefixes. Then, given the previous dag D0, restriction yields D0↾Φ0, which has thrown away all the information except the direct values of (a b), (d e f), and (d i j f). (Note however that because the values for paths such as (d e f g) were thrown away, (D0↾Φ0)((d e f)) is a variable.)",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF3": {
                "text": "X0 → X1 X2: (X0 cat) = S, (X1 cat) = NP, (X2 cat) = VP, (X1 agreement) = (X2 agreement).",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF4": {
                "text": "the rule, Dr(p) = Dr(q). Similarly, unifications of the form p = a where a is atomic would require that Dr(p) = a. For the rule mentioned above, such a dag would be as shown. Thus a rule can be thought of as an ordered pair (P, D) where P is a production of the form X0 → X1 ... Xn and D is a dag with top-level features X0, ..., Xn and with atomic values for the cat feature of each of the top-level subdags.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF5": {
                "text": "w1, ..., wn. An item of the form [h, i, A → α.β, D] designates an edge in the chart from vertex h to i with dotted rule A → α.β and dag D. The chart is initialized with an edge [0, 0, X0 → .α, D] for each rule (X0 → α, D) where D((X0 cat)) = S.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF6": {
                "text": "step: For each item ending at i of the form [h, i, X0 → α.Xjβ, D] and each rule of the form (X0 → γ, E) such that E((X0 cat)) = D((Xj cat)), add an edge of the form [i, i, X0 → .γ, E] if this edge is not subsumed by another edge.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF7": {
                "text": "we have Predictor step: For each item ending at i of the form [h, i, X0 → α.Xjβ, D] and each rule of the form (X0 → γ, E), add an edge of the form [i, i, X0 → .γ, E ⊔ {X0 = D(Xj)}] if the unification succeeds and this edge is not subsumed by another edge.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF8": {
                "text": "Similar problems occur in natural language grammars when keeping lists of, say, subcategorized constituents or gaps to be found.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF9": {
                "text": "Predictor step: For each item ending at i of the form [h, i, X0 → α.Xjβ, D] and each rule of the form (X0 → γ, E), add an edge of the form [i, i, X0 → .γ, E ⊔ {X0 = D(Xj)↾Φ}] if the unification succeeds and this edge is not subsumed by another edge. This step predicts top-down all rules whose left-hand side matches the restricted dag of some constituent being looked for. Completer step: As before. Scanner step: As before. This algorithm on the previous grammar, using a restrictor that allows through only the cat feature of a dag, operates as before, but predicts the first time around the more general edge",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF10": {
                "text": "The algorithm just described has been implemented and incorporated into the PATR-II Experimental System at SRI International, a grammar development and testing environment for PATR-II grammars written in Zetalisp for the Symbolics 3600.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF11": {
                "text": "extended Earley's algorithm. The second added subcategorization information in addition to the category; the third added filler-gap dependency information as well, so that the gap proliferation problem was removed. The final restrictor added verb form information. The last column shows the percentage of edges that were eliminated by using this final restrictor. [Table header: Prediction; columns: Sentence, cat, + subcat, + gap, + form, % elim.]",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF12": {
                "text": "and Warren [11]. Pereira has used a similar technique to improve the efficiency of the BUP (bottom-up left-corner) parser [7] for DCG. LFG and GPSG parsers can make use of the top-down filtering device as well. LFG parsers might be built that do not require a context-free backbone.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF13": {
                "text": "schemes for scheduling LR parsers to yield parses in preference order might be modified for complex-feature-based formalisms, and even tuned by means of the restrictor.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            }
        }
    }
}