{
    "paper_id": "P94-1017",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:19:13.037840Z"
    },
    "title": "AN OPTIMAL TABULAR PARSING ALGORITHM",
    "authors": [
        {
            "first": "Mark-Jan",
            "middle": [],
            "last": "Nederhof",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Nijmegen",
                "location": {
                    "postCode": "6525 ED",
                    "settlement": "Nijmegen",
                    "country": "The Netherlands"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "In this paper we relate a number of parsing algorithms which have been developed in very different areas of parsing theory, and which include deterministic algorithms, tabular algorithms, and a parallel algorithm. We show that these algorithms are based on the same underlying ideas. By relating existing ideas, we hope to provide an opportunity to improve some algorithms based on features of others. A second purpose of this paper is to answer a question which has come up in the area of tabular parsing, namely how to obtain a parsing algorithm with the property that the table will contain as little entries as possible, but without the possibility that two entries represent the same subderivation.",
    "pdf_parse": {
        "paper_id": "P94-1017",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "In this paper we relate a number of parsing algorithms which have been developed in very different areas of parsing theory, and which include deterministic algorithms, tabular algorithms, and a parallel algorithm. We show that these algorithms are based on the same underlying ideas. By relating existing ideas, we hope to provide an opportunity to improve some algorithms based on features of others. A second purpose of this paper is to answer a question which has come up in the area of tabular parsing, namely how to obtain a parsing algorithm with the property that the table will contain as little entries as possible, but without the possibility that two entries represent the same subderivation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Left-corner (LC) parsing is a parsing strategy which has been used in different guises in various areas of computer science. Deterministic LC parsing with k symbols of lookahead can handle the class of LC(k) grammars. Since LC parsing is a very simple parsing technique and at the same time is able to deal with left recursion, it is often used as an alternative to top-down (TD) parsing, which cannot handle left recursion and is generally less efficient.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "Nondeterministic LC parsing is the foundation of a very efficient parsing algorithm [7] , related to Tomita's algorithm and Earley's algorithm. It has one disadvantage however, which becomes noticeable when the grammar contains many rules whose right-hand sides begin with the same few grammars symbols, e.g.",
                "cite_spans": [
                    {
                        "start": 84,
                        "end": 87,
                        "text": "[7]",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "A ~ c~f~l I ~f~2 I ... where ~ is not the empty string. After an LC parser has recognized the first symbol X of such an c~, it will as next step predict all aforementioned rules. This amounts to much nondeterminism, which is detrimental both to the time-complexity and the space-complexity. *Supported by the Dutch Organisation for Scientific Research (NWO), under grant 00-62-518",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "A first attempt to solve this problem is to use predictive LR (PLR) parsing. PLR parsing allows simultaneous processing of a common prefix c~, provided that the left-hand sides of the rules are the same. However, in case we have e.g. the rules A --* c~t31 and B --~ ~/32, where again ~ is not the empty string but now A ~ B, then PLR parsing will not improve the efficiency. We therefore go one step further and discuss extended LR (ELR) and common-prefix (CP) parsing, which are algorithms capable of simultaneous processing of all common prefixes. ELR and CP parsing are the foundation of tabular parsing algorithms and a parallel parsing algorithm from the existing literature, but they have not been described in their own right.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "To the best of the author's knowledge, the various parsing algorithms mentioned above have not been discussed together in the existing literature. The main purpose of this paper is to make explicit the connections between these algorithms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "A second purpose of this paper is to show that CP and ELR parsing are obvious solutions to a problem of tabular parsing which can be described as follows. For each parsing algorithm working on a stack there is a realisation using a parse table, where the parse table allows sharing of computation between different search paths. For example, Tomita's algorithm [18] can be seen as a tabular realisation of nondeterministic LR parsing.",
                "cite_spans": [
                    {
                        "start": 361,
                        "end": 365,
                        "text": "[18]",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "At this point we use the term state to indicate the symbols occurring on the stack of the original algorithm, which also occur as entries in the parse table of its tabular realisation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "In general, powerful algorithms working on a stack lead to efficient tabular parsing algorithms, provided the grammar can be handled almost deterministically. In case the stack algorithm is very nondeterministic for a certain grammar however, sophistication which increases the number of states may lead to an increasing number of entries in the parse table of the tabular realization. This can be informally explained by the fact that each state represents the computation of a number of subderivations. If the number of states is increased then it is inevitable that at some point some states represent an overlapping collection of subderivations, which may lead to work being repeated during parsing. Furthermore, the parse forest (a compact representation of all parse trees) which is output by a tabular algorithm may in this case not be optimally dense.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "We conclude that we have a tradeoff between the case that the grammar allows almost deterministic parsing and the case that the stack algorithm is very nondeterministic for a certain grammar. In the former case, sophistication leads to less entries in the table, and in the latter case, sophistication leads to more entries, provided this sophistication is realised by an increase in the number of states. This is corroborated by empirical data from [1, 4] , which deal with tabular LR parsing.",
                "cite_spans": [
                    {
                        "start": 450,
                        "end": 453,
                        "text": "[1,",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 454,
                        "end": 456,
                        "text": "4]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "As we will explain, CP and ELR parsing are more deterministic than most other parsing algorithms for many grammars, but their tabular realizations can never compute the same subderivation twice. This represents an optimum in a range of possible parsing algorithms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "This paper is organized as follows. First we discuss nondeterministic left-corner parsing, and demonstrate how common prefixes in a grammar may be a source of bad performance for this technique.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "Then, a multitude of parsing techniques which exhibit better treatment of common prefixes is discussed. These techniques, including nondeterministic PLR, ELR, and CP parsing, have their origins in theory of deterministic, parallel, and tabular parsing. Subsequently, the application to parallel and tabular parsing is investigated more closely.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "Further, we briefly describe how rules with empty right-hand sides complicate the parsing process.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "The ideas described in this paper can be generalized to head-driven parsing, as argued in [9] .",
                "cite_spans": [
                    {
                        "start": 90,
                        "end": 93,
                        "text": "[9]",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "We will take some liberty in describing algorithms from the existing literature, since using the original descriptions would blur the similarities of the algorithms to one another. In particular, we will not treat the use of lookahead, and we will consider all algorithms working on a stack to be nondeterministic. We will only describe recognition algorithms. Each of the algorithms can however be easily extended to yield parse trees as a side-effect of recognition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "The notation used in the sequel is for the most part standard and is summarised below.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "A context-free grammar G = (T, N, P, S) consists of two finite disjoint sets N and T of nonterminals and terminals, respectively, a start symbol S E N, and a finite set of rules P. Every rule has the form A --* c~, where the left-hand side (lhs) A is an element from N and the right-hand side (rhs) a is an element from V*, where V denotes (NUT). P can also be seen as a relation on N \u00d7 V*.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "We use symbols A, B, C,... to range over N, symbols a, b, c,... to range over T, symbols X, ]I, Z to range over V, symbols c~, [3, 7,-. . to range over V*, and v, w, x,... to range over T*. We let e denote the empty string. The notation of rules A --* al, A --* a2,.., with the same lhs is often simplified to A ~ c~1]a21... A rule of the form A --~ e is called an epsilon rule.",
                "cite_spans": [
                    {
                        "start": 127,
                        "end": 135,
                        "text": "[3, 7,-.",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "We assume grammars do not have epsilon rules unless stated otherwise. The relation P is extended to a relation ~ on V* \u00d7 V* as usual. The reflexive and transitive closure of ~ is denoted by --**.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "We define: B L A if and only if A --* Be for some a. The reflexive and transitive closure of / is denoted by /*, and is called the left-corner relation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "We say two rules A --* al and B --* a2 have a common prefix [ The initial configuration is (Init, w), where Init E Alph is a distinguished stack symbol, and w is the input.",
                "cite_spans": [
                    {
                        "start": 60,
                        "end": 61,
                        "text": "[",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "relation ~-. Thus, (F,v) ~-(F',v') denotes that (F',v')",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The steps of an automaton are specified by means of the",
                "sec_num": null
            },
            {
                "text": "is obtainable from (F, v) by one step of the automaton. The reflexive and transitive closure of ~-is denoted by F-*. The input w is accepted if (Init, w) F-* (Fin, e),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The steps of an automaton are specified by means of the",
                "sec_num": null
            },
            {
                "text": "where Fin E Alph is a distinguished stack symbol.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The steps of an automaton are specified by means of the",
                "sec_num": null
            },
            {
                "text": "For the definition of left-corner (LC) recognition [7] we need stack symbols (items) of the form",
                "cite_spans": [
                    {
                        "start": 51,
                        "end": 54,
                        "text": "[7]",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LC parsing",
                "sec_num": null
            },
            {
                "text": "[A --~ a \u2022 [3],",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LC parsing",
                "sec_num": null
            },
            {
                "text": "where A --~ c~[3 is a rule, and a \u00a2 e. (Remember that we do not allow epsilon rules.) The informal meaning of an item is \"The part before the dot has just been recognized, the first symbol after the dot is to be recognized next\". For technical reasons we also need the items [S' ~ ..S] and [S' --~ S .], where S' is a fresh symbol. Formally:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LC parsing",
                "sec_num": null
            },
            {
                "text": "I LC = {[A --* a \u2022 f]l A --* af \u2022 Pt A(c~ \u00a2 eVA --S')}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LC parsing",
                "sec_num": null
            },
            {
                "text": "where pt represents the augmented set of rules, consisting of the rules in P plus the extra rule S t --~ S.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LC parsing",
                "sec_num": null
            },
            {
                "text": "[S t --* S .]. Transitions are allowed according to the following clauses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Algorithm 1 (Left-corner) ALe= (T,I Lc, Init,~-, Fin), Init = IS' ---* \u2022 S], Fin =",
                "sec_num": null
            },
            {
                "text": "* f \u2022 C'/], av) ~- (F[B --~/3 \u2022 CT][A ~ a \u2022 ~], v) where there is A --* ac~ \u2022 P~ such that A [* C 2. (F[A --~ a \u2022 aft], av) ~-(F[A --* c~a \u2022/3], v) 3. (FIB ~ [3 \u2022 C'/][d ~ ~ .], v) (rib ~ f \u2022 C'/][D ---, A \u2022 6], v)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(FIB --",
                "sec_num": "1."
            },
            {
                "text": "where",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(FIB --",
                "sec_num": "1."
            },
            {
                "text": "there is D ~ A5 \u2022 pt such that D L* C 4. (FIB --* [3 \u2022 A'/][A ---* a .], v) ~-(FIB ~ fA \u2022 '/], v)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(FIB --",
                "sec_num": "1."
            },
            {
                "text": "The conditions using the left-corner relation Z* in the first and third clauses together form a feature which is called top-down (TD) filtering. TD filtering makes sure that subderivations that are being computed bottomup may eventually grow into subderivations with the required root. TD filtering is not necessary for a correct algorithm, but it reduces nondeterminism, and guarantees the correct-prefix property, which means that in case of incorrect input the parser does not read past the first incorrect character.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(FIB --",
                "sec_num": "1."
            },
            {
                "text": "Example 1 Consider the grammar with the following rules:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(FIB --",
                "sec_num": "1."
            },
            {
                "text": "E ---* E+T[TTE[T T ~ T*FIT**F IF F ---* a",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(FIB --",
                "sec_num": "1."
            },
            {
                "text": "It is easy to see that E / E,T Z E,T L T, F / T. The relation L* contains g but from the reflexive closure it also contains F L* F and from the transitive closure it also contains F L* E.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(FIB --",
                "sec_num": "1."
            },
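            {
                "text": "As an illustration of the relations just described (our own sketch, not part of the original paper; the rule encoding is an assumption, with 'up' standing for the operator \u2191), the left-corner relation \u2220 and its reflexive and transitive closure \u2220* for the grammar of Example 1 can be computed as follows:\n\n```python\n# Left-corner relation for the grammar of Example 1 (our own encoding).\n# B is a left corner of A iff there is a rule A -> B beta.\nrules = [('E', ['E', '+', 'T']), ('E', ['T', 'up', 'E']), ('E', ['T']),\n         ('T', ['T', '*', 'F']), ('T', ['T', '*', '*', 'F']), ('T', ['F']),\n         ('F', ['a'])]\nnonterminals = {'E', 'T', 'F'}\n\n# Direct relation: E lc E, T lc E, T lc T, F lc T.\nlc = {(rhs[0], lhs) for lhs, rhs in rules if rhs[0] in nonterminals}\n\n# Reflexive-transitive closure lc*: adds e.g. F lc* F and F lc* E.\nlcstar = {(n, n) for n in nonterminals} | set(lc)\nchanged = True\nwhile changed:\n    changed = False\n    for (a, b) in list(lcstar):\n        for (c, d) in list(lcstar):\n            if b == c and (a, d) not in lcstar:\n                lcstar.add((a, d))\n                changed = True\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(FIB --",
                "sec_num": "1."
            },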
            {
                "text": "The recognition of a * a is realised by: [] LC parsing with k symbols of lookahead can handle deterministically the so called LC(k) grammars. This class of grammars is formalized in [13] . 1 How LC parsing can be improved to handle common su~xes efficiently is discussed in [6] ; in this paper we restrict our attention to common prefixes.",
                "cite_spans": [
                    {
                        "start": 182,
                        "end": 186,
                        "text": "[13]",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 274,
                        "end": 277,
                        "text": "[6]",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(FIB --",
                "sec_num": "1."
            },
            {
                "text": "[E' --* \u2022 E-I- a,a 1 [E'--~ \u2022E][F--*a\u2022] *a 2 [E'--*\u2022E][T~F\u2022] *a 3 [E'--~QE][T--*T.*F] *a 4 [E'~ \u2022E][T~T.\u2022F] a 5 [E'~.EI[T--*T.\u2022F][F---*ae] 6 [E' ---* \u2022 E][T ---* T * F \u2022] 7 [E'~\u2022E][E~T\u2022] 8 [E'~E\u2022]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(FIB --",
                "sec_num": "1."
            },
            {
                "text": "In this section we investigate a number of algorithms which exhibit a better treatment of common prefixes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "PLR, ELR, and CP parsing",
                "sec_num": null
            },
            {
                "text": "Predictive LR (PLR) parsing with k symbols of lookahead was introduced in [17] as an algorithm which yields efficient parsers for a subset of the LR(k) grammars [16] and a superset of the LC(k) grammars. How deterministic PLR parsing succeeds in handling a larger class of grammars (the PLR(k) grammars) than the LC(k) grammars can be explained by identifying PLR parsing 1In [17] a different definition of the LC(k) grammars may be found, which is not completely equivalent.",
                "cite_spans": [
                    {
                        "start": 74,
                        "end": 78,
                        "text": "[17]",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 161,
                        "end": 165,
                        "text": "[16]",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 376,
                        "end": 380,
                        "text": "[17]",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
            {
                "text": "for some grammar G with LC parsing for some grammar G t which results after applying a transformation called left-factoring.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
            {
                "text": "Left-factoring consists of replacing two or more rules A ~ a/31 [a/32[... with a common prefix a by the rules A ~ hA' and A' --* ~311f~2[..., where A' is a fresh nonterminal. The effect on LC parsing is that a choice between rules is postponed until after all symbols of a are completely recognized. Investigation of the next k symbols of the remaining input may then allow a choice between the rules to be made deterministically.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
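            {
                "text": "The left-factoring transformation just described can be sketched as follows (our own illustration, not from the paper; the function name, the rule encoding, and the fresh-name handling are assumptions):\n\n```python\n# Left-factoring sketch: rules A -> alpha beta1 | alpha beta2 | ... with a\n# common prefix alpha become A -> alpha A1 and A1 -> beta1 | beta2 | ...\ndef left_factor(lhs, rhss, fresh):\n    # Longest common prefix of all right-hand sides.\n    prefix = []\n    for symbols in zip(*rhss):\n        if len(set(symbols)) == 1:\n            prefix.append(symbols[0])\n        else:\n            break\n    if not prefix:\n        return [(lhs, rhs) for rhs in rhss]\n    # A -> prefix fresh, plus fresh -> remainder of each original rhs.\n    return ([(lhs, prefix + [fresh])] +\n            [(fresh, rhs[len(prefix):]) for rhs in rhss])\n```\n\nFor the rules T \u2192 T*F | T**F this yields T \u2192 T*T1 together with T1 \u2192 F | *F, writing T1 for the fresh nonterminal.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },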
            {
                "text": "The PLR algorithm is formalised in [17] by transforming a PLR(k) grammar into an LL(k) grammar and then assuming the standard realisation of LL(k) parsing. When we consider nondeterministic top-down parsing instead of LL(k) parsing, then we obtain the new formulation of nondeterministic PLR(0) parsing below.",
                "cite_spans": [
                    {
                        "start": 35,
                        "end": 39,
                        "text": "[17]",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
            {
                "text": "We first need to define another kind of item, viz. of the form [A --* ~] such that there is at least one rule of the form A --* a/3 for some ft. Formally: ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
            {
                "text": "I PLR = {[A ---* ~] [ A --* a/3",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
            {
                "text": "[E' ~ ] a * a [E' ][F a] \u2022 a [E' --~ ][T ---* F] * a [E' --* ][T --* T] * a [E' --* ][T ~ T .] a : [E' E]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
            {
                "text": "Comparing these configurations with those reached by the LC recognizer, we see that here after An extended context-free grammar has right-hand sides consisting of arbitrary regular expressions over V. This requires an LR parser for an extended grammar (an ELR parser) to behave differently from normal LR parsers.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
            {
                "text": "The behaviour of a normal LR parser upon a reduction with some rule A --* a is very simple: it pops la[ states from the stack, revealing, say, state Q; it then pushes state goto(Q, A). (We identify a state with its corresponding set of items.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
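            {
                "text": "The reduction step just described can be sketched directly (our own sketch, not from the paper; the stack and goto encodings are assumptions):\n\n```python\n# Normal LR reduction with rule A -> alpha: pop len(alpha) states,\n# revealing state Q, then push goto(Q, A).\ndef lr_reduce(stack, lhs, rhs_len, goto):\n    # stack: list of states; goto: dict mapping (state, symbol) to a state.\n    if rhs_len > 0:\n        del stack[-rhs_len:]\n    q = stack[-1]\n    stack.append(goto[(q, lhs)])\n    return stack\n```\n\nIt is precisely this fixed number of popped states, |\u03b1|, that is unavailable for extended grammars.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },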
            {
                "text": "For extended grammars the behaviour upon a reduction cannot be realised in this way since the regular expression of which the rhs is composed may describe strings of various lengths, so that it is unknown how many states need to be popped.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
            {
                "text": "In [11] this problem is solved by forcing the parser to decide at each call goto(Q, X) whether a) X is one more symbol of an item in Q of which some symbols have already been recognized, or whether b) X is the first symbol of an item which has been introduced in Q by means of the closure function.",
                "cite_spans": [
                    {
                        "start": 3,
                        "end": 7,
                        "text": "[11]",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
            {
                "text": "In the second case, a state which is a variant of goto(Q,X) is pushed on top of state Q as usual. In the first case, however, state Q on top of the stack is replaced by a variant of goto(Q, X). This is safe since we will never need to return to Q if after some more steps we succeed in recognizing some rule corresponding with one of the items in Q. A consequence of the action in the first case is that upon reduction we need to pop only one state off the stack. Further work in this area is reported in [5] , which treats nondeterministic ELR parsing and therefore does not regard it as an obstacle if a choice between cases a) and b) cannot be uniquely made.",
                "cite_spans": [
                    {
                        "start": 505,
                        "end": 508,
                        "text": "[5]",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
            {
                "text": "We are not concerned with extended context-free grammars in this paper. However, a very interesting algorithm results from ELR parsing if we restrict its application to ordinary context-free grammars. (We will maintain the name \"extended LR\" to stress the origin of the algorithm.) This results in the new nondeterministic ELR(0) algorithm that we describe below, derived from the formulation of ELK parsing in [5] . ",
                "cite_spans": [
                    {
                        "start": 411,
                        "end": 414,
                        "text": "[5]",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predictive LR parsing",
                "sec_num": null
            },
            {
                "text": "For ELR parsing however, we need two goto functions, goto I and goto2, one for kernel items (i.e. those in I LC) and one for nonkernel items (the others). These are defined by",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "goto(q,x) = closure({[A ---* aX \u2022/3] I [A ~ a \u2022 X/3] E Q})",
                "sec_num": null
            },
            {
                "text": "(a # e VA = S')})",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "gotol(Q,X) = closure({[A --* aX \u2022 fl] I [A ---* (~ \u2022 X/3] E Q A",
                "sec_num": null
            },
            {
                "text": "At each shift (where X is some terminal) and each reduce with some rule A --* a (where X is A) we may nondeterministically apply gotol, which corresponds with case a), or goto2, which corresponds with case b). Of course, one or both may not be defined on Q and X, because gotoi(Q, X) may be @, for i E {1, 2}. Now remark that when using goto I and goto2, each reachable set of items contains only items of the form A --* a \u2022/3, for some fixed string a, plus some nonkernel items. We will ignore the nonkernel items since they can be derived from the kernel items by means of the closure function. Pseudo ELR parsing can be more easily realised than full ELR parsing, but the correct-prefix property can no longer be guaranteed. Pseudo ELR parsing is the foundation of a tabular algorithm in [20] .",
                "cite_spans": [
                    {
                        "start": 791,
                        "end": 795,
                        "text": "[20]",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "goto2(Q,X ) = closure({[A ~ X \u2022/3] I [A --* \u2022 X/3] 6 Q A A # S'})",
                "sec_num": null
            },
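            {
                "text": "The nondeterministic choice between goto1 (case a) and goto2 (case b) can be sketched on dotted items as follows (our own encoding, not from the paper; the closure step and the side conditions on S' are omitted for brevity):\n\n```python\n# An item (lhs, done, todo) stands for [lhs -> done . todo].\n# goto1 advances items whose dot is not at the start (case a);\n# goto2 advances items whose dot is at the start (case b).\ndef goto1(q, x):\n    return {(lhs, done + (x,), todo[1:])\n            for (lhs, done, todo) in q\n            if todo and todo[0] == x and done}\n\ndef goto2(q, x):\n    return {(lhs, (x,), todo[1:])\n            for (lhs, done, todo) in q\n            if todo and todo[0] == x and not done}\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "goto2(Q,X) = closure({[A \u2192 X \u2022 \u03b2] | [A \u2192 \u2022 X\u03b2] \u2208 Q \u2227 A \u2260 S'})",
                "sec_num": null
            },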
            {
                "text": "One of the more complicated aspects of the ELR algorithm is the treatment of the sets of nonterminals in the left-hand sides of items. A drastically simplified algorithm is the basis of a tabular algorithm in [21] .",
                "cite_spans": [
                    {
                        "start": 209,
                        "end": 213,
                        "text": "[21]",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common-prefix parsing",
                "sec_num": null
            },
            {
                "text": "Since in [21] the algorithm itself is not described but only its tabular realisation, 2 we take the liberty of giving this algorithm our own name: common-prefix (CP) parsing, since it treats all rules with a common prefix simultaneously, a The simplification consists of omitting the sets of nonterminals in the left-hand sides of items: ",
                "cite_spans": [
                    {
                        "start": 9,
                        "end": 13,
                        "text": "[21]",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common-prefix parsing",
                "sec_num": null
            },
            {
                "text": "I Cp = {[--* s] [ A ~ s/3",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common-prefix parsing",
                "sec_num": null
            },
            {
                "text": "V[-~/3][4_, s], v) F-(V[--*/3A], v)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common-prefix parsing",
                "sec_num": null
            },
            {
                "text": "where there are A --* s, B --~/3A'7 E pt The simplification which leads to the CP algorithm inevitably causes the correct-prefix property to be lost. Example 4 Consider again the grammar from Example 1. It is clear that a\u00f7a T ais not acorrect string according to this grammar. The CP algorithm may go through the following sequence of configurations: 2An attempt has been made in [19] but this paper does not describe the algorithm in its full generality.",
                "cite_spans": [
                    {
                        "start": 380,
                        "end": 384,
                        "text": "[19]",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common-prefix parsing",
                "sec_num": null
            },
            {
                "text": "3The original algorithm in [21] applies an optimization concerning unit rules, irrelevant to our discussion. We see that in",
                "cite_spans": [
                    {
                        "start": 27,
                        "end": 31,
                        "text": "[21]",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common-prefix parsing",
                "sec_num": null
            },
            {
                "text": "Step 9 the first incorrect symbol T is read, but recognition then continues. Eventually, the recognition process is blocked in some unsuccessful configuration, which is guaranteed to happen for any incorrect input 4. In general however, after reading the first incorrect symbol, the algorithm may perform an unbounded number of steps before it halts. (Imagine what happens for input of the forma+aTa\u00f7a+a+...+a.) []",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common-prefix parsing",
                "sec_num": null
            },
            {
                "text": "Nondeterministic push-down automata can be realised efficiently using parse tables [1] . A parse table consists of sets Ti,j of items, for 0 < i < j _~ n, where al ...an represents the input. The idea is that an item is only stored in a set Ti,j if the item represents recognition of the part of the input ai+l \u2022 \u2022 \u2022 aj. We will first discuss a tabular form of CP parsing, since this is the most simple parsing technique discussed above. We will then move on to the more difficult ELR technique. Tabular PLR parsing is fairly straightforward and will not be discussed in this paper.",
                "cite_spans": [
                    {
                        "start": 83,
                        "end": 86,
                        "text": "[1]",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tabular parsing",
                "sec_num": null
            },
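            {
                "text": "The table of sets Ti,j can be sketched as a small data structure (our own sketch, not from the paper; all names are assumptions):\n\n```python\n# Parse table: an item is stored in t[(i, j)] only if it represents\n# recognition of the input part a_{i+1} ... a_j.\nclass ParseTable:\n    def __init__(self, n):\n        self.t = {(i, j): set()\n                  for i in range(n + 1) for j in range(i, n + 1)}\n\n    def add(self, i, j, item):\n        # Returns True if the item is new and still has to be processed.\n        if item in self.t[(i, j)]:\n            return False\n        self.t[(i, j)].add(item)\n        return True\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tabular parsing",
                "sec_num": null
            },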
            {
                "text": "Tabular CP parsing CP parsing has the following tabular realization: For an example, see Figure 1 . Tabular CP parsing is related to a variant of CYK parsing with TD filtering in [5] . A form of tabular 4unless the grammar is cyclic, in which case the parser may not terminate, both on correct and on incorrect input from a certain input position. For input position i these nonterminals D are given by",
                "cite_spans": [
                    {
                        "start": 179,
                        "end": 182,
                        "text": "[5]",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 89,
                        "end": 97,
                        "text": "Figure 1",
                        "ref_id": "FIGREF9"
                    }
                ],
                "eq_spans": [],
                "section": "Tabular parsing",
                "sec_num": null
            },
            {
                "text": "Provided each set Si is computed just after completion of the i-th column of the table, the first and third clauses can be simplified to: With minor differences, the above tabular ELR algorithm is described in [21] . A tabular version of pseudo ELR parsing is presented in [20] . Some useful data structures for practical implementation of tabular and non-tabular PLR, ELR and CP parsing are described",
                "cite_spans": [
                    {
                        "start": 210,
                        "end": 214,
                        "text": "[21]",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 273,
                        "end": 277,
                        "text": "[20]",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Si = {D ] 3j3[A ~ fl] E Td,i 3B --, tiC\"/e Pt[B E A A D Z* C]}",
                "sec_num": null
            },
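            {
                "text": "The sets Si above can be computed from column i of the table as follows (our own sketch, not from the paper; the item, rule, and relation encodings are assumptions):\n\n```python\n# S_i = {D | some item [Delta -> beta] in column i, some rule\n# B -> beta C gamma with B in Delta, and D lc* C}.\ndef compute_si(column_items, rules, lcstar):\n    # column_items: (delta, beta) pairs from the sets T[j][i];\n    # rules: (lhs, rhs) pairs; lcstar: set of (D, C) pairs.\n    si = set()\n    for delta, beta in column_items:\n        for lhs, rhs in rules:\n            if (lhs in delta and len(rhs) > len(beta)\n                    and tuple(rhs[:len(beta)]) == tuple(beta)):\n                c = rhs[len(beta)]\n                si |= {d for (d, cc) in lcstar if cc == c}\n    return si\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Si = {D | \u2203j \u2203[\u0394 \u2192 \u03b2] \u2208 Tj,i \u2203B \u2192 \u03b2C\u03b3 \u2208 P\u2020 [B \u2208 \u0394 \u2227 D \u2220* C]}",
                "sec_num": null
            },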
            {
                "text": "Finding an optimal tabular algorithm In [14] Schabes derives the LC algorithm from LR parsing similar to the way that ELR parsing can be derived from LR parsing. The LC algorithm is obtained by not only splitting up the goto function into goto 1 and goto 2 but also splitting up goto~ even further, so that it nondeterministically yields the closure of one single kernel item. (This idea was described earlier in [5] , and more recently in [10] . ) Schabes then argues that the LC algorithm can be determinized (i.e. made more deterministic) by manipulating the goto functions. One application of this idea is to take a fixed grammar and choose different goto functions for different parts of the grammar, in order to tune the parser to the grammar.",
                "cite_spans": [
                    {
                        "start": 40,
                        "end": 44,
                        "text": "[14]",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 413,
                        "end": 416,
                        "text": "[5]",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 440,
                        "end": 444,
                        "text": "[10]",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 447,
                        "end": 448,
                        "text": ")",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "in [S],",
                "sec_num": null
            },
            {
                "text": "In this section we discuss a different application of this idea: we consider various goto functions which are global, i.e. which are the same for all parts of a grammar. One example is ELR parsing, as its goto~ function can be seen as a determinized version of the goto 2 function of LC parsing. In a similar way we obtain PLR parsing. Traditional LR parsing is obtained by taking the full determinization, i.e. by taking the normal goto function which is not split up. 6 6Schabes more or less also argues that LC itself can be obtained by determinizing TD parsing. (In lieu of TD parsing he mentions Earley's algorithm, which is its tabular realisation.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "in [S],",
                "sec_num": null
            },
            {
                "text": "We conclude that we have a family consisting of LC, PLR, ELR, and LR parsing, which are increasingly deterministic. In general, the more deterministic an algorithm is, the more parser states it requires. For example, the LC algorithm requires a number of states (the items in I Lc) which is linear in the size of the grammar. By contrast, the LR algorithm requires a number of states (the sets of items) which is exponential in the size of the grammar [2] .",
                "cite_spans": [
                    {
                        "start": 452,
                        "end": 455,
                        "text": "[2]",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "in [S],",
                "sec_num": null
            },
            {
                "text": "The differences in the number of states complicates the choice of a tabular algorithm as the one giving optimal behaviour for all grammars. If a grammar is very simple, then a sophisticated algorithm such as LR may allow completely deterministic parsing, which requires a linear number of entries to be added to the parse table, measured in the size of the grammar.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "in [S],",
                "sec_num": null
            },
            {
                "text": "If, on the other hand, the grammar is very ambiguous such that even LR parsing is very nondeterministic, then the tabular realisation may at worst add each state to each set Tij, so that the more states there are, the more work the parser needs to do. This favours simple algorithms such as LC over more sophisticated ones such as LR. Furthermore, if more than one state represents the same subderivation, then computation of that subderivation may be done more than once, which leads to parse forests (compact representations of collections of parse trees) which are not optimally dense [1, 12, 7] .",
                "cite_spans": [
                    {
                        "start": 588,
                        "end": 591,
                        "text": "[1,",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 592,
                        "end": 595,
                        "text": "12,",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 596,
                        "end": 598,
                        "text": "7]",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "in [S],",
                "sec_num": null
            },
            {
                "text": "Schabes proposes to tune a parser to a grammar, or in other words, to use a combination of parsing techniques in order to find an optimal parser for a certain grammar. 7 This idea has until now not been realised. However, when we try to find a single parsing algorithm which performs well for all grammars, then the tabular ELR algorithm we have presented may be a serious candidate, for the following reasons:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "in [S],",
                "sec_num": null
            },
            {
                "text": "\u2022 For M1 i, j, and a at most one item of the form [A --, ct] is added to Tij. Therefore, identical subderivations are not computed more than once. (This is a consequence of our optimization in Algorithm 6.) Note that this also holds for the tabular CP algorithm.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "in [S],",
                "sec_num": null
            },
            {
                "text": "\u2022 ELR parsing guarantees the correct-prefix property, contrary to the CP algorithm. This prevents computation of all subderivations which are useless with regard to the already processed input.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "in [S],",
                "sec_num": null
            },
            {
                "text": "\u2022 ELR parsing is more deterministic than LC and PLR parsing, because it allows shared processing of all common prefixes. It is hard to imagine a practical parsing technique more deterministic than ELR parsing which also satisfies the previous two properties.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "in [S],",
                "sec_num": null
            },
            {
                "text": "In particular, we argue in [8] that refinement of the LR technique in such a way that the first property above holds whould require an impractically large number of LR states.",
                "cite_spans": [
                    {
                        "start": 27,
                        "end": 30,
                        "text": "[8]",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "in [S],",
                "sec_num": null
            },
            {
                "text": "7This is reminiscent of the idea of \"optimal cover\" [5] .",
                "cite_spans": [
                    {
                        "start": 52,
                        "end": 55,
                        "text": "[5]",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "in [S],",
                "sec_num": null
            },
            {
                "text": "Epsilon rules cause two problems for bottom-up parsing. The first is non-termination for simple realisations of nondeterminism (such as backtrack parsing) caused by hidden left recursion [7] . The second problem occurs when we optimize TD filtering e.g. using the sets Si: it is no longer possible to completely construct a set Si before it is used, because the computation of a derivation deriving the empty string requires Si for TD filtering but at the same time its result causes new elements to be added to S~. Both problems can be overcome [8] .",
                "cite_spans": [
                    {
                        "start": 187,
                        "end": 190,
                        "text": "[7]",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 546,
                        "end": 549,
                        "text": "[8]",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Epsilon rules",
                "sec_num": null
            },
            {
                "text": "We have discussed a range of different parsing algorithms, which have their roots in compiler construction, expression parsing, and natural language processing.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": null
            },
            {
                "text": "We have shown that these algorithms can be described in a common framework. We further discussed tabular realisations of these algorithms, and concluded that we have found an optimal algorithm, which in most cases leads to parse tables containing fewer entries than for other algorithms, but which avoids computing identical subderivations more than once.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The author acknowledges valuable correspondence with Klaas Sikkel, Ran6 Leermakers, Franqois Barth61emy, Giorgio Satta, Yves Schabes, and Fr6d@ric Voisin.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            },
            {
                "text": "CP parsing without top-down filtering (i.e. without the checks concerning the left-corner relation /*) is the main algorithm in [21] .Without the use of top-down filtering, the references to [---~/9] in Clauses 1 and 3 are clearly not of much use any more. When we also remove the use of these items, then these clauses become:Consider again the grammar from Example 1 and the (incorrect) input a + a T a. After execution of the tabular common-prefix algorithm, the table is as given here. The sets Tj,i are given at the j-th row and i-th column. The items which correspond with those from Example 4 are labelled with (0), (1),... These labels also indicate the order in which these items are added to the table.",
                "cite_spans": [
                    {
                        "start": 128,
                        "end": 132,
                        "text": "[21]",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "annex",
                "sec_num": null
            },
            {
                "text": "for a = aiwhere there is A --* ac~ \u2022 ptCP parsing However, for certain i there may be many [A ~ /9] \u2022 Tj,c-1, for some j, and each may give rise to a different A' which is non-empty. In this way, Clause 1 may add several items [A' --~ a] to Tc-I,C, some possibly with overlapping sets A'. Since items represent computation of subderivations, the algorithm may therefore compute the same subderivation several times.In the resulting algorithm, no set Tc,j depends on any set Tg,h with g < i. In [15] this fact is used to construct a parallel parser with n processors Po,..., Pn-1, with each Pi processing the sets Ti,j for all j > i. The flow of data is strictly from right to left, i.e. items computed by Pc are only passed on to P0,..., Pc-1.",
                "cite_spans": [
                    {
                        "start": 494,
                        "end": 498,
                        "text": "[15]",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Add [--+ a] to Tc-I,C",
                "sec_num": "1."
            },
            {
                "text": "The tabular form of ELR parsing allows an optimization which constitutes an interesting example of how a tabular algorithm can have a property not shared by its nondeterministic origin. 5 First note that we can compute the columns of a parse table strictly from left to right, that is, for fixed i we can compute all sets Tj,c before we compute the sets Tj,C-F1 \u2022 If we formulate a tabular ELR algorithm in a naive way analogously to Algorithm 5, as is done in [5] , then for example the first clause is given by: ",
                "cite_spans": [
                    {
                        "start": 186,
                        "end": 187,
                        "text": "5",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 461,
                        "end": 464,
                        "text": "[5]",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tabular ELR parsing",
                "sec_num": null
            },
            {
                "text": "A A A Z* C]} is non-empty 5This is reminiscent of the admissibility tests [3] , which are applicable to tabular realisations of logical push-down automata, but not to these automata themselves. .., n, in this order, perform one of the following steps until no more items can be added.",
                "cite_spans": [
                    {
                        "start": 74,
                        "end": 77,
                        "text": "[3]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(~,B --+ /9C~ \u2022 Pt[B \u2022",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Tj,i where there is A --+ a E pt with A E A', and A\" = {D",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Add ; A] To Tj",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Ttl",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "6",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Add [A\" --. A] to Tj,i for [A' --* a]E Tj,i where there is A --+ a E pt with A E A', and A\" = {D [ 3h3[A --* /9] E TtL,j3D ----, A6, B ----, /9C',/ E pt[B E A A D Z* C]} is non-empty",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "\u2022 Th,j where there is A --* a \u2022 pt with A \u2022 A', and A\" = {B \u2022 A ] B --~/9A7 \u2022 pt} is non-empty Report recognition of the input if",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Add ; /Ga] To Th",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Tj",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Add [A\" --./gA] to Th,i for [A' --* a] E Tj,/ and [A --,/9] \u2022 Th,j where there is A --* a \u2022 pt with A \u2022 A', and A\" = {B \u2022 A ] B --~/9A7 \u2022 pt} is non-empty Report recognition of the input if [{S'} --* S] \u2022 T0,,~. Informally, the top-down filtering in the first and third clauses is realised by investigating all left corners D of nonterminals C (i.e. D Z* C) which are expected References",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "The structure of shared forests in ambiguous parsing",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Billot",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Lang",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "27th Annual Meeting of the ACL",
                "volume": "",
                "issue": "",
                "pages": "143--151",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Billot and B. Lang. The structure of shared forests in ambiguous parsing. In 27th Annual Meet- ing of the ACL, 143-151, 1989.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "The computational complexity of GLR parsing",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Johnson",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "Generalized LR Parsing",
                "volume": "3",
                "issue": "",
                "pages": "35--42",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M. Johnson. The computational complexity of GLR parsing. In M. Tomita, editor, Generalized LR Parsing, chapter 3, 35-42. Kluwer Academic Publishers, 1991.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Complete evaluation of Horn clauses: An automata theoretic approach",
                "authors": [
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Lang",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "",
                "volume": "913",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "B. Lang. Complete evaluation of Horn clauses: An automata theoretic approach. Rapport de Recherche 913, Institut National de Recherche en Informatique et en Automatique, Rocquencourt, France, November 1988.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "An empirical comparison of generalized LR tables",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Lankhorst",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "Tomita's Algorithm: Extensions and Applications, Proc. of the first Twente Workshop on Language Technology",
                "volume": "",
                "issue": "",
                "pages": "91--68",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M. Lankhorst. An empirical comparison of gener- alized LR tables. In R. Heemels, A. Nijholt, and K. Sikkel, editors, Tomita's Algorithm: Extensions and Applications, Proc. of the first Twente Work- shop on Language Technology, 87-93. University of Twente, September 1991. Memoranda Informatica 91-68.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "How to cover a grammar",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Leermakers",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "27th Annual Meeting of the ACL",
                "volume": "",
                "issue": "",
                "pages": "135--142",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. Leermakers. How to cover a grammar. In 27th Annual Meeting of the ACL, 135-142, 1989.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "A recursive ascent Earley parser",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Leermakers",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "Information Processing Letters",
                "volume": "41",
                "issue": "2",
                "pages": "87--91",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. Leermakers. A recursive ascent Earley parser. Information Processing Letters, 41(2):87- 91, February 1992.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Generalized left-corner parsing",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "J"
                        ],
                        "last": "Nederhof",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Sixth Conference of the European Chapter of the ACL",
                "volume": "",
                "issue": "",
                "pages": "305--314",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M.J. Nederhof. Generalized left-corner parsing. In Sixth Conference of the European Chapter of the ACL, 305-314, 1993.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "A multidisciplinary approach to a parsing algorithm",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "J"
                        ],
                        "last": "Nederhof",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Natural Language Parsing: Methods and Formalisms, Proc. of the sixth Twente Workshop on Language Technology",
                "volume": "",
                "issue": "",
                "pages": "85--98",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M.J. Nederhof. A multidisciplinary approach to a parsing algorithm. In K. Sikkel and A. Ni- jholt, editors, Natural Language Parsing: Methods and Formalisms, Proc. of the sixth Twente Work- shop on Language Technology, 85-98. University of Twente, 1993.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "An extended theory of head-driven parsing",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "J"
                        ],
                        "last": "Nederhof",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Satta",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M.J. Nederhof and G. Satta. An extended theory of head-driven parsing. In this proceedings.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Generalized LR parsing and attribute evaluation",
                "authors": [
                    {
                        "first": "P",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "Oude",
                        "middle": [],
                        "last": "Luttighuis",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Sikkel",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Third International Workshop on Parsing Technologies",
                "volume": "",
                "issue": "",
                "pages": "219--233",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "P. Oude Luttighuis and K. Sikkel. Generalized LR parsing and attribute evaluation. In Third Inter- national Workshop on Parsing Technologies, 219- 233, Tilburg (The Netherlands) and Durbuy (Bel- gium), August 1993.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Parsing extended LR(k) grammars",
                "authors": [
                    {
                        "first": "P",
                        "middle": [
                            "W"
                        ],
                        "last": "Purdom",
                        "suffix": ""
                    },
                    {
                        "first": "Jr",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "A"
                        ],
                        "last": "Brown",
                        "suffix": ""
                    }
                ],
                "year": 1981,
                "venue": "Acta Informatica",
                "volume": "15",
                "issue": "",
                "pages": "115--127",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "P.W. Purdom, Jr. and C.A. Brown. Parsing extended LR(k) grammars. Acta Informatica, 15:115-127, 1981.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Parser Generation for Interactive Environments",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Rekers",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Rekers. Parser Generation for Interactive Envi- ronments. PhD thesis, University of Amsterdam, 1992.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Deterministic left corner parsing",
                "authors": [
                    {
                        "first": "D",
                        "middle": [
                            "J"
                        ],
                        "last": "Rosenkrantz",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [
                            "M"
                        ],
                        "last": "Lewis",
                        "suffix": ""
                    }
                ],
                "year": 1970,
                "venue": "IEEE Conference Record of the 11th Annual Symposium on Switching and Automata Theory",
                "volume": "",
                "issue": "",
                "pages": "139--152",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "D.J. Rosenkrantz and P.M. Lewis II. Deterministic left corner parsing. In IEEE Conference Record of the 11th Annual Symposium on Switching and Automata Theory, 139-152, 1970.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Polynomial time and space shiftreduce parsing of arbitrary context-free grammars",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Schabes",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "29th Annual Meeting of the ACL",
                "volume": "",
                "issue": "",
                "pages": "106--113",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Y. Schabes. Polynomial time and space shift- reduce parsing of arbitrary context-free grammars. In 29th Annual Meeting of the ACL, 106-113, 1991.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "A parallel bottomup Tomita parser",
                "authors": [
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Sikkel",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Lankhorst",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "1. Konferenz \"Verarbeitung Natiirlicher Sprache",
                "volume": "",
                "issue": "",
                "pages": "238--247",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "K. Sikkel and M. Lankhorst. A parallel bottom- up Tomita parser. In 1. Konferenz \"Verarbeitung Natiirlicher Sprache\", 238-247, Nfirnberg, October 1992. Springer-Verlag.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "LR(k) and LL(k) Parsing",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Sippu",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Soisalon-Soininen",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Parsing Theory",
                "volume": "H",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Sippu and E. Soisalon-Soininen. Parsing The- ory, Vol. H: LR(k) and LL(k) Parsing, EATCS Monographs on Theoretical Computer Science, volume 20. Springer-Verlag, 1990.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "A method for transforming grammars into LL(k) form",
                "authors": [
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Soisalon-Soininen",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Ukkonen",
                        "suffix": ""
                    }
                ],
                "year": 1979,
                "venue": "Acta Informatica",
                "volume": "12",
                "issue": "",
                "pages": "339--369",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "E. Soisalon-Soininen and E. Ukkonen. A method for transforming grammars into LL(k) form. Acta Informatica, 12:339-369, 1979.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Efficient Parsing for Natural Language",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Tomita",
                        "suffix": ""
                    }
                ],
                "year": 1986,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M. Tomita. Efficient Parsing for Natural Lan- guage. Kluwer Academic Publishers, 1986.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "CIGALE: A tool for interactive grammar construction and expression parsing",
                "authors": [
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Voisin",
                        "suffix": ""
                    }
                ],
                "year": 1986,
                "venue": "Science of Computer Programming",
                "volume": "7",
                "issue": "",
                "pages": "61--86",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "F. Voisin. CIGALE: A tool for interactive grammar construction and expression parsing. Science of Computer Programming, 7:61-86, 1986.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "A bottom-up adaptation of Earley's parsing algorithm",
                "authors": [
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Voisin",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "Programming Languages Implementation and Logic Programming, International Workshop",
                "volume": "348",
                "issue": "",
                "pages": "146--160",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "F. Voisin. A bottom-up adaptation of Earley's parsing algorithm. In Programming Languages Implementation and Logic Programming, Interna- tional Workshop, LNCS 348, 146-160, Orl@ans, France, May 1988. Springer-Verlag.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "A new, bottom-up, general parsing algorithm",
                "authors": [
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Voisin",
                        "suffix": ""
                    },
                    {
                        "first": "J.-C",
                        "middle": [],
                        "last": "Raoult",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "BIGRE",
                "volume": "70",
                "issue": "",
                "pages": "221--235",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "F. Voisin and J.-C. Raoult. A new, bottom-up, general parsing algorithm. BIGRE, 70:221-235, September 1990.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "if c~1 = [3\"/1 and a2 = [3'/2, for some '/1 and '/2, where [3 \u00a2 e. A recognition algorithm can be specified by means of a push-down automaton A = (T, Alph, Init, ~-, Fin), which manipulates configurations of the form (F,v), where F E Alph* is the stack, constructed from left to right, and v \u2022 T* is the remaining input."
            },
            "FIGREF1": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "Note that since the automaton does not use any lookahead, Step 3 may also have replaced [T ---* F \u2022] by any other item besides [T --* T \u2022 \u2022 F] whose rhs starts with T and whose lhs satisfies the condition of topdown filtering with regard to E, i.e. by [T --~ T \u2022 **F], [E ~ T. T El, or [E ~ T \u2022]."
            },
            "FIGREF2": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "\u2022 pt A (a # e V A = S')} Informally, an item [A --* ~ I PLa a \u2022 represents one or more items [A --~ cr \u2022/3] \u2022 I e. Algorithm 2 (Predictive LR) A PLR = (T, I PLR, Init, F-, Fin), Init = [S' --~ ], Fin = [S t --~ S], and F-defined by: 1. (F[B --~/3], av) F-(rib -~/3][A -~ ~],,) where there are A --~ as, B ---* tiC7 \u2022 pt such that AL*C 2. (F[A --* a], av) F-(r[A --, ~a], v) where there is A ~ haft \u2022 P+ 3. (FIB--*/3][A -* a], v) b (rOB--,/3][0--, A], v) where A --* cr \u2022 Ptand where there are D A~f, B --~ f?C7 \u2022 pt such that D/* C 4. (F[B --*/3][A --, a],v) ~-(F[B --*/~A], v) where A --~ a \u2022 pT and where there is B --~/3A7 \u2022 pt Example 2 Consider the grammar from Example 1. Using Predictive LR, recognition of a * a is realised by:"
            },
            "FIGREF3": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "Step 3 the stack element IT --~ T] represents both [T ~ T \u2022 * F] and [T --* T \u2022 **F], so that nondeterminism is reduced. Still some nondeterminism remains, since Step 3 could also have replaced [T --* F] by [Z --* T], which represents both [E --* T-T E] and [E --~ T \u2022]."
            },
            "FIGREF4": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "First, we define a set of items as I = {[A --* c~ \u2022/3] I A --* 4/3 E pt} Note that I LC C I. If we define for each Q G I: closure(Q) -= QU{[A--*.a]I[B--*/3.CT]EQAAZ*C} then the goto function for LR(0) parsing is defined by"
            },
            "FIGREF5": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "This suggests representing each set of items by a new kind of item of the form [{Az, A2,..., A,~} --* a], which represents all items A --* a \u2022 /3 for some /3 and A E {A1, A2,..., An}. Formally: I ELR .~ {[A ---+ a] ] 0 C A G {A I A --* aft E pt} A (4 # E v a = {s'})} where we use the symbol A to range over sets of nonterminals. Algorithm 3 (Extended LR) A ELR = (T, I ELR, Init, t-, Fin), Init = [{S'} --* ], Fin = [{S'} --* S], and t-defined by: 1. (rid -./31, (rid -./3][a' -. a],v) where A' = {A I 3A ~ aa, S --~ flC'y 6 pt[B E A A A Z* C]} is non-empty 2. (rid a], (rid' where A' = { A E A [ A ---* daft E pt } is non-empty 3. (F[A --* fl][A' --. a],v) t-(F[A --*/3][A\" --. A],v)where there is A --* a E pt with A E A', and A\" -~{D 130 ---* A6, B --*/3C7 E Pt[B 6 A A D Z* C]}is non-empty 4. (F[A --. fl][A' ---, a],v) }-(F[A\" --* flA],v)where there is A --* a E pt with A E A', and A\" = {B E A I B --*/3A',/E pt} is non-empty Note that Clauses 1 and 3 correspond with goto 2 and that Clauses 2 and 4 correspond with goto 1.Example 3 Consider again the grammar from Example 1. Using the ELR algorithm, recognition of a * a is realised by: [{E'} -* ] } --* ][{T} --* F] a [{E'} --* ][{T, E} --* T] a [{E'} --* ][{T} --* T *] a [{E'} ---* E] Comparing these configurations with those reached by the PLR recognizer, we see that here after Step 3 the stack element [{T, E} ~ T] represents both [T ---* T \u2022 \u2022 F] and [T --, T \u2022 * * F], but also [E --* T .] and [E -~ T \u2022 T E], so that nondeterminism is even further reduced. [] A simplified ELR algorithm, which we call the pseudo ELR algorithm, results from avoiding reference to A in Clauses 1 and 3. In Clause 1 we then have a simplified definition of A ~, viz. A ~ = {A [ 3A --* as, B ---* tiC'7 E Pt[a l* C]}, and in the same way we have in Clause 3 the new definition A\" = {D [ 3D ~ AS, B --~ ~C~( E Pt[D [* C]}."
            },
            "FIGREF6": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "(Common-prefix) A^CP = (T, I^CP, Init, ⊢, Fin), Init = [→ ], Fin = [→ S], and ⊢ defined by: 1. (Γ[→ β], av) ⊢ (Γ[→ β][→ a], v) where there are A → aα, B → βCγ ∈ P† such that A ∠* C 2. (Γ[→ α], av) ⊢ (Γ[→ αa], v) where there is A → αaβ ∈ P† 3. (Γ[→ β][→ α], v) ⊢ (Γ[→ β][→ A], v) where there are A → α, D → Aδ, B → βCγ ∈ P† such that D ∠* C 4. (Γ[→ β][→ α], v) ⊢ (Γ[→ βA], v) where there are A → α, B → βAγ ∈ P†"
            },
            "FIGREF8": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "(Tabular common-prefix) Sets T_j,i of the table are to be subsets of I^CP. Start with an empty table. Add [→ ] to T_0,0. Perform one of the following steps until no more items can be added. 1. Add [→ a] to T_i-1,i for a = a_i and [→ β] ∈ T_j,i-1 where there are A → aα, B → βCγ ∈ P† such that A ∠* C 2. Add [→ αa] to T_j,i for a = a_i and [→ α] ∈ T_j,i-1 where there is A → αaβ ∈ P† 3. Add [→ A] to T_j,i for [→ α] ∈ T_j,i and [→ β] ∈ T_h,j where there are A → α, D → Aδ, B → βCγ ∈ P† such that D ∠* C 4. Add [→ βA] to T_h,i for [→ α] ∈ T_j,i and [→ β] ∈ T_h,j where there are A → α, B → βAγ ∈ P†. Report recognition of the input if [→ S] ∈ T_0,n."
            },
            "FIGREF9": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "Add [Δ' → a] to T_i-1,i for a = a_i where Δ' = {A | A → aα ∈ P†} ∩ S_i-1 is non-empty 3. Add [Δ'' → A] to T_j,i for [Δ' → α] ∈ T_j,i where there is A → α ∈ P† with A ∈ Δ', and Δ'' = {D | D → Aδ ∈ P†} ∩ S_j is non-empty, which may lead to more practical implementations. Note that we may have that the tabular ELR algorithm manipulates items of the form [Δ → α] which would not occur in any search path of the nondeterministic ELR algorithm, because in general such a Δ is the union of many sets Δ' from items [Δ' → α] which would be manipulated at the same input position by the nondeterministic algorithm in different search paths."
            }
        }
    }
}