{
    "paper_id": "J82-3003",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T03:07:16.398531Z"
    },
    "title": "Using Semantics in Non-Context-Free Parsing of Montague Grammar 1",
    "authors": [
        {
            "first": "David",
            "middle": [
                "Scott"
            ],
            "last": "Warren",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "SUNY at Stony Brook Long Island",
                "location": {
                    "postCode": "11794",
                    "region": "NY"
                }
            },
            "email": ""
        },
        {
            "first": "Joyce",
            "middle": [],
            "last": "Friedman",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Michigan",
                "location": {
                    "settlement": "Ann Arbor",
                    "region": "MI"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "In natural language processing, the question of the appropriate interaction of syntax and semantics during sentence analysis has long been of interest. Montague grammar with its fully formalized syntax and semantics provides a complete, well-defined context in which these questions can be considered. This paper describes how semantics can be used during parsing to reduce the combinatorial explosion of syntactic ambiguity in Montague grammar. A parsing algorithm, called semantic equivalence parsing, is presented and examples of its operation are given. The algorithm is applicable to general non-context-free grammars that include a formal semantic component. The second portion of the paper places semantic equivalence parsing in the context of the very general definition of an interpreted language as a homomorphism between syntactic and semantic algebras (Montague 1970). The particular version of Montague grammar used here is that of PTQ, with which the reader is assumed to be conversant. The syntactic component of PTQ is an essentially context-free grammar, augmented by some additional rules of a different form. The non",
    "pdf_parse": {
        "paper_id": "J82-3003",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "In natural language processing, the question of the appropriate interaction of syntax and semantics during sentence analysis has long been of interest. Montague grammar with its fully formalized syntax and semantics provides a complete, well-defined context in which these questions can be considered. This paper describes how semantics can be used during parsing to reduce the combinatorial explosion of syntactic ambiguity in Montague grammar. A parsing algorithm, called semantic equivalence parsing, is presented and examples of its operation are given. The algorithm is applicable to general non-context-free grammars that include a formal semantic component. The second portion of the paper places semantic equivalence parsing in the context of the very general definition of an interpreted language as a homomorphism between syntactic and semantic algebras (Montague 1970). The particular version of Montague grammar used here is that of PTQ, with which the reader is assumed to be conversant. The syntactic component of PTQ is an essentially context-free grammar, augmented by some additional rules of a different form. The non",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "The close interrelation between syntax and semantics in Montague grammar provides a good framework in which to consider the interaction of syntax and semantics in sentence analysis. Several different approaches are possible in this framework and they can be developed rigorously for comparison. In this paper we develop an approach called semantic equivalence parsing that introduces logical translation into the ongoing parsing process. We compare this with our earlier directed process implementation in which syntactic parsing is completed prior to translation to logical form.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "Part I of the paper gives an algorithm that parses a class of grammars that contains both essentially context-free rules and non-context-free rules as in Montague's 1973 PTQ. Underlying this algorithm is a 1 A preliminary version of this paper was presented at the symposium on Modelling Human Parsing Strategies at the University of Texas at Austin, March [24] [25] [26] 1981 . The work of the first author was supported in part by NSF grant IST 80-10834. nondeterministic syntactic program expressed as an ATN. The algorithm introduces equivalence parsing, which is a general execution method for nondeterministic programs that is based on a recall table, a generalization of the well-formed substring table. Semantic equivalence, based on logical equivalence of formulas obtained as translations, is used. We discuss the consequences of incorporating semantic processing into the parser and give examples of both syntactic and semantic parsing. In Part II the semantic parsing algorithm is related to earlier tabular context-free recognition methods. Relating our algorithm to its predecessors gives a new way of viewing the technique. The algorithmic description is then replaced by a description in terms of refined grammars. Finally we suggest how this notion might be generalized to the full class of Montague grammars. context-free aspects arise in the treatment of quantifier scope and pronouns and their antecedents. Syntactically each antecedent is regarded as substituted into a place marked by a variable. This is not unlike the way fillers are inserted into gaps in Gazdar's 1979 treatment. However, Montague's use of variables allows complicated interactions between different variable-antecedent pairs. Each substitution rule substitutes a term phrase (NP) for one or more occurrences of a free variable in a phrase (which may be a sentence, common noun phrase, or intransitive verb phrase). 
The first occurrence of the variable is replaced by the phrase; later occurrences are replaced by appropriate pronouns. The translation of the resulting phrase expresses the coreferentiality of the noun phrase and the pronouns. With substitution, but without pronouns, the only function of substitution is to determine quantifier scope.",
                "cite_spans": [
                    {
                        "start": 154,
                        "end": 174,
                        "text": "Montague's 1973 PTQ.",
                        "ref_id": null
                    },
                    {
                        "start": 1580,
                        "end": 1604,
                        "text": "Gazdar's 1979 treatment.",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "One computational approach to processing a sentence is the directed process approach, which is a sequential analysis that follows the three-part presentation in PTQ. The three steps are as follows. A purely syntactic analysis of a sentence yields a set of parse trees, each an expression in the disambiguated language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Directed Process Approach",
                "sec_num": null
            },
            {
                "text": "Each parse tree is then translated by Montague's rules into a formula of intentional logic to which logical reductions are immediately applied. The reduced formulas can then be interpreted in a model. The directed process approach is the one taken in the system described by Friedman, Moran, and Warren 1978a,b. Semantic equivalence parsing is motivated by the observation that the directed process approach, in which all of the syntactic processing is completed before any semantic processing begins, does not take maximal advantage of the coupling of syntax and semantics in Montague grammars. Compositionality and the fact that for each syntactic rule there is a translation rule suggest that it would be possible to do a combined syntactic-semantic parse. In this approach, as soon as a subphrase is parsed, its logical formula is obtained and reduced to an extensionalized normal form. Two parses for the same phrase can then be regarded equivalent if they have the same formula.",
                "cite_spans": [
                    {
                        "start": 275,
                        "end": 311,
                        "text": "Friedman, Moran, and Warren 1978a,b.",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Directed Process Approach",
                "sec_num": null
            },
            {
                "text": "The approach to parsing suggested by Cooper's 1975 treatment of quantified noun phrases is like our semantic equivalence parsing in storing translations as one element of the tuple corresponding to a noun phrase. Cooper's approach differs from the approach followed here because he has an intermediate stage that might be called an \"autonomous syntax tree\". The frontier of the tree is the sentence; the scope of the quantifier of a noun phrase is not yet indicated. Cooper's approach has been followed by the GPSG system (Gawron et al. 1982) and by Rosenschein and Shieber 1982 . Neither of those systems treats pronouns.",
                "cite_spans": [
                    {
                        "start": 37,
                        "end": 50,
                        "text": "Cooper's 1975",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 522,
                        "end": 542,
                        "text": "(Gawron et al. 1982)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 550,
                        "end": 578,
                        "text": "Rosenschein and Shieber 1982",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Directed Process Approach",
                "sec_num": null
            },
            {
                "text": "In Montague's approach, which we follow here, the trees produced by the parser are expressions in the disambiguated language, so scope is determined, pronoun antecedents are indicated, and each tree has a unique (unreduced) translation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Directed Process Approach",
                "sec_num": null
            },
            {
                "text": "The descriptions of the systems that use Cooper's approach seem to imply that they use a second pass over the syntax tree to determine the actual quantifier scopes in the final logical forms. Were these systems to use a single pass to produce the final logical forms, the results described in this paper would be directly applicable.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Directed Process Approach",
                "sec_num": null
            },
            {
                "text": "Ambiguity in Montague grammar is measured by the number of different meanings. In this view syntactic structure is of no interest in its own right, but only as a vehicle for mapping semantics. Syntactic ambiguity does not directly correspond to semantic ambiguity, and there may be many parses with the same semantic interpretation. Further, sentences with scope ambiguity, such as A man loves every woman, require more than one parse, because the syntactic derivation determines quantifier scope.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ambiguity",
                "sec_num": null
            },
            {
                "text": "In PTQ there is infinite syntactic ambiguity arising from three sources: alphabetic variants of variables, variable for variable substitutions, and vacuous variable substitution. However, these semantically unnecessary constructs can be eliminated, so that the set of syntactic sources for any sentence is finite, and a parser that finds the full set is possible. (This corresponds to the \"variable principle\" enunciated by Janssen 1980 and used by Landsbergen 1980 .) This approach was the basis of our earlier PTQ parser (Friedman and Warren 1978) .",
                "cite_spans": [
                    {
                        "start": 424,
                        "end": 440,
                        "text": "Janssen 1980 and",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 441,
                        "end": 465,
                        "text": "used by Landsbergen 1980",
                        "ref_id": null
                    },
                    {
                        "start": 523,
                        "end": 549,
                        "text": "(Friedman and Warren 1978)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ambiguity",
                "sec_num": null
            },
            {
                "text": "However, even with these reductions the number of remaining parses for a sentence of reasonable complexity is still large compared to the number of nonequivalent translations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ambiguity",
                "sec_num": null
            },
            {
                "text": "In the directed process approach this is treated by first finding all the parses, next finding for each parse a reduced translation, and then finally obtaining the set of reduced translations. Each reduced translation may, but does not necessarily, represent a different sentence meaning. No meanings are lost. Further reductions of the set of translations would be possible, but the undecidability of logical equivalence precludes algorithmic reduction to a minimal set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Ambiguity",
                "sec_num": null
            },
            {
                "text": "In the underlying parser the grammar is expressed as an augmented transition network (ATN) (Woods 1973) . Both the syntactic and the semantic parsers use this same ATN. The main difficulty in construct-ing the ATN was, as usual, the non-context-free aspects of the grammar, in particular the incorporation of a treatment of substitution rules and variables. The grammar given in PTQ generates infinitely many derivations for each sentence.",
                "cite_spans": [
                    {
                        "start": 91,
                        "end": 103,
                        "text": "(Woods 1973)",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The ATN Program",
                "sec_num": null
            },
            {
                "text": "All but finitely many of these are unnecessary variations on variables and were eliminated in the construction of the ATN. The ATN represents only the reduced set of structures, and must therefore be more complex.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The ATN Program",
                "sec_num": null
            },
            {
                "text": "In order to say what we mean by semantic equivalence parsing, we use Harel's 1979 notion of execution method for nondeterministic programs. An execution method is a deterministic procedure for finding the possible execution paths through a nondeterministic program given an input. For an ATN, these execution paths correspond to different parses. Viewing parsing in this way, the only difference between the usual syntactic parsing and semantic equivalence parsing is a difference in the execution method. As will be seen, semantic equivalence parsing uses semantic tests as part of the execution method.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Equivalence Testing",
                "sec_num": null
            },
            {
                "text": "We call the execution method we use to process a general ATN equivalence parsing (Warren 1979) . Equivalence parsing is based on a recall table. The recall table is a set of buckets used to organize and hold partial syntactic structures while larger ones are constructed. Equivalence parsing can be viewed as processing an input sentence and the ATN to define and fill in the buckets of the recall table. The use of the recall table reduces the amount of redundant processing in parsing a sentence.",
                "cite_spans": [
                    {
                        "start": 81,
                        "end": 94,
                        "text": "(Warren 1979)",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Equivalence Testing",
                "sec_num": null
            },
            {
                "text": "Syntactic structures found along one execution path through the ATN need not be reconstructed but can be directly retrieved from the recall table and used on other paths. The recall table is a generalization of the familiar well-formed substring table (WFST) to arbitrary programs that contain procedure calls. Use of the WFST in ATN parsing is noted in Woods 1973 and Bates 1978. Bates observes that the WFST is complicated by the HOLDs and SENDRs in the ATN. These are the ATN actions that correspond to parameter passing in procedures and are required in the ATN for PTQ to correctly treat the substitution rules. In the Woods system the WFST is viewed as a possible optimization, to be turned on when it improves parsing efficiency. In our system the recall table is an intrinsic part of the parsing algorithm. Because any ATN that naturally represents PTQ must contain left recursion, the usual depth-first (or breadth-first or best-first) ATN parsing algorithm would go into an infinite loop when trying to find all the parses of any sentence. The use of the recall table in equivalence parsing handles left-recursive ATNs without special consideration (Warren 1981) . As a result there is no need to rewrite the grammar to eliminate left-recursive rules as is usually necessary.",
                "cite_spans": [
                    {
                        "start": 354,
                        "end": 368,
                        "text": "Woods 1973 and",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 369,
                        "end": 380,
                        "text": "Bates 1978.",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 1159,
                        "end": 1172,
                        "text": "(Warren 1981)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Equivalence Testing",
                "sec_num": null
            },
            {
                "text": "In a general nondeterministic program, a bucket in the recall table corresponds to a particular subroutine and a set of values for the calling parameters and return parameters. For an ATN a bucket is indexed by a triple: (1) a grammatical category, that is, a subnet to which a PUSH is made, (2) the contents of the SENDR registers at the PUSH and the current string, and (3) the contents of the LIFTR registers at the POP and the then-current string. A bucket contains the members of an equivalence class of syntactic structures; precisely what they are depends on what type of equivalence is being used.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Equivalence Testing",
                "sec_num": null
            },
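            {
                "text": "To make the bucket organization concrete, the recall table described above can be sketched as a dictionary keyed by the index triple (this sketch and its names are ours, not the original system's code):\n\n    # hypothetical sketch of the recall table; names are illustrative\n    recall_table = {}\n    def bucket(category, sendr_env, push_string, liftr_env, pop_string):\n        key = (category, sendr_env, push_string, liftr_env, pop_string)\n        return recall_table.setdefault(key, [])\n\nEach list returned by bucket holds one equivalence class of syntactic structures; whether a newly built structure is appended depends on the equivalence test in force.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Equivalence Testing",
                "sec_num": null
            },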
            {
                "text": "What makes equivalence parsing applicable to noncontext-free grammars is that its buckets are more general than the cells in the standard tabular contextfree algorithms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Equivalence Testing",
                "sec_num": null
            },
            {
                "text": "In the C-K-Y algorithm (Kasami 1965) , for example, a cell is indexed only by the starting position and the length of the parsed segment, i.e., the current string at PUSH and POP. The cell contents are nonterminals. In our case all three are part of the bucket index, which also includes SENDR and LIFTR register values. The bucket contents are equivalence classes of structures.",
                "cite_spans": [
                    {
                        "start": 23,
                        "end": 36,
                        "text": "(Kasami 1965)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Equivalence Testing",
                "sec_num": null
            },
            {
                "text": "For sentence recognition all parses are equivalent. So it is enough to determine, for each bucket of the recall table, whether or not it is empty. A sentence is in the language if the bucket corresponding to the sentence category (with empty SENDR registers and full string, and empty LIFTR registers and null string) is nonempty. The particular forms of the syntactic structures in the bucket are irrelevant; the contents of the buckets are only a superfluous record of the specific syntactic structures.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Recognition",
                "sec_num": null
            },
            {
                "text": "The syntactic structure is never tested and so does not affect the flow of control. Thus which buckets are nonempty depends only on what other buckets are nonempty and not on what those other buckets contain. For sentence recognition, when the execution method constructs a new member of a bucket that is already nonempty, it may or may not add the new substructure, but it does not need to use it to construct any larger syntactic structures. This is because the earlier member has already verified this bucket as nonempty.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Recognition",
                "sec_num": null
            },
            {
                "text": "Therefore this fact is already known and is already being used to determine the nonemptiness of other buckets. To find all parses, however, equivalence parsing does use all members of each bucket to construct larger structures.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Recognition",
                "sec_num": null
            },
            {
                "text": "It would be possible first to do recognition and determine all the nonempty buckets in the recall table, and then to go back and take all variants of one single parse that can be obtained by replacing any substructure.by another substructure from the same bucket. This is essentially how the context-free parsing algorithms constructed from the tabular recognition methods work. This is not how the equivalence parsing algorithm works. When it obtains a substructure, it immediately tries to use it to construct larger structures.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Recognition",
                "sec_num": null
            },
            {
                "text": "The difference described above between sentence recognition and sentence parsing is a difference only in the execution methods used to execute the ATN and not in the ATN itself. This difference is in the test for equivalence of bucket contents. In sentence recognition any two syntactic structures in a bucket are equivalent since we only care whether or not the substring can be parsed to the given category. At the other extreme, in finding all parses, two entries are equivalent only if they are the identical structure. For most reasonable ATNs, including our ATN for PTQ, this would not happen; distinct paths lead to distinct structures.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Recognition",
                "sec_num": null
            },
            {
                "text": "Semantic parsing is obtained by againmodifying only the equivalence test used in the execution method to test bucket contents.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Recognition",
                "sec_num": null
            },
            {
                "text": "For semantic parsing two entries are equivalent if their logical translations, after logical reduction and extensionalization, are identical to within change of bound variable.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Recognition",
                "sec_num": null
            },
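            {
                "text": "As an illustrative sketch (ours, not the paper's implementation), this semantic equivalence test can be phrased as alpha-equivalence of the normalized translations:\n\n    # hypothetical sketch: two entries are equivalent iff their reduced,\n    # extensionalized translations match up to renaming of bound variables\n    def semantically_equivalent(entry1, entry2):\n        t1 = extensionalize(reduce(translation(entry1)))\n        t2 = extensionalize(reduce(translation(entry2)))\n        return alpha_equal(t1, t2)\n\nHere translation, reduce, extensionalize, and alpha_equal stand for the syntax-directed translation, logical reduction, elimination of intension/extension operators, and identity up to change of bound variable, respectively.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence Recognition",
                "sec_num": null
            },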
            {
                "text": "For our examples, we introduce in Figure 1 a small subnet of the ATN for PTQ. Arcs with fully capitalized labels are PUSH arcs; those with lower case labels are CAT arcs. Structure-building operations are indicated in parentheses. This net implements just three rules of PTQ. Rule $4 forms a sentence by concatenating a term phrase and an intransitive verb phrase; Sll conjoins two sentences, and S14,i substitutes a term phrase for the syntactic variable he i in a sentence. $4 and Sll are context-free rules; S14,i is one of the substitution rules that make the grammar noncontext-free and is basic to the handling of quantifiers, pronouns, and antecedents.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 34,
                        "end": 42,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Small Grammar",
                "sec_num": null
            },
            {
                "text": "The ATN handles the substitution by using a LIFTR to carry the variablebinding information. The LIFTR is not used for the context-free rules. The first example is the sentence Bill walks. This sentence has the obvious parse using only the contextfree rule $4. It also has the parse using the substitution rule. We will carry through the details of its parse to show how this substitution rule is treated in the parsing process.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Small Grammar",
                "sec_num": null
            },
            {
                "text": "In the trace PUSHes and POPs in the syntactic analysis of this sentence are shown. The entries are in chronological order. The PUSHes are numbered sequentially for identification. The PUSH number uniquely determines a) the category to which the PUSH is made, b) the remainder of the sentence being parsed at the time of the PUSH, and c) the contents of the SENDR registers at the time of the PUSH, called the PUSH environment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ITE",
                "sec_num": null
            },
            {
                "text": "At each POP a bucket and an element in that bucket are returned. The bucket name at a POP is made up of the corresponding PUSH number, the remaining input string, and the contents of the LIFTR registers, which are called the POP environment. The element in the bucket is the tree that is returned. For brevity we use in the trace only the first letters of the words in the sentence; for example, Bill walks becomes Bw. [The tree has been returned and covers the whole string. However, the returned environment is not null so the parse fails, and the execution method backs up to seek another return from PUSH 1.] 1 e null (S14,0 Bill ($4 he0 walk))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ITE",
                "sec_num": null
            },
            {
                "text": "[This is another element in bucket 1-e-null; it is a successful parse so it is printed out. Execution continues but there are no more parses of the sentence.]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "ITE",
                "sec_num": null
            },
            {
                "text": "In this trace bucket t-e-null is the only bucket with more than one entry. The execution method was syntactic parsing, so each of the two entries was returned and printed out. For recognition, these two entries in the bucket would be considered the same and the second would not have been POPped. Instead of continuing the computation up in the subnet from which the PUSH was made, this path would be made to fail and the execution method would back up. For semantic equivalence parsing, the bucket contents throughout would not be the syntax trees, but would instead be their reduced extensionalized logical formulas. (Each such logical formula represents the equivalence class of the syntactic structures that correspond to the formula.) For example, bucket 2-w-null would contain ~,Pp{Ab} and bucket 3-c-null would contain walk'. The first entry to bucket 1-E-null would be the formula for ($4 Bill walk), that is, walk.'(b). The entry to bucket 1-e-null on the last line of the trace would be the formula for (S14,0 Bill ($4 he0 walk)), which is also walk.'(b). Therefore, this second entry would not be POPped.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": null
            },
            {
                "text": "Buckets also serve to reduce the amount of repeated computation. Suppose we have a second PUSH to the same category with the same string and environment as an earlier PUSH. The buckets resulting from this new PUSH would come out to be the same as the buckets from the earlier PUSH. Therefore the buckets need not be recomputed; the results of the earlier buckets can be used directly. This is called a \"FAKEPUSH\" because we don't actually do the PUSH to continue through the invoked subnet but simply do a \"FAKEPOP\" using the contents of the previously computed buckets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": null
            },
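            {
                "text": "A FAKEPUSH is thus a form of memoization. As a rough sketch (ours, not the paper's code), before performing a PUSH the execution method checks whether the same category, string, and SENDR environment have been pushed before; if so, it skips the subnet and FAKEPOPs the previously computed buckets:\n\n    # hypothetical sketch of the FAKEPUSH check; names are illustrative\n    def push(category, string, sendr_env):\n        key = (category, string, sendr_env)\n        if key in completed_pushes:\n            for result in buckets_of(key):   # FAKEPOP each stored result\n                yield result\n        else:\n            completed_pushes.add(key)\n            yield from run_subnet(category, string, sendr_env)\n\nOn a left-recursive grammar this check is also what prevents the parser from re-entering the same subnet forever.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": null
            },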
            {
                "text": "Consider, as an example of FAKEPOP, the partial trace of the syntactic parse of the sentence Bill walks and Mary runs (or Bw&Mr for short). The initial part of this trace, through step 4, is essentially the same as the trace above for the shorter sentence Bill walks. Figure 1 . The PUSH to TE (2 above) has completely failed. However, this PUSH, TS-Bw&Mr-null has been done before; it is PUSH 1. We already have two buckets from that PUSH: 1-&Mr-null containing two trees, and 1-&Mr-(he0 B) with one tree. There is no need to re-enter this subnet; the buckets and their contents tell us what would happen. Therefore we FAKEPOP one subtree and its bucket and follow that computation to the end; later we will return to FAKEPOP the next one.] 1 [This computation continues and parses the second half of this sentence. Two parses are produced: (S11 ($4 Bill walk) ($4 Mary run)) and (S11 ($4 Bill walk) (S14,0 Mary ($4 he0 run))) After this, the execution method fails back to the FAKEPOP at PUSH 5, and another subtree from a bucket from PUSH 2 is FAKEPOPped.] 1(5) &Mr null (S14,0 Bill (he0 walk))",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 268,
                        "end": 276,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": null
            },
            {
                "text": "[And the computation continues, eventually producing a total of ten parses for this sentence.]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Trace of",
                "sec_num": null
            },
            {
                "text": "(In the earlier example of Bill walks, these FAKEPOPs are done, but their computations immedi-ately fail, because they are looking for a conjunction but are at the end of the sentence.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Trace of",
                "sec_num": null
            },
            {
                "text": "The sentence Bill walks and Mary runs has ten syntactic structures with respect to the PTQ grammar. The rules $4, Sll, and S14,i can be used in various orders. Figure 2 shows the ten different structures in the order they are produced by the syntactic parser.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 160,
                        "end": 168,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results of Parsing",
                "sec_num": null
            },
            {
                "text": "The nodes in the trees of Figure 2 that are in italics are the syntactic structures used for the first time. The nodes in standard type are structures used previously, and thus either are part of an execution path in common with an earlier parse, or are retrieved from a bucket in the recall table to be used again. Thus the number of italicized nodes measures in a crude way the amount of work required to find all the parses for this sentence. This sentence, Bill walks and Mary runs, is one for which semantic parsing is substantially faster. It is unambiguous; its only reduced extensionalized logical translation is \"walk,'(b)&run,'(m)\".",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 26,
                        "end": 34,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results of Parsing",
                "sec_num": null
            },
            {
                "text": "In the directed process parser, all ten trees of Figure 2 are found.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 49,
                        "end": 57,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "I I I",
                "sec_num": null
            },
            {
                "text": "They will all have the same translation. In semantic parsing on'ly one is found. Here the method works to advantage because both parses of the initial string Bill walks result in the same environment for parsing Mary runs. These two parses go into the same bucket so only one needs to be used to construct larger structfires. We trace the example.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "I I I",
                "sec_num": null
            },
            {
                "text": "Bucket [This formula is the translation of the syntactic structure using S14,0 to substitute Bill into \"he0 walks\". This is the same bucket and the same translation as obtained at the return from 1 after PUSH 3 above, so we do not POP (indicated by the 'n' in the final column), but instead fail back.]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "PUSH:",
                "sec_num": null
            },
            {
                "text": "5 TS Bw&Mr null [FAKEPOP, since this is a repeat PUSH to this category with these parameters. There are two buckets: 1-&Mrnull, which in syntactic parsing had two trees but now has only one translation, and bucket 1-&Mr-(he0 B) with one translation. So we FAKEPOP 1-&Mr-null.] 1 5 This completes the trace of the semantic parse of the sentence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "PUSH:",
                "sec_num": null
            },
            {
                "text": "Figure 3 displays in graphical form the syntactic structures built during the semantic parsing of Bill walks and Mary runs traced above. A horizontal line over a particular node in a tree indicates that the translation of the structure duplicated a translation already in its bucket, so no larger structures were built using it. Only parse a) is a full parse of the sentence and thus it is the only parse returned. All the others are aborted when they are found equivalent to earlier partial results. These points of abortion in the computation are the points in the trace above at which a POP fails due to the duplication of a bucket and its contents. Note that construction of parse c) is halted when a translation is built that duplicates the translation of the right $4 subtree of parse a). This corresponds to the failure due to duplicate bucket contents in bucket 6-E-null following PUSH 9 in the trace above. Similarly parse g) is aborted before the entire tree is built. This corresponds to the failure in the final line of the trace due to a duplicate translation in bucket 10-E-null. Semantic parses that would correspond to syntactic parses h), i), and j) of Figure 2 are not considered at all. This is because bucket 1-&Mr-null contains two syntactic structures, but only one translation. Thus in semantic equivalence parsing we only do one FAKEPOP for this bucket for PUSH 5. In syntactic parsing the other parses are generated by the FAKEPOP of the other structure in this bucket.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 1170,
                        "end": 1178,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results of Parsing",
                "sec_num": null
            },
            {
                "text": "The potential advantage of semantic equivalence parsing derives from treating partial results as an equivalence class in proceeding. A partial result consists of a structure, its extensionalized reduced translation, and a set of parameters of the parse to that point. These parameters are the environment for parsing the phrase. Consider the sentence John loves Mary and its parses:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reducing the Environment",
                "sec_num": null
            },
            {
                "text": "(1) ($4 John ($5 love Mary)) (2) ($4 John (S16,0 Mary ($5 love he0))) (3) (S14,0 John ($4 (he0 ($5 love Mary))) (4) (S14,0 John ($4 he0 (S16,1 Mary ($5 love hel)))) (plus 3 more)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reducing the Environment",
                "sec_num": null
            },
            {
                "text": "On reaching the phrase love Mary in parse (3) the parameters are not the same as they were at that point in parse (1), because the pair (he0 John) is in the environment. Thus the parser is not able to consult the recall table and immediately return the already parsed substructure. Instead it must reparse love Mary in the new context. This environment problem arises because the ATN is designed to follow PTQ in treating pronouns by the non-context-free substitution rules.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reducing the Environment",
                "sec_num": null
            },
            {
                "text": "We have also considered, but have not to this point implemented, alternative ways of treating variables to make partial results equal. One way would be not to pass variable bindings down into lower nets at all. Thus the PUSH environment would always be null. Since these bindings are used to find the antecedent for a pronoun, the way antecedents are determined would have to be changed. An implementation might be as follows: On encountering a pronoun during parsing, replace it by a new he-variable. Then pass back up the tree information concerning both the variable number used and the pronoun's gender. At a higher point in the tree, where the substitution rule is to be applied, a determination can be made as to which of the substituted terms could be the antecedent for the pronoun. The variable number of the pronoun can then be changed to agree with the variable number of its antecedent term by a variable-for-variable substitution. Finally the substitution rule can be used to substitute the term into the phrase for all occurrences of the variable. Note that this alternative process would construct trees that do have substitution rules to substitute variables for varia-bles, contrary to the variable principle mentioned above. We also note that with this modification a pronoun is not associated with its antecedent when it is first encountered. Instead the pronoun is saved and at some later point in the parse the association is made. This revised treatment is related computationally to that proposed in Cooper 1975 .",
                "cite_spans": [
                    {
                        "start": 1523,
                        "end": 1534,
                        "text": "Cooper 1975",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reducing the Environment",
                "sec_num": null
            },
            {
                "text": "The question of the interaction of syntax and semantics in parsing was introduced early in computational linguistics. Winograd 1971 argued for the incorporation of semantics as early as possible in the recognition process, in order to reduce the amount of syntactic processing that would be needed. Partial parses that had no interpretation did not need to be continued. The alternative position represented by Woods's early work (Woods and Kaplan 1971) was basically the inverse: less semantic processing would be needed if only completed parses were interpreted. This argument is based on the idea of eliminating uninterpretable parses as soon as possible.",
                "cite_spans": [
                    {
                        "start": 430,
                        "end": 453,
                        "text": "(Woods and Kaplan 1971)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation of Semantic Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "This advantage, if it is one, of integrated syntactic and semantic procedures does not occur here because the semantic aspect does not eliminate any logical analyses. The translation of a structure to a formula is always successful, so no partial parse is ever eliminated for lack of a translation. What happens instead is that several partial parses are found to be equivalent because they have the same translation. In this case only a representative of the set of partial parses needs to be carried forward.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation of Semantic Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "A further expansion of equivalence parsing would be interpretation equivalence parsing. Sentence processing would take place in the context of a specified model. Two structures would be regarded as equivalent if they had the same denotation in the model. More partial structures would be found equivalent under the equivalence relation than under the reduceextensionalize relation, and fewer structures would need to be constructed. Further, with the interpretation equivalence relation, we might be able to use an inconsistent denotation to eliminate an incorrect partial parse. For example, consider a sentence such as Sandy and Pat are running and she is talking to him. In this case, since the gender of Sandy and Pat cannot be determined syntactically, these words would have to be marked in the lexicon with both genders. This would result in multiple logical formulas for this sentence, one for each gender assumption.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation of Semantic Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "However, during interpretation equivalence parsing, the referents for Sandy and Pat would be found in the model and the meaning with the incorrect coreference could be rejected.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation of Semantic Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "Logical normal forms other than the reduced, extensionalized form used above lead to other reasonable versions of equivalence parsing.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation of Semantic Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "For example, we could further process the reduced, extensionalized form to obtain a prenex normal form with the matrix in clausal form. We would use some standard conventions for naming variables, ordering sequences of the same quantifier in the prefix, and ordering the literals in the clauses of the matrix. This would allow the algorithm to eliminate, for example, multiple parses arising from various equivalent scopes and orderings of existential quantifiers.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation of Semantic Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "The semantic equivalence processor has been implemented in Franz Lisp. We have applied it to the PTQ grammar and tested it on various examples. For purposes of comparison the directed process version includes syntactic parse, translation to logical formula and reduction, and finally the reduction of the list of formulas to a set of formulas. The mixed strategy yields exactly this set of formulas, with one parse tree for each. Experiments with the combined parser and the directed parser show that they take approximately the same time for reasonably simple sentences. For more complicated sentences the mixed strategy usually results in less processing time and, in the best cases, results in about a 40 percent speed-up. The distinguishing characteristic of a string for which the method yields the greatest speed-up is that the environment resulting from parsing an initial segment is the same for several distinct parses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation of Semantic Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "The two parsing method we have described, the sequential process and the mixed process, were obviously not developed with psychological modeling in mind. The directed process version of the system can be immediately rejected as a possible psychological model, since it involves obtaining and storing all the structures for a sentence before beginning to interpret any one of them. However, a reorganization of the programwould make it possible to interpret each structure immediately after it is obtained. This would have the same cost in time as the first version, but would not require storing all the parses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation of Semantic Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "Although semantic equivalence parsing was developed in the specific context of the grammar of PTQ, it is more general in its applicability. The strict compositionality of syntax and semantics in PTQ is the main feature on which it depends. The general idea of equivalence parsing can be applied whenever syntactic structure is used as an intermediate form and there is a syntax-directed translation to an output form on which an equivalence relation is defined.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation of Semantic Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "We now switch our point of view and examine equivalence parsing not in algorithmic terms but in formal grammatical terms. This will then lead into showing how equivalence parsing relates to Universal Grammar (UG) (Montague 1970) . The basic concept to be used is an input-refined grammar. We begin by defining this concept for context-free grammars and using it to relate the tabular context-free recognition algorithms of Earley 1970 , Cocke-Kasami-Younger (Kasami 1965 , and Sheil 1976 to each other and eventually to our algorithm.",
                "cite_spans": [
                    {
                        "start": 213,
                        "end": 228,
                        "text": "(Montague 1970)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 423,
                        "end": 434,
                        "text": "Earley 1970",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 435,
                        "end": 470,
                        "text": ", Cocke-Kasami-Younger (Kasami 1965",
                        "ref_id": null
                    },
                    {
                        "start": 473,
                        "end": 487,
                        "text": "and Sheil 1976",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "Given a context-free grammar G and a string s over the terminal symbols of G, we define from G and s a new grammar Gs, called an input-refinement of G.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "This new grammar G s will bear a particular relationship to G: L(Gs) = {s}nL(G), i.e., L(Gs) is the singleton set {s} if s is in L(G), and empty otherwise. Furthermore, there is a direct one-to-one relationship between the derivations of s in G and the derivations of s in G s. Thus the problem of recognizing s in G is reduced to the problem of determining emptiness for the grammar G s. Also, the problem of parsing s with respect to the grammar G reduces to the problem of exhaustive generation of the derivations of G s (there is at most one string). Each of the tabular context-free recognition algorithms can be viewed as implicitly defining this grammar G s and testing it for emptiness. Emptiness testing is essentially done by reducing the grammar, that is by eliminating useless symbols and productions. The table-constructing portion of a tabular recognition algorithm, in effect, constructs and reduces the grammar Gs, thus determining whether or not it is empty. The tabular methods differ in the construction and reduction algorithm used.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "In each case, to turn a tabular recognition method into a parsing algorithm, the table must first be constructed and then reprocessed to generate all the parses. This corresponds to reprocessing the grammar Gs t, the result of reducing the grammar Gs, and using it to exhaustively generate all derivations in G s.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "Rather than formally defining G s from a contextfree grammar G and a string s in the general case, we illustrate the definition by example. The general definition should be clear.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "Let G be the following context-free grammar:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "Terminals:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "{a,b} Nonterminals: {S} Start Symbol: S Productions:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "S-~S S a S-~b S~e (S produces the empty string)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "bba. Gbb a is defined from G and Let s be the string bba:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "Terminals: Nonterminals:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "{a,b} {al,a2,a3,b!,b2,b3, (t i for t a terminal of G and 1 <i<lcngth(s)) S123,512,51,S23,52,S 3,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "(A x for each nonterminal A of G and each x a nonempty subsequence of < 1,2,3 ..... length(s)>) S\u00b0,Sl,S2,S 3 } (A i for each nonterminal A of G and i, 0<i<length(s)) Start Symbol: S123 Productions:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "[from G production: S-~S S a] S123~S12S2a3 S123~S~S2a3 S123~S Sl2a 3 S 12\"~ SIS a 2 S12--~ SuSla2",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "S 1 ~ S\u00b0S\u00b0a t S23--~ S~52a3 $23 ~ S'S2a 3 $2~$1Sla2 $3~$2S2a3 [from G production: S-~b] Sl~b 1 S2~b 2 $3--~ 3 [from G production: S-~ e] S\u00b0-~ \u2022 sly\u2022 $2~ \u2022 $3~ \u2022",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "[for the terminals] bl-~b b2--b a3~a These productions for G s were constructed by beginning with a production of G, adding a subscript or a superscript to the nonterminal on the LHS to obtain a nonterminal of Gs, adding single subscripts to all terminals and sequence subscripts to some nonterminals on the RHS so that the concatenation of all subscripts on the RHS equals the subscript on the LHS. For the RHS nonterminals without subscripts, add the appropriate subscript. Also, to handle the terminals, for each t i add the production Ti~t where t is the i th symbol in s.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
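The construction just described can be sketched in code. The encoding below is my own (pairs (A, (i, j)) for the subscripted nonterminals, with (A, (i, i)) playing the role of the superscripted empty symbols), not the paper's notation:

```python
# Sketch of the input-refinement construction: from a CFG G and a string s,
# build G_s.  Nonterminals of G_s are pairs (A, (i, j)), read "A derives
# s[i:j]"; terminals of G are lowercase strings, nonterminals uppercase.

def cut_points(i, j, m):
    """All nondecreasing sequences i = k0 <= k1 <= ... <= km = j."""
    if m == 0:
        return [(i,)] if i == j else []
    return [(i,) + rest
            for k in range(i, j + 1)
            for rest in cut_points(k, j, m - 1)]

def input_refine(productions, s):
    """productions: {nonterminal: [rhs tuple, ...]}.  Returns the
    productions of G_s, keyed by the refined nonterminal (A, (i, j))."""
    n = len(s)
    refined = {}
    for lhs, rhss in productions.items():
        for i in range(n + 1):
            for j in range(i, n + 1):
                for rhs in rhss:
                    for cuts in cut_points(i, j, len(rhs)):
                        spans = list(zip(cuts, cuts[1:]))
                        # a terminal must cover exactly its own position in s
                        if all(not sym.islower() or (b - a == 1 and s[a] == sym)
                               for sym, (a, b) in zip(rhs, spans)):
                            new_rhs = tuple(zip(rhs, spans))
                            refined.setdefault((lhs, (i, j)), []).append(new_rhs)
    return refined
```

Applied to the paper's example (G with S → S S a | b | e and s = bba), the bucket for ('S', (0, 3)) collects the refined counterparts of the S123 productions; reduction is a separate step.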
            {
                "text": "It is straightforward to show inductively that if a nonterminal symbol generates any string at all it generates exactly the substring of s that its subscript determines. Symbols with superscripts generate the empty string. Also a parse tree of G s can be converted to a parse tree of G by first deleting all terminals (each is dominated by the same symbol with a subscript) and then erasing all superscripts and subscripts on all symbols in the tree. Conversely, any parse tree for s in G can be converted to a parse tree of s in G s by adding appropriate subscripts and superscripts to all the symbols of the tree and then adding the terminal symbols at the leaves.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "It is clear that G s is not in general a reduced grammar. G s can be reduced to Gs ~ by eliminating unproductive and unreachable symbols and the rules involving them.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "Reducing the grammar will determine whether or not L(Gs) is empty. By the above discussion, this will determine whether s is in L(G), and thus an algorithm for constructing and reducing the refined grammar G s from G and s yields a recognition algorithm.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "Also, given the reduced grammar Gs I, it is straightforward, in light of the above discussion, to generate all parses of s in G: simply exhaustively generate the parse trees of Gs ~ and delete subscripts and superscripts.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
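The reduce-then-generate pipeline can be illustrated concretely. The grammar below hand-encodes Gs for the paper's example (S → S S a | b | e, s = bba) under my own naming convention (S123 for the subscripted symbol, S^0 for the superscripted one, b1 for a positioned terminal); a symbol is a terminal iff it is not a key of GS:

```python
# Sketch: reduce the refined grammar (emptiness test = fixpoint of
# productive symbols), then exhaustively generate all derivation trees.

GS = {
    "S123": [("S12", "S^2", "a3"), ("S1", "S2", "a3"), ("S^0", "S12", "a3")],
    "S12":  [("S1", "S^1", "a2"), ("S^0", "S1", "a2")],
    "S1":   [("S^0", "S^0", "a1"), ("b1",)],
    "S23":  [("S2", "S^2", "a3"), ("S^1", "S2", "a3")],
    "S2":   [("S^1", "S^1", "a2"), ("b2",)],
    "S3":   [("S^2", "S^2", "a3"), ("b3",)],
    "S^0": [()], "S^1": [()], "S^2": [()], "S^3": [()],
    "b1": [("b",)], "b2": [("b",)], "a3": [("a",)],
    "a1": [], "a2": [], "b3": [],   # no terminal of s matches these positions
}

def productive(grammar):
    """Emptiness test: iterate to the fixpoint of productive symbols."""
    prod, changed = set(), True
    while changed:
        changed = False
        for lhs, rhss in grammar.items():
            if lhs not in prod and any(
                    all(x in prod or x not in grammar for x in rhs)
                    for rhs in rhss):
                prod.add(lhs)
                changed = True
    return prod

def trees(grammar, sym, prod):
    """Exhaustively generate derivation trees over productive symbols."""
    if sym not in grammar:              # terminal leaf
        yield sym
        return
    for rhs in grammar[sym]:
        if all(x in prod or x not in grammar for x in rhs):
            for kids in combos(grammar, rhs, prod):
                yield (sym,) + kids

def combos(grammar, rhs, prod):
    if not rhs:
        yield ()
        return
    for first in trees(grammar, rhs[0], prod):
        for rest in combos(grammar, rhs[1:], prod):
            yield (first,) + rest
```

Reduction kills S12 (since "bb" is not derivable in G), and exhaustive generation then yields the single parse of bba.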
            {
                "text": "The tabular context-free recognition methods of Cocke-Kasami-Younger, Earley, and Sheil can all be understood as variations of this general approach. The C-K-Y recognition algorithm uses the standard bottomup method to determine emptiness of G s. It starts with the terminals and determines which G s nonterminals are productive, eventually finding whether or not the start symbol is productive. The matrix it constructs is essentially the set of productive nonterminals of G s.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
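Read this way, the C-K-Y matrix cell (i, j) is just the set of nonterminals A whose refined counterpart with subscript i..j is productive. A minimal sketch (the CNF grammar below, for a^n b^n, is my own toy example, not from the paper):

```python
# CKY recognition as computing productive refined nonterminals:
# table[(i, j)] holds every A such that A derives s[i:j].
RULES = {("A", "T"): "S", ("A", "B"): "S", ("S", "B"): "T"}  # binary rules
LEX = {"a": "A", "b": "B"}                                   # lexical rules

def cky(s):
    n = len(s)
    table = {}
    for i, ch in enumerate(s):           # width-1 spans from the lexicon
        table[(i, i + 1)] = {LEX[ch]} if ch in LEX else set()
    for width in range(2, n + 1):        # wider spans bottom-up
        for i in range(0, n - width + 1):
            j = i + width
            cell = set()
            for k in range(i + 1, j):    # every split point
                for left in table[(i, k)]:
                    for right in table[(k, j)]:
                        if (left, right) in RULES:
                            cell.add(RULES[(left, right)])
            table[(i, j)] = cell
    return "S" in table[(0, n)]          # is the refined start productive?
```

Here the full table plays exactly the role of the set of productive nonterminals of Gs (assuming a nonempty input string).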
            {
                "text": "Sheil's well-formed substring table algorithm is the most obviously and directly related. His simplest algorithm constructs the refined grammar and reduces it top-down. It uses a top-down control mechanism to determine the productivity only of nonterminals that are reachable from the start symbol. The well-formed substring table again consists essentially of the reachable, productive nonterminals of G s.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "Earley's recognition algorithm is more complicated because it simultaneously constructs and reduces the refined grammar. It can be viewed as manipulating sets of subscripted nonterminals and sets of productions of G s. The items on the item lists, however, correspond quite directly to reachable, productive nonterminals of G s.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "The concept of input-refined grammar provides a unified view of the tabular context-free recognition methods. Equivalence parsing as described in Part I above is also a tabular method, although it is not context-free. It applies to context-free grammars and also to some grammars such as PTQ that are not context-free. We next relate it to the very general class of grammars defined by Montague in UG.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Input-Refined Grammars",
                "sec_num": "2."
            },
            {
                "text": "In the following discussion of the problem of parsing in the general context of Montague's definitions of a language (which might more naturally be called a grammar) and an interpretation, we assume the reader is familiar with the definitions in UG (Montague 1970) . We begin with a formal definition of a refinement of a general disambiguated language. A particular type of refinement, input-refinement, leads to an equivalence parsing algorithm. This generalizes the procedure for input-refining a grammar shown above for the special case of a context-free grammar. We then discuss the implications for equivalence parsing of using the formal interpretation of the language. Finally we show how the ATN for PTQ and semantic equivalence parsing fit into this general framework.",
                "cite_spans": [
                    {
                        "start": 249,
                        "end": 264,
                        "text": "(Montague 1970)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Universal Grammar and Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "Recall that a disambiguated language f~ = <A, Fv, X 8, S, 80>v~r,~a can be regarded as consisting of an algebra <A,F~,>~,eF, with proper expressions A and operations Fv, basic expressions X 8 for each category The word refinement refers to the fact that the catgories of lZ are split into finer categories. Condition 1 requires that the basic expressions of a refined category come from the basic expressions of the category it refines. Condition 2 requires that the new syntactic rules be consistent with the old ones. Note that Condition 2 is not a biconditional.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Universal Grammar and Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "If 12 t is a refinement of ~2 with refinement function d, <C'8,>~,,~, is the family of syntactic categories of ~2' and <C0>0E a is the family of syntactic categories of ~2, then C'~,-cCd(~, ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Universal Grammar and Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "As a simple example of a refinement, consider an arbitrary disambiguated language ~2 t = <A, Fy, Xts,, d0w>yEr,8, Ea,. NOW let ~2 be the disambiguated language <A, Fy, Xa, S, a>yEi-, in which the set of category names is the singleton set {a}. X a = O~,EA, X~,. Let S be {<Fr, <a,a ..... a>, a> : yeF and the number of a's agrees with the arity of F}. Then f~ is a refinement of ~, with refinement function d:At-~{a}, d(8 ~) = a for all d~\u00a2A ~. Note that the disambiguated language ~2 is completely determined by the algebra <A,Fy>yeF, and is the natural disambiguated language to associate with it. Thus in a formal sense, we can view a disambiguated language as a refinement of its algebra.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Universal Grammar and Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "As a more intuitive example of refinement, consider an English-like language with categories term (TE) and intransitive verb phrase (IV) that both include singular and plural forms. The language generated would then allow subject-verb disagreement (assuming the ambiguating relation R does not filter them out). By refining category TE to TEsing and TEpl and category IV to IVsing and IVpl, and having syntactic rules that combine category TEsing with IVsing and TEpl with IVpl only, we obtain a refined language that has subjectverb agreement. A similar kind of refinement could eliminate such combinations as \"colorless green ideas\", if so desired.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Universal Grammar and Equivalence Parsing",
                "sec_num": null
            },
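A toy illustration of this agreement refinement (all names below are invented for the sketch): the unrefined language combines any TE with any IV, while the refined one keeps only the number-matching rules:

```python
# Unrefined: one category of terms, one of intransitive verb phrases,
# and one rule combining them freely.
UNREFINED = {"TE": ["John", "the boys"], "IV": ["walks", "walk"]}
UNREFINED_RULES = [("TE", "IV")]

# Refined: each category split by number; only matching rules survive.
REFINED = {
    "TE_sing": ["John"], "TE_pl": ["the boys"],
    "IV_sing": ["walks"], "IV_pl": ["walk"],
}
REFINED_RULES = [("TE_sing", "IV_sing"), ("TE_pl", "IV_pl")]

def sentences(categories, rules):
    """Generate every sentence the given category/rule tables allow."""
    return [f"{t} {v}"
            for tc, vc in rules
            for t in categories[tc]
            for v in categories[vc]]
```

The unrefined grammar generates the disagreeing "John walk"; the refined grammar does not.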
            {
                "text": "With this definition of refinement, we return to the problem of parsing a language L = <~, R>. The problem can now be restated: find an algorithm that, given a string ~, constructs a disambiguated language ~2~ that is an input-refinement of fL That is, f~ is a refinement in which the sentence category Cts, is exactly the set of parses of ~ in L. Finding this algorithm is equivalent to solving the parsing problem. For given such an algorithm, the parsing problem reduces to the problem of generating all members of C'80,.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Universal Grammar and Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "In the case of a general language <~, R>, it may be the case that for ~ a string, the input-refined language f~ has finitely many categories. In this case the reduced grammar can be computed and a recursive parsing algorithm exists. If the reduced grammar has infinitely many categories, then the string has infinitely many parses and we are not, in general, interested in trying to parse such languages. It may happen, however, that ~2~ has infinitely many categories, even though its reduction has only finitely many. In this case, we are not guaranteed a recursive parsing algorithm. However, if this reduced language can be effectively constructed, a recursive parsing algorithm still exists.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Universal Grammar and Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "The ATN for PTQ represents the disambiguated language for PTQ in the UG sense. The categories of this disambiguated language correspond to the set of possible triples: PTQ category name, contents of SENDR registers at a PUSH to that subnet, contents of the LIFTR registers at the corresponding POP. The input-refined categories include the remainder of the input string at the PUSH and POP. Thus the buckets in the recall table are exactly the input-refined categories. The syntactic execution method is thus an exhaustive generation of all expressions in the sentence category of the input-refined disambiguated language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Universal Grammar and Equivalence Parsing",
                "sec_num": null
            },
            {
                "text": "In UG, Montague inclues a theory of meaning by providing a definition of interpretation for a language. Let L = <<A,F,r,Xs,S,t~0>.rEF,SEA,R> be a language. An interpretation ,t' for L is a system <B,G~,,f>3,EF such that <B,Gv>v~ r is an algebra similar to <A,F./>3,eF; i.e., for each ~, E F, Fy and G./ have the same number of arguments, and f is a function from O,EAX 8 into B. Note that the algebra <B,G~,>.rE F need not be a free algebra (even though <A,Fy>v\u00a2 r must be). B is the set of meanings of the interpretation ,I,; Gv is the semantic rule corresponding to syntactic rule Fv; f assigns meanings to the basic expressions Xv. The meaning assignment for L determined by if' is the unique homomorphism g from <A,F.r>~,EF into <B,Gy>,/E F that is an extension of f.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Equivalence Parsing in OG",
                "sec_num": null
            },
            {
                "text": "There are two ways to proceed in order to find all the meanings of a sentence ~ in a language L = <f~, R> with interpretation ~. The first method is to generate all members of the sentence category Cts0 , of the input-refined language ~2~. As discussed above, this is done in the algebra <A,F./>~,cF of ~, using the syntactic functions Fv to inductively construct members of A from the basic categories of f~ and members of A constructed earlier and then applying g. The second method is to use the fact that g is a homomorphism from <A,F.~>~,EF into <B,G./>~ F. Because g is a homomorphism, we can carry out the construction of the image of the sentence category entirely in the algebra <B,G~,>~,eF of the interpretation q'. We may use the G functions to construct inductively members of B from the basic semantic categories, that is, the images under g (and f) of the basic syntactic categories, and members of B already constructed. The advantage of carrying out the construction in the algebra of ,t, is that this algebra may not be free, i.e., some element of B may have multiple construction sequences. By carrying out the construction there, such instances can be noticed and used to advantage, thus eliminating some redundant search. There are additional costs, however, associated with parsing in the interpretation algebra q'. Usually, the cost of evaluating a G function in the semantic algebra is greater than the cost of the corresponding F function in the syntactic algebra.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Equivalence Parsing in OG",
                "sec_num": null
            },
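The two routes can be contrasted on a toy language (my own example, standing in for PTQ): strings of digits joined by '+', where the syntactic algebra is free (distinct parse trees) but the semantic algebra (integers under addition) is not:

```python
# Route (1): generate all parse trees, then apply the meaning
# homomorphism g.  Route (2): build meanings directly in the semantic
# algebra, merging equal values as they arise.

def parses(s):
    """All binary parse trees over a '+'-chain, as nested tuples."""
    if len(s) == 1:
        return [s]
    out = []
    for i in range(1, len(s), 2):          # try each '+' as the top split
        if s[i] == "+":
            for left in parses(s[:i]):
                for right in parses(s[i + 1:]):
                    out.append((left, "+", right))
    return out

def g(tree):
    """Meaning homomorphism: evaluate a tree in the semantic algebra."""
    if isinstance(tree, str):
        return int(tree)
    left, _, right = tree
    return g(left) + g(right)

def meanings_semantic(s, memo=None):
    """Route (2): construct meanings span by span; since the semantic
    algebra is not free, distinct construction sequences merge."""
    if memo is None:
        memo = {}
    if s in memo:
        return memo[s]
    vals = {int(s)} if len(s) == 1 else set()
    for i in range(1, len(s), 2):
        if s[i] == "+":
            for a in meanings_semantic(s[:i], memo):
                for b in meanings_semantic(s[i + 1:], memo):
                    vals.add(a + b)
    memo[s] = vals
    return vals
```

For "1+2+3" route (1) builds two trees and then collapses them to one meaning under g; route (2) notices the collapse as it goes, which is the source of the pruning.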
            {
                "text": "Also in semantic parsing, each member of B as it is constructed is compared to the other members of the same refined category that were previously constructed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Equivalence Parsing in OG",
                "sec_num": null
            },
            {
                "text": "In the PTQ parsing system discussed above, the interpretation algebra is the set of reduced translations. The semantic functions are those obtained from the functions given in the T-rules in PTQ, and reducing and extensionalizing their results. The directed process version of the parser finds the meanings in this algebra by the first method, generating all parses in the syntactic algebra and then taking their images under the interpretation homomorphism. Semantic equivalence parsing for PTQ uses the second method, carrying out the construction of the meaning entirely within the semantic algebra. The savings in the example sentence Bill walks and Mary runs comes about because the algebra of reduced translations is not a free algebra, and the redundant search thus eliminated more than made up for the increase in the cost of translating and comparing formulas.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Equivalence Parsing in OG",
                "sec_num": null
            },
            {
                "text": "We have described a parsing algorithm for the language of PTQ viewed as consisting of two parts, a nondeterministic program and an execution method. We showed how, with only a change to an equivalence relation used in the execution method, the parser becomes a recognizer. We then discussed the addition of the semantic component of PTQ to the parser. With again only a change to the equivalence relation of the execution method, the semantic parser is obtained. The semantic equivalence relation is equality (to within change of bound variable) of reduced extensionalized translations. Examples were given to compare the two parsing methods.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Summary",
                "sec_num": null
            },
            {
                "text": "In the finalportion of the paper we described how the parsing method initially presented in procedural terms can be viewed in formal grammatical terms. The notion of input-refinement for context-free grammars was introduced by example, and the tabular context-free recognition algorithms were described in these terms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Summary",
                "sec_num": null
            },
            {
                "text": "We then indicated how this notion of refinement can be extended to the UG theory of language and suggested how our semantic parser is essentially parsing in the algebra of an interpretation for the PTQ language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Summary",
                "sec_num": null
            },
            {
                "text": "American Journal of Computational Linguistics, Volume 8, Number 3-4, July-December 1982",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "The theory and practise of augmented transition network grammars",
                "authors": [
                    {
                        "first": "Madeleine",
                        "middle": [],
                        "last": "Bates",
                        "suffix": ""
                    }
                ],
                "year": 1978,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "191--260",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bates, Madeleine 1978 The theory and practise of augmented transition network grammars. In Bole, Ed., Natural Lanauge Communication with Computers. New York: 191-260.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Montague's semantic theory and transformational syntax",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Cooper",
                        "suffix": ""
                    }
                ],
                "year": 1975,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Cooper, R. 1975 Montague's semantic theory and transformation- al syntax. Ph.D. thesis. Amherst, MA: University of Massa- chusetts.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "An efficient context-free parsing algorithm",
                "authors": [
                    {
                        "first": "Jay",
                        "middle": [],
                        "last": "Earley",
                        "suffix": ""
                    }
                ],
                "year": 1970,
                "venue": "Comm. ACM",
                "volume": "13",
                "issue": "",
                "pages": "94--102",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Earley, Jay 1970 An efficient context-free parsing algorithm. Comm. ACM 13, 94-102.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Evaluating English sentences in a logical model",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Friedman",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Moran",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "S"
                        ],
                        "last": "Warren",
                        "suffix": ""
                    }
                ],
                "year": 1978,
                "venue": "Information Abstracts, 7th International Conference on Computational Linguistics",
                "volume": "16",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Friedman, J., Moran, D., and Warren, D.S. 1978a Evaluating English sentences in a logical model. Abstract 16, Information Abstracts, 7th International Conference on Computational Linguistics. Norway: University of Bergen (11 pp.).",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Evaluating English sentences in a logical model",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Friedman",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Moran",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "S"
                        ],
                        "last": "Warren",
                        "suffix": ""
                    }
                ],
                "year": 1978,
                "venue": "presented to the 7th International Conference on Computation Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Friedman, J., Moran, D., and Warren, D.S. 1978b Evaluating English sentences in a logical model, presented to the 7th Inter- national Conference on Computation Linguistics, University of Bergen, Norway (August 14-18). Report N-15. Ann Arbor, MI: University of Michigan, Computer and Communication Sciences Department (mimeographed).",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "A parsing method for Montague grammars",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Friedman",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "S"
                        ],
                        "last": "Warren",
                        "suffix": ""
                    }
                ],
                "year": 1978,
                "venue": "Lingustics and Philosophy",
                "volume": "2",
                "issue": "",
                "pages": "347--372",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Friedman, J. and Warren, D.S. 1978 A parsing method for Mon- tague grammars. Lingustics and Philosophy 2, 347-372.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "The GPSG linguistic system",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "M"
                        ],
                        "last": "Gawron",
                        "suffix": ""
                    }
                ],
                "year": 1982,
                "venue": "Proceedings 20th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "74--81",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gawron, J.M., et al. 1982 The GPSG linguistic system. In Pro- ceedings 20th Annual Meeting of the Association for Computational Linguistics, 74-81.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "English as a context-free language University of Sussex (mimeograph)",
                "authors": [
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Gazdar",
                        "suffix": ""
                    }
                ],
                "year": 1979,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gazdar, G. 1979 English as a context-free language University of Sussex (mimeograph).",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "On the total correctness of nondeterministic programs",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Harel",
                        "suffix": ""
                    }
                ],
                "year": 1979,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Harel, David 1979 On the total correctness of nondeterministic programs. IBM Research Report RC 7691.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Approaches to Natural Language",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Hintikka",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Moravcsik",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Suppes",
                        "suffix": ""
                    }
                ],
                "year": 1973,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hintikka, J., Moravcsik, J., and Suppes, P., Eds. 1973 Approaches to Natural Language. Dordrecht: D. Reidel.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Proceedings of the Second Amsterdam Colloquium on Montague Grammar and Related Topics",
                "authors": [
                    {
                        "first": "T",
                        "middle": [
                            "W V"
                        ],
                        "last": "Janssen",
                        "suffix": ""
                    }
                ],
                "year": 1978,
                "venue": "Amsterdam Papers in Formal Grammar",
                "volume": "II",
                "issue": "",
                "pages": "211--234",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Janssen, T.W.V. 1978 Compositionality and the form of rules in Montague grammar. In Groenenijk, J. and Stokhof, M., Eds., Proceedings of the Second Amsterdam Colloquium on Montague Grammar and Related Topics. Amsterdam Papers in Formal Grammar, Volume II. University of Amsterdam, 211-234.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "On problems concerning the quantification rules in Montague grammar",
                "authors": [
                    {
                        "first": "T",
                        "middle": [
                            "W V"
                        ],
                        "last": "Janssen",
                        "suffix": ""
                    }
                ],
                "year": 1980,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Janssen, T.W.V. 1980 On problems concerning the quantification rules in Montague grammar. In Roher, G., Ed., Time, Tense, and Quantifiers. Tuebingen, Max Niemeyer Verlag.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "An efficient recognition and syntax-analysis algorithm for context-free languages",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Kasami",
                        "suffix": ""
                    }
                ],
                "year": 1965,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kasami, T. 1965 An efficient recognition and syntax-analysis algorithm for context-free languages. Science Report AFCRL- 65-758. Bedford, MA: Air Force Cambridge Research Labora- tory.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Adaptation of Montague grammar to the requirements of parsing",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "P J"
                        ],
                        "last": "Landsbergen",
                        "suffix": ""
                    }
                ],
                "year": 1980,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Landsbergen, S.P.J. 1980 Adaptation of Montague grammar to the requirements of parsing. M.S. 11.646. Eindhoven, The Netherlands: Philips Research Laboratories.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Universal grammar (UG)",
                "authors": [
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Montague",
                        "suffix": ""
                    }
                ],
                "year": 1970,
                "venue": "Theoria",
                "volume": "36",
                "issue": "",
                "pages": "373--398",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Montague, Richard 1970 Universal grammar (UG). Theoria 36, 373-398.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "The proper treatment of quantification in ordinary English",
                "authors": [
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Montague",
                        "suffix": ""
                    }
                ],
                "year": 1973,
                "venue": "Hintikka, Moravcsik, and Suppes",
                "volume": "",
                "issue": "",
                "pages": "247--270",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Montague, Richard 1973 The proper treatment of quantification in ordinary English. In Hintikka, Moravcsik, and Suppes 1973. Reprinted in Montague 1974, 247-270.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Formal Philosophy: Selected Papers of Richard Montague. Edited and with an introduction by Richmond Thomason",
                "authors": [
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Montague",
                        "suffix": ""
                    }
                ],
                "year": 1974,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Montague, Richard 1974 Formal Philosophy: Selected Papers of Richard Montague. Edited and with an introduction by Rich- mond Thomason. New Haven, CT: Yale University Press.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Translating English into logical form",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "J"
                        ],
                        "last": "Rosenschein",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "M"
                        ],
                        "last": "Shieber",
                        "suffix": ""
                    }
                ],
                "year": 1982,
                "venue": "Proceedings 20th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "1--8",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rosenschein, S.J. and Shieber, S.M. 1982 Translating English into logical form. In Proceedings 20th Annual Meeting of the Associa- tion for Computational Linguistics, 1-8.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Observations on context-free parsing",
                "authors": [
                    {
                        "first": "B",
                        "middle": [
                            "A"
                        ],
                        "last": "Sheil",
                        "suffix": ""
                    }
                ],
                "year": 1976,
                "venue": "Statistical Methods in Linguistics",
                "volume": "",
                "issue": "",
                "pages": "71--109",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sheil, B.A. 1976 Observations on context-free parsing. Statistical Methods in Linguistics 71-109.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Syntax and semantics in parsing: an application to Montague grammar",
                "authors": [
                    {
                        "first": "David",
                        "middle": [
                            "S"
                        ],
                        "last": "Warren",
                        "suffix": ""
                    }
                ],
                "year": 1979,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Warren, David S. 1979 Syntax and semantics in parsing: an appli- cation to Montague grammar. Ph.D. thesis. Ann Arbor, MI: University of Michigan.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Understanding Natural Language",
                "authors": [
                    {
                        "first": "T",
                        "middle": [
                            "A"
                        ],
                        "last": "Winograd",
                        "suffix": ""
                    }
                ],
                "year": 1972,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Winograd, T.A. 1972 Understanding Natural Language. New York: Academic Press.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "The Lunar Sciences Natural Language Information System",
                "authors": [
                    {
                        "first": "W",
                        "middle": [
                            "A"
                        ],
                        "last": "Woods",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "M"
                        ],
                        "last": "Kaplan",
                        "suffix": ""
                    }
                ],
                "year": 1971,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Woods, W.A. and Kaplan, R.M. 1971 The Lunar Sciences Natural Language Information System. BBN Report No. 2265. Cam- bridge, MA Bolt Beranek and Newman.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "An experimental parsing system for transition network grammars",
                "authors": [
                    {
                        "first": "W",
                        "middle": [
                            "A"
                        ],
                        "last": "Woods",
                        "suffix": ""
                    }
                ],
                "year": 1973,
                "venue": "Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "111--154",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Woods, W.A. 1973 An experimental parsing system for transition network grammars. In Rustin, R., Ed., Natural Language Processing. New York: Algorithmics Press, Inc., 111-154.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "text": "Subnet of the ATN for PTQ, Example 1: Bill walks",
                "type_str": "figure",
                "uris": null
            },
            "FIGREF1": {
                "num": null,
                "text": "Figure 3. continued",
                "type_str": "figure",
                "uris": null
            },
            "FIGREF2": {
                "num": null,
                "text": "index d eA, a set of syntactic rules S, and a sentence category index 80EA. A language is a pair <~2,R> where ~2 is a disambiguated language and R is a binary relation with domain included in A. Given a disambiguated language ~2 = <A, F~, X#, S, 80>~EF, ~EA, a disambiguated language f~v = <A, F~,Xts,, S t , 80v>~,EF, 8'EA' is a refinement of 12 if there is a refinement function d:AW-.A from the category indices of fl' to those of f~ such that 1) Xt~ _c Xd(8,) ' 2) If <F~,<~lW,~2t ..... 8n1>,St> E S v, then <F~,<d(81'),d(~2') ..... d(~n')>, d(8')> E S', and 3) d(80') = 60 . (Note that the proper expressions A, the operation indexing set F, and the operations Fy of ~ and 12 ~ are the same.)",
                "type_str": "figure",
                "uris": null
            },
            "TABREF1": {
                "type_str": "table",
                "text": "has been returned to the top level, but it does not cover the whole sentence, so the path fails and the This is the second arc from the TS node of",
                "content": "<table><tr><td/><td colspan=\"2\">Bill walks and Mary runs</td><td/><td/></tr><tr><td>PUSH:</td><td/><td/><td colspan=\"2\">Bucket:</td><td>Contents:</td></tr><tr><td colspan=\"2\"># CAT Str</td><td>Env</td><td colspan=\"2\">from Str</td><td>Env</td><td>Tree</td></tr><tr><td>1 TS</td><td>Bw&amp;Mr</td><td>null</td><td/><td/></tr><tr><td>2 TE</td><td>Bw</td><td>null</td><td/><td/></tr><tr><td/><td/><td/><td>2</td><td>w&amp;Mr</td><td>null</td><td>Bill</td></tr><tr><td>3 IV</td><td>w</td><td>null</td><td/><td/></tr><tr><td/><td/><td/><td>3</td><td>&amp;Mr</td><td>null</td><td>walk</td></tr><tr><td/><td/><td/><td>1</td><td>&amp;Mr</td><td>null</td><td>($4 Bill walk)</td></tr><tr><td colspan=\"3\">[A tree execution method backs up.]</td><td/><td/></tr><tr><td/><td/><td/><td>2</td><td>w&amp;Mr</td><td>(he0 B)</td><td>he0</td></tr><tr><td>4 IV</td><td>w&amp;Mr</td><td>(he0 B)</td><td/><td/></tr><tr><td/><td/><td/><td>4</td><td>&amp;Mr</td><td>null</td><td>walk</td></tr><tr><td/><td/><td/><td>1</td><td>&amp;Mr</td><td>(he0 B</td><td>($4 he0 walk)</td></tr><tr><td colspan=\"6\">[Again a tree has been returned to the top level, but it does not span the whole string, nor is the returned</td></tr><tr><td colspan=\"3\">environment null, so we fail.]</td><td/><td/></tr><tr><td/><td/><td/><td>1</td><td>&amp;Mr</td><td>null</td><td>(S14,0 Bill ($4 he0 walk))</td></tr><tr><td colspan=\"6\">[Again we are the top level; again we do not span the whole string, so again we fail.]</td></tr><tr><td>5 TS</td><td>Bw&amp;Mr</td><td>null</td><td/><td/></tr><tr><td>[</td><td/><td/><td/><td/></tr></table>",
                "num": null,
                "html": null
            },
            "TABREF3": {
                "type_str": "table",
                "text": "",
                "content": "<table><tr><td>g.</td><td/><td/><td colspan=\"3\">S14,0 Bill</td><td/><td/><td/><td/><td/><td/><td>$11</td></tr><tr><td>I he0</td><td/><td>I $4 I</td><td>I walk</td><td>I $11 I</td><td>I hel</td><td colspan=\"2\">I S14,1 Mary I $4 I run I</td><td colspan=\"2\">I heO</td><td colspan=\"2\">I $14,0 Bill i $4 I walk I</td><td>I</td><td>I Mary</td><td>I $4 I</td><td>I run</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>b.</td><td/><td/><td colspan=\"2\">S14,0 Mary</td></tr><tr><td/><td/><td/><td colspan=\"4\">S14,1 Mary</td><td/><td/><td/><td/><td colspan=\"2\">I $11 $11</td></tr><tr><td>i Bill e, I Bill heO I</td><td colspan=\"7\">$4 I S14,0 Bill i walk I Sll $11 i Mary I I 1 I $4 L walk I he0 I S14,0 Mary $4 t I run I $4 I I run I $4 I i I S4 I I hel walk I run</td><td>I Bill I he0 he0 I</td><td colspan=\"4\">I $4 I S14,0 Bill I walk I I $4 I walk I $4 $14,0 Bill I I he0 I Mary i $11 I I I I walk he 1 $4 I i S14,1 Mary I I $4 I run run I f $4 i I I run</td></tr><tr><td>e.</td><td/><td/><td colspan=\"3\">S14,1 Mary</td><td/><td/><td/><td/><td/><td colspan=\"2\">S14,0 Bill</td></tr><tr><td/><td/><td/><td colspan=\"3\">I $14,0 Bill</td><td/><td/><td/><td/><td/><td colspan=\"2\">I S14,1 Mary</td></tr><tr><td/><td/><td/><td colspan=\"2\">I $11</td><td/><td/><td/><td/><td/><td/><td>I Sll</td></tr><tr><td/><td>I $4</td><td/><td/><td>I</td><td colspan=\"2\">I $4</td><td/><td/><td/><td>I $4</td><td>I</td><td>I $4</td></tr><tr><td>I he0</td><td>I</td><td colspan=\"2\">I walk</td><td>I hel</td><td/><td>I</td><td>I run</td><td>I he0</td><td/><td>I</td><td>wJlk</td><td>I</td><td>I</td></tr></table>",
                "num": null,
                "html": null
            },
            "TABREF5": {
                "type_str": "table",
                "text": "This again duplicates a bucket and its contents, so we fail back to the second FAKEPOP from PUSH 5.",
                "content": "<table><tr><td/><td/><td/><td>1</td><td>c</td><td>(he0 B)</td><td>walk,'(Vx0)&amp;run,'(m)</td><td>n</td></tr><tr><td colspan=\"4\">[Duplicate bucket and translation, so fail.]</td><td/></tr><tr><td/><td/><td/><td>1</td><td>c</td><td>null</td><td>walk,'(b)&amp;run,'(m)</td><td>n</td></tr><tr><td colspan=\"2\">[Duplicate, so fail.]</td><td/><td/><td/></tr><tr><td/><td/><td/><td>10</td><td>e</td><td>null</td><td>run,'(m)</td><td>n</td></tr><tr><td colspan=\"2\">[Duplicate, so fail.]</td><td/><td/><td/></tr><tr><td/><td/><td>(FAKEPOP)</td><td/><td>&amp;Mr</td><td>null</td><td>walk,'(b)</td><td>y</td></tr><tr><td>6 TS</td><td>Mr</td><td>null</td><td/><td/></tr><tr><td>7 TE</td><td>Mr</td><td>null</td><td/><td/></tr><tr><td/><td/><td colspan=\"2\">7</td><td>r</td><td>null</td><td>Mary</td><td>y</td></tr><tr><td>8 IV</td><td>r</td><td>null</td><td/><td/></tr><tr><td/><td/><td/><td>8</td><td>E</td><td>null</td><td>run'</td><td>y</td></tr><tr><td/><td/><td/><td>6</td><td>e</td><td>null</td><td>run,'(m)</td><td>y</td></tr><tr><td/><td/><td/><td>1</td><td>~</td><td>null</td><td>walk,'(b)&amp;run,'(m)</td><td>y</td></tr><tr><td colspan=\"6\">[This is a successful parse. 
The top level prints out the translation and then the execution method fails back.]</td></tr><tr><td/><td/><td/><td>7</td><td colspan=\"2\">(he0 M) null</td><td>~PP{x0}</td><td>y</td></tr><tr><td>9 IV</td><td>r</td><td>(he0 M)</td><td/><td/></tr><tr><td/><td/><td/><td>9</td><td>e</td><td>null</td><td>run'</td><td>y</td></tr><tr><td/><td/><td/><td>6</td><td>E</td><td>null</td><td>run,'(Vx0)</td><td>y</td></tr><tr><td/><td/><td/><td>1</td><td>E</td><td>(he0 M)</td><td>walk,'(b)&amp;run,'(Vx0)</td><td>y</td></tr><tr><td colspan=\"6\">[Fail because we are at the top level and the environment is not null.]</td></tr><tr><td/><td/><td/><td>1</td><td>~</td><td>null</td><td>walk,'(b)&amp;run,'(m)</td><td>n</td></tr><tr><td/><td/><td/><td/><td/><td>So</td></tr><tr><td/><td/><td>]</td><td/><td/></tr><tr><td/><td/><td/><td>6</td><td>E</td><td>null</td><td>run,'(m)</td><td>n</td></tr><tr><td colspan=\"3\">[use the other bucket: 1-&amp;Mr-(he0 B).]</td><td/><td/><td>Now we</td></tr><tr><td/><td/><td>(FAKEPOP)</td><td colspan=\"2\">1(5) &amp;Mr</td><td>(he0 B)</td><td>walk,'(Vx0)</td><td>y</td></tr><tr><td>10 TS</td><td>Mr</td><td>(he0 B)</td><td/><td/></tr><tr><td>11 TE</td><td>Mr</td><td>(he0 B)</td><td/><td/></tr><tr><td/><td/><td/><td>11</td><td>r</td><td>null</td><td>~P{xO}</td><td>y</td></tr><tr><td>12 IV</td><td>r</td><td>(he0 B)</td><td/><td/></tr><tr><td/><td/><td/><td>12</td><td>\u00a2</td><td>null</td><td>run'</td><td>y</td></tr><tr><td/><td/><td/><td>10</td><td>e</td><td>null</td><td>run,'(m)</td><td>y</td></tr><tr><td/><td/><td/><td>1</td><td>E</td><td>(he0 B)</td><td>walk,'(Vx0)&amp;run,'(m)</td><td>y</td></tr><tr><td colspan=\"6\">[Fail at the top level since the environment is not null.]</td></tr><tr><td/><td/><td/><td>1</td><td>e</td><td>(he0 B)</td><td>walk,'(b)&amp;run,'(m)</td></tr><tr><td colspan=\"6\">[This duplicates a bucket and its contents, so we do not POP it but fail back.]</td></tr><tr><td/><td/><td/><td>ll</td><td>r</td><td>(hel 
M)</td><td>~PP{xl}</td><td>Y</td></tr><tr><td>13 IV</td><td>r</td><td>(he0 B)</td><td/><td/></tr><tr><td/><td/><td>(hel M)</td><td/><td/></tr><tr><td/><td/><td/><td>13</td><td>e</td><td>null</td><td>run'</td></tr><tr><td/><td/><td/><td>10</td><td>e</td><td>(hel M)</td><td>run,'(~xl)</td></tr><tr><td/><td/><td/><td>1</td><td>e</td><td>(he0 B)</td><td>walk,'(~x0) &amp;run,'(Vx 1 )</td></tr><tr><td/><td/><td/><td/><td/><td>(hel M)</td></tr><tr><td colspan=\"5\">[Fail at top level because environment is not null.]</td></tr><tr><td/><td/><td/><td>1</td><td>c</td><td>(hel M)</td><td>walk,'(b)&amp;run,'(Vxl)</td></tr><tr><td colspan=\"5\">[Fail at top level because environment is not null.]</td></tr><tr><td/><td/><td/><td>1</td><td>e</td><td>null</td><td>walk,'(b)&amp;run,'(m)</td></tr><tr><td colspan=\"4\">[Duplicate bucket and translation, so fail.]</td><td/></tr></table>",
                "num": null,
                "html": null
            },
            "TABREF6": {
                "type_str": "table",
                "text": "",
                "content": "<table><tr><td colspan=\"2\">David I $4</td><td>e.</td><td colspan=\"3\">S14,1 Mary I $14,0 Bill I $11 I</td><td>I $4</td><td/><td/><td>I $4</td><td colspan=\"4\">$14,0 Bill I $14,1 Mary I Sll I</td><td>I $4</td></tr><tr><td>I he0</td><td>I</td><td colspan=\"2\">I walk</td><td/><td>I he 1</td><td>I</td><td>I run</td><td>I he0</td><td>I</td><td>I walk</td><td/><td colspan=\"2\">I he 1</td><td>I</td><td>I run</td></tr><tr><td/><td/><td>g.</td><td colspan=\"3\">S14,1 Mary</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td>I</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td>$4</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td>I he 1</td><td/><td>I</td><td>I run</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td>a.</td><td/><td>Sll</td><td/><td/><td/><td/><td/><td>b.</td><td colspan=\"4\">S14,0 Mary</td></tr><tr><td>1 Bill</td><td>I $4 I</td><td colspan=\"2\">I walk</td><td>I</td><td>I Mary</td><td>I $4 I</td><td>I run</td><td>I Bill</td><td>I $4 I</td><td colspan=\"2\">I walk</td><td>Sll I</td><td>I he0</td><td>1 $4 I</td><td>I run</td></tr><tr><td/><td/><td/><td colspan=\"3\">S14,0 Mary</td><td/><td/><td/><td/><td>d.</td><td colspan=\"3\">$14,0 Bill</td></tr><tr><td/><td/><td/><td/><td>$4</td><td/><td/><td/><td/><td/><td/><td/><td>$11</td><td/></tr><tr><td/><td/><td>I he0</td><td/><td>I</td><td>I run</td><td/><td/><td/><td>I $4</td><td/><td/><td>I</td><td/><td>I $4</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>I he0</td><td>I</td><td colspan=\"2\">I walk</td><td/><td>I Mary</td><td>I</td><td>I run</td></tr></table>",
                "num": null,
                "html": null
            }
        }
    }
}