{
    "paper_id": "C65-1018",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T13:12:20.521654Z"
    },
    "title": "",
    "authors": [],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper is concerned with the design of a processor capable of formalizing English language descriptions of problems in the sententlal calculus. The emphasis is on the design of a system with natural language processing capabilities, but the formal languages specified are oriented to the problem context. A series of automata are specified to carry out the necessary functions. The automata identifythe premises in the problem strings~ specify the appropriate logical connectives among the premises and determine which premises are meaning-equivalent. The syntax of each automaton is defined and examples are used to illustrate their functioning. The automata accept statements in the language L1, the set of English statements of problems in the sententlal calculus. The individual premises p @ L1 are recognized by the syntax~, where ~ is chosen so that the language L2 recognized by it is a subset of L1. Furthermore, the strings in L2 are restricted to the declarative sentences. Once the premises and their logical connectives have been identified, those that are meaningequivalent are located in two additional steps. First the L2 description of the string is mapped into a string in L3. The L3 language consists of a limited set of canonical forms that ease the problem of establishing meaning equivalence of premises. Finally, the automaton applies heuristically a sequence of problem-orlented and meaning-preserving transformations in order to establish meaning-equivalence. Two premises are taken to be meaning-equivalent if one can be deduced from the other. Otherwise~ they are taken to be not meaning-equlvalent.",
    "pdf_parse": {
        "paper_id": "C65-1018",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper is concerned with the design of a processor capable of formalizing English language descriptions of problems in the sententlal calculus. The emphasis is on the design of a system with natural language processing capabilities, but the formal languages specified are oriented to the problem context. A series of automata are specified to carry out the necessary functions. The automata identifythe premises in the problem strings~ specify the appropriate logical connectives among the premises and determine which premises are meaning-equivalent. The syntax of each automaton is defined and examples are used to illustrate their functioning. The automata accept statements in the language L1, the set of English statements of problems in the sententlal calculus. The individual premises p @ L1 are recognized by the syntax~, where ~ is chosen so that the language L2 recognized by it is a subset of L1. Furthermore, the strings in L2 are restricted to the declarative sentences. Once the premises and their logical connectives have been identified, those that are meaningequivalent are located in two additional steps. First the L2 description of the string is mapped into a string in L3. The L3 language consists of a limited set of canonical forms that ease the problem of establishing meaning equivalence of premises. Finally, the automaton applies heuristically a sequence of problem-orlented and meaning-preserving transformations in order to establish meaning-equivalence. Two premises are taken to be meaning-equivalent if one can be deduced from the other. Otherwise~ they are taken to be not meaning-equlvalent.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "The recent evolution of programming languages has tended to improve communication between man and computer. The use of mnemonics~ automatic storage allocation~ English-like operators (such as in COBOL) and problem-oriented languages has greatly facilitated the task of the programmer. Thus, the solution algorithm for a large class of computational problems can be defined with relative ease in languages such as FORTRAN and ALGOL, specifically designed for these classes of problems.",
                "cite_spans": [
                    {
                        "start": 195,
                        "end": 201,
                        "text": "COBOL)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "This paper describes an attempt to further simplify the communication between programmer and computer by defining a system which can produce a formal description from its natural (verbal) input. 2",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "In order to study this approach a specific problem area was chosen, the propositional or statement calculus. It will be evident that the problem area chosen has influenced the design of the system; nonetheless it should be clear that the linguistic capabilities of the system are general rather than specific to the problem context.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "In designing this processor, two major abilities are required.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "First, the processor must be able to identify each elementary premise and all logical connectives.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "It must also determine which premises are to be taken as equivalent.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "i This research was supported by Grant G-17951 of the National Science Foundation. A majority of the system has been programmed in the list processing language IPL-V (Newell, 1961) .",
                "cite_spans": [
                    {
                        "start": 166,
                        "end": 180,
                        "text": "(Newell, 1961)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "The processor is composed of three series coupled automata (see Fig. 1 ). The first automaton, A1, accepts as its inputs the language L1, where L1 is the set of all English language statements of problems in the propositional calculus. This automaton is concerned with the identification of the premises and logical connectives of a problem. This is achieved by using a syntax ~ capable of recognizing strings in L2. where L2 is a subset of L1. The syntax ~ consists of a hierarchy of syntaxes; a phrase structure syntax ~idesigned to recognize a subset of English composed of simple declarative sentences and the set of' transformations specified by~ T.I",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 64,
                        "end": 70,
                        "text": "Fig. 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "The equivalent premises are identified by the automata A2 and A3. The automaton A2 maps a premise, identified by AI~ into a canonical form specified by the syntax C that defines the language L3. This step is designed to facilitate the distinction of equivalent premises. Finally A3 applies a sequence of meaning preserving transformations from the set TO = ~TI,T2,... ~ Tm~ on the string (~r,~'s ~ L3 such that if:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "TiTj'''T% (~r) :~s with T k C TO the two strings are considered meaning equivalent. Should the system be unable to find a deduction satisfying these conditions or under certain other heuristically chosen criteria the strin6~s are asslnned to represent different premises.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "In order to test the system described in this paper, problems were drawn from Stoll (1961) . Some will be used later to illustrate the capabilities and inadequacies of the present system. 1 Chomsky's discussion of transformations and the inadequacies of various models for natural languages can be found in the monograph \"Syntactic Structure s\". Each of the automata will be discussed in two ways, first in terms of its syntax. Finally the information flow for its implementation as a computer program will be outlined.",
                "cite_spans": [
                    {
                        "start": 78,
                        "end": 90,
                        "text": "Stoll (1961)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": null
            },
            {
                "text": "The automaton A1, as mentioned in the previous section, consists of two completely different syntactic mechanisms. The system includes a phrase structure syntax designed to recognize an extremely restricted subset of the English language, simple declarative sentences. The syntax of the processor also includes a limited set of transformations chosen to enhance the power of the language generated, but also specifically chosen for the problem context.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Characteristics of the Natural Language Processor (AI)",
                "sec_num": null
            },
            {
                "text": "If we consider the syntax of A1, ~ , as consisting of~l and T we have defined a hierarchy of languages:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Characteristics of the Natural Language Processor (AI)",
                "sec_num": null
            },
            {
                "text": "Here L1 consists of all the legal problem statements; L2 consists of the set of strings recognized by~ ; and L~l consists of all the strings recognized by the syutax ~. Thus, the syntax ~ of the automaton A1",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "is really composed of two disjoint sets of rewriting rules,~l and ~T.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "The syntax ~l is a phrase structure crammar designed to generate or recognize a subset of English Composed of simple declarative sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "The syntax ~T contains a set of transformations designed for the purposes of isolating premises and specifying logical connectives. This hierarchy can be visualized in Figure 2 . Initially~ we shall describe the class of sentences recognized by 91~ and then characterize the strings recognized by P. From the following discussion it will be made clear that we are building a recognizer rather than a generator. The automaton A1 will not perform syntactic analysis below the level of the alphabet (i.e., words) of the language. Thus~ the processor w\u00b0uld recognize:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 168,
                        "end": 176,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "The bridge was high",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "The bridges was high as the same sentence since the differences are at a level below that specified by its syntax.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "The processor consists of an alphabet A, where: xi x j for i J",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "A = N u D u PN u ADJ u",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "where ~ represents the empty set. The occurrence of an element of the alphabet in more than one word class is known as homography and is common to the natural languages.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "For purposes of derivation, we distinguish between the elements of the alphabet, to be known as the \"terminal\" elements, and the symbols the nonterminals.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "from the syntax such as S, NP, ADJ, etc., which will be referred to as",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "The word assignments might be as shown in Table 1 Although the processor is limited in the size of the available dictionary , for purposes of discussion no limitations will be assumed.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 42,
                        "end": 49,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "In addition it is necessary to specify the syntax of the recognizer, which uses the rewriting rules of the axiomatic ~system ~l in Table 2 . Examining the syntax ~l, we see that it meets all the requirements of a phrase structure grammar. Also, ~l generates several classes of strings characterized by the verb type. Since this classification will be fundamental to the design of A2, we shall give some examples in L2 and later show the mapping of A2. The syntax~l identifies four verb types, equational verbs, intransitive verbs, transitive verbs, and factitive verbs with their corresponding predicates. The following examples show some of the possible sentences:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 131,
                        "end": 138,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "Equational verb:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "(i) John is home.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "(ii) John is tall.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "(iii) John is by the house.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "(iv) John is taller than Peter.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "A derivation of (ii) in the syntax~l is",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "(S(NP(N John)) (VP(VEQ(VMEQ is)) (PREDEQ(ADJ tall) )) )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "Intransitive verb: Imperative sentences:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "Go home.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "Interrogative sentences:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "Is John coming home?",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "Passive sentences:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "Home is where John should be.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "L1 ~ L2 ~ I~l",
                "sec_num": null
            },
            {
                "text": "If John should come home...",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conditional sentences:",
                "sec_num": null
            },
            {
                "text": "John will go home and Mary will stay.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "Complex sentences:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "John, should he so desire, will go home.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "In order to make the processor A1 useful in the problem context, it is necessary to increase the class of strings in L2. In contrast to the syntax ~i, which uses the rewriting rules on the nonterminals in the deduction string, the transformation set rT is designed to operate on the derivations in ~i. Generally, transformations have been discussed in terms of generators. Attention has been focused on increasing the class of strings that a formal language can generate (39).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "However, our problem is to use ~T in order to simplify the class of strings that ~l will have to recognize. Thus, our transformation set rT should decompose the string John will go home and Mary will stay.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "into the following simpler strings:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "(1) John will go home.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "(il) will stay.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
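            {
                "text": "The decomposition above can be sketched as follows (a hedged sketch; the function name and the single connective handled are assumptions):

```python
# Minimal sketch of a TCOM-like transformation: split a compound
# string on the connective 'and', returning the simpler strings
# together with the logical connective for the premise.
def tcom(string):
    body = string.rstrip('.')
    if ' and ' in body:
        s1, s2 = body.split(' and ', 1)
        return [s1 + '.', s2 + '.'], 'and'
    return [string], None
```
",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },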
            {
                "text": "Since we are interested in formalizing the natural language inputs as statements in the sententlal calculus, the transformations will also give us information as to the appropriate logical connectives for the premise. Thus, in the previous example our processor could be expected to define a statement of the form:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "P;kg",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "In order to explore the powerful linguistic possibilities of transformations, a limited number were chosen. We shall now define the transformations and show how the linguistic capabilities of A1 have been increased.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "The transformation set~T presently contains as its axioms:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "T = ~TNOT, TCOM, TCOND~",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "In order to specify a transformation, we must not only define the structural changes it produces but also the class of strings to which it is applicable. The transformations~ as defined in~T were adapted for A1. Since we are not interested in generating grammatically correct English sentences, but rather mapping the input strings into a form recognizable to ~l, it is possible to omit the transformations for tenses because they operate at a level lower than that of the terminals. By implication~ 1 will process strings that are not grammatically correct. Thus, if A1 were presented with the sentence:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "If it were cold tomorrow~ ....",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "the transformation TCOND will give as its output:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "It were cold tomorrow.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "This premise would still be processed althouch it is grammatically incorrect.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "Another difference between the transformations as specified by Chomsky~ and those used by A1 is in the direction of the mapping.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "The ~T transformations have L2 as their domain and the kernel strings generated by ~l as their range. This is the inverse of the mappings considered by Chomsky (1957) .",
                "cite_spans": [
                    {
                        "start": 152,
                        "end": 166,
                        "text": "Chomsky (1957)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "TNOT: is defined on strings of the form TNOT(~): John will hit Mary.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "(~2: Today is not cold.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "TNOT~--2) : Today is cold.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "~3: Tomorrow will not be cold.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compound sentences:",
                "sec_num": null
            },
            {
                "text": "Tomorrow will be cold.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TNOT~3):",
                "sec_num": null
            },
            {
                "text": "~-4: John never suffers.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TNOT~3):",
                "sec_num": null
            },
            {
                "text": "TNOT(q-4) : John suffers.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TNOT~3):",
                "sec_num": null
            },
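            {
                "text": "The examples above suggest a simple sketch of TNOT (an assumed implementation; the paper's actual structural description is richer):

```python
# TNOT deletes the first negation element from the string; the
# trigger words follow the test D3 described later ('not', 'never').
def tnot(string):
    words = string.split()
    for i, w in enumerate(words):
        if w.lower() in ('not', 'never'):
            return ' '.join(words[:i] + words[i + 1:])
    return string
```
",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TNOT~3):",
                "sec_num": null
            },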
            {
                "text": "TCOM: operates on strings in the following domain only:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TNOT~3):",
                "sec_num": null
            },
            {
                "text": "(i) \u2022 .+Sl+and+S2+ \u2022 \u2022 \u2022 ( ii)..+Sl+ ,+ s2+... ( iii)..+SI+oreS2+... ( iv)..+Sl+then+S2+... (vi) Either +Sl+Or+S2+... (vii) Therefore+,+Either+Sl+or+S2+ \u2022. ;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TNOT~3):",
                "sec_num": null
            },
            {
                "text": "The range of the function is any string with the following format:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TNOT~3):",
                "sec_num": null
            },
            {
                "text": "S I S 2",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TNOT~3):",
                "sec_num": null
            },
            {
                "text": "Here the information between \"SI\" and \"$2\" is used by the processor only and has as its range the following forms:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TNOT~3):",
                "sec_num": null
            },
            {
                "text": "\u2022 .+Sl+ ...",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TNOT~3):",
                "sec_num": null
            },
            {
                "text": "As in the other transformations its application defines the logical connectives for A1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "\u2022 .+S2+...",
                "sec_num": null
            },
            {
                "text": "We can see the effect of TCOND on the following strings: In this example the resultant strings are not recognizable by ~i. Thus~ \"start to feel do~ncast\" has its subject implied by the preceding string, and could be thought of as \"I start to feel downcast\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "\u2022 .+S2+...",
                "sec_num": null
            },
            {
                "text": "Some of the difficulties caused by the transformations can be overcome by AI.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "\u2022 .+S2+...",
                "sec_num": null
            },
            {
                "text": "In order to design a processor of the type described in the previous section it is necessary to specify therelationship between the recognition rules ~l of the phrase structure grammar and the rewriting rules ~T of the set of transformations. If John went to the store then Mary went home.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },
            {
                "text": "This is clearly a case in which we sho~d apply TCOND~ T in order to obtain: S1 -John went to the store.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },
            {
                "text": "$2 -Mary went home.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },
            {
                "text": "However, the processor cannot find S1 and S2 because they are defined in terms of ~ 1 which cannot determine S1 and $2 since it cannot analyze strings such as \"If John went to the store...\". This vicious circle has been resolved by determining heuristically when the transformations should be applied. If the strings resulting from the application of the transformations cannot be analyzed by ~i~ the system attempts to apply the transformations again.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },
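            {
                "text": "The heuristic just described can be sketched as a retry loop (control flow assumed; the parse routine and transformations here are stand-ins, not the paper's D- and E-routines):

```python
# Heuristic resolution of the circle described above: try to parse;
# on failure, apply a transformation and retry on the resulting
# simpler strings, up to a small depth bound.
def analyze(string, parse, transforms, depth=3):
    if depth == 0:
        return None
    tree = parse(string)
    if tree is not None:
        return [tree]
    for t in transforms:
        parts = t(string)
        if parts != [string]:
            results = []
            for part in parts:
                sub = analyze(part, parse, transforms, depth - 1)
                if sub is None:
                    return None
                results.extend(sub)
            return results
    return None
```
",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },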
            {
                "text": "The general hierarchy of the programs can be found in Figure ] ~-.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 54,
                        "end": 62,
                        "text": "Figure ]",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },
            {
                "text": "The program DO embodies the essential features of the automaton AI. A brief description of the various sub-routines involved will serve to illustrate the workings of the processor and the difficulties that it might encounter.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },
            {
                "text": "The automaton A1 can be considered as having two quite distinct functions. Initially, certain key words are marked in the problem input (giving rise to the hypothesized input string) and later the set of transformations are used in conjunction with the marked words to generate possible premises (to be called \"input strings\").",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },
            {
                "text": "The necessary information can be more fully explained by considering a program DO designed to implement A1 (see Figure 3) . The program DO initially calls the sub-routine D15 which performs a left-toright scan on the problem string. All elements of the set MTO (where MTO = ~if, then, and~ or, not, never, either, therefore.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 112,
                        "end": 121,
                        "text": "Figure 3)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },
            {
                "text": "then, ~} the last two elements are the symbols \", then\" and \",\") are marked. attempts to apply the transformations TNOT, TCOND, or TCOM by using the test routines D3, D4 or D5 in transferring control to D6, D7 or DS~ respectively. D3 transfers control to D6 when \"no__~t\" or \"never\" (the underlining is used in this section to indicate the symbols as marked.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },
            {
                "text": "are in the h.i.s.; D6 deletes the marked symbol from the h.i.s. The sub-routine D5 is only applied when the h.i.s, begins with \"if\"\" it in turn transfers control to D7 which deletes the first of the marked \" \" .... then\" that it finds in the h.i.s. symbols \"then\", \"therefore , \u00b1 or ~_ As indicated in the above examples the parsing of the i.s. is attempted by sub-routine E0, using the syntax specified in Table 2 . The presently implemented version of EO uses a bottom-to-top search in the sense that the parsing tree always begins by analyzing the input string 1 rather than the set of productions.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 407,
                        "end": 414,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },
            {
                "text": "In addition, the sub-routine is \"predictive\" in utilizing the productions to and establishing the next syntactic element.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of the Natural Language Processor (Al~",
                "sec_num": null
            },
            {
                "text": "The automaton A2 has as its domain the strings of L2. However, its syntax is based on Reichenbach's methods of linguistic analysis. In this section we will define a convenient formalism~ the predicate form, and discuss its syntax. Later we will discuss how the processor discovers the L3 (predicate function) mapping of an L2 string. In defining the syntax C of A2, it will be shown that U1 was designed in order to simplify i For a review of current parsing algorithms see Bobrow. the mapping into a predicate form. As in~l, the patterns that can be specified by a predicate form depend on the verb. Thus, the forms fall into four basic categories; equational, intransitive, transitive and factitive forms. John is home. John is tall.",
                "cite_spans": [
                    {
                        "start": 474,
                        "end": 481,
                        "text": "Bobrow.",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax of the Predicate Forms (A2)",
                "sec_num": null
            },
            {
                "text": "There is a man.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax of the Predicate Forms (A2)",
                "sec_num": null
            },
            {
                "text": "John is taller than Peter.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax of the Predicate Forms (A2)",
                "sec_num": null
            },
            {
                "text": "The Dodgers win.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax of the Predicate Forms (A2)",
                "sec_num": null
            },
            {
                "text": "The Dodgers win seldom. One special characteristic of the mapping should be noted. It is not necessary that elements be contiguous for them to be bound to the same variable. Thus, the verb \"saw\" and the preposition \"at the track\" are not contiguous in the string yet appear so in the function. This characteristic of the syntax has influenced the design of the processor, as will be made explicit in a later section.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax of the Predicate Forms (A2)",
                "sec_num": null
            },
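            {
                "text": "The non-contiguity point can be sketched as follows (the sentence, slot handling, and word lists are assumed for illustration; this is not the paper's binding routine):

```python
# Non-contiguous elements can be bound to the same slot of the
# predicate form: here 'saw' and 'at the track' both go to the
# predicate even though 'Mary' intervenes in the surface string.
def bind_pred(words, verb, particles):
    pred = [w for w in words if w == verb or w in particles]
    args = [w for w in words if w != verb and w not in particles]
    return ' '.join(pred), args
```
",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax of the Predicate Forms (A2)",
                "sec_num": null
            },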
            {
                "text": "Using the syntax C shown in Table 3 , and the same conventions for ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 28,
                        "end": 35,
                        "text": "Table 3",
                        "ref_id": "TABREF8"
                    }
                ],
                "eq_spans": [],
                "section": "Syntax of the Predicate Forms (A2)",
                "sec_num": null
            },
            {
                "text": "The mapping from L2 to L3 has not been formalized by the syntax C.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(ARaMOD t e))",
                "sec_num": null
            },
            {
                "text": "However, this syntax is implicit in the processor and will be described in the same section.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "(ARaMOD t e))",
                "sec_num": null
            },
            {
                "text": "The predicate forms have been designed to mechanize efficiently the problems of pattern recognition and of equivalence of strings by providing a limited number of canonical forms or patterns to describe a large number of natural language strings. The syntax implicit in the processor for canonical reduction is quite simple as is shown in Table 4 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 339,
                        "end": 346,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Description of the Canonical Form Processor (A?)",
                "sec_num": null
            },
            {
                "text": "It should be noted that the mapping presupposes a description in L2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Description of the Canonical Form Processor (A?)",
                "sec_num": null
            },
            {
                "text": "Another implication is the necessity to order the arguments. The ordering of arguments is not made explicit by the rewriting rules given; however, the ordering is implicit in the processor. The rule followed in ordering arguments is simply defining each one as it is found in a left to right scan of the L2 description. Table 4 The flow diagram of FO, designed to behave like the automaton ~, is described in Figure 4 . Although the syntax does not give a complete description of how the L2 to L3 mapping should be carried out, it will become clear in the descriptions of the subroutines. F1 is essentially a hypothesis generator. It examines the L2 input and decides on an appropriate canonical form. Should it find the string L2 to have an equational verb, the possible canonical forms are:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 320,
                        "end": 327,
                        "text": "Table 4",
                        "ref_id": null
                    },
                    {
                        "start": 409,
                        "end": 417,
                        "text": "Figure 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Description of the Canonical Form Processor (A?)",
                "sec_num": null
            },
            {
                "text": "PRED(~) PRED(ARG) PRED(ARG, ARG) \u2022 3o",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },
            {
                "text": "Intransitive verbs restrict us to the form:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },
            {
                "text": "'~hen the string has a transitive verb, we choose between the canonical forms:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },
            {
                "text": "PRED(ARG, ARG)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },
            {
                "text": "PRED(ARG, ARG, ARG).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },
            {
                "text": "Finally problem strings with factitive verbs must follow the form:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },
            {
                "text": "PRED(ARG, ARG, ARG)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },
            {
                "text": "Sub-routine F1 searches the string and locates the main verb.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },
            {
                "text": "The verb class is noted in order to establish the appropriate forms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },
            {
                "text": "When no verb is located, control is transferred to FlO, which notifies the programmer of the difficulty and stops. Once a verb has been located Fll generates a predicate form. F12 copies the form as the current prediction. The next sub-routine is F2; it binds the words of the problem string to the form. Thus, the words of each NP are bound to an ARG in accordance with a left-to-right scan of the problem string.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },
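            {
                "text": "The F1/F2 cycle described above might be sketched as follows (the arity table and slot names are assumptions based on the canonical forms listed earlier, not the actual sub-routines):

```python
# Pick candidate predicate-form arities from the verb class, then
# bind noun phrases to ARG slots in a left-to-right scan; a form
# fits only when the NPs and ARGs correspond one-to-one.
ARITIES = {
    'VEQ': [0, 1, 2],   # PRED(phi), PRED(ARG), PRED(ARG, ARG)
    'VITR': [1],
    'VTR': [2, 3],
    'VFAC': [3],
}

def bind(noun_phrases, verb_class):
    for arity in ARITIES[verb_class]:
        if len(noun_phrases) == arity:
            slots = ['ARG%d' % i for i in range(arity)]
            return dict(zip(slots, noun_phrases))
    return None  # no form fits; F10 would report the difficulty
```
",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },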
            {
                "text": "~.2nen a one-to-one correspondence is established between the NPs and the ARGs the processor transfers to F14. F14 leaves all the names of the ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "1) PR O(\u00a2)",
                "sec_num": null
            },
            {
                "text": "and erases the additional variables from the ARG and binds them to the ARGMOD. Following the execution of F5 the processor returns to F13.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "YlG. 4 \"",
                "sec_num": null
            },
            {
                "text": "F6 locates the verb. For transitive, intransitive, and factitive verbs all the words in VTR, VITR and VFAC are bound to the PRED of the form.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "YlG. 4 \"",
                "sec_num": null
            },
            {
                "text": "For equational verbs, the processor searches to see if the verb is followed by an ADJC or a PRP; if it is, the ADJC or PRP becomes part of the PRED. F11: The form PRED(\u00a2) is generated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "FIG. 4",
                "sec_num": null
            },
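            The verb-binding behavior of F6 described above can be sketched in Python; this is an illustrative reconstruction only, and the parallel token/word-class lists and the helper name bind_pred are assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of F6: bind the located verb to the PRED.
# For VTR/VITR/VFAC verbs the verb itself becomes the PRED; for an
# equational verb (VEQ) a following ADJC or PRP joins the PRED.
# The word-class tags and list representation are assumed, not from the paper.

def bind_pred(tokens, classes):
    """tokens/classes are parallel lists, e.g. ["is", "larger"], ["VEQ", "ADJC"]."""
    for i, c in enumerate(classes):
        if c in ("VTR", "VITR", "VFAC"):
            return tokens[i]                      # transitive/intransitive/factitive
        if c == "VEQ":
            pred = tokens[i]
            if i + 1 < len(classes) and classes[i + 1] in ("ADJC", "PRP"):
                pred += " " + tokens[i + 1]       # ADJC/PRP becomes part of the PRED
            return pred
    return None                                   # no verb: F10 would report and stop

print(bind_pred(["is", "larger"], ["VEQ", "ADJC"]))  # prints "is larger"
```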
            {
                "text": "F12: PRED(\u00a2) is the current form.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "FIG. 4",
                "sec_num": null
            },
            {
                "text": "Since the NP \"Big John\" is located, this predicate form is not appropriate. The executive returns to F11.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "F11: The form PRED(ARG) is generated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "F12: PRED(ARG) is the current form.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "Since the NPs \"Big John\" and \"Paul\" are located, this form is inappropriate. Control returns to F11. The form PRED(ARG, ARG) is generated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "PRED(ARG, ARG) is the current form.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "The NPs are in one-to-one correspondence with the ARGs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "The variables are bound as PRED (ARG Big John, ARG Paul) and the executive transfers to F14.",
                "cite_spans": [
                    {
                        "start": 32,
                        "end": 56,
                        "text": "(ARG Big John, ARG Paul)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "The names of the ARGs are placed in a pushdown list.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "Since the pushdown list is not empty control passes to F4.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "The first ARG in the pushdown list names \"Paul\". There is no ARGMOD so control passes to F15.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "Pops up the ARG naming \"Paul\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "There is still an ARG name on the pushdown list.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "The ARG names \"Big John\", so the output becomes PRED(ARG Big John (ARGMOD), ARG Paul)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "and then the variables are rearranged as PRED(ARG John (ARGMOD Big), ARG Paul).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "Pops up the last ARG name.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "Since the pushdown list is empty the executive program calls F6.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
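            The pushdown-list trace above (F14 stacks the ARG names, F13 loops while the list is non-empty, F4/F5 move extra NP words into an ARGMOD, F15 pops) can be sketched as follows; the list-of-word-lists representation and the function name attach_argmods are assumptions for illustration.

```python
# Hedged sketch of the F13/F15 pushdown loop: ARG names are stacked (F14),
# each popped ARG is checked for extra words (F4), and any extras are moved
# into an ARGMOD (F5). Data layout is assumed, not taken from the paper.

def attach_argmods(args):
    """args: ARGs in scan order as word lists, e.g. [["Big", "John"], ["Paul"]]."""
    pushdown = list(args)            # F14: names of the ARGs on a pushdown list
    result = []
    while pushdown:                  # F13: continue while the list is non-empty
        arg = pushdown.pop()         # F15: pop up the next ARG name (LIFO)
        if len(arg) > 1:             # F4/F5: extra words become the ARGMOD
            result.append((arg[-1], tuple(arg[:-1])))
        else:
            result.append((arg[0], ()))
    return result

print(attach_argmods([["Big", "John"], ["Paul"]]))
# prints [('Paul', ()), ('John', ('Big',))]
```

Note that, as in the trace, the LIFO discipline processes "Paul" before "Big John".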
            {
                "text": "Since L2 has a VEQ the PRED is bound as PRED is (ARG John (ARGMOD Big), ARG Paul)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "and then a further search is made for an ADJC or PRP.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "The ADJC naming \"larger\" is found, so the predicate function becomes PRED is larger (ARG John (ARGMOD Big), ARG Paul).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F2:",
                "sec_num": null
            },
            {
                "text": "Since a PADV cannot be located and the verb is not transitive (so there can be no PREDTR), the processor calls sub-routine F8.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F7:",
                "sec_num": null
            },
            {
                "text": "F8: The predicate function is printed and the processor halts.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "F7:",
                "sec_num": null
            },
            {
                "text": "Meaning equivalence is determined by A3, which attempts to apply a set of heuristically determined transformations in order to eliminate the differences between the strings \u03c3i and \u03c3j. The set of transformations T0 was chosen on the basis that it was found useful in a large class of problems taken from Stoll. The set T0 does not correctly solve all premise equivalence problems. Some examples will be given",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Recognition of Equivalent Strings (A3)",
                "sec_num": null
            },
            {
                "text": "where it is inadequate.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Recognition of Equivalent Strings (A3)",
                "sec_num": null
            },
            {
                "text": "The recognition of meaning equivalence is postponed until the mapping to L3 is complete. L3 was chosen to determine the pattern classes because the language not only orders the structure of L2, but also shows the dependencies between the elements of the language, and permits us to manipulate easily the L3 representations of \u03c3i and \u03c3j.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Recognition of Equivalent Strings (A3)",
                "sec_num": null
            },
            {
                "text": "The actual recognition of equivalence is determined by the set of transformations T0.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Recognition of Equivalent Strings (A3)",
                "sec_num": null
            },
            {
                "text": "The strings \u03c31 and \u03c32 \u2208 L3 are said to be \"meaning equivalent\" when we can find:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Definition:",
                "sec_num": null
            },
            {
                "text": "Ti(Tj(...(Tm(\u03c31))...)) = \u03c32, where Ti, Tj, ..., Tm belong to the set T0. Where:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Definition:",
                "sec_num": null
            },
            {
                "text": "TO =~ TPRN, TIMP, TTIME, TSYN~",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Definition:",
                "sec_num": null
            },
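            The definition above (equivalence via some composition of transformations drawn from T0) can be realized as a bounded search over compositions; the following Python sketch is illustrative only, with toy string-level transformations standing in for TPRN, TSYN, etc., and the depth bound as an added assumption.

```python
# Toy sketch of the "meaning equivalent" definition: sigma1 and sigma2 are
# equivalent if some composition Ti(Tj(...(Tm(sigma1))...)) of transformations
# from T0 yields sigma2. A bounded breadth-first search over compositions is
# one way to realize this; representation and bound are assumptions.

def meaning_equivalent(s1, s2, T0, max_depth=3):
    """Try every composition of transformations from T0 up to max_depth."""
    frontier = {s1}
    for _ in range(max_depth + 1):
        if s2 in frontier:
            return True
        frontier = {t(s) for s in frontier for t in T0}
    return False

# Stand-ins for TSYN-style transformations (illustrative only):
tsyn = lambda s: s.replace("big", "large")
print(meaning_equivalent("big john", "large john", [tsyn, str.lower]))  # prints True
```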
            {
                "text": "The domain and the range of TPRN are the ARGs of the predicate forms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Definition:",
                "sec_num": null
            },
            {
                "text": "The transformation replaces the current ARG with the corresponding one of the preceding premise. A necessary condition for the application of TPRN is that the first ARG be a pronoun in its L2 representation. The bug crawled along the leaf.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Definition:",
                "sec_num": null
            },
            {
                "text": "The bug in the program was found.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Definition:",
                "sec_num": null
            },
            {
                "text": "He likes to bug me.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Definition:",
                "sec_num": null
            },
            {
                "text": "The word bug takes on a different meaning in each sentence. The mistakes that transformations can lead to should be evident. In some contexts the TSYN might be appropriate while in others it is not.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Definition:",
                "sec_num": null
            },
            {
                "text": "Another type of difficulty that has not been considered in the derivation of meaning equivalent strings is the following: One ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Definition:",
                "sec_num": null
            },
            {
                "text": "difference is found in the strings. A difference in the strings leads the processor to execute G20, G10, and G16. As previously mentioned, G20 searches for synonyms. G10 attempts to reduce differences by finding permutations of the differing ARGMODs. Finally, G16 keeps track of the number of differences in the strings (based on the order and symbols on each ARGMOD list). When all differences are eliminated control is passed to a print routine, G12. Should the number of differences remain constant on successive executions of the G20, G10 and G16 loop, the processor calls sub-routine G15. If the number of differences is decreasing the loop is repeated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "4o",
                "sec_num": null
            },
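            The G20/G10/G16 loop just described can be sketched in Python; this is a hedged reconstruction in which the synonym table stands in for the dictionary D0, and the word-list representation and function names are assumptions rather than the paper's implementation.

```python
# Hedged sketch of the G20/G10/G16 loop: permute ARGMODs toward agreement
# (G10), substitute synonyms (G20), and track the difference count (G16),
# succeeding when it reaches zero (G12) and giving up when it stops
# decreasing (G15). SYNONYMS stands in for the dictionary D0.
from itertools import permutations

SYNONYMS = {"big": "large"}   # assumed contents of D0

def count_diffs(a, b):        # G16: order-and-symbol differences
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

def premises_equivalent(mods1, mods2):
    mods1 = [w.lower() for w in mods1]
    mods2 = [w.lower() for w in mods2]
    prev = None
    while True:
        mods2 = list(min(permutations(mods2),
                         key=lambda p: count_diffs(mods1, list(p))))  # G10
        mods1 = [SYNONYMS.get(w, w) for w in mods1]                   # G20
        d = count_diffs(mods1, mods2)                                 # G16
        if d == 0:
            return True       # G12: print "PREMS EQUIV"
        if prev is not None and d >= prev:
            return False      # G15: differences no longer decreasing
        prev = d

print(premises_equivalent(["Big", "tall"], ["Tall", "large"]))  # prints True
```

Run on the worked example below ("Big tall" vs. "Tall large"), permutation followed by the big/large synonym eliminates both differences, matching the trace.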
            {
                "text": "The following example illustrates the logic of the system: \u03c31: PRED is (ARG John (ARGMOD Big tall), ARG home) \u03c32: PRED is (ARG John (ARGMOD Tall large), ARG home) G0: Calls G1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "4o",
                "sec_num": null
            },
            {
                "text": "Initializes storage.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G1:",
                "sec_num": null
            },
            {
                "text": "Both \u03c31 and \u03c32 have two ARGs so the executive calls G4.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G2:",
                "sec_num": null
            },
            {
                "text": "Since both the ARGs have the PRED \"is\", control is transferred to G7.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G4:",
                "sec_num": null
            },
            {
                "text": "There are no PREDMODs so the processor continues to G8.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G7:",
                "sec_num": null
            },
            {
                "text": "ARGs are checked in order; first \u03c31 and \u03c32 are shown to have the same ARG \"John\", then the second ARGs are both identified as \"home\". Since no difference exists the processor calls G9.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G8:",
                "sec_num": null
            },
            {
                "text": "In the first ARGMOD the difference count is 2 since \"Big tall\" and \"Tall large\" are both different symbols. No second ARGMOD is located for either \u03c31 or \u03c32. The executive program calls G20. G20 attempts to locate \"Tall\" as a synonym for \"Big\" and \"large\" as a synonym for \"tall\", and fails in both cases.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G9:",
                "sec_num": null
            },
            {
                "text": "Notes that the difference count can be decreased by rearranging the ARGMODs as \"Big tall\" and \"Large tall\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G9:",
                "sec_num": null
            },
            {
                "text": "Since the number of differences has decreased from 2 to 1, the executive returns to G20.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G9:",
                "sec_num": null
            },
            {
                "text": "This time the synonym \"Large\" is located for \"Big\"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G9:",
                "sec_num": null
            },
            {
                "text": "(assuming that the synonym is stored in the dictionary DO).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G9:",
                "sec_num": null
            },
            {
                "text": "Since no differences are located by G10, it cannot perform any permutations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G9:",
                "sec_num": null
            },
            {
                "text": "The differences between the ARGMODs of \u03c31 and \u03c32 have been eliminated so a transfer is made to G12.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G9:",
                "sec_num": null
            },
            {
                "text": "The printout \"PREMS EQUIV\" is followed by the fact that the transformation TSYN was necessary on \"Big\" and TPERM on \"Tall large\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G9:",
                "sec_num": null
            },
            {
                "text": "Summary: This completes our description of a processing system for problems in the statement calculus. The system accepts problems as they are normally written in English and attempts to produce a formalized equivalent as its output. It makes use of a series of automata, the first of which attempts to identify the elementary premises and the logical connectives. Two additional automata are used in order to compare premises and to determine whether or not they should be identified as equivalent.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manelski &Krulee h2",
                "sec_num": null
            },
            {
                "text": "As a first step, each premise is mapped into a canonical form which simplifies the identification of equivalent premises.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manelski &Krulee h2",
                "sec_num": null
            },
            {
                "text": "In the second step, pairs of premises are compared.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manelski &Krulee h2",
                "sec_num": null
            },
            {
                "text": "This automaton makes use of a number of meaning-preserving transformations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manelski &Krulee h2",
                "sec_num": null
            },
            {
                "text": "In a sense, two premises are equivalent if one can be derived from the other with the aid of these transformations. Otherwise, the premises are evaluated as not equivalent. Although this processor is limited to a particular class of problems, it was designed with two purposes in mind:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manelski &Krulee h2",
                "sec_num": null
            },
            {
                "text": "as an attempt to simplify the problems of communication between programmer and computer and to clarify those processes by means of which meaning is extracted from natural language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manelski &Krulee h2",
                "sec_num": null
            },
            {
                "text": "For a more complete description and some program listings see Manelski, 1964.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The flow chart (see Figure 5) of G0 was intended to implement A3. Clearly, meaning equivalence, as defined by G0, can only be understood in light of the problem context. Thus, in the formalization of the sentential calculus, we shall consider 1 and 2 1: John will go home. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Structure of the Equivalence Recognizer (A3)",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Syntactic Analysis of English by Computer -A Survey",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Bobrow",
                        "suffix": ""
                    }
                ],
                "year": 1963,
                "venue": "Proceedings of the FJCC",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bobrow, D., \"Syntactic Analysis of English by Computer -A Survey\", Proceedings of the FJCC, 1963.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Introduction to Symbolic Logic and its Applications",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Carnap",
                        "suffix": ""
                    }
                ],
                "year": 1958,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Carnap, R., Introduction to Symbolic Logic and its Applications. New York: Dover, 1958.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Syntactic Structures. 's-Gravenhage: Mouton",
                "authors": [
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Chomsky",
                        "suffix": ""
                    }
                ],
                "year": 1957,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chomsky, N., Syntactic Structures. 's-Gravenhage: Mouton, 1957.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Baseball: an Automatic Question-Answerer",
                "authors": [
                    {
                        "first": "B",
                        "middle": [
                            "F"
                        ],
                        "last": "Green",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Jr",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [
                            "K"
                        ],
                        "last": "Wolf",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Chomsky",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Laughery",
                        "suffix": ""
                    }
                ],
                "year": 1961,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "219--224",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Green, B. F., Jr., Wolf, A. K., Chomsky, C., and Laughery, K., \"Baseball: an Automatic Question-Answerer\". Proceedings, Western Joint Computer Conference. May, 1961, pp. 219-224.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Natural Language Inputs for a Problem-Solving System",
                "authors": [
                    {
                        "first": "G",
                        "middle": [
                            "K"
                        ],
                        "last": "Krulee",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "J"
                        ],
                        "last": "Kuck",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "M"
                        ],
                        "last": "Landi",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "M"
                        ],
                        "last": "Manelski",
                        "suffix": ""
                    }
                ],
                "year": 1964,
                "venue": "Behavioral Science",
                "volume": "",
                "issue": "",
                "pages": "281--288",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Krulee, G. K., Kuck, D. J., Landi, D. M., and Manelski, D. M., \"Natural Language Inputs for a Problem-Solving System\". Behavioral Science. July, 1964, pp. 281-288.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "A Problem Solver with Formal Descriptive Inputs",
                "authors": [
                    {
                        "first": "D",
                        "middle": [
                            "J"
                        ],
                        "last": "Kuck",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [
                            "K"
                        ],
                        "last": "Krulee",
                        "suffix": ""
                    }
                ],
                "year": 1964,
                "venue": "Computers and Information Science",
                "volume": "",
                "issue": "",
                "pages": "344--374",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kuck, D. J., and Krulee, G. K., \"A Problem Solver with Formal Descriptive Inputs\". Computers and Information Science. Baltimore: Spartan, 1964, pp. 344-374.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "A Heuristic Approach to Natural Language Processing",
                "authors": [
                    {
                        "first": "D",
                        "middle": [
                            "M"
                        ],
                        "last": "Manelski",
                        "suffix": ""
                    }
                ],
                "year": 1964,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Manelski, D. M., \"A Heuristic Approach to Natural Language Processing\", Unpublished Ph.D. thesis, Northwestern University, 1964.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Information Processing Language-V Manual",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Newell",
                        "suffix": ""
                    }
                ],
                "year": 1961,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Newell, A. (Ed.), Information Processing Language-V Manual. Englewood Cliffs: Prentice-Hall, 1961.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Programming the Logic Theory Machine",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Newell",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "C"
                        ],
                        "last": "Shaw",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Newell, A., and Shaw, J. C., \"Programming the Logic Theory Machine\".",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Proceedings, Western Joint Computer Conference",
                "authors": [],
                "year": 1957,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "230--240",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Proceedings, Western Joint Computer Conference. February, 1957, pp. 230-240.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Elements of Symbolic Logic",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Reichenbach",
                        "suffix": ""
                    }
                ],
                "year": 1938,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Reichenbach, H., Elements of Symbolic Logic. New York: MacMillan, 1938.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Sets Logic and Axiomatic Theories",
                "authors": [
                    {
                        "first": "R",
                        "middle": [
                            "R"
                        ],
                        "last": "Stoll",
                        "suffix": ""
                    }
                ],
                "year": 1961,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stoll, R. R., Sets Logic and Axiomatic Theories. San Francisco: W. H. Freeman, 1961.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "The Structure of Language for Man and Computer: Problems in Formalization",
                "authors": [
                    {
                        "first": "D",
                        "middle": [
                            "E"
                        ],
                        "last": "Walker",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "M"
                        ],
                        "last": "Bartlett",
                        "suffix": ""
                    }
                ],
                "year": 1963,
                "venue": "~ I nformati.on System Science and Engineering",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Walker, D. E., and Bartlett, J. M., \"The Structure of Language for Man and Computer: Problems in Formalization\"~ I nformati.on System Science and Engineering. New York: McGraw-Hill, 1963.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF3": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "John called home. (il) John called his friend a fool."
            },
            "FIGREF4": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": ")..+ NP+ VAUX+ never*VITR+. \u2022. ( ix)..+NP+ never+VITR+ ...( x)..+NP+VAUX+ never+VTR+... ( xi)..+ NP+VAUX+never+VTR... ( x\u00b1 i )..+ NP+ never+VTR+... ( xill)..+ NP+VAUX+ not+VFAC+... (xiv) .. + NP+VAUX+ never+ VFAC+... ( xv)..+ NP+ never+VFAC+... Should a string~ 1 correspond to one of the above patterns TNOT(0of the cases follow: ~'l: John will never hit Mary."
            },
            "FIGREF5": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "to establish the Boolean connectives for the statements. Some examples will show the effect of TCOM On strings ~'in the domain of the t rans format ion. ~'i: Either Sally and Bob are the same age or Sally is older than Bob.TCOM~I):Sally and Bob are the same age.Sally is older than Bob.~2: The races are fixed or the gambling houses are crooked\u2022 TCOM(~): The races are fixed. The gambling houses are crooked. TCOND: is defined over strings wlth the following configuration: ( i)..+ If+ Sl+...+, then+ $2+ .... ( li)..+If+Sl+...+ ,+$2+ ...."
            },
            "FIGREF6": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "Clearly ~l and~T are interdependent since the input cannot always be analyzed in terms of the syntax~ ]. and because the rewriting rules of~T are defined in terms of 1. Perhaps an example illustrates this point more effectively. Consider the inp~ string:"
            },
            "FIGREF7": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "after testing the h.i.s, at the top of the pushdown list (John went home) transfers control to EO. current h.l.s. transfers control to DI3. locates the next h.i.s. successfully parses the h.i.s, at the top of the pushdown list (Mary went to the store). transfers the processor to D13. cannot locate any additional h.i.s. prints the results of the parsing. If John, Peter and Paul were at the game,... calls D15. marhs the problem string as \"If John~ Peter an__~d Paul were at the gs~ne~, .... ' which is copied as the h.i.s. fails to find a deduction for the h.i.s. transfers control to DI. transfers control to D3. transfers control to D4. transfers control to D7. the marked words have the structure required for TCOND and changes the i.s. to \"l_~f John, Peter an__~d Paul were at the game~ .... \" and the h.i.s, become \"John\" \"Peter and Paul were at the game....\" the h.i.s, does not begin with a verb. i.s. \"John\" has no marked words. the \"previous i.s. becomes the h.l.s. \"If John, Peter an__~d Paul were at the game,...\" fails to find a parsing. transfers control to D3. calls sub-routine D4. Ir finds the marked \"If\" and \"\u00b1 calling for TCOND. the h.l.s, become \"John, Peter\" \"Paul were at the gam2A...\" and the i.s. is marked as \"If John, Peter and Paul were at the game&...\" the h.l.s, does not begin with a verb. a satisfactory parsing cannot be found. transfers the processor to D1. there are no marked words in the h.i.s. the h.i.s, becomes \"If John, Peter and Paul were at the gamez..and Paul were at the game\" (the remainder of the sentence is a separate h.i.s.). the i.s. is changed to \"If John, Peter and Paul were at the game,.h.i.s. The program would then analyze the remainder of the sentence."
            },
            "FIGREF10": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "the chairman. With one exception the verb types used in the above classification follow conventional definitions. However, following Sledd, factitive verbs are also included. Factitive verbs are transitive verbs that take an object complement. The following predicate functions show the L3 mappings of the examples. In order to avoid using Church's Lambda notation to bind the variables, the convention of using upper case letters for the nonterminal elements and following them by the variables in lower case letters, is utilized to fully define the predicate function. than (ARG John, ARG Peter) PRED win (ARG The Dodgers) PRED win seldom (ARG The Dodgers) PRED loves (ARG Tall Johns ARG Mary) PRED saw at the track (ARG John, ARG Peter) PRED elected (ARG John, ARG Peter, ~LRG the chairman)"
            },
            "FIGREF11": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": ""
            },
            "FIGREF13": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "searches for a PADV or a PRP on the tree of a PREDTR. The words named by the PADV or PRP are bound to the PREDMOD. Sub-routine F8 then prints the L3 mapping of the problem string and halts the processor. The following example illustrates the flow of the program: InputS\" l~ L2 = (S(NP(ADJ Big)(N John)) (VP(VEQ(VMEQ is) )(PREDEQ (PADJC(ADJC smarter) (THAN thanl(NP(N Paul)))"
            },
            "FIGREF15": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "PRED loves (ARG John, ARG music) PRED dressed (PREMOD quickly)(ARG He) The transformation TPRN~'-2) results in PRED dressed (PREDMOD quicl~ly)(ARG John) The implied transformation, TIMP, has a domain of the predicate functions with a null argument. The transformation replaces the missing argument with that of the preceding premise. For: of PRED won (ARG Dodgers (ARGMOD the), ARG Pennant(ARGMOD the)) PRED lost (ARGO, ARG series (ARGMOD the)). TIMP (~-2) results in \"the predicate function PRED lost (ARG Dodgers, ARG series (ARGMOD the)). The time transformation, TTIME~ has as its domain the predicates. The range is also the predicates. This transformation eliminates auxiliary verbs and replaces the main verb with its root. The main verb is determined by the L2 representation of the string. An example would be: ~-l: John should go home. with ~,n L3 representation PRED should go (ARG John, ARG home) Thus TTIME ~l)becomes PRED go (ARG John, ARG home) The synonym transformation, TSYN, has a domain of the words Wi~ L2. Its range is also the words Wi~ L2. The transformation is defined by replacing any W i by its synonym as defined in the dictionary of the processor. The effect of TSY~ can be seen on~-le L1. ~--l: John is happy. which has an L3 representation PRED is happy (ARG John) after TSYN(~'I) the predicate function might appear as PRED is glad (ARG John) This approach can certainly lead to difficulties. Some problems in semantics have been avoided. A word can take on various meanings depending on the context, as in:"
            },
            "FIGREF16": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "possible transformation contracts a number of arguments in the L3 representation of a string. Thus~l, 0\" 2 ~ L2. ~l: John hits the ball with the bat. ~'2: John bats the ball. would have their respective representations as follows in L3: PRED hits (ARG John) ARG ball (ARGMOD the), ARG bat (ARGMOD the)) PREO bats (ARG John, ARG ball (ARGMOD the))l By changing the predicate, a 3 ARG function becomes a 2 ARG function with the same meaning. By working with the set TO, the great majority of problems in Stoll are amenable to solution. However, the processor is not capable of doing justice to the human abilities of linguistic resolution. One noticeable characteristic of utilizing TO as a recognition device is its tendency to err by not recognizing equivalent strings rather than by un~iustified recognition.Although this section defines the scope and effect of TO, it is also necessary to specify under what conditions the automaton attempts to apply one of the transformations, and under what conditions the processor will stop trying to match the strings. The criteria for applying a member of TO, and the decision to halt, will. be made explicit in the next section.i Example thanks to D. Kuck."
            },
            "TABREF0": {
                "html": null,
                "content": "<table><tr><td/><td>VEQu VTRu VINu VFAC U VAUX</td></tr><tr><td colspan=\"2\">o PREPu ADVu THANu ADJC</td></tr><tr><td colspan=\"2\">with the sets representing:</td></tr><tr><td>N:</td><td>noun</td></tr><tr><td>D:</td><td>determiner</td></tr><tr><td>PN:</td><td>pronoun</td></tr><tr><td>ADJ:</td><td>adjective</td></tr><tr><td>VEQ:</td><td>verb equational</td></tr><tr><td>VTR:</td><td>verb transitive</td></tr><tr><td>VIN:</td><td>verb intransitive</td></tr><tr><td colspan=\"2\">VFAC : verb factitive</td></tr><tr><td colspan=\"2\">VAUX: verb auxiliary</td></tr><tr><td>PREP:</td><td>prepos it ion</td></tr><tr><td>ADV:</td><td>adverb</td></tr><tr><td>ADJC:</td><td>comparative adjective</td></tr><tr><td colspan=\"2\">THAN: Than</td></tr></table>",
                "num": null,
                "text": "Although the task of the assignment of word classes is that of the linguist, in general, if X i and Xj are sets comprising A we expect",
                "type_str": "table"
            },
            "TABREF1": {
                "html": null,
                "content": "<table><tr><td>N</td><td>..</td><td>man, boy, house,...</td></tr><tr><td>D</td><td>=</td><td>a, the,...</td></tr><tr><td>PN</td><td>=</td><td>he, they,...</td></tr><tr><td>ADJ</td><td>=</td><td>blue, large,...</td></tr><tr><td>VMEQ</td><td>=</td><td>is, are,...</td></tr><tr><td>V~R</td><td>=</td><td>hit, hits,...</td></tr><tr><td colspan=\"2\">VMINTR =</td><td>rained, went,...</td></tr><tr><td colspan=\"2\">VMFAC -</td><td>appoint, call,...</td></tr><tr><td>VAUX</td><td/><td>will, should, ...</td></tr><tr><td>PRP</td><td>=</td><td>in, to, ...</td></tr><tr><td>ADV</td><td>=</td><td>quickly, slowly,...</td></tr><tr><td>AI~C</td><td>=</td><td>larger, better,...</td></tr></table>",
                "num": null,
                "text": "",
                "type_str": "table"
            },
            "TABREF3": {
                "html": null,
                "content": "<table/>",
                "num": null,
                "text": "",
                "type_str": "table"
            },
            "TABREF5": {
                "html": null,
                "content": "<table><tr><td colspan=\"5\">TCOM(TCOND~ 1)) : The Dodgers will win.</td></tr><tr><td/><td/><td/><td colspan=\"2\">Los Angeles will celebrate.</td></tr><tr><td/><td/><td/><td colspan=\"2\">If the White Sox win, Chicago will celebrate.</td></tr><tr><td/><td/><td/><td/><td>Dodgers will win.</td></tr><tr><td/><td/><td/><td colspan=\"2\">Los Angeles will celebrate.</td></tr><tr><td/><td/><td/><td colspan=\"2\">The }~nite Sox win.</td></tr><tr><td/><td/><td/><td colspan=\"2\">Chicago will celebrate.</td></tr><tr><td colspan=\"5\">~'2: If I miss my appointment and start to feel downcast, then</td></tr><tr><td/><td colspan=\"3\">I should not go home.</td></tr><tr><td colspan=\"5\">TCOND(~2): I miss my appointment and start to feel downcast.</td></tr><tr><td/><td colspan=\"4\">I should not go home.</td></tr><tr><td colspan=\"3\">TCOM(TCOND(~):</td><td colspan=\"2\">I miss my appointment.</td></tr><tr><td>~i:</td><td colspan=\"4\">If the Dodgers win~ then Los Angeles will celebrate\u2022</td></tr><tr><td/><td/><td/><td colspan=\"2\">Start to feel downcast.</td></tr><tr><td colspan=\"5\">TCOND(~I) : The Dodgers win.</td></tr><tr><td/><td/><td/><td colspan=\"2\">I should not go home.</td></tr><tr><td/><td/><td colspan=\"3\">Los Angeles will celebrate\u2022</td></tr><tr><td colspan=\"5\">TNOT(TCOM(TCOND(~2)): I miss my appointment.</td></tr><tr><td/><td/><td/><td colspan=\"2\">Start to feel downcast.</td></tr><tr><td colspan=\"5\">To illustrate their use, we utilize the following examples:</td></tr><tr><td>~i:</td><td>If the</td><td colspan=\"2\">Dodgers wln~</td><td>then Los Angeles will celebrate, and</td></tr><tr><td/><td colspan=\"4\">if the White Box win, Chicago will celebrate.</td></tr><tr><td colspan=\"5\">TCOND(~I): The Dodgers win.</td></tr><tr><td/><td/><td colspan=\"3\">Los Angeles will celebrate and if the White Sox win,</td></tr><tr><td/><td/><td colspan=\"3\">Chicago wlll celebrate.</td></tr></table>",
                "num": null,
                "text": "The definitions of the syntactic elements used in establishing the domain of~T are given by the phrase-structure grammar ~i. Another convention used in the discussion is to allow a series of dots( .... )    to refer to any syntactic structure. It is also implied that the transformations may be concatenated as necessary.",
                "type_str": "table"
            },
            "TABREF6": {
                "html": null,
                "content": "<table><tr><td/><td/><td/><td/><td>DOz Start</td></tr><tr><td/><td/><td/><td colspan=\"2\">DISz mark a~l words in MTO ..~'</td><td>.</td><td>~'~</td></tr><tr><td colspan=\"2\">./t</td><td/><td/></tr><tr><td/><td/><td>'</td><td/><td>D2. Was a satisfactory parsing</td></tr><tr><td>/</td><td/><td>.'</td><td/><td>. found for scrlng?</td><td>/!</td></tr><tr><td>/</td><td/><td/><td/><td>\\</td><td>Dl6: stop ~</td></tr><tr><td>~</td><td colspan=\"4\">D13Z are .y additional /pu~ .tring.?</td><td>| S</td><td>(~ ~o \\'</td></tr><tr><td/><td/><td/><td/><td>DI0z Try Co fill In~</td></tr><tr><td/><td>~/</td><td>No</td><td>.</td><td>DI~ are there any marked</td></tr><tr><td/><td/><td/><td>~.</td><td>~</td><td>sized input: string?</td></tr><tr><td/><td/><td/><td/><td>Yes /</td><td>D9: Does string</td></tr><tr><td colspan=\"4\">ii~ Copy Input string as</td><td>~</td><td>.</td><td>begin with</td></tr><tr><td colspan=\"4\">hypothesized sCrlns?</td><td>r?</td></tr><tr><td/><td/><td/><td colspan=\"2\">D31 For Yes</td><td>D6z Apply TNOT</td></tr><tr><td/><td/><td/><td>No</td></tr><tr><td/><td/><td colspan=\"3\">D4z For TCOND?</td><td>....</td><td>D7: Apply TCOND</td></tr><tr><td/><td/><td/><td/><td>Yes</td></tr><tr><td/><td>No</td><td/><td/></tr><tr><td/><td/><td colspan=\"3\">DSz For TCOM?</td><td>xes</td><td>~D8: Apply TCOM</td></tr><tr><td/><td/><td/><td>)'No</td></tr><tr><td/><td/><td/><td>S cop</td></tr><tr><td/><td/><td/><td/><td>Figure 3</td></tr></table>",
                "num": null,
                "text": "Should no other h.i.s, be found, the executive calls D14 which halts the program. After performing the necessary output functions, D1 scans the h.i.s, currently being processed. If any marked words are found, control is passed to D3; otherwise the transfer is to Dll. Dll erases the previous h.i.s, and replaces them (i.e., all of them) with the i.s. Should D1 find that some of the words are marked, the processor 20. TNOT? ........ ~>",
                "type_str": "table"
            },
            "TABREF7": {
                "html": null,
                "content": "<table><tr><td colspan=\"4\">The branching of the problem would be</td></tr><tr><td/><td/><td>DO:</td><td colspan=\"2\">transfers control to DI5.</td></tr><tr><td/><td/><td>D15:</td><td colspan=\"2\">marks the word 'and\"; the h.i.s, is \"John and Mary went</td></tr><tr><td/><td/><td/><td colspan=\"2\">home (the underlining indicates the marked word).</td></tr><tr><td/><td/><td>EO:</td><td colspan=\"2\">parses \"John and Mary\" went home.</td></tr><tr><td/><td/><td>DI3:</td><td colspan=\"2\">there are no additional h.i.s.</td></tr><tr><td/><td/><td>DI4:</td><td>stop.</td></tr><tr><td/><td/><td>(\u2022:</td><td colspan=\"2\">John went home and Mary went to the store.</td></tr><tr><td/><td/><td>DO:</td><td>transfers to DI5.</td></tr><tr><td/><td/><td>DI5:</td><td colspan=\"2\">the i.s. and h.i.s, become John went home and Mary went</td></tr><tr><td/><td/><td/><td>to the store.</td></tr><tr><td colspan=\"2\">the h.i.s,</td><td colspan=\"3\">is done on \"and\", EO: fails to parse the sentence. \"or\" or with the symbol</td><td>\"either\"</td><td>being</td></tr><tr><td>erased</td><td colspan=\"3\">from the beginning D2: transfers to D1. of the h.i.s,</td><td>if it is present.</td><td>The routines</td></tr><tr><td colspan=\"5\">D6, D7 and D8 transfer DI: transfers control to D3. control to D9 which</td><td>is called to test whether</td><td>the</td></tr><tr><td>h.i.s.,</td><td colspan=\"4\">being processed, D3: control parses to D4. begins with a verb: if this condition</td><td>exists</td><td>HO</td></tr><tr><td>attempts</td><td/><td colspan=\"3\">to precede D4: transfers control to D5. it with the first noun or pronoun</td><td>of the previous</td></tr><tr><td>h.i.s.</td><td colspan=\"2\">Should D5:</td><td colspan=\"2\">it not be possible transfers control to D8. for the processor</td><td>to carry out this</td></tr><tr><td colspan=\"2\">operation,</td><td colspan=\"3\">the program D8: the i.s. 
becomes prints out the syntactic</td><td>analysis</td><td>it has</td></tr><tr><td colspan=\"5\">accomplished and halts. Both DIO and D9 transfer to EO. John went home and Mary went to the store.</td></tr><tr><td/><td/><td colspan=\"3\">Some examples will clarify the logic of DO. Let the input while the h.i.s, become</td></tr><tr><td colspan=\"3\">string~ 1 be:</td><td>John went home.</td></tr><tr><td/><td/><td colspan=\"3\">(~'l: John and Mary went to the store. went home. Mary</td></tr><tr><td/><td/><td>D9:</td><td/></tr></table>",
                "num": null,
                "text": "While removing the marking from the corresponding symbol in the i.s. two new h.i.s, are created by dividing the list at the location of the marked symbol. D5 and D8 are similar to D4 and D7; however, division of",
                "type_str": "table"
            },
            "TABREF8": {
                "html": null,
                "content": "<table><tr><td>binding the variables, results in the following predicate functions for</td></tr><tr><td>the previous examples:</td></tr></table>",
                "num": null,
                "text": "",
                "type_str": "table"
            },
            "TABREF9": {
                "html": null,
                "content": "<table><tr><td/><td/><td/><td/><td/><td>8</td><td>~w m</td></tr><tr><td/><td/><td/><td colspan=\"2\">FO: \u2022 Star~</td><td>e</td><td>.</td><td>31.</td></tr><tr><td/><td/><td/><td colspan=\"2\">F1: locate</td></tr><tr><td/><td/><td/><td colspan=\"2\">main verb</td></tr><tr><td/><td/><td/><td>~</td><td>~</td><td>-'---&gt;F~o. ~r~ ~Ow\"</td></tr><tr><td/><td colspan=\"3\">Yll: generate a prediction</td><td>~</td><td>.</td><td>.</td></tr><tr><td/><td colspan=\"3\">YI2: copy prediction as</td><td/><td>-~</td></tr><tr><td/><td/><td colspan=\"2\">current predicate form</td><td/><td>~ no</td></tr><tr><td/><td>.</td><td/><td>~</td><td colspan=\"2\">I ~ F2: Bind the ARGs. Does the number of</td></tr><tr><td/><td/><td/><td colspan=\"3\">FI4: place names of variables for</td></tr><tr><td>F13:</td><td>a</td><td/><td/><td/></tr><tr><td>F6:</td><td colspan=\"3\">locate verb and bind</td><td/><td>\u00a55: modify form and b:Lu~/</td></tr><tr><td>FT:</td><td colspan=\"3\">are there variables for</td><td>,</td><td>&gt;</td><td>Yg: modii~ canonical form</td></tr><tr><td/><td colspan=\"2\">a PRE~0D?</td><td/><td>Yes</td><td>and bind variable</td></tr><tr><td/><td/><td colspan=\"2\">nO ~</td><td/></tr><tr><td>F8:</td><td>Prln~</td><td>and</td><td>stop</td><td/></tr><tr><td/><td colspan=\"5\">down list. If there are not the processor returns to F13. When</td></tr><tr><td/><td colspan=\"5\">additional words are found F5 rewrites the predicate form as</td></tr><tr><td/><td/><td colspan=\"2\">~a ~ ~G(A~MOD)</td><td/></tr></table>",
                "num": null,
                "text": "ARGs on a pushdown list. The next sub-routine is F13 which tests whether the pushdown list string named by the ARG is empty. Should the llst be empty F6 is the next sub-routine; otherwise it is F~. F4 tests whether there are any variables beside an N or PN in the ARG named on the push-",
                "type_str": "table"
            }
        }
    }
}