{
    "paper_id": "D09-1001",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T16:40:01.601855Z"
    },
    "title": "Unsupervised Semantic Parsing",
    "authors": [
        {
            "first": "Hoifung",
            "middle": [],
            "last": "Poon",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Washington Seattle",
                "location": {
                    "postCode": "98195-2350",
                    "region": "WA",
                    "country": "U.S.A"
                }
            },
            "email": "hoifung@cs.washington.edu"
        },
        {
            "first": "Pedro",
            "middle": [],
            "last": "Domingos",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Washington Seattle",
                "location": {
                    "postCode": "98195-2350",
                    "region": "WA",
                    "country": "U.S.A"
                }
            },
            "email": "pedrod@cs.washington.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We present the first unsupervised approach to the problem of learning a semantic parser, using Markov logic. Our USP system transforms dependency trees into quasi-logical forms, recursively induces lambda forms from these, and clusters them to abstract away syntactic variations of the same meaning. The MAP semantic parse of a sentence is obtained by recursively assigning its parts to lambda-form clusters and composing them. We evaluate our approach by using it to extract a knowledge base from biomedical abstracts and answer questions. USP substantially outperforms TextRunner, DIRT and an informed baseline on both precision and recall on this task.",
    "pdf_parse": {
        "paper_id": "D09-1001",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We present the first unsupervised approach to the problem of learning a semantic parser, using Markov logic. Our USP system transforms dependency trees into quasi-logical forms, recursively induces lambda forms from these, and clusters them to abstract away syntactic variations of the same meaning. The MAP semantic parse of a sentence is obtained by recursively assigning its parts to lambda-form clusters and composing them. We evaluate our approach by using it to extract a knowledge base from biomedical abstracts and answer questions. USP substantially outperforms TextRunner, DIRT and an informed baseline on both precision and recall on this task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Semantic parsing maps text to formal meaning representations. This contrasts with semantic role labeling (Carreras and Marquez, 2004) and other forms of shallow semantic processing, which do not aim to produce complete formal meanings. Traditionally, semantic parsers were constructed manually, but this is too costly and brittle. Recently, a number of machine learning approaches have been proposed (Zettlemoyer and Collins, 2005; Mooney, 2007) . However, they are supervised, and providing the target logical form for each sentence is costly and difficult to do consistently and with high quality. Unsupervised approaches have been applied to shallow semantic tasks (e.g., paraphrasing (Lin and Pantel, 2001) , information extraction (Banko et al., 2007) ), but not to semantic parsing.",
                "cite_spans": [
                    {
                        "start": 105,
                        "end": 133,
                        "text": "(Carreras and Marquez, 2004)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 400,
                        "end": 431,
                        "text": "(Zettlemoyer and Collins, 2005;",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 432,
                        "end": 445,
                        "text": "Mooney, 2007)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 688,
                        "end": 710,
                        "text": "(Lin and Pantel, 2001)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 736,
                        "end": 756,
                        "text": "(Banko et al., 2007)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper we develop the first unsupervised approach to semantic parsing, using Markov logic (Richardson and Domingos, 2006) . Our USP system starts by clustering tokens of the same type, and then recursively clusters expressions whose subexpressions belong to the same clusters. Experiments on a biomedical corpus show that this approach is able to successfully translate syntactic variations into a logical representation of their common meaning (e.g., USP learns to map active and passive voice to the same logical form, etc.). This in turn allows it to correctly answer many more questions than systems based on TextRunner (Banko et al., 2007) and DIRT (Lin and Pantel, 2001) .",
                "cite_spans": [
                    {
                        "start": 97,
                        "end": 128,
                        "text": "(Richardson and Domingos, 2006)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 631,
                        "end": 651,
                        "text": "(Banko et al., 2007)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 661,
                        "end": 683,
                        "text": "(Lin and Pantel, 2001)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We begin by reviewing the necessary background on semantic parsing and Markov logic. We then describe our Markov logic network for unsupervised semantic parsing, and the learning and inference algorithms we used. Finally, we present our experiments and results.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The standard language for formal meaning representation is first-order logic. A term is any expression representing an object in the domain. An atomic formula or atom is a predicate symbol applied to a tuple of terms. Formulas are recursively constructed from atomic formulas using logical connectives and quantifiers. A lexical entry defines the logical form for a lexical item (e.g., a word). The semantic parse of a sentence is derived by starting with logical forms in the lexical entries and recursively composing the meaning of larger fragments from their parts. In traditional approaches, the lexical entries and meaning-composition rules are both manually constructed. Below are sample rules in a definite clause grammar (DCG) for parsing the sentence: \"Utah borders Idaho\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Parsing",
                "sec_num": "2.1"
            },
            {
                "text": "Verb[\u03bby\u03bbx.borders(x, y)] \u2192 borders\nNP[Utah] \u2192 Utah\nNP[Idaho] \u2192 Idaho\nVP[rel(obj)] \u2192 Verb[rel] NP[obj]\nS[rel(obj)] \u2192 NP[obj] VP[rel]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Parsing",
                "sec_num": "2.1"
            },
            {
                "text": "The first three lines are lexical entries. They are fired upon seeing the individual words. For example, the first rule applies to the word \"borders\" and generates syntactic category Verb with the meaning \u03bby\u03bbx.borders(x, y) that represents the next-to relation. Here, we use the standard lambda-calculus notation, where \u03bby\u03bbx.borders(x, y) represents a function that is true for any (x, y) pair such that borders(x, y) holds. The last two rules compose the meanings of sub-parts into that of the larger part. For example, after the first and third rules are fired, the fourth rule fires and generates VP[\u03bby\u03bbx.borders(x, y)(Idaho)]; this meaning simplifies to \u03bbx.borders(x, Idaho) by the \u03bb-reduction rule, which substitutes the argument for a variable in a functional application.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Parsing",
                "sec_num": "2.1"
            },
            {
                "text": "A major challenge to semantic parsing is syntactic variations of the same meaning, which abound in natural languages. For example, the aforementioned sentence can be rephrased as \"Utah is next to Idaho,\" \"Utah shares a border with Idaho,\" etc. Manually encoding all these variations into the grammar is tedious and error-prone. Supervised semantic parsing addresses this issue by learning to construct the grammar automatically from sample meaning annotations (Mooney, 2007) . Existing approaches differ in the meaning representation languages they use and the amount of annotation required. In the approach of Zettlemoyer and Collins (2005) , the training data consists of sentences paired with their meanings in lambda form. A probabilistic combinatory categorial grammar (PCCG) is learned using a log-linear model, where the probability of the final logical form L and meaning-derivation tree T conditioned on the sentence S is P(L,",
                "cite_spans": [
                    {
                        "start": 459,
                        "end": 473,
                        "text": "(Mooney, 2007)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 610,
                        "end": 640,
                        "text": "Zettlemoyer and Collins (2005)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Parsing",
                "sec_num": "2.1"
            },
            {
                "text": "T | S) = (1/Z) exp(\u2211_i w_i f_i(L, T, S)).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Parsing",
                "sec_num": "2.1"
            },
            {
                "text": "Here Z is the normalization constant and f_i are the feature functions with weights w_i. Candidate lexical entries are generated by a domain-specific procedure based on the target logical forms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Parsing",
                "sec_num": "2.1"
            },
            {
                "text": "The major limitation of supervised approaches is that they require meaning annotations for example sentences. Even in a restricted domain, doing this consistently and with high quality requires nontrivial effort. For unrestricted text, the complexity and subjectivity of annotation render it essentially infeasible; even pre-specifying the target predicates and objects is very difficult. Therefore, to apply semantic parsing beyond limited domains, it is crucial to develop unsupervised methods that do not rely on labeled meanings.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Parsing",
                "sec_num": "2.1"
            },
            {
                "text": "In the past, unsupervised approaches have been applied to some semantic tasks, but not to semantic parsing. For example, DIRT (Lin and Pantel, 2001 ) learns paraphrases of binary relations based on distributional similarity of their arguments; TextRunner (Banko et al., 2007) automatically extracts relational triples in open domains using a self-trained extractor; SNE applies relational clustering to generate a semantic network from TextRunner triples (Kok and Domingos, 2008) . While these systems illustrate the promise of unsupervised methods, the semantic content they extract is nonetheless shallow and does not constitute the complete formal meaning that can be obtained by a semantic parser.",
                "cite_spans": [
                    {
                        "start": 126,
                        "end": 147,
                        "text": "(Lin and Pantel, 2001",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 255,
                        "end": 275,
                        "text": "(Banko et al., 2007)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 455,
                        "end": 479,
                        "text": "(Kok and Domingos, 2008)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Parsing",
                "sec_num": "2.1"
            },
            {
                "text": "Another issue is that existing approaches to semantic parsing learn to parse syntax and semantics together. 1 The drawback is that the complexity in syntactic processing is coupled with semantic parsing and makes the latter even harder. For example, when applying their approach to a different domain with somewhat less rigid syntax, Zettlemoyer and Collins (2007) need to introduce new combinators and new forms of candidate lexical entries. Ideally, we should leverage the enormous progress made in syntactic parsing and generate semantic parses directly from syntactic analysis.",
                "cite_spans": [
                    {
                        "start": 108,
                        "end": 109,
                        "text": "1",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic Parsing",
                "sec_num": "2.1"
            },
            {
                "text": "In many NLP applications, there exist rich relations among objects, and recent work in statistical relational learning (Getoor and Taskar, 2007) and structured prediction (Bakir et al., 2007) has shown that leveraging these can greatly improve accuracy. One of the most powerful representations for this is Markov logic, which is a probabilistic extension of first-order logic (Richardson and Domingos, 2006) . Markov logic makes it possible to compactly specify probability distributions over complex relational domains, and has been successfully applied to unsupervised coreference resolution (Poon and Domingos, 2008) and other tasks. A Markov logic network (MLN) is a set of weighted first-order clauses. Together with a set of constants, it defines a Markov network with one node per ground atom and one feature per ground clause. The weight of a feature is the weight of the first-order clause that originated it. The probability of a state x in such a network is given by the log-linear model",
                "cite_spans": [
                    {
                        "start": 119,
                        "end": 144,
                        "text": "(Getoor and Taskar, 2007)",
                        "ref_id": null
                    },
                    {
                        "start": 171,
                        "end": 191,
                        "text": "(Bakir et al., 2007)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 377,
                        "end": 408,
                        "text": "(Richardson and Domingos, 2006)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 595,
                        "end": 620,
                        "text": "(Poon and Domingos, 2008)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic",
                "sec_num": "2.2"
            },
            {
                "text": "P(x) = (1/Z) exp(\u2211_i w_i n_i(x)),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic",
                "sec_num": "2.2"
            },
            {
                "text": "where Z is a normalization constant, w_i is the weight of the ith formula, and n_i(x) is the number of satisfied groundings of the ith formula in x.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic",
                "sec_num": "2.2"
            },
            {
                "text": "Unsupervised semantic parsing (USP) rests on three key ideas. First, the target predicate and object constants, which are pre-specified in supervised semantic parsing, can be viewed as clusters of syntactic variations of the same meaning, and can be learned from data. For example, borders represents the next-to relation, and can be viewed as the cluster of different forms for expressing this relation, such as \"borders\", \"is next to\", \"share the border with\"; Utah represents the state of Utah, and can be viewed as the cluster of \"Utah\", \"the beehive state\", etc. Second, the identification and clustering of candidate forms are integrated with the learning for meaning composition, where forms that are used in composition with the same forms are encouraged to cluster together, and so are forms that are composed of the same sub-forms. This amounts to a novel form of relational clustering, where clustering is done not just on fixed elements in relational tuples, but on arbitrary forms that are built up recursively.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Semantic Parsing with Markov Logic",
                "sec_num": "3"
            },
            {
                "text": "Third, while most existing approaches (manual or supervised learning) learn to parse both syntax and semantics, unsupervised semantic parsing starts directly from syntactic analyses and focuses solely on translating them to semantic content. This enables us to leverage advanced syntactic parsers and (indirectly) the available rich resources for them. More importantly, it separates the complexity in syntactic analysis from the semantic one, and makes the latter much easier to perform. In particular, meaning composition does not require domain-specific procedures for generating candidate lexicons, as is often needed by supervised methods.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Semantic Parsing with Markov Logic",
                "sec_num": "3"
            },
            {
                "text": "The input to our USP system consists of dependency trees of training sentences. Compared to phrase-structure syntax, dependency trees are the more appropriate starting point for semantic processing, as they already exhibit much of the relation-argument structure at the lexical level.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Semantic Parsing with Markov Logic",
                "sec_num": "3"
            },
            {
                "text": "USP first uses a deterministic procedure to convert dependency trees into quasi-logical forms (QLFs). The QLFs and their sub-formulas have natural lambda forms, as will be described later. Starting with clusters of lambda forms at the atom level, USP recursively builds up clusters of larger lambda forms. The final output is a probability distribution over lambda-form clusters and their compositions, as well as the MAP semantic parses of training sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Semantic Parsing with Markov Logic",
                "sec_num": "3"
            },
            {
                "text": "In the remainder of the section, we describe the details of USP. We first present the procedure for generating QLFs from dependency trees. We then introduce their lambda forms and clusters, and show how semantic parsing works in this setting. Finally, we present the Markov logic network (MLN) used by USP. In the next sections, we present efficient algorithms for learning and inference with this MLN.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Semantic Parsing with Markov Logic",
                "sec_num": "3"
            },
            {
                "text": "A dependency tree is a tree where nodes are words and edges are dependency labels. To derive the QLF, we convert each node to an unary atom with the predicate being the lemma plus POS tag (below, we still use the word for simplicity), and each edge to a binary atom with the predicate being the dependency label. For example, the node for Utah becomes Utah(n 1 ) and the subject dependency becomes nsubj(n1, n2). Here, the n i are Skolem constants indexed by the nodes. The QLF for a sentence is the conjunction of the atoms for the nodes and edges, e.g., the sentence above will become borders",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Derivation of Quasi-Logical Forms",
                "sec_num": "3.1"
            },
            {
                "text": "(n 1 ) \u2227 Utah(n 2 ) \u2227 Idaho(n 3 ) \u2227 nsubj(n 1 , n 2 ) \u2227 dobj(n 1 , n 3 ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Derivation of Quasi-Logical Forms",
                "sec_num": "3.1"
            },
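The conversion above is deterministic, so it can be sketched directly. The following is a minimal illustration (not the authors' implementation; the helper name and the tree encoding are assumptions) that turns a toy dependency tree for "Utah borders Idaho" into its QLF:

```python
# Sketch of QLF derivation: each node becomes a unary atom over a Skolem
# constant n_i, each labeled edge becomes a binary atom, and the QLF is
# their conjunction. Node ids and labels below are illustrative.

def qlf_from_dependency_tree(nodes, edges):
    """nodes: {id: word}; edges: [(head_id, dep_id, label)].
    Returns the QLF as a conjunction of unary and binary atoms."""
    atoms = [f"{word}(n{i})" for i, word in sorted(nodes.items())]
    atoms += [f"{label}(n{h}, n{d})" for h, d, label in edges]
    return " & ".join(atoms)

# "Utah borders Idaho": borders is the root, Utah its subject, Idaho its object.
nodes = {1: "borders", 2: "Utah", 3: "Idaho"}
edges = [(1, 2, "nsubj"), (1, 3, "dobj")]
print(qlf_from_dependency_tree(nodes, edges))
# borders(n1) & Utah(n2) & Idaho(n3) & nsubj(n1, n2) & dobj(n1, n3)
```

The conjunction " & " stands in for the logical \u2227 used in the paper's notation.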
            {
                "text": "Parsing in USP Given a QLF, a relation or an object is represented by the conjunction of a subset of the atoms. For example, the next-to relation is represented by borders(n 1 ) \u2227 nsubj(n 1 , n 2 ) \u2227 dobj(n 1 , n 3 ), and the states of Utah and Idaho are represented by Utah(n 2 ) and Idaho(n 3 ). The meaning composition of two sub-formulas is simply their conjunction. This allows the maximum flexibility in learning. In particular, lexical entries are no longer limited to be adjacent words as in Zettlemoyer and Collins (2005) , but can be arbitrary fragments in a dependency tree. For every sub-formula F , we define a corresponding lambda form that can be derived by replacing every Skolem constant n i that does not appear in any unary atom in F with a unique lambda variable x i . Intuitively, such constants represent objects introduced somewhere else (by the unary atoms containing them), and correspond to the arguments of the relation represented by F . For example, the lambda form",
                "cite_spans": [
                    {
                        "start": 500,
                        "end": 530,
                        "text": "Zettlemoyer and Collins (2005)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lambda-Form Clusters and Semantic",
                "sec_num": "3.2"
            },
            {
                "text": "for borders(n 1 ) \u2227 nsubj(n 1 , n 2 ) \u2227 dobj(n 1 , n 3 ) is \u03bbx 2 \u03bbx 3 . borders(n 1 ) \u2227 nsubj(n 1 , x 2 ) \u2227 dobj(n 1 , x 3 ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lambda-Form Clusters and Semantic",
                "sec_num": "3.2"
            },
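The lambda-form rule above (replace every Skolem constant that appears in no unary atom with a fresh lambda variable) is mechanical enough to sketch. This toy version (an assumption-laden illustration using a flat string encoding, not the authors' code) reproduces the example just given:

```python
import re

def lambda_form(atoms):
    """atoms: strings like 'borders(n1)' or 'nsubj(n1, n2)'.
    Skolem constants appearing in no unary atom become lambda variables."""
    unary_consts = {re.match(r"\w+\((n\d+)\)$", a).group(1)
                    for a in atoms if a.count(",") == 0}
    all_consts = set(re.findall(r"n\d+", " ".join(atoms)))
    lam_vars = sorted(all_consts - unary_consts)
    body = " & ".join(atoms)
    for c in lam_vars:  # rename n_i -> x_i for each abstracted constant
        body = re.sub(rf"\b{c}\b", "x" + c[1:], body)
    prefix = "".join(f"\\x{c[1:]}." for c in lam_vars)  # "\x" stands in for lambda
    return prefix + body

print(lambda_form(["borders(n1)", "nsubj(n1, n2)", "dobj(n1, n3)"]))
# \x2.\x3.borders(n1) & nsubj(n1, x2) & dobj(n1, x3)
```

Here n2 and n3 occur only in binary atoms, so they are abstracted; n1 is introduced by the unary atom borders(n1) and stays a constant.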
            {
                "text": "Conceptually, a lambda-form cluster is a set of semantically interchangeable lambda forms. For example, to express the meaning that Utah borders Idaho, we can use any form in the cluster representing the next-to relation (e.g., \"borders\", \"shares a border with\"), any form in the cluster representing the state of Utah (e.g., \"the beehive state\"), and any form in the cluster representing the state of Idaho (e.g., \"Idaho\"). Conditioned on the clusters, the choices of individual lambda forms are independent of each other.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lambda-Form Clusters and Semantic",
                "sec_num": "3.2"
            },
            {
                "text": "To handle variable number of arguments, we follow Davidsonian semantics and further decompose a lambda form into the core form, which does not contain any lambda variable (e.g., borders(n 1 )), and the argument forms, which contain a single lambda variable (e.g., \u03bbx 2 .nsubj(n 1 , x 2 ) and \u03bbx 3 .dobj(n 1 , x 3 )). Each lambda-form cluster may contain some number of argument types, which cluster distinct forms of the same argument in a relation. For example, in Stanford dependencies, the object of a verb uses the dependency dobj in the active voice, but nsubjpass in passive.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lambda-Form Clusters and Semantic",
                "sec_num": "3.2"
            },
            {
                "text": "Lambda-form clusters abstract away syntactic variations of the same meaning. Given an instance of cluster T with arguments of argument types",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lambda-Form Clusters and Semantic",
                "sec_num": "3.2"
            },
            {
                "text": "A 1 , \u2022 \u2022 \u2022 , A k , its abstract lambda form is given by \u03bbx 1 \u2022 \u2022 \u2022 \u03bbx k .T(n) \u2227 k i=1 A i (n, x i )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lambda-Form Clusters and Semantic",
                "sec_num": "3.2"
            },
            {
                "text": ". Given a sentence and its QLF, semantic parsing amounts to partitioning the atoms in the QLF, dividing each part into core form and argument forms, and then assigning each form to a cluster or an argument type. The final logical form is derived by composing the abstract lambda forms of the parts using the \u03bb-reduction rule. 2",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lambda-Form Clusters and Semantic",
                "sec_num": "3.2"
            },
            {
                "text": "Formally, for a QLF Q, a semantic parse L partitions Q into parts p 1 , p 2 , \u2022 \u2022 \u2022 , p n ; each part p is assigned to some lambda-form cluster c, and is further partitioned into core form f and argument forms f 1 , \u2022 \u2022 \u2022 , f k ; each argument form is assigned to an argument type a in c. The USP MLN defines a joint probability distribution over Q and L by modeling the distributions over forms and arguments given the cluster or argument type.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The USP MLN",
                "sec_num": "3.3"
            },
            {
                "text": "Before presenting the predicates and formulas in our MLN, we should emphasize that they should not be confused with the atoms and formulas in the QLFs, which are represented by reified constants and variables.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The USP MLN",
                "sec_num": "3.3"
            },
            {
                "text": "To model distributions over lambda forms, we introduce the predicates Form(p, f!) and ArgForm(p, i, f!), where p is a part, i is the index of an argument, and f is a QLF subformula. Form(p, f) is true iff part p has core form f, and ArgForm(p, i, f) is true iff the ith argument in p has form f. 3 The \"f!\" notation signifies that each part or argument can have only one form.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The USP MLN",
                "sec_num": "3.3"
            },
            {
                "text": "To model distributions over arguments, we introduce three more predicates: ArgType(p, i, a!) signifies that the ith argument of p is assigned to argument type a; Arg(p, i, p \u2032 ) signifies that the ith argument of p is p \u2032 ; Number(p, a, n) signifies that there are n arguments of p that are assigned to type a. The truth value of Number(p, a, n) is determined by the ArgType atoms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The USP MLN",
                "sec_num": "3.3"
            },
            {
                "text": "Unsupervised semantic parsing can be captured by four formulas:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The USP MLN",
                "sec_num": "3.3"
            },
            {
                "text": "p \u2208 +c \u2227 Form(p, +f) ArgType(p, i, +a) \u2227 ArgForm(p, i, +f) Arg(p, i, p \u2032 ) \u2227 ArgType(p, i, +a) \u2227 p \u2032 \u2208 +c \u2032 Number(p, +a, +n)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The USP MLN",
                "sec_num": "3.3"
            },
            {
                "text": "All free variables are implicitly universally quantified. The \"+\" notation signifies that the MLN contains an instance of the formula, with a separate weight, for each value combination of the variables with a plus sign. The first formula models the mixture of core forms given the cluster, and the others model the mixtures of argument forms, argument types, and argument numbers, respectively, given the argument type.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The USP MLN",
                "sec_num": "3.3"
            },
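To make the "+" notation concrete, the following toy sketch (every cluster name, argument type, and weight below is invented for illustration; this is not the USP MLN itself) shows how grounded instances of the four formulas, each with its own weight, score one parse of "Utah borders Idaho":

```python
# Each value combination of the "+" variables gets its own weighted ground
# formula; a parse's unnormalized log-probability is the sum of the weights
# of the instances it satisfies. All weights here are made up.

weights = {
    ("core", "NEXT-TO", "borders(n1)"): -0.2,   # p in +c  &  Form(p, +f)
    ("argform", "A1", "nsubj"): -0.1,           # ArgType(p,i,+a) & ArgForm(p,i,+f)
    ("argform", "A2", "dobj"): -0.3,
    ("argcluster", "A1", "STATE"): -0.5,        # Arg & ArgType & p' in +c'
    ("argcluster", "A2", "STATE"): -0.5,
    ("number", "A1", 1): 0.0,                   # Number(p, +a, +n)
    ("number", "A2", 1): 0.0,
}

def parse_log_score(satisfied):
    """Sum the weights of the satisfied ground formula instances."""
    return sum(weights[key] for key in satisfied)

# One parse: the relation part in cluster NEXT-TO, its two arguments typed
# A1/A2 and filled by parts from a STATE cluster.
parse = [("core", "NEXT-TO", "borders(n1)"),
         ("argform", "A1", "nsubj"), ("argform", "A2", "dobj"),
         ("argcluster", "A1", "STATE"), ("argcluster", "A2", "STATE"),
         ("number", "A1", 1), ("number", "A2", 1)]
print(round(parse_log_score(parse), 6))  # -1.6
```

A competing parse (e.g., assigning "borders" to a different cluster) would draw on different ground instances and hence a different total weight.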
            {
                "text": "To encourage clustering and avoid overfitting, we impose an exponential prior with weight \u03b1 on the number of parameters. 4 The MLN above has one problem: it often clusters expressions that are semantically opposite. For example, it clusters antonyms like \"elderly/young\", \"mature/immature\". This issue also occurs in other semantic-processing systems (e.g., DIRT). In general, this is a difficult open problem that only recently has started to receive some attention (Mohammad et al., 2008) . Resolving this is not the focus of this paper, but we describe a general heuristic for fixing this problem. We observe that the problem stems from the lack of negative features for discovering meanings in contrast. In natural languages, parallel structures like conjunctions are one such feature. 5 We thus introduce an exponential prior with weight \u03b2 on the number of conjunctions where the two conjunctive parts are assigned to the same cluster. To detect conjunction, we simply used the Stanford dependencies that begin with \"conj\". This proves very effective, fixing the majority of the errors in our experiments.",
                "cite_spans": [
                    {
                        "start": 121,
                        "end": 122,
                        "text": "4",
                        "ref_id": null
                    },
                    {
                        "start": 467,
                        "end": 490,
                        "text": "(Mohammad et al., 2008)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 790,
                        "end": 791,
                        "text": "5",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The USP MLN",
                "sec_num": "3.3"
            },
            {
                "text": "Given a sentence and the quasi-logical form Q derived from its dependency tree, the conditional probability for a semantic parse L is given by P r",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "4"
            },
            {
                "text": "(L|Q) \u221d exp ( i w i n i (L, Q)). The MAP se- mantic parse is simply arg max L i w i n i (L, Q).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "4"
            },
            {
                "text": "Enumerating all L's is intractable. It is also unnecessary, since most partitions will result in parts whose lambda forms have no cluster they can be assigned to. Instead, USP uses a greedy algorithm to search for the MAP parse. First we introduce some definitions: a partition is called \u03bb-reducible from p if it can be obtained from the current partition by recursively \u03bb-reducing the part containing p with one of its arguments; such a partition is Algorithm 1 USP-Parse(MLN, QLF)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "4"
            },
            {
                "text": "Form parts for individual atoms in QLF and assign each to its most probable cluster repeat for all parts p in the current partition do for all partitions that are \u03bb-reducible from p and feasible do Find the most probable cluster and argument type assignments for the new part and its arguments end for end for Change to the new partition and assignments with the highest gain in probability until none of these improve the probability return current partition and assignments called feasible if the core form of the new part is contained in some cluster. For example, consider the QLF of \"Utah borders Idaho\" and assume that the current partition is",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "4"
            },
            {
                "text": "\u03bbx 2 x 3 .borders(n 1 ) \u2227 nsubj(n 1 , x 2 ) \u2227 dobj(n 1 , x 3 ), Utah(n 2 ), Idaho(n 3 ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "4"
            },
            {
                "text": "Then the following partition is \u03bb-reducible from the first part in the above partition:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "4"
            },
            {
                "text": "\u03bbx 3 .borders(n 1 ) \u2227 nsubj(n 1 , n 2 ) \u2227 Utah(n 2 ) \u2227 dobj(n 1 , x 3 ), Idaho(n 3 ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "4"
            },
            {
                "text": "Whether this new partition is feasible depends on whether the core form of the new part \u03bbx 3 .borders(n 1 ) \u2227 nsubj(n 1 , n 2 ) \u2227 Utah(n 2 ) \u2227 dobj(n 1 , x 3 ) (i.e. borders(n 1 ) \u2227 nsubj(n 1 , n 2 ) \u2227 Utah(n 2 )) is contained in some lambda-form cluster.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "4"
            },
            {
                "text": "Algorithm 1 gives pseudo-code for our algorithm. Given part p, finding partitions that are \u03bbreducible from p and feasible can be done in time O(ST ), where S is the size of the clustering in the number of core forms and T is the maximum number of atoms in a core form. We omit the proof here but point out that it is related to the unordered subtree matching problem which can be solved in linear time (Kilpelainen, 1992) . Inverted indexes (e.g., from p to eligible core forms) are used to further improve the efficiency. For a new part p and a cluster that contains p's core form, there are k m ways of assigning p's m arguments to the k argument types of the cluster. For larger k and m, this is very expensive. We therefore approximate it by assigning each argument to the best type, independent of other arguments. This algorithm is very efficient, and is used repeatedly in learning.",
                "cite_spans": [
                    {
                        "start": 402,
                        "end": 421,
                        "text": "(Kilpelainen, 1992)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "4"
            },
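The independence approximation in the last step is easy to picture: instead of searching all k^m joint assignments, each argument gets its own argmax over types. A small sketch (the scores and type names are invented, not learned weights):

```python
# Approximate argument-type assignment: pick the best type for each
# argument independently, avoiding the k^m joint search.

def assign_arg_types(arg_forms, type_scores):
    """type_scores[(form, type)] -> weight; argmax type per argument."""
    types = {t for _, t in type_scores}
    return {f: max(types, key=lambda t: type_scores.get((f, t), float("-inf")))
            for f in arg_forms}

# Invented per-(form, type) weights for a two-argument relation part.
scores = {("nsubj", "A1"): -0.1, ("nsubj", "A2"): -2.0,
          ("dobj", "A1"): -1.5, ("dobj", "A2"): -0.2}
print(assign_arg_types(["nsubj", "dobj"], scores))
# {'nsubj': 'A1', 'dobj': 'A2'}
```

With m arguments and k types this costs O(mk) score lookups rather than O(k^m) joint evaluations.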
            {
                "text": "The learning problem in USP is to maximize the log-likelihood of observing the QLFs obtained from the dependency trees, denoted by Q, summing out the unobserved semantic parses:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "L \u03b8 (Q) = log P \u03b8 (Q) = log L P \u03b8 (Q, L)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "Here, L are the semantic parses, \u03b8 are the MLN parameters, and P \u03b8 (Q, L) are the completion likelihoods. A serious challenge in unsupervised learning is the identifiability problem (i.e., the optimal parameters are not unique) (Liang and Klein, 2008) . This problem is particularly severe for log-linear models with hard constraints, which are common in MLNs. For example, in our USP MLN, conditioned on the fact that p \u2208 c, there is exactly one value of f that can satisfy the formula p \u2208 c \u2227 Form(p, f), and if we add some constant number to the weights of p \u2208 c \u2227 Form(p, f) for all f, the probability distribution stays the same. 6 The learner can be easily confused by the infinitely many optima, especially in the early stages. To address this problem, we impose local normalization constraints on specific groups of formulas that are mutually exclusive and exhaustive, i.e., in each group, we require that k i=1 e w i = 1, where w i are the weights of formulas in the group. Grouping is done in such a way as to encourage the intended mixture behaviors. Specifically, for the rule p \u2208 +c \u2227 Form(p, +f), all instances given a fixed c form a group; for each of the remaining three rules, all instances given a fixed a form a group. Notice that with these constraints the completion likelihood P (Q, L) can be computed in closed form for any L. In particular, each formula group contributes a term equal to the weight of the currently satisfied formula. In addition, the optimal weights that maximize the completion likelihood P (Q, L) can be derived in closed form using empirical relative frequencies. E.g., the optimal weight of p \u2208 c \u2227 Form(p, f) is log(n c,f /n c ), where n c,f is the number of parts p that satisfy both p \u2208 c and Form(p, f), and n c is the number of parts p that satisfy p \u2208 c. 7 We leverage this fact for efficient learning in USP. 
Another major challenge in USP learning is the summation in the likelihood, which is over all possible semantic parses for a given dependency tree. Even an efficient sampler like MC-SAT (Poon and Domingos, 2006) , as used in Poon & Domingos (2008) , would have a hard time generating accurate estimates within a reasonable amount of time. On the other hand, as already noted in the previous section, the lambda-form distribution is generally sparse. Large lambda-forms are rare, as they correspond to complex expressions that are often decomposable into smaller ones. Moreover, while ambiguities are present at the lexical level, they quickly diminish when more words are present. Therefore, a lambda form can usually only belong to a small number of clusters, if not a unique one. We thus simplify the problem by approximating the sum with the mode, and search instead for the L and \u03b8 that maximize log P \u03b8 (Q, L). Since the optimal weights and log-likelihood can be derived in closed form given the semantic parses L, we simply search over semantic parses, evaluating them using log-likelihood.",
                "cite_spans": [
                    {
                        "start": 228,
                        "end": 251,
                        "text": "(Liang and Klein, 2008)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 635,
                        "end": 636,
                        "text": "6",
                        "ref_id": null
                    },
                    {
                        "start": 1807,
                        "end": 1808,
                        "text": "7",
                        "ref_id": null
                    },
                    {
                        "start": 2048,
                        "end": 2073,
                        "text": "(Poon and Domingos, 2006)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 2087,
                        "end": 2109,
                        "text": "Poon & Domingos (2008)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
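The closed-form weight update log(n c,f /n c ) is just a log relative frequency, which a few lines make concrete. A sketch with invented counts (the cluster and form names are illustrative, not from the GENIA experiments):

```python
import math
from collections import Counter

# Closed-form M-step sketch: under the local normalization constraint
# (the exponentiated weights in a group sum to 1), the optimal weight of
# "p in c & Form(p, f)" is log(n_{c,f} / n_c), computed from the current
# MAP semantic parses.

def optimal_form_weights(assignments):
    """assignments: (cluster, core_form) pairs taken from the MAP parses."""
    n_cf = Counter(assignments)
    n_c = Counter(c for c, _ in assignments)
    return {(c, f): math.log(n / n_c[c]) for (c, f), n in n_cf.items()}

parses = [("NEXT-TO", "borders")] * 3 + [("NEXT-TO", "is adjacent to")]
w = optimal_form_weights(parses)
print(round(w[("NEXT-TO", "borders")], 4))             # log(3/4) = -0.2877
print(round(sum(math.exp(x) for x in w.values()), 4))  # 1.0: group is normalized
```

Because the weights are log relative frequencies, each group behaves as a proper mixture component, and the completion likelihood of any parse can be read off by summing the weights of the satisfied formulas.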
            {
                "text": "Algorithm 2 gives pseudo-code for our algorithm. The input consists of an MLN without weights and the QLFs for the training sentences. Two operators are used for updating semantic parses. The first is to merge two clusters, denoted by MERGE(C 1 , C 2 ) for clusters C 1 , C 2 , which does the following:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "and there is the local normalization constraint f e wc,f = 1. The optimal weights wc,f are easily derived by solving this constrained optimization problem.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "1. Create a new cluster C and add all core forms in C 1 , C 2 to C;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "2. Create new argument types for C by merging those in C 1 , C 2 so as to maximize the loglikelihood;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "3. Remove C 1 , C 2 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "Here, merging two argument types refers to pooling their argument forms to create a new argument type. Enumerating all possible ways of creating new argument types is intractable. USP approximates it by considering one type at a time and either creating a new type for it or merging it to types already considered, whichever maximizes the loglikelihood. The types are considered in decreasing order of their numbers of occurrences so that more information is available for each decision. MERGE clusters syntactically different expressions whose meanings appear to be the same according to the model. The second operator is to create a new cluster by composing two existing ones, denoted by COMPOSE(C R , C A ), which does the following:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
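The greedy type-merging step can be rendered as a toy: types are visited in decreasing frequency, and each is merged into an already-built type only when that improves a score. The score below (log relative frequencies of argument forms plus a flat per-merge bonus standing in for the exponential prior on parameters) and all counts are invented stand-ins, not the full USP log-likelihood:

```python
import math
from collections import Counter

def type_score(form_counts):
    """Sum over forms of n_f * log(n_f / n): higher when forms agree."""
    n = sum(form_counts.values())
    return sum(k * math.log(k / n) for k in form_counts.values())

def greedy_merge_types(types, alpha=1.0):
    """types: Counters of argument-form counts; alpha rewards fewer types."""
    merged = []
    for t in sorted(types, key=lambda c: -sum(c.values())):  # frequent first
        best_i, best_gain = None, 0.0
        for i, m in enumerate(merged):
            gain = type_score(m + t) - type_score(m) - type_score(t) + alpha
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:
            merged.append(t)                    # keep as its own type
        else:
            merged[best_i] = merged[best_i] + t  # pool argument forms
    return merged

types = [Counter({"dobj": 5}), Counter({"nsubj": 4}), Counter({"dobj": 3})]
merged = greedy_merge_types(types)
print([dict(m) for m in merged])  # [{'dobj': 8}, {'nsubj': 4}]
```

The two dobj-dominated types pool together while the nsubj type stays separate, mirroring how MERGE keeps subject and object argument types distinct.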
            {
                "text": "1. Create a new cluster C;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "2. Find all parts r \u2208 C R , a \u2208 C A such that a is an argument of r, compose them to r(a) by \u03bb-reduction and add the new part to C;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "3. Create new argument types for C from the argument forms of r(a) so as to maximize the log-likelihood.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "COMPOSE creates clusters of large lambda-forms if they tend to be composed of the same subforms (e.g., the lambda form for \"is next to\"). These lambda-forms may later be merged with other clusters (e.g., borders). At learning time, USP maintains an agenda that contains operations that have been evaluated and are pending execution. During initialization, USP forms a part and creates a new cluster for each unary atom u(n). It also assigns binary atoms of the form b(n, n \u2032 ) to the part as argument forms and creates a new argument type for each. This forms the initial clustering and semantic parses. USP then merges clusters with the same core form (i.e., the same unary predicate) using MERGE. 8 At each step, USP evaluates the candidate operations and adds them to the agenda if the improvement is above a threshold. 9 The operation with the highest score is executed, and the parameters are updated with the new optimal values. The QLFs which contain an affected part are reparsed, and operations in the agenda whose score might be affected are re-evaluated. These changes are done very efficiently using inverted indexes. We omit the details here due to space limitations. USP terminates when the agenda is empty, and outputs the current MLN parameters and semantic parses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "USP learning uses the same optimization objective as hard EM, and is also guaranteed to find a local optimum since at each step it improves the log-likelihood. It differs from EM in directly optimizing the likelihood instead of a lower bound.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Learning",
                "sec_num": "5"
            },
            {
                "text": "Evaluating unsupervised semantic parsers is difficult, because there is no predefined formal language or gold logical forms for the input sentences. Thus the best way to test them is by using them for the ultimate goal: answering questions based on the input corpus. In this paper, we applied USP to extracting knowledge from biomedical abstracts and evaluated its performance in answering a set of questions that simulate the information needs of biomedical researchers. We used the GENIA dataset (Kim et al., 2003) as the source for knowledge extraction. It contains 1999 PubMed abstracts and marks all mentions of biomedical entities according to the GENIA ontology, such as cell, protein, and DNA. As a first approximation to the questions a biomedical researcher might ask, we generated a set of two thousand questions on relations between entities. Sample questions are: \"What regulates MIP-1alpha?\", \"What does anti-STAT 1 inhibit?\". To simulate the real information need, we sample the relations from the 100 most frequently used verbs (excluding the auxiliary verbs be, have, and do), and sample the entities from those annotated in GENIA, both according to their numbers of occurrences. We evaluated USP by the number of answers it provided and the accuracy as determined by manual labeling. 10",
                "cite_spans": [
                    {
                        "start": 498,
                        "end": 516,
                        "text": "(Kim et al., 2003)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task",
                "sec_num": "6.1"
            },
            {
                "text": "Since USP is the first unsupervised semantic parser, conducting a meaningful comparison of it with other systems is not straightforward. Standard question-answering (QA) benchmarks do not provide the most appropriate comparison, because they tend to simultaneously emphasize other aspects not directly related to semantic parsing. Moreover, most state-of-the-art QA systems use supervised learning in their key components and/or require domain-specific engineering efforts. The closest available system to USP in aims and capabilities is TextRunner (Banko et al., 2007) , and we compare with it. TextRunner is the state-of-the-art system for open-domain information extraction; its goal is to extract knowledge from text without using supervised labels. Given that a central challenge to semantic parsing is resolving syntactic variations of the same meaning, we also compare with RESOLVER (Yates and Etzioni, 2009) , a state-of-the-art unsupervised system based on TextRunner for jointly resolving entities and relations, and DIRT (Lin and Pantel, 2001) , which resolves paraphrases of binary relations. Finally, we also compared to an informed baseline based on keyword matching. Keyword: We consider a baseline system based on keyword matching. The question substring containing the verb and the available argument is directly matched with the input text, ignoring case and morphology. We consider two ways to derive the answer given a match. The first one (KW) simply returns the rest of sentence on the other side of the verb. The second one (KW-SYN) is informed by syntax: the answer is extracted from the subject or object of the verb, depending on the question. If the verb does not contain the expected argument, the sentence is ignored. TextRunner: TextRunner inputs text and outputs relational triples in the form (R, A 1 , A 2 ) , where R is the relation string, and A 1 , A 2 the argument strings. 
Given a triple and a question, we first match their relation strings, and then match the strings for the argument that is present in the question. If both match, we return the other argument string in the triple as an answer. We report results when exact match is used (TR-EXACT), or when the triple string can contain the question one as a substring (TR-SUB). RESOLVER: RESOLVER (Yates and Etzioni, 2009) inputs TextRunner triples and collectively resolves coreferent relation and argument strings. On the GENIA data, using the default parameters, RESOLVER produces only a few trivial relation clusters and no argument clusters. This is not surprising, since RESOLVER assumes high redundancy in the data, and will discard any strings with fewer than 25 extractions. For a fair comparison, we also ran RESOLVER using all extractions, and manually tuned the parameters based on eyeballing of clustering quality. The best result was obtained with 25 rounds of execution and with the entity multiple set to 200 (the default is 30). To answer questions, the only difference from TextRunner is that a question string can match any string in its cluster. As in TextRunner, we report results for both exact match (RS-EXACT) and substring (RS-SUB). DIRT: The DIRT system inputs a path and returns a set of similar paths. To use DIRT in question answering, we queried it to obtain similar paths for the relation of the question, and used these paths while matching sentences. We first used MINIPAR (Lin, 1998) to parse input text using the same dependencies as DIRT. To determine a match, we first check if the sentence contains the question path or one of its DIRT paths. If so, and if the available argument slot in the question is contained in the one in the sentence, it is a match, and we return the other argument slot from the sentence if it is present. Ideally, a fair comparison will require running DIRT on the GENIA text, but we were not able to obtain the source code. 
We thus resorted to using the latest DIRT database released by the author, which contains paths extracted from a large corpus with more than 1GB of text. This puts DIRT in a very advantageous position compared with the other systems. In our experiments, we used the top three similar paths, as including more resulted in very low precision. USP: We built a system for knowledge extraction and question answering on top of USP. It generated Stanford dependencies (de Marneffe et al., 2006) from the input text using the Stanford parser, and then fed these to USP-Learn 11, which produced an MLN with learned weights and the MAP semantic parses of the input sentences. These MAP parses formed our knowledge base (KB). To answer questions, the system first parses the questions 12 using USP-Parse with the learned MLN, and then matches the question parse to parses in the KB by testing subsumption (i.e., a question parse matches a KB one iff the former is subsumed by the latter). When a match occurs, our system then looks for arguments whose type is in accordance with the question. For example, if the question is \"What regulates MIP-1alpha?\", it searches for the argument type of the relation that contains the argument form \"nsubj\" for subject. If such an argument exists for the relation part, it is returned as the answer. Table 1 shows the results for all systems. USP extracted the highest number of answers, almost doubling that of the second highest (RS-SUB). It obtained the highest accuracy at 88%, and the number of correct answers it extracted is three times that of the second-highest system. The informed baseline (KW-SYN) did surprisingly well compared to systems other than USP, in terms of accuracy and number of correct answers. TextRunner achieved good accuracy when exact match was used (TR-EXACT), but only obtained a fraction of the answers compared to USP. With substring match, its recall improved substantially, but precision dropped by more than 20 points. 
RESOLVER improved the number of extracted answers by sanctioning more matches based on the clusters it generated. However, most of those additional answers are incorrect due to clustering errors. DIRT obtained the second-highest number of correct answers, but its precision is quite low because the similar paths contain many errors.",
                "cite_spans": [
                    {
                        "start": 549,
                        "end": 569,
                        "text": "(Banko et al., 2007)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 890,
                        "end": 915,
                        "text": "(Yates and Etzioni, 2009)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 1032,
                        "end": 1054,
                        "text": "(Lin and Pantel, 2001)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 2291,
                        "end": 2316,
                        "text": "(Yates and Etzioni, 2009)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 3400,
                        "end": 3411,
                        "text": "(Lin, 1998)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 4340,
                        "end": 4366,
                        "text": "(de Marneffe et al., 2006)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 1825,
                        "end": 1840,
                        "text": "(R, A 1 , A 2 )",
                        "ref_id": null
                    },
                    {
                        "start": 5206,
                        "end": 5213,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Systems",
                "sec_num": "6.2"
            },
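The triple-based question-answering procedure described for the TextRunner baselines (match the relation string, match the known argument, return the other argument; TR-EXACT uses string equality, TR-SUB substring containment) can be sketched as follows. This is a minimal illustration; the triples and strings are hypothetical examples, not data from the actual experiments.

```python
# Sketch of the TR-EXACT / TR-SUB answer-extraction procedure described
# above. The triples below are invented for illustration only.

def answer(triples, q_rel, q_arg, substring=False):
    """Answer a question with relation string q_rel and one known
    argument string q_arg, given (R, A1, A2) triples.
    substring=False -> TR-EXACT; substring=True -> TR-SUB."""
    def match(triple_str, question_str):
        # TR-SUB allows the triple string to contain the question string.
        return question_str in triple_str if substring else triple_str == question_str

    answers = []
    for r, a1, a2 in triples:
        if not match(r, q_rel):
            continue
        # The known argument may fill either slot; return the other one.
        if match(a1, q_arg):
            answers.append(a2)
        elif match(a2, q_arg):
            answers.append(a1)
    return answers

# Hypothetical extractions:
triples = [("inhibits", "IL-2", "NF-kappaB activity"),
           ("strongly inhibits", "IL-2", "gene expression")]

exact = answer(triples, "inhibits", "IL-2")                  # TR-EXACT
sub = answer(triples, "inhibits", "IL-2", substring=True)    # TR-SUB
```

Substring matching recovers the second triple (whose relation string merely contains "inhibits"), which mirrors the recall/precision trade-off reported for TR-SUB.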
            {
                "text": "Manual inspection shows that USP is able to resolve many nontrivial syntactic variations without user supervision. It consistently resolves the syntactic difference between active and passive voices. It successfully identifies many distinct argument forms that mean the same (e.g., \"X stimulates Y\" \u2248 \"Y is stimulated with X\", \"expression of X\" \u2248 \"X expression\"). It also resolves many nouns correctly and forms meaningful groups of relations. Here are some sample clusters in core forms: {investigate, examine, evaluate, analyze, study, assay} {diminish, reduce, decrease, attenuate} {synthesis, production, secretion, release} {dramatically, substantially, significantly} An example question-answer pair, together with the source sentence, is shown below: Q: What does IL-13 enhance? A: The 12-lipoxygenase activity of murine macrophages.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Qualitative Analysis",
                "sec_num": "6.4"
            },
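The effect of the relation clusters quoted above (and of RESOLVER-style matching, where a question string may match any string in its cluster) can be illustrated by mapping each string to a canonical member of its cluster before comparison. The clusters are the sample USP output listed in the text; the matching helper itself is an illustrative sketch, not the system's code.

```python
# Illustration: cluster membership licenses matches between distinct
# surface strings. Clusters are the sample USP clusters quoted above;
# the canonicalization scheme (alphabetically smallest member) is an
# arbitrary choice for this sketch.

CLUSTERS = [
    {"investigate", "examine", "evaluate", "analyze", "study", "assay"},
    {"diminish", "reduce", "decrease", "attenuate"},
    {"synthesis", "production", "secretion", "release"},
    {"dramatically", "substantially", "significantly"},
]

# Map every member to a deterministic canonical representative.
CORE = {w: min(c) for c in CLUSTERS for w in c}

def same_meaning(a, b):
    """True iff the two strings are identical or fall in the same cluster."""
    return CORE.get(a, a) == CORE.get(b, b)
```

Under this scheme, a question mentioning "examine" matches a sentence using "assay", while "reduce" and "release" remain distinct.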
            {
                "text": "Sentence: The data presented here indicate that (1) the 12-lipoxygenase activity of murine macrophages is upregulated in vitro and in vivo by IL-4 and/or IL-13, . . .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Qualitative Analysis",
                "sec_num": "6.4"
            },
            {
                "text": "This paper introduces the first unsupervised approach to learning semantic parsers. Our USP system is based on Markov logic, and recursively clusters expressions to abstract away syntactic variations of the same meaning. We have successfully applied USP to extracting a knowledge base from biomedical text and answering questions based on it.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "Directions for future work include: better handling of antonyms, subsumption relations among expressions, quantifier scoping, more complex lambda forms, etc.; use of context and discourse to aid expression clustering and semantic parsing; more efficient learning and inference; application to larger corpora; etc.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "The only exception that we are aware of isGe and Mooney (2009).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Currently, we do not handle quantifier scoping or semantics for specific closed-class words such as determiners. These will be pursued in future work.3 There are hard constraints to guarantee that these assignments form a legal partition. We omit them for simplicity.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Excluding weights of \u221e or \u2212\u221e, which signify hard constraints.5 For example, in the sentence \"IL-2 inhibits X in A and induces Y in B\", the conjunction between \"inhibits\" and \"induces\" suggests that they are different. If \"inhibits\" and \"induces\" are indeed synonyms, such a sentence will sound awkward and would probably be rephrased as \"IL-2 inhibits X in A and Y in B\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Regularizations, e.g., Gaussian priors on weights, alleviate this problem by penalizing large weights, but it remains true that weights within a short range are roughly equivalent.7 To see this, notice that for a given c, the total contribution to the completion likelihood from all groundings in its formula group is f wc,fnc,f. In addition,f nc,f = nc",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Word-sense disambiguation can be handled by including a new kind of operator that splits a cluster into subclusters. We leave this to future work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We currently set it to 10 to favor precision and guard against errors due to inexact estimates.10 The labels and questions are available at http://alchemy.cs.washington.edu/papers/poon09.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "\u03b1 and \u03b2 are set to \u22125 and \u221210.12 The question slot is replaced by a dummy word.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "We thank the anonymous reviewers for their comments. This ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": "8"
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Predicting Structured Data",
                "authors": [
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Bakir",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Hofmann",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [
                            "B"
                        ],
                        "last": "Sch\u00f6lkopf",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "G. Bakir, T. Hofmann, B. B. Sch\u00f6lkopf, A. Smola, B. Taskar, S. Vishwanathan, and (eds.). 2007. Pre- dicting Structured Data. MIT Press, Cambridge, MA.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Open information extraction from the web",
                "authors": [
                    {
                        "first": "Michele",
                        "middle": [],
                        "last": "Banko",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [
                            "J"
                        ],
                        "last": "Cafarella",
                        "suffix": ""
                    },
                    {
                        "first": "Stephen",
                        "middle": [],
                        "last": "Soderland",
                        "suffix": ""
                    },
                    {
                        "first": "Matt",
                        "middle": [],
                        "last": "Broadhead",
                        "suffix": ""
                    },
                    {
                        "first": "Oren",
                        "middle": [],
                        "last": "Etzioni",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the Twentieth International Joint Conference on Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "2670--2676",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michele Banko, Michael J. Cafarella, Stephen Soder- land, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Pro- ceedings of the Twentieth International Joint Con- ference on Artificial Intelligence, pages 2670-2676, Hyderabad, India. AAAI Press.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Introduction to the CoNLL-2004 shared task: Semantic role labeling",
                "authors": [
                    {
                        "first": "Xavier",
                        "middle": [],
                        "last": "Carreras",
                        "suffix": ""
                    },
                    {
                        "first": "Luis",
                        "middle": [],
                        "last": "Marquez",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the Eighth Conference on Computational Natural Language Learning",
                "volume": "",
                "issue": "",
                "pages": "89--97",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Xavier Carreras and Luis Marquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role la- beling. In Proceedings of the Eighth Conference on Computational Natural Language Learning, pages 89-97, Boston, MA. ACL.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Generating typed dependency parses from phrase structure parses",
                "authors": [
                    {
                        "first": "Marie-Catherine",
                        "middle": [],
                        "last": "De Marneffe",
                        "suffix": ""
                    },
                    {
                        "first": "Bill",
                        "middle": [],
                        "last": "Maccartney",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher",
                        "middle": [
                            "D"
                        ],
                        "last": "Manning",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation",
                "volume": "",
                "issue": "",
                "pages": "449--454",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the Fifth International Conference on Language Resources and Evaluation, pages 449- 454, Genoa, Italy. ELRA.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Learning a compositional semantic parser using an existing syntactic parser",
                "authors": [
                    {
                        "first": "Ruifang",
                        "middle": [],
                        "last": "Ge",
                        "suffix": ""
                    },
                    {
                        "first": "Raymond",
                        "middle": [
                            "J"
                        ],
                        "last": "Mooney",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the Forty Seventh Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ruifang Ge and Raymond J. Mooney. 2009. Learning a compositional semantic parser using an existing syntactic parser. In Proceedings of the Forty Sev- enth Annual Meeting of the Association for Compu- tational Linguistics, Singapore. ACL.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Introduction to Statistical Relational Learning",
                "authors": [],
                "year": 2007,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lise Getoor and Ben Taskar, editors. 2007. Introduc- tion to Statistical Relational Learning. MIT Press, Cambridge, MA.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Tree Matching Problems with Applications to Structured Text databases",
                "authors": [
                    {
                        "first": "Pekka",
                        "middle": [],
                        "last": "Kilpelainen",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pekka Kilpelainen. 1992. Tree Matching Prob- lems with Applications to Structured Text databases. Ph.D. Thesis, Department of Computer Science, University of Helsinki.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "GENIA corpus -a semantically annotated corpus for bio-textmining",
                "authors": [
                    {
                        "first": "Jin-Dong",
                        "middle": [],
                        "last": "Kim",
                        "suffix": ""
                    },
                    {
                        "first": "Tomoko",
                        "middle": [],
                        "last": "Ohta",
                        "suffix": ""
                    },
                    {
                        "first": "Yuka",
                        "middle": [],
                        "last": "Tateisi",
                        "suffix": ""
                    },
                    {
                        "first": "Jun'ichi",
                        "middle": [],
                        "last": "Tsujii",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Bioinformatics",
                "volume": "19",
                "issue": "",
                "pages": "180--82",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jin-Dong Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. GENIA corpus -a seman- tically annotated corpus for bio-textmining. Bioin- formatics, 19:180-82.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Extracting semantic networks from text via relational clustering",
                "authors": [
                    {
                        "first": "Stanley",
                        "middle": [],
                        "last": "Kok",
                        "suffix": ""
                    },
                    {
                        "first": "Pedro",
                        "middle": [],
                        "last": "Domingos",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the Nineteenth European Conference on Machine Learning",
                "volume": "",
                "issue": "",
                "pages": "624--639",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stanley Kok and Pedro Domingos. 2008. Extract- ing semantic networks from text via relational clus- tering. In Proceedings of the Nineteenth European Conference on Machine Learning, pages 624-639, Antwerp, Belgium. Springer.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Analyzing the errors of unsupervised learning",
                "authors": [
                    {
                        "first": "Percy",
                        "middle": [],
                        "last": "Liang",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Klein",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the Forty Sixth Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "879--887",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Percy Liang and Dan Klein. 2008. Analyzing the er- rors of unsupervised learning. In Proceedings of the Forty Sixth Annual Meeting of the Association for Computational Linguistics, pages 879-887, Colum- bus, OH. ACL.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "DIRT -discovery of inference rules from text",
                "authors": [
                    {
                        "first": "Dekang",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Pantel",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
                "volume": "",
                "issue": "",
                "pages": "323--328",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dekang Lin and Patrick Pantel. 2001. DIRT -dis- covery of inference rules from text. In Proceedings of the Seventh ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining, pages 323-328, San Francisco, CA. ACM Press.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Dependency-based evaluation of MINIPAR",
                "authors": [
                    {
                        "first": "Dekang",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of the Workshop on the Evaluation of Parsing Systems",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dekang Lin. 1998. Dependency-based evaluation of MINIPAR. In Proceedings of the Workshop on the Evaluation of Parsing Systems, Granada, Spain. ELRA.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Computing word-pair antonymy",
                "authors": [
                    {
                        "first": "Saif",
                        "middle": [],
                        "last": "Mohammad",
                        "suffix": ""
                    },
                    {
                        "first": "Bonnie",
                        "middle": [],
                        "last": "Dorr",
                        "suffix": ""
                    },
                    {
                        "first": "Graeme",
                        "middle": [],
                        "last": "Hirst",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "982--991",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Saif Mohammad, Bonnie Dorr, and Graeme Hirst. 2008. Computing word-pair antonymy. In Proceed- ings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 982-991, Honolulu, HI. ACL.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Learning for semantic parsing",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Raymond",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Mooney",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the Eighth International Conference on Computational Linguistics and Intelligent Text Processing",
                "volume": "",
                "issue": "",
                "pages": "311--324",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Raymond J. Mooney. 2007. Learning for semantic parsing. In Proceedings of the Eighth International Conference on Computational Linguistics and Intel- ligent Text Processing, pages 311-324, Mexico City, Mexico. Springer.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Sound and efficient inference with probabilistic and deterministic dependencies",
                "authors": [
                    {
                        "first": "Hoifung",
                        "middle": [],
                        "last": "Poon",
                        "suffix": ""
                    },
                    {
                        "first": "Pedro",
                        "middle": [],
                        "last": "Domingos",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the Twenty First National Conference on Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "458--463",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hoifung Poon and Pedro Domingos. 2006. Sound and efficient inference with probabilistic and determin- istic dependencies. In Proceedings of the Twenty First National Conference on Artificial Intelligence, pages 458-463, Boston, MA. AAAI Press.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Joint unsupervised coreference resolution with Markov logic",
                "authors": [
                    {
                        "first": "Hoifung",
                        "middle": [],
                        "last": "Poon",
                        "suffix": ""
                    },
                    {
                        "first": "Pedro",
                        "middle": [],
                        "last": "Domingos",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "649--658",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hoifung Poon and Pedro Domingos. 2008. Joint unsu- pervised coreference resolution with Markov logic. In Proceedings of the 2008 Conference on Empiri- cal Methods in Natural Language Processing, pages 649-658, Honolulu, HI. ACL.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Markov logic networks",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Richardson",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Domingos",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Machine Learning",
                "volume": "62",
                "issue": "",
                "pages": "107--136",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M. Richardson and P. Domingos. 2006. Markov logic networks. Machine Learning, 62:107-136.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Unsupervised methods for determining object and relation synonyms on the web",
                "authors": [
                    {
                        "first": "Alexander",
                        "middle": [],
                        "last": "Yates",
                        "suffix": ""
                    },
                    {
                        "first": "Oren",
                        "middle": [],
                        "last": "Etzioni",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Journal of Artificial Intelligence Research",
                "volume": "34",
                "issue": "",
                "pages": "255--296",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alexander Yates and Oren Etzioni. 2009. Unsupervised methods for determining object and relation synonyms on the web. Journal of Artificial Intelligence Research, 34:255-296.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars",
                "authors": [
                    {
                        "first": "Luke",
                        "middle": [
                            "S"
                        ],
                        "last": "Zettlemoyer",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Collins",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of the Twenty First Conference on Uncertainty in Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "658--666",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Twenty First Conference on Uncertainty in Artificial Intelligence, pages 658-666, Edinburgh, Scotland. AUAI Press.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Online learning of relaxed CCG grammars for parsing to logical form",
                "authors": [
                    {
                        "first": "Luke",
                        "middle": [
                            "S"
                        ],
                        "last": "Zettlemoyer",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Collins",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
                "volume": "",
                "issue": "",
                "pages": "878--887",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 878-887, Prague, Czech Republic. ACL.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF1": {
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td/><td colspan=\"3\"># Total # Correct Accuracy</td></tr><tr><td>KW</td><td>150</td><td>67</td><td>45%</td></tr><tr><td>KW-SYN</td><td>87</td><td>67</td><td>77%</td></tr><tr><td>TR-EXACT</td><td>29</td><td>23</td><td>79%</td></tr><tr><td>TR-SUB</td><td>152</td><td>81</td><td>53%</td></tr><tr><td>RS-EXACT</td><td>53</td><td>24</td><td>45%</td></tr><tr><td>RS-SUB</td><td>196</td><td>81</td><td>41%</td></tr><tr><td>DIRT</td><td>159</td><td>94</td><td>59%</td></tr><tr><td>USP</td><td>334</td><td>295</td><td>88%</td></tr></table>",
                "text": "Comparison of question answering results on the GENIA dataset.",
                "num": null
            }
        }
    }
}