{
    "paper_id": "P05-1014",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:38:04.176907Z"
    },
    "title": "The Distributional Inclusion Hypotheses and Lexical Entailment",
    "authors": [
        {
            "first": "Maayan",
            "middle": [],
            "last": "Geffet",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Hebrew University",
                "location": {
                    "postCode": "91904",
                    "settlement": "Jerusalem",
                    "country": "Israel"
                }
            },
            "email": ""
        },
        {
            "first": "Ido",
            "middle": [],
            "last": "Dagan",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Ilan University",
                "location": {
                    "postCode": "52900",
                    "settlement": "Ramat-Gan",
                    "country": "Israel"
                }
            },
            "email": "dagan@cs.biu.ac.il"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper suggests refinements for the Distributional Similarity Hypothesis. Our proposed hypotheses relate the distributional behavior of pairs of words to lexical entailment-a tighter notion of semantic similarity that is required by many NLP applications. To automatically explore the validity of the defined hypotheses we developed an inclusion testing algorithm for characteristic features of two words, which incorporates corpus and web-based feature sampling to overcome data sparseness. The degree of hypotheses validity was then empirically tested and manually analyzed with respect to the word sense level. In addition, the above testing algorithm was exploited to improve lexical entailment acquisition.",
    "pdf_parse": {
        "paper_id": "P05-1014",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper suggests refinements for the Distributional Similarity Hypothesis. Our proposed hypotheses relate the distributional behavior of pairs of words to lexical entailment-a tighter notion of semantic similarity that is required by many NLP applications. To automatically explore the validity of the defined hypotheses we developed an inclusion testing algorithm for characteristic features of two words, which incorporates corpus and web-based feature sampling to overcome data sparseness. The degree of hypotheses validity was then empirically tested and manually analyzed with respect to the word sense level. In addition, the above testing algorithm was exploited to improve lexical entailment acquisition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Distributional Similarity between words has been an active research area for more than a decade. It is based on the general idea of Harris' Distributional Hypothesis, suggesting that words that occur within similar contexts are semantically similar (Harris, 1968) . Concrete similarity measures compare a pair of weighted context feature vectors that characterize two words (Church and Hanks, 1990; Ruge, 1992; Pereira et al., 1993; Grefenstette, 1994; Lee, 1997; Lin, 1998; Pantel and Lin, 2002; Weeds and Weir, 2003) .",
                "cite_spans": [
                    {
                        "start": 249,
                        "end": 263,
                        "text": "(Harris, 1968)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 374,
                        "end": 398,
                        "text": "(Church and Hanks, 1990;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 399,
                        "end": 410,
                        "text": "Ruge, 1992;",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 411,
                        "end": 432,
                        "text": "Pereira et al., 1993;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 433,
                        "end": 452,
                        "text": "Grefenstette, 1994;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 453,
                        "end": 463,
                        "text": "Lee, 1997;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 464,
                        "end": 474,
                        "text": "Lin, 1998;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 475,
                        "end": 496,
                        "text": "Pantel and Lin, 2002;",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 497,
                        "end": 518,
                        "text": "Weeds and Weir, 2003)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "As it turns out, distributional similarity captures a somewhat loose notion of semantic similarity (see Table 1 ). It does not ensure that the meaning of one word is preserved when replacing it with the other one in some context. However, many semantic information-oriented applications like Question Answering, Information Extraction and Paraphrase Acquisition require a tighter similarity criterion, as was also demonstrated by papers at the recent PASCAL Challenge on Recognizing Textual Entailment (Dagan et al., 2005) . In particular, all these applications need to know when the meaning of one word can be inferred (entailed) from another word, so that one word could substitute the other in some contexts. This relation corresponds to several lexical semantic relations, such as synonymy, hyponymy and some cases of meronymy. For example, in Question Answering, the word company in a question can be substituted in the text by firm (synonym), automaker (hyponym) or division (meronym). Unfortunately, existing manually constructed resources of lexical semantic relations, such as WordNet, are not exhaustive and comprehensive enough for a variety of domains and thus are not sufficient as a sole resource for application needs 1 .",
                "cite_spans": [
                    {
                        "start": 502,
                        "end": 522,
                        "text": "(Dagan et al., 2005)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 1234,
                        "end": 1235,
                        "text": "1",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 104,
                        "end": 111,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Most works that attempt to learn such concrete lexical semantic relations employ a co-occurrence pattern-based approach (Hearst, 1992; Ravichandran and Hovy, 2002; Moldovan et al., 2004) . Typically, they use a set of predefined lexicosyntactic patterns that characterize specific semantic relations. If a candidate word pair (like company-automaker) co-occurs within the same sentence satisfying a concrete pattern (like \" \u2026companies, such as automakers\"), then it is expected that the corresponding semantic relation holds between these words (hypernym-hyponym in this example).",
                "cite_spans": [
                    {
                        "start": 120,
                        "end": 134,
                        "text": "(Hearst, 1992;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 135,
                        "end": 163,
                        "text": "Ravichandran and Hovy, 2002;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 164,
                        "end": 186,
                        "text": "Moldovan et al., 2004)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In recent work (Geffet and Dagan, 2004) we explored the correspondence between the distributional characterization of two words (which may hardly co-occur, as is usually the case for syno-nyms) and the kind of tight semantic relationship that might hold between them. We formulated a lexical entailment relation that corresponds to the above mentioned substitutability criterion, and is termed meaning entailing substitutability (which we term here for brevity as lexical entailment). Given a pair of words, this relation holds if there are some contexts in which one of the words can be substituted by the other, such that the meaning of the original word can be inferred from the new one. We then proposed a new feature weighting function (RFF) that yields more accurate distributional similarity lists, which better approximate the lexical entailment relation. Yet, this method still applies a standard measure for distributional vector similarity (over vectors with the improved feature weights), and thus produces many loose similarities that do not correspond to entailment.",
                "cite_spans": [
                    {
                        "start": 15,
                        "end": 39,
                        "text": "(Geffet and Dagan, 2004)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "This paper explores more deeply the relationship between distributional characterization of words and lexical entailment, proposing two new hypotheses as a refinement of the distributional similarity hypothesis. The main idea is that if one word entails the other then we would expect that virtually all the characteristic context features of the entailing word will actually occur also with the entailed word.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "To test this idea we developed an automatic method for testing feature inclusion between a pair of words. This algorithm combines corpus statistics with a web-based feature sampling technique. The web is utilized to overcome the data sparseness problem, so that features which are not found with one of the two words can be considered as truly distinguishing evidence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Using the above algorithm we first tested the empirical validity of the hypotheses. Then, we demonstrated how the hypotheses can be leveraged in practice to improve the precision of automatic acquisition of the entailment relation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "This subsection reviews the relevant details of earlier methods that were utilized within this paper.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementations of Distributional Similarity",
                "sec_num": "2.1"
            },
            {
                "text": "In the computational setting contexts of words are represented by feature vectors. Each word w is represented by a feature vector, where an entry in the vector corresponds to a feature f. Each feature represents another word (or term) with which w cooccurs, and possibly specifies also the syntactic relation between the two words as in (Grefenstette, 1994; Lin, 1998; Weeds and Weir, 2003) . Pado and Lapata (2003) demonstrated that using syntactic dependency-based vector space models can help distinguish among classes of different lexical relations, which seems to be more difficult for traditional \"bag of words\" co-occurrence-based models.",
                "cite_spans": [
                    {
                        "start": 337,
                        "end": 357,
                        "text": "(Grefenstette, 1994;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 358,
                        "end": 368,
                        "text": "Lin, 1998;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 369,
                        "end": 390,
                        "text": "Weeds and Weir, 2003)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 393,
                        "end": 415,
                        "text": "Pado and Lapata (2003)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementations of Distributional Similarity",
                "sec_num": "2.1"
            },
            {
                "text": "A syntactic feature is defined as a triple <term, syntactic_relation, relation_direction> (the direction is set to 1, if the feature is the word's modifier and to 0 otherwise). For example, given the word \"company\" the feature <earnings_report, gen, 0> (genitive) corresponds to the phrase \"company's earnings report\", and <profit, pcomp, 0> (prepositional complement) corresponds to \"the profit of the company\". Throughout this paper we used syntactic features generated by the Minipar dependency parser (Lin, 1993) .",
                "cite_spans": [
                    {
                        "start": 505,
                        "end": 516,
                        "text": "(Lin, 1993)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementations of Distributional Similarity",
                "sec_num": "2.1"
            },
            {
                "text": "The value of each entry in the feature vector is determined by some weight function weight(w,f), which quantifies the degree of statistical association between the feature and the corresponding word. The most widely used association weight function is (point-wise) Mutual Information (MI) (Church and Hanks, 1990; Lin, 1998; Dagan, 2000; Weeds et al., 2004 (Geffet and Dagan, 2004) . Entailment judgments are marked by the arrow direction, with '*' denoting no entailment.",
                "cite_spans": [
                    {
                        "start": 289,
                        "end": 313,
                        "text": "(Church and Hanks, 1990;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 314,
                        "end": 324,
                        "text": "Lin, 1998;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 325,
                        "end": 337,
                        "text": "Dagan, 2000;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 338,
                        "end": 356,
                        "text": "Weeds et al., 2004",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 357,
                        "end": 381,
                        "text": "(Geffet and Dagan, 2004)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementations of Distributional Similarity",
                "sec_num": "2.1"
            },
            {
                "text": "Once feature vectors have been constructed, the similarity between two words is defined by some vector similarity metric. Different metrics have been used, such as weighted Jaccard (Grefenstette, 1994; Dagan, 2000) , cosine (Ruge, 1992) , various information theoretic measures (Lee, 1997) , and the widely cited and competitive (see (Weeds and Weir, 2003) ) measure of Lin (1998) for similarity between two words, w and v, defined as follows:",
                "cite_spans": [
                    {
                        "start": 181,
                        "end": 201,
                        "text": "(Grefenstette, 1994;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 202,
                        "end": 214,
                        "text": "Dagan, 2000)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 224,
                        "end": 236,
                        "text": "(Ruge, 1992)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 278,
                        "end": 289,
                        "text": "(Lee, 1997)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 334,
                        "end": 356,
                        "text": "(Weeds and Weir, 2003)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 370,
                        "end": 380,
                        "text": "Lin (1998)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementations of Distributional Similarity",
                "sec_num": "2.1"
            },
            {
                "text": ", ) , ( ) , ( ) , ( ) , ( ) , ( ) ( ) ( ) ( ) ( \u2208 \u2208 \u2229 \u2208 + + = f v weight f w weight f v weight f w weight v w sim v F f w F f v F w F f Lin",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementations of Distributional Similarity",
                "sec_num": "2.1"
            },
            {
                "text": "where F(w) and F(v) are the active features of the two words (positive feature weight) and the weight function is defined as MI. As typical for vector similarity measures, it assigns high similarity scores if many of the two word's features overlap, even though some prominent features might be disjoint. This is a major reason for getting such semantically loose similarities, like companygovernment and country -economy.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementations of Distributional Similarity",
                "sec_num": "2.1"
            },
            {
                "text": "Investigating the output of Lin's (1998) similarity measure with respect to the above criterion in (Geffet and Dagan, 2004) , we discovered that the quality of similarity scores is often hurt by inaccurate feature weights, which yield rather noisy feature vectors. Hence, we tried to improve the feature weighting function to promote those features that are most indicative of the word meaning. A new weighting scheme was defined for bootstrapping feature weights, termed RFF (Relative Feature Focus). First, basic similarities are generated by Lin's measure. Then, feature weights are recalculated, boosting the weights of features that characterize many of the words that are most similar to the given one 2 . As a result the most prominent features of a word are concentrated within the top-100 entries of the vector. Finally, word similarities are recalculated by Lin's metric over the vectors with the new RFF weights. The lexical entailment prediction task of (Geffet and Dagan, 2004) measures how many of the top ranking similarity pairs produced by the 2 In concrete terms RFF is defined by:",
                "cite_spans": [
                    {
                        "start": 99,
                        "end": 123,
                        "text": "(Geffet and Dagan, 2004)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 966,
                        "end": 990,
                        "text": "(Geffet and Dagan, 2004)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementations of Distributional Similarity",
                "sec_num": "2.1"
            },
            {
                "text": "RFF(w, f) = \\sum_{v \\in WS(f) \\cap N(w)} sim(w, v),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementations of Distributional Similarity",
                "sec_num": "2.1"
            },
            {
                "text": "where sim(w,v) is an initial approximation of the similarity space by Lin's measure, WS(f) is a set of words co-occurring with feature f, and N(w) is the set of the most similar words of w by Lin's measure.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementations of Distributional Similarity",
                "sec_num": "2.1"
            },
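The RFF definition above can be illustrated with a small sketch (toy data; `sim`, `word_set` and `neighbours` are hypothetical stand-ins for the corpus-derived sim(w,v), WS(f) and N(w); this is not the authors' implementation):

```python
# Sketch of the RFF (Relative Feature Focus) weight: the weight of feature f
# for word w sums the initial Lin similarities of w's most similar words
# that also co-occur with f. Toy data structures.

def rff(w, f, sim, word_set, neighbours):
    """sim: (w, v) -> initial Lin similarity; word_set[f] = WS(f),
    the words co-occurring with f; neighbours[w] = N(w), the most
    similar words of w."""
    return sum(sim.get((w, v), 0.0) for v in word_set[f] & neighbours[w])

# Toy example: two of f's co-occurring words are also neighbours of w.
sim = {("country", "state"): 0.5, ("country", "nation"): 0.25}
word_set = {"borders_of": {"state", "nation", "village"}}
neighbours = {"country": {"state", "nation"}}
print(rff("country", "borders_of", sim, word_set, neighbours))  # 0.75
```

Features shared by many of w's nearest neighbours accumulate high RFF weight, which is what concentrates the prominent features at the top of the vector.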
            {
                "text": "RFF-based metric hold the entailment relation, in at least one direction. To this end a data set of 1,200 pairs was created, consisting of top-N (N=40) similar words of 30 randomly selected nouns, which were manually judged by the lexical entailment criterion. Quite high Kappa agreement values of 0.75 and 0.83 were reported, indicating that the entailment judgment task was reasonably well defined. A subset of the data set is demonstrated in Table 1 . The RFF weighting produced 10% precision improvement over Lin's original use of MI, suggesting the RFF capability to promote semantically meaningful features. However, over 47% of the word pairs in the top-40 similarities are not related by entailment, which calls for further improvement. In this paper we use the same data set 3 and the RFF metric as a basis for our experiments. Weeds et al. (2004) attempted to refine the distributional similarity goal to predict whether one term is a generalization/specification of the other. They present a distributional generality concept and expect it to correlate with semantic generality. Their conjecture is that the majority of the features of the more specific word are included in the features of the more general one. They define the feature recall of w with respect to v as the weighted proportion of features of v that also appear in the vector of w. Then, they suggest that a hypernym would have a higher feature recall for its hyponyms (specifications), than vice versa.",
                "cite_spans": [
                    {
                        "start": 837,
                        "end": 856,
                        "text": "Weeds et al. (2004)",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 445,
                        "end": 452,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Implementations of Distributional Similarity",
                "sec_num": "2.1"
            },
            {
                "text": "However, their results in predicting the hyponymy-hyperonymy direction (71% precision) are comparable to the na\u00efve baseline (70% precision) that simply assumes that general words are more frequent than specific ones. Possible sources of noise in their experiment include ignoring word polysemy and the data sparseness of word-feature co-occurrence in the corpus.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predicting Semantic Inclusion",
                "sec_num": "2.2"
            },
            {
                "text": "Extending the rationale of Weeds et al., we suggest that if the meaning of a word v entails another word w then it is expected that all the typical contexts (features) of v will also occur with w. That is, the characteristic contexts of v are expected to be included within all of w's contexts (but not necessarily amongst the most characteristic ones for w). Conversely, we might expect that if v's characteristic contexts are included within all of w's contexts then it is likely that the meaning of v does entail w. Taking both directions together, lexical entailment is expected to highly correlate with characteristic feature inclusion.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predicting Semantic Inclusion",
                "sec_num": "2.2"
            },
            {
                "text": "Two additional observations are needed before concretely formulating these hypotheses. As explained in Section 2, word contexts should be represented by syntactic features, which are more restrictive and thus better reflect the restrained semantic meaning of the word (it is difficult to tie entailment to looser context representations, such as co-occurrence in a text window). We also notice that distributional similarity principles are intended to hold at the sense level rather than the word level, since different senses have different characteristic contexts (even though computational common practice is to work at the word level, due to the lack of robust sense annotation).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predicting Semantic Inclusion",
                "sec_num": "2.2"
            },
            {
                "text": "We can now define the two distributional inclusion hypotheses, which correspond to the two directions of inference relating distributional feature inclusion and lexical entailment. Let v i and w j be word senses of the words v and w, respectively, and let v i => w j denote the (directional) entailment relation between these senses. Assume further that we have a measure that determines the set of characteristic features for the meaning of each word sense. Then we would hypothesize:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Predicting Semantic Inclusion",
                "sec_num": "2.2"
            },
            {
                "text": "If v i => w j then all the characteristic (syntactic-based) features of v i are expected to appear with w j .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hypothesis I:",
                "sec_num": null
            },
            {
                "text": "If all the characteristic (syntactic-based) features of v i appear with w j then we expect that v i => w j .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hypothesis II:",
                "sec_num": null
            },
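Operationally, both hypotheses reduce to a directional set-inclusion test over characteristic feature sets. A minimal sketch (hypothetical helper and toy feature tuples; in the paper the features are syntactic word-feature pairs and the test is performed by the ITA procedure, not by exact set comparison):

```python
# Directional feature-inclusion test: v is predicted to entail w when all
# of v's characteristic features also appear with w (Hypothesis II), and
# entailment is expected to imply this inclusion (Hypothesis I).

def features_included(char_features_v, observed_features_w):
    """True iff every characteristic feature of v occurs with w."""
    return set(char_features_v) <= set(observed_features_w)

v_feats = {("bark", "subj"), ("leash", "pcomp_of")}                     # narrower word
w_feats = {("bark", "subj"), ("leash", "pcomp_of"), ("guard", "subj")}  # broader word
print(features_included(v_feats, w_feats))  # True: inclusion holds from v to w
print(features_included(w_feats, v_feats))  # False: not in the reverse direction
```

The asymmetry of the test is what makes the predicted relation directional, unlike standard symmetric vector similarity.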
            {
                "text": "To check the validity of the hypotheses we need to test feature inclusion. In this section we present an automated word-level feature inclusion testing method, termed ITA (Inclusion Testing Algorithm).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Level Testing of Feature Inclusion",
                "sec_num": "4"
            },
            {
                "text": "To overcome the data sparseness problem we incorporated web-based feature sampling. Given a test pair of words, three main steps are performed, as detailed in the following subsections:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Level Testing of Feature Inclusion",
                "sec_num": "4"
            },
            {
                "text": "Step 1: Computing the set of characteristic features for each word.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Level Testing of Feature Inclusion",
                "sec_num": "4"
            },
            {
                "text": "Step 2: Testing feature inclusion for each pair, in both directions, within the given corpus data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Level Testing of Feature Inclusion",
                "sec_num": "4"
            },
            {
                "text": "Step 3: Complementary testing of feature inclusion for each pair in the web.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Level Testing of Feature Inclusion",
                "sec_num": "4"
            },
            {
                "text": "To implement the first step of the algorithm, the RFF weighting function is exploited and its top-100 weighted features are taken as most characteristic for each word. As mentioned in Section 2, (Geffet and Dagan, 2004) shows that RFF yields high concentration of good features at the top of the vector.",
                "cite_spans": [
                    {
                        "start": 195,
                        "end": 219,
                        "text": "(Geffet and Dagan, 2004)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Step 1: Corpus-based generation of characteristic features",
                "sec_num": "4.1"
            },
            {
                "text": "We first check feature inclusion in the corpus that was used to generate the characteristic feature sets. For each word pair (w, v) we first determine which features of w do co-occur with v in the corpus. The same is done to identify features of v that co-occur with w in the corpus.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Step 2: Corpus-based feature inclusion test",
                "sec_num": "4.2"
            },
            {
                "text": "This step is most important to avoid inclusion misses due to the data sparseness of the corpus. A few recent works (Ravichandran and Hovy, 2002; Keller et al., 2002; Chklovski and Pantel, 2004) used the web to collect statistics on word cooccurrences. In a similar spirit, our inclusion test is completed by searching the web for the missing (non-included) features on both sides. We call this web-based technique mutual web-sampling. The web results are further parsed to verify matching of the feature's syntactic relationship.",
                "cite_spans": [
                    {
                        "start": 115,
                        "end": 144,
                        "text": "(Ravichandran and Hovy, 2002;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 145,
                        "end": 165,
                        "text": "Keller et al., 2002;",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 166,
                        "end": 193,
                        "text": "Chklovski and Pantel, 2004)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Step 3: Complementary Webbased Inclusion Test",
                "sec_num": "4.3"
            },
            {
                "text": "We denote the subset of w's features that are missing for v as M (w, v) (and equivalently M(v, w) ). Since web sampling is time consuming we randomly sample a subset of k features (k=20 in our experiments), denoted as M (v,w,k) .",
                "cite_spans": [
                    {
                        "start": 65,
                        "end": 71,
                        "text": "(w, v)",
                        "ref_id": null
                    },
                    {
                        "start": 72,
                        "end": 97,
                        "text": "(and equivalently M(v, w)",
                        "ref_id": null
                    },
                    {
                        "start": 220,
                        "end": 227,
                        "text": "(v,w,k)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Step 3: Complementary Webbased Inclusion Test",
                "sec_num": "4.3"
            },
            {
                "text": "For each pair (w, v) and their k-subsets M (w, v, k) and M(v, w, k) execute:",
                "cite_spans": [
                    {
                        "start": 43,
                        "end": 52,
                        "text": "(w, v, k)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mutual Web-sampling Procedure:",
                "sec_num": null
            },
            {
                "text": "1. Syntactic Filtering of \"Bag-of-Words\" Search:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mutual Web-sampling Procedure:",
                "sec_num": null
            },
            {
                "text": "Search the web for sentences including v and a feature f from M (w, v, k) as \"bag of words\", i. e. sentences where v and f appear at any distance and in either order. Then filter out the sentences that do not match the defined syntactic relation between f and v (based on parsing). Features that co-occur with v in the correct syntactic relation are removed from M (w, v, k) . Do the same search and filtering for w and features from M(v, w, k).",
                "cite_spans": [
                    {
                        "start": 64,
                        "end": 73,
                        "text": "(w, v, k)",
                        "ref_id": null
                    },
                    {
                        "start": 365,
                        "end": 374,
                        "text": "(w, v, k)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mutual Web-sampling Procedure:",
                "sec_num": null
            },
            {
                "text": "On the missing features on both sides (which are left in M (w, v, k) and M(v, w, k) after stage 1), apply \"exact string\" search of the web. For this, convert the tuple (v, f) to a string by adding prepositions and articles where needed. For example, for (element, <project, pcomp_of, 1>) generate the corresponding string \"element of the project\" and search the web for exact matches of the string. Then validate the syntactic relationship of f and v in the extracted sentences. Remove the found features from M (w, v, k) and M(v, w, k) , respectively.",
                "cite_spans": [
                    {
                        "start": 59,
                        "end": 68,
                        "text": "(w, v, k)",
                        "ref_id": null
                    },
                    {
                        "start": 512,
                        "end": 536,
                        "text": "(w, v, k) and M(v, w, k)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactic Filtering of \"Exact String\" Matching:",
                "sec_num": "2."
            },
            {
                "text": "Since some of the features may be too infrequent or corpus-biased, check whether the remaining missing features do co-occur on the web with their original target words (with which they did occur in the corpus data). Otherwise, they should not be considered as valid misses and are also removed from M (w, v, k) and M(v, w, k) . Output: Inclusion in either direction holds if the corresponding set of missing features is now empty.",
                "cite_spans": [
                    {
                        "start": 301,
                        "end": 325,
                        "text": "(w, v, k) and M(v, w, k)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Missing Features Validation:",
                "sec_num": "3."
            },
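The mutual web-sampling procedure can be sketched as a filtering pipeline over the sampled missing-feature sets, with the three stages (bag-of-words search, exact-string search, miss validation) injected as predicates, since they wrap external search and parsing services. All names and the toy stages below are hypothetical illustrations, not the authors' code:

```python
import random

def sample_missing(missing, k=20):
    """M(w, v, k): a random sample of at most k missing features."""
    return set(random.sample(sorted(missing), min(k, len(missing))))

def inclusion_holds(word, missing_k, stages):
    """Run each stage over the remaining missing features, dropping every
    feature a stage accounts for; inclusion holds if none remain."""
    remaining = set(missing_k)
    for stage in stages:
        remaining = {f for f in remaining if not stage(word, f)}
    return not remaining

# Toy stages: bag-of-words search finds nothing, exact-string search finds
# one feature, and validation rules out the last one as an invalid miss.
stages = [
    lambda w, f: False,
    lambda w, f: f == "of_project",
    lambda w, f: f == "rare_feat",
]
print(inclusion_holds("element", {"of_project", "rare_feat"}, stages))  # True
```

Because each stage only ever shrinks the missing set, the output condition ("inclusion holds if the set of missing features is empty") can be checked once at the end.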
            {
                "text": "We also experimented with features consisting of words without syntactic relations, using exact-string or bag-of-words matching. However, almost all the words (also non-entailing) were found with all the features of each other, even for semantically implausible combinations (e.g. a word and a feature appear next to each other but belong to different clauses of the sentence). Therefore we conclude that syntactic relation validation is very important, especially on the web, in order to avoid coincidental co-occurrences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Missing Features Validation:",
                "sec_num": "3."
            },
            {
                "text": "To test the validity of the distributional inclusion hypotheses we performed an empirical analysis on a selected test sample using our automated testing procedure.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Empirical Results",
                "sec_num": "5"
            },
            {
                "text": "We experimented with a randomly picked test sample of about 200 noun pairs of 1,200 pairs produced by RFF (for details see Geffet and Dagan, 2004 ) under Lin's similarity scheme (Lin, 1998) . The words were judged by the lexical entailment criterion (as described in Section 2). The original percentage of correct (52%) and incorrect (48%) entailments was preserved.",
                "cite_spans": [
                    {
                        "start": 123,
                        "end": 145,
                        "text": "Geffet and Dagan, 2004",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 178,
                        "end": 189,
                        "text": "(Lin, 1998)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and setting",
                "sec_num": "5.1"
            },
            {
                "text": "To estimate the degree of validity of the distributional inclusion hypotheses we decomposed each word pair of the sample (w, v) to two directional pairs ordered by potential entailment direction: (w, v) and (v, w) . The 400 resulting ordered pairs are used as a test set in Sections 5.2 and 5.3.",
                "cite_spans": [
                    {
                        "start": 196,
                        "end": 202,
                        "text": "(w, v)",
                        "ref_id": null
                    },
                    {
                        "start": 207,
                        "end": 213,
                        "text": "(v, w)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and setting",
                "sec_num": "5.1"
            },
            {
                "text": "Features were computed from co-occurrences in a subset of the Reuters corpus of about 18 million words. For the web feature sampling, the maximal number of web samples for each query (word-feature pair) was set to 3,000 sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and setting",
                "sec_num": "5.1"
            },
            {
                "text": "The test set of 400 ordered pairs was examined in terms of entailment (according to the manual judgment) and feature inclusion (according to the ITA algorithm), as shown in Table 2 . According to Hypothesis I we expect that a pair (w, v) that satisfies entailment will also preserve feature inclusion. On the other hand, by Hypothesis II if all the features of w are included by v then we expect that w entails v.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 173,
                        "end": 180,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Automatic Testing the Validity of the Hypotheses at the Word Level",
                "sec_num": "5.2"
            },
            {
                "text": "We observed that Hypothesis I is better attested by our data than the second hypothesis. Thus 86% (97 out of 113) of the entailing pairs fulfilled the inclusion condition. Hypothesis II holds for approximately 70% (97 of 139) of the pairs for which feature inclusion holds. In the next section we analyze the cases of violation of both hypotheses and find that the first hypothesis held to an almost perfect extent with respect to word senses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Automatic Testing the Validity of the Hypotheses at the Word Level",
                "sec_num": "5.2"
            },
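The reported rates follow directly from the word-level counts of Table 2; as a quick arithmetic check:

```python
# Recomputing the hypothesis validity rates from Table 2's word-level counts.
entail_incl = 97       # entailing pairs for which feature inclusion holds
entail_not_incl = 16   # entailing pairs for which inclusion fails
non_entail_incl = 42   # non-entailing pairs for which inclusion holds

hyp1 = entail_incl / (entail_incl + entail_not_incl)  # P(inclusion | entailment)
hyp2 = entail_incl / (entail_incl + non_entail_incl)  # P(entailment | inclusion)
print(round(hyp1, 2), round(hyp2, 2))  # 0.86 0.7
```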
            {
                "text": "It is also interesting to note that, thanks to the web-sampling procedure, over 90% of the features not included in the corpus were found on the web, while most of the features still missing on the web are indeed semantically implausible.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Automatic Testing the Validity of the Hypotheses at the Word Level",
                "sec_num": "5.2"
            },
            {
                "text": "Since our data was not sense tagged, the automatic validation procedure could only test the hypotheses at the word level. In this section our goal is to analyze the findings of our empirical test at the word sense level, as our hypotheses were defined for senses. Basically, two cases of hypothesis violation were detected: Case 1: Entailments with non-included features (violation of Hypothesis I);",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual Sense Level Testing of Hypotheses Validity",
                "sec_num": "5.3"
            },
            {
                "text": "Case 2: Feature Inclusion for non-entailments (violation of Hypothesis II).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual Sense Level Testing of Hypotheses Validity",
                "sec_num": "5.3"
            },
            {
                "text": "At the word level we observed 14% invalid pairs of the first case and 30% of the second case. However, our manual analysis shows that over 90% of the first case pairs were due to a different sense of one of the entailing words, e.g. capital -town (capital as money) and spread -gap (spread as distribution) ( Table 3 ). Note that ambiguity of the entailed word does not cause errors (like town -area, area as domain) (Table 3) . Thus the first hypothesis holds at the sense level for over 98% of the cases (Table 4) .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 309,
                        "end": 316,
                        "text": "Table 3",
                        "ref_id": null
                    },
                    {
                        "start": 417,
                        "end": 426,
                        "text": "(Table 3)",
                        "ref_id": null
                    },
                    {
                        "start": 506,
                        "end": 515,
                        "text": "(Table 4)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Manual Sense Level Testing of Hypotheses Validity",
                "sec_num": "5.3"
            },
            {
                "text": "Two remaining invalid instances of the first case were due to the web sampling method limitations and syntactic parsing filtering mistakes, especially for some less characteristic and infrequent features captured by RFF. Thus, in virtually all the examples tested in our experiment Hypothesis I was valid.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual Sense Level Testing of Hypotheses Validity",
                "sec_num": "5.3"
            },
            {
                "text": "We also explored the second case of invalid pairs: non-entailing words that pass the feature inclusion test. After sense-based analysis their percentage was reduced slightly to 27.4%. Three possible reasons were discovered. First, there are words with features typical of the general meaning of the domain, which tend to be included by many other words of this domain, like valley - town. The features of valley (\"eastern valley\", \"central valley\", \"attack in valley\", \"industry of the valley\") are not discriminative enough to distinguish it from town, as they are all characteristic of any geographic location. Table 2 : Distribution of 400 entailing/non-entailing ordered pairs that hold/do not hold feature inclusion at the word level (entailing: 97 included, 16 not included; non-entailing: 42 included, 245 not included). Table 4 : Distribution of the entailing/non-entailing ordered pairs that hold/do not hold feature inclusion at the sense level (entailing: 111 included, 2 not included; non-entailing: 42 included, 245 not included).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 614,
                        "end": 621,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 829,
                        "end": 836,
                        "text": "Table 4",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Manual Sense Level Testing of Hypotheses Validity",
                "sec_num": "5.3"
            },
            {
                "text": "The Committee was discussing the Programme of the \"Big Eight,\" aimed against spread of weapon of mass destruction. town -area (\"town\" entails \"area\") <cooperation, pcomp_for> This is a promising area for cooperation and exchange of experiences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "spread -gap (mutually entail each other) <weapon, pcomp_of>",
                "sec_num": null
            },
            {
                "text": "capital -town (\"capital\" entails \"town\") <flow, nn> Offshore financial centers affect cross-border capital flow in China. Table 3 : Examples of ambiguity of entailment-related words, where the disjoint features belong to a different sense of the word.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 122,
                        "end": 129,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "spread -gap (mutually entail each other) <weapon, pcomp_of>",
                "sec_num": null
            },
            {
                "text": "The second group consists of words that can be entailing, but only in a context-dependent (anaphoric) manner rather than ontologically; for example, government and neighbour, where neighbour is used in the meaning of \"neighbouring (country) government\". Finally, sometimes one or both of the words are abstract, general and highly ambiguous enough to appear with a wide range of features on the web, like element (violence - element, with all the tested features of violence included by element).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "spread -gap (mutually entail each other) <weapon, pcomp_of>",
                "sec_num": null
            },
            {
                "text": "To prevent occurrences of the second case, more characteristic and discriminative features should be provided. For this purpose, features extracted from the web (which, unlike corpus-derived features, are not domain-biased) and multi-word features may be helpful. Overall, though, there might be inherent cases that invalidate Hypothesis II.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "spread -gap (mutually entail each other) <weapon, pcomp_of>",
                "sec_num": null
            },
            {
                "text": "In this section we show that ITA can be practically used to improve the (non-directional) lexical entailment prediction task described in Section 2. Given the output of the distributional similarity method, we employ ITA at the word level to filter out non-entailing pairs. Word pairs that satisfy feature inclusion of all k features (in at least one direction) are classified as entailing.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Improving Lexical Entailment Prediction by ITA (Inclusion Testing Algorithm)",
                "sec_num": "6"
            },
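The filtering rule is simply to keep a pair when inclusion of all k sampled features holds in at least one direction. A minimal sketch, where the inclusion predicate is a hypothetical stand-in for the full ITA procedure:

```python
# Using a feature-inclusion predicate to filter distributional-similarity
# output: a pair survives if inclusion holds in at least one direction.

def filter_entailing(pairs, includes):
    """includes(a, b): True iff all k sampled features of a occur with b."""
    return [(w, v) for (w, v) in pairs if includes(w, v) or includes(v, w)]

# Toy run with a stand-in predicate.
pairs = [("dog", "animal"), ("company", "government")]
includes = lambda a, b: (a, b) == ("dog", "animal")
print(filter_entailing(pairs, includes))  # [('dog', 'animal')]
```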
            {
                "text": "The same test sample of 200 word pairs mentioned in Section 5.1 was used in this experiment. The results were compared to RFF under Lin's similarity scheme (RFF-top-40 in Table 5 ).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 171,
                        "end": 178,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Improving Lexical Entailment Prediction by ITA (Inclusion Testing Algorithm)",
                "sec_num": "6"
            },
            {
                "text": "Precision was significantly improved, filtering out 60% of the incorrect pairs, while relative recall (taking RFF recall as 100%) was reduced by only 13%, leading to a better relative F1 (Table 5).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 187,
                        "end": 196,
                        "text": "(Table 5)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Improving Lexical Entailment Prediction by ITA (Inclusion Testing Algorithm)",
                "sec_num": "6"
            },
            {
                "text": "Since our method removes about 35% of the original top-40 RFF output, it was interesting to compare our results to simply cutting off the 35% lowest-ranked RFF words (top-26). The comparison to this baseline (RFF-top-26 in Table 5) showed that ITA filters the output much better than merely cutting off the lowest-ranking similarities.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 223,
                        "end": 230,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Improving Lexical Entailment Prediction by ITA (Inclusion Testing Algorithm)",
                "sec_num": "6"
            },
            {
                "text": "We also tried two variations on feature sampling for the web-based procedure. In a preliminary experiment we used the top-k RFF features instead of a random selection, but observed that top-ranked RFF features are less discriminative than random ones, due to the nature of the RFF weighting strategy, which promotes features shared by many similar words. We then doubled the sample to 40 random features. As expected, recall slightly decreased, while precision increased by over 5%. In summary, the behavior of ITA with samples of k=20 and k=40 features is closely comparable (ITA-20 and ITA-40 in Table 5, respectively) 4.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 598,
                        "end": 605,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Improving Lexical Entailment Prediction by ITA (Inclusion Testing Algorithm)",
                "sec_num": "6"
            },
            {
                "text": "The main contributions of this paper were: 1. We defined two Distributional Inclusion Hypotheses that associate feature inclusion with lexical entailment at the word sense level. The Hypotheses were proposed as a refinement for Harris' Distributional hypothesis and as an extension to the classic distributional similarity scheme. 2. To estimate the empirical validity of the defined hypotheses we developed an automatic inclusion testing algorithm (ITA). The core of the algorithm is a web-based feature inclusion testing procedure, which helped significantly to compensate for data sparseness. 3. Then a thorough analysis of the data behavior with respect to the proposed hypotheses was conducted. The first hypothesis was almost fully attested by the data, particularly at the sense level, while the second hypothesis did not fully hold. 4. Motivated by the empirical analysis we proposed to employ ITA for the practical task of improving lexical entailment acquisition. The algorithm was applied as a filtering technique on the distributional similarity (RFF) output. We obtained a 17% increase in precision and improved relative F1 by 15% over the baseline. 4 The ITA-40 sampling fits the analyses in Sections 5.2 and 5.3 as well.",
                "cite_spans": [
                    {
                        "start": 1163,
                        "end": 1164,
                        "text": "4",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions and Future Work",
                "sec_num": "7"
            },
            {
                "text": "Table 5: Comparative results of using the filter, with 20 and 40 feature sampling, compared to the RFF top-40 and RFF top-26 similarities. ITA-20 and ITA-40 denote the web-sampling method with 20 and 40 random features, respectively. Method | Precision | Recall | F1; ITA-20 | 0.700 | 0.875 | 0.777; ITA-40 | 0.740 | 0.846 | 0.789; RFF-top-40 | 0.520 | 1.000 | 0.684; RFF-top-26 | 0.561 | 0.701 | 0.624.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 0,
                        "end": 7,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Method",
                "sec_num": null
            },
            {
                "text": "Although the results were encouraging, our manual data analysis shows that we still need to handle word ambiguity. In particular, this is important for learning the direction of entailment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Method",
                "sec_num": null
            },
            {
                "text": "To achieve better precision we need to increase feature discriminativeness. To this end, syntactic features may be extended to contain more than one word, and methods for automatic extraction of features from the web (rather than from a corpus) may be developed. Finally, further investigation of combining the distributional and co-occurrence pattern-based approaches over the web is desirable.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Method",
                "sec_num": null
            },
            {
                "text": "We found that less than 20% of the lexical entailment relations extracted by our method appeared as direct or indirect WordNet relations (synonyms, hyponyms or meronyms).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The Distributional Inclusion Hypotheses: In this paper we suggest refined versions of the distributional similarity hypothesis which relate distributional behavior with lexical entailment. 3 Since the original data set did not include the direction of entailment, we have enriched it by adding judgments of the entailment direction.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "We are grateful to Shachar Mirkin for his help in implementing the web-based sampling procedure heavily employed in our experiments. We thank Idan Szpektor for providing the infrastructure system for web-based data extraction.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgement",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "VERBOCEAN: Mining the Web for Fine-Grained Semantic Verb Relations",
                "authors": [
                    {
                        "first": "Timothy",
                        "middle": [],
                        "last": "Chklovski",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Pantel",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of EMNLP-04. Barcelona",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chklovski, Timothy and Patrick Pantel. 2004. VERBOCEAN: Mining the Web for Fine-Grained Semantic Verb Relations. In Proc. of EMNLP-04. Barcelona, Spain.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Word association norms, mutual information, and Lexicography",
                "authors": [
                    {
                        "first": "Kenneth",
                        "middle": [
                            "W"
                        ],
                        "last": "Church",
                        "suffix": ""
                    },
                    {
                        "first": "Hanks",
                        "middle": [],
                        "last": "Patrick",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Computational Linguistics",
                "volume": "16",
                "issue": "1",
                "pages": "22--29",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Church, Kenneth W. and Patrick Hanks. 1990. Word association norms, mutual information, and Lexicography. Computational Linguistics, 16(1), pp. 22-29.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Contextual Word Similarity",
                "authors": [
                    {
                        "first": "Ido",
                        "middle": [],
                        "last": "Dagan",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Handbook of Natural Language Processing",
                "volume": "19",
                "issue": "",
                "pages": "459--476",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dagan, Ido. 2000. Contextual Word Similarity, in Rob Dale, Hermann Moisl and Harold Somers (Eds.), Handbook of Natural Language Processing, Marcel Dekker Inc, 2000, Chapter 19, pp. 459-476.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "The PASCAL Recognizing Textual Entailment Challenge",
                "authors": [
                    {
                        "first": "Ido",
                        "middle": [],
                        "last": "Dagan",
                        "suffix": ""
                    },
                    {
                        "first": "Oren",
                        "middle": [],
                        "last": "Glickman",
                        "suffix": ""
                    },
                    {
                        "first": "Bernardo",
                        "middle": [],
                        "last": "Magnini",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proc. of the PASCAL Challenges Workshop for Recognizing Textual Entailment",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dagan, Ido, Oren Glickman and Bernardo Magnini. 2005. The PASCAL Recognizing Textual Entailment Challenge. In Proc. of the PASCAL Challenges Workshop for Recognizing Textual Entailment. Southampton, U.K.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Feature Vector Quality and Distributional Similarity",
                "authors": [
                    {
                        "first": "Maayan",
                        "middle": [],
                        "last": "Geffet",
                        "suffix": ""
                    },
                    {
                        "first": "Ido",
                        "middle": [],
                        "last": "Dagan",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of Coling-04",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Geffet, Maayan and Ido Dagan. 2004. Feature Vector Quality and Distributional Similarity. In Proc. of Coling-04. Geneva, Switzerland.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Exploration in Automatic Thesaurus Discovery",
                "authors": [
                    {
                        "first": "Gregory",
                        "middle": [],
                        "last": "Grefenstette",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Grefenstette, Gregory. 1994. Exploration in Automatic Thesaurus Discovery. Kluwer Academic Publishers.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Mathematical structures of language",
                "authors": [
                    {
                        "first": "Zelig",
                        "middle": [
                            "S"
                        ],
                        "last": "Harris",
                        "suffix": ""
                    }
                ],
                "year": 1968,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Harris, Zelig S. 1968. Mathematical structures of language. Wiley.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Automatic acquisition of hyponyms from large text corpora",
                "authors": [
                    {
                        "first": "Marti",
                        "middle": [],
                        "last": "Hearst",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "Proc. of COLING-92",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hearst, Marti. 1992. Automatic acquisition of hyponyms from large text corpora. In Proc. of COLING-92. Nantes, France.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Using the Web to Overcome Data Sparseness",
                "authors": [
                    {
                        "first": "Frank",
                        "middle": [],
                        "last": "Keller",
                        "suffix": ""
                    },
                    {
                        "first": "Maria",
                        "middle": [],
                        "last": "Lapata",
                        "suffix": ""
                    },
                    {
                        "first": "Olga",
                        "middle": [],
                        "last": "Ourioupina",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proc. of EMNLP-02",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Keller, Frank, Maria Lapata, and Olga Ourioupina. 2002. Using the Web to Overcome Data Sparseness. In Jan Hajic and Yuji Matsumoto, eds., In Proc. of EMNLP-02. Philadelphia, PA.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Similarity-Based Approaches to Natural Language Processing",
                "authors": [
                    {
                        "first": "Lillian",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lee, Lillian. 1997. Similarity-Based Approaches to Natural Language Processing. Ph.D. thesis, Harvard University, Cambridge, MA.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Principle-Based Parsing without Overgeneration",
                "authors": [
                    {
                        "first": "Dekang",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Proc. of ACL-93",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lin, Dekang. 1993. Principle-Based Parsing without Overgeneration. In Proc. of ACL-93. Columbus, Ohio.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Automatic Retrieval and Clustering of Similar Words",
                "authors": [
                    {
                        "first": "Dekang",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proc. of COLING-ACL98",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lin, Dekang. 1998. Automatic Retrieval and Clustering of Similar Words. In Proc. of COLING-ACL98, Montreal, Canada.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Models for the semantic classification of noun phrases",
                "authors": [
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Moldovan",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Badulescu",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Tatu",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Antohe",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Girju",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of HLT/NAACL-2004 Workshop on Computational Lexical Semantics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Moldovan, Dan, Badulescu, A., Tatu, M., Antohe, D., and Girju, R. 2004. Models for the semantic classification of noun phrases. In Proc. of HLT/NAACL-2004 Workshop on Computational Lexical Semantics. Boston.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Constructing semantic space models from parsed corpora",
                "authors": [
                    {
                        "first": "Sebastian",
                        "middle": [],
                        "last": "Pado",
                        "suffix": ""
                    },
                    {
                        "first": "Mirella",
                        "middle": [],
                        "last": "Lapata",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proc. of ACL-03",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pado, Sebastian and Mirella Lapata. 2003. Constructing semantic space models from parsed corpora. In Proc. of ACL-03, Sapporo, Japan.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Discovering Word Senses from Text",
                "authors": [
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Pantel",
                        "suffix": ""
                    },
                    {
                        "first": "Dekang",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proc. of ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-02)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pantel, Patrick and Dekang Lin. 2002. Discovering Word Senses from Text. In Proc. of ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-02). Edmonton, Canada.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Distributional clustering of English words",
                "authors": [
                    {
                        "first": "Fernando",
                        "middle": [],
                        "last": "Pereira",
                        "suffix": ""
                    },
                    {
                        "first": "Tishby",
                        "middle": [],
                        "last": "Naftali",
                        "suffix": ""
                    },
                    {
                        "first": "Lee",
                        "middle": [],
                        "last": "Lillian",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Proc. of ACL-93",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pereira, Fernando, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proc. of ACL-93. Columbus, Ohio.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Learning Surface Text Patterns for a Question Answering System",
                "authors": [
                    {
                        "first": "Deepak",
                        "middle": [],
                        "last": "Ravichandran",
                        "suffix": ""
                    },
                    {
                        "first": "Eduard",
                        "middle": [],
                        "last": "Hovy",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proc. of ACL-02",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ravichandran, Deepak and Eduard Hovy. 2002. Learning Surface Text Patterns for a Question Answering System. In Proc. of ACL-02. Philadelphia, PA.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Experiments on linguisticallybased term associations",
                "authors": [
                    {
                        "first": "Gerda",
                        "middle": [],
                        "last": "Ruge",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "Information Processing & Management",
                "volume": "28",
                "issue": "3",
                "pages": "317--332",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ruge, Gerda. 1992. Experiments on linguistically-based term associations. Information Processing & Management, 28(3), pp. 317-332.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "A General Framework for Distributional Similarity",
                "authors": [
                    {
                        "first": "Julie",
                        "middle": [],
                        "last": "Weeds",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Weir",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proc. of EMNLP-03",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Weeds, Julie and David Weir. 2003. A General Framework for Distributional Similarity. In Proc. of EMNLP-03. Sapporo, Japan.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Characterizing Measures of Lexical Distributional Similarity",
                "authors": [
                    {
                        "first": "Julie",
                        "middle": [],
                        "last": "Weeds",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Weir",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Mccarthy",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of Coling-04",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Weeds, Julie, D. Weir, D. McCarthy. 2004. Characterizing Measures of Lexical Distributional Similarity. In Proc. of Coling-04. Geneva, Switzerland.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF0": {
                "num": null,
                "type_str": "table",
                "text": ").",
                "html": null,
                "content": "<table><tr><td colspan=\"2\">&lt;=&gt; element, component</td><td colspan=\"2\">&lt;=&gt; gap, spread</td><td>*</td><td>town, airport</td><td>&lt;= loan, mortgage</td></tr><tr><td colspan=\"2\">=&gt; government, body</td><td>*</td><td colspan=\"3\">warplane, bomb &lt;=&gt; program, plan</td><td>*</td><td>tank, warplane</td></tr><tr><td>*</td><td>match, winner</td><td colspan=\"2\">=&gt; bill, program</td><td colspan=\"2\">&lt;= conflict, war</td><td>=&gt; town, location</td></tr></table>"
            },
            "TABREF1": {
                "num": null,
                "type_str": "table",
                "text": "Sample of the data set of top-40 distributionally similar word pairs produced by the RFF-based method of",
                "html": null,
                "content": "<table/>"
            }
        }
    }
}