{
    "paper_id": "P06-1014",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:25:09.363765Z"
    },
    "title": "Meaningful Clustering of Senses Helps Boost Word Sense Disambiguation Performance",
    "authors": [
        {
            "first": "Roberto",
            "middle": [],
            "last": "Navigli",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Universit\u00e0 di Roma \"La Sapienza\" Roma",
                "location": {
                    "country": "Italy"
                }
            },
            "email": "navigli@di.uniroma1.it"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Fine-grained sense distinctions are one of the major obstacles to successful Word Sense Disambiguation. In this paper, we present a method for reducing the granularity of the WordNet sense inventory based on the mapping to a manually crafted dictionary encoding sense hierarchies, namely the Oxford Dictionary of English. We assess the quality of the mapping and the induced clustering, and evaluate the performance of coarse WSD systems in the Senseval-3 English all-words task.",
    "pdf_parse": {
        "paper_id": "P06-1014",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Fine-grained sense distinctions are one of the major obstacles to successful Word Sense Disambiguation. In this paper, we present a method for reducing the granularity of the WordNet sense inventory based on the mapping to a manually crafted dictionary encoding sense hierarchies, namely the Oxford Dictionary of English. We assess the quality of the mapping and the induced clustering, and evaluate the performance of coarse WSD systems in the Senseval-3 English all-words task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Word Sense Disambiguation (WSD) is undoubtedly one of the hardest tasks in the field of Natural Language Processing. Even though some recent studies report benefits in the use of WSD in specific applications (e.g. Vickrey et al. (2005) and Stokoe (2005) ), the present performance of the best ranking WSD systems does not provide a sufficient degree of accuracy to enable real-world, language-aware applications.",
                "cite_spans": [
                    {
                        "start": 214,
                        "end": 235,
                        "text": "Vickrey et al. (2005)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 240,
                        "end": 253,
                        "text": "Stokoe (2005)",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Most of the disambiguation approaches adopt the WordNet dictionary (Fellbaum, 1998) as a sense inventory, thanks to its free availability, wide coverage, and existence of a number of standard test sets based on it. Unfortunately, WordNet is a fine-grained resource, encoding sense distinctions that are often difficult to recognize even for human annotators (Edmonds and Kilgariff, 1998) .",
                "cite_spans": [
                    {
                        "start": 67,
                        "end": 83,
                        "text": "(Fellbaum, 1998)",
                        "ref_id": null
                    },
                    {
                        "start": 358,
                        "end": 387,
                        "text": "(Edmonds and Kilgariff, 1998)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Recent estimations of the inter-annotator agreement when using the WordNet inventory report figures of 72.5% agreement in the preparation of the English all-words test set at Senseval-3 (Snyder and Palmer, 2004) and 67.3% on the Open Mind Word Expert annotation exercise (Chklovski and Mihalcea, 2002) . These numbers lead us to believe that a credible upper bound for unrestricted fine-grained WSD is around 70%, a figure that state-of-the-art automatic systems find it difficult to outperform. Furthermore, even if a system were able to exceed such an upper bound, it would be unclear how to interpret such a result.",
                "cite_spans": [
                    {
                        "start": 175,
                        "end": 211,
                        "text": "Senseval-3 (Snyder and Palmer, 2004)",
                        "ref_id": null
                    },
                    {
                        "start": 271,
                        "end": 301,
                        "text": "(Chklovski and Mihalcea, 2002)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "It seems therefore that the major obstacle to effective WSD is the fine granularity of the Word-Net sense inventory, rather than the performance of the best disambiguation systems. Interestingly, Ng et al. (1999) show that, when a coarse-grained sense inventory is adopted, the increase in interannotator agreement is much higher than the reduction of the polysemy degree.",
                "cite_spans": [
                    {
                        "start": 181,
                        "end": 212,
                        "text": "Interestingly, Ng et al. (1999)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Following these observations, the main question that we tackle in this paper is: can we produce and evaluate coarse-grained sense distinctions and show that they help boost disambiguation on standard test sets? We believe that this is a crucial research topic in the field of WSD, that could potentially benefit several application areas.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The contribution of this paper is two-fold. First, we provide a wide-coverage method for clustering WordNet senses via a mapping to a coarse-grained sense inventory, namely the Oxford Dictionary of English (Soanes and Stevenson, 2003) (Section 2) . We show that this method is well-founded and accurate with respect to manually-made clusterings (Section 3). Second, we evaluate the performance of WSD systems when using coarse-grained sense inventories (Section 4). We conclude the paper with an account of related work (Section 5), and some final remarks (Section 6).",
                "cite_spans": [
                    {
                        "start": 206,
                        "end": 234,
                        "text": "(Soanes and Stevenson, 2003)",
                        "ref_id": null
                    },
                    {
                        "start": 235,
                        "end": 246,
                        "text": "(Section 2)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this section, we present an approach to the automatic construction of a coarse-grained sense inventory based on the mapping of WordNet senses to coarse senses in the Oxford Dictionary of English. In section 2.1, we introduce the two dictionaries, in Section 2.2 we illustrate the creation of sense descriptions from both resources, while in Section 2.3 we describe a lexical and a semantic method for mapping sense descriptions of Word-Net senses to ODE coarse entries.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Producing a Coarse-Grained Sense Inventory",
                "sec_num": "2"
            },
            {
                "text": "WordNet (Fellbaum, 1998 ) is a computational lexicon of English which encodes concepts as synonym sets (synsets), according to psycholinguistic principles. For each word sense, WordNet provides a gloss (i.e. a textual definition) and a set of relations such as hypernymy (e.g. apple kind-of edible fruit), meronymy (e.g. computer has-part CPU), etc. The Oxford Dictionary of English (ODE) (Soanes and Stevenson, 2003) 1 provides a hierarchical structure of senses, distinguishing between homonymy (i.e. completely distinct senses, like race as a competition and race as a taxonomic group) and polysemy (e.g. race as a channel and as a current). Each polysemous sense is further divided into a core sense and a set of subsenses. For each sense (both core and subsenses), the ODE provides a textual definition, and possibly hypernyms and domain labels. Excluding monosemous senses, the ODE has an average number of 2.56 senses per word compared to the average polysemy of 3.21 in WordNet on the same words (with peaks for verbs of 2.73 and 3.75 senses, respectively).",
                "cite_spans": [
                    {
                        "start": 8,
                        "end": 23,
                        "text": "(Fellbaum, 1998",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Dictionaries",
                "sec_num": "2.1"
            },
            {
                "text": "In Table 1 we show an excerpt of the sense inventories of the noun race as provided by both dictionaries 2 . The ODE identifies 3 homonyms and 3 polysemous senses for the first homonym, while WordNet encodes a flat list of 6 senses, some of which strongly related (e.g. race#1 and race#3). Also, the ODE provides a sense (ginger root) which is not taken into account in WordNet.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 10,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "The Dictionaries",
                "sec_num": "2.1"
            },
            {
                "text": "The structure of the ODE senses is clearly hierarchical: if we were able to map with a high accuracy WordNet senses to ODE entries, then a sense clustering could be trivially induced from the mapping. As a result, the granularity of the WordNet inventory would be drastically reduced. Furthermore, disregarding errors, the clustering would be well-founded, as the ODE sense groupings were manually crafted by expert lexicographers. In the next section we illustrate a general way of constructing sense descriptions that we use for determining a complete, automatic mapping between the two dictionaries.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Dictionaries",
                "sec_num": "2.1"
            },
            {
                "text": "For each word w, and for each sense S of w in a given dictionary D \u2208 {WORDNET, ODE}, we construct a sense description d D (S) as a bag of words:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "d D (S) = def D (S) \u222a hyper D (S) \u222a domains D (S)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "where:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "\u2022 def D (S) is the set of words in the textual definition of S (excluding usage examples), automatically lemmatized and partof-speech tagged with the RASP statistical parser (Briscoe and Carroll, 2002) ; \u2022 hyper D (S) is the set of direct hypernyms of S in the taxonomy hierarchy of D (\u2205 if hypernymy is not available); \u2022 domains D (S) includes the set of domain labels possibly assigned to sense S (\u2205 when no domain is assigned).",
                "cite_spans": [
                    {
                        "start": 174,
                        "end": 201,
                        "text": "(Briscoe and Carroll, 2002)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "Specifically, in the case of WordNet, we generate def WN (S) from the gloss of S, hyper WN (S) from the noun and verb taxonomy, and domains WN (S) from the subject field codes, i.e. domain labels produced semi-automatically by Magnini and Cavagli\u00e0 (2000) for each Word-Net synset (we exclude the general-purpose label, called FACTOTUM).",
                "cite_spans": [
                    {
                        "start": 227,
                        "end": 254,
                        "text": "Magnini and Cavagli\u00e0 (2000)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "For example, for the first WordNet sense of race#n we obtain the following description:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "d WN (race#n#1) = {competition#n} \u222a {contest#n} \u222a {POLITICS#N, SPORT#N}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "race#n (WordNet): #1 Any competition (\u2192 contest). #2 People who are believed to belong to the same genetic stock (\u2192 group). #3 A contest of speed (\u2192 contest). #4 The flow of air that is driven backwards by an aircraft propeller (\u2192 flow). #5 A taxonomic group that is a division of a species; usually arises as a consequence of geographical isolation within a species (\u2192 taxonomic group). #6 A canal for a current of water (\u2192 canal).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "race#n (ODE): #1.1 Core: SPORT A competition between runners, horses, vehicles, etc. \u2022 RACING A series of such competitions for horses or dogs \u2022 A situation in which individuals or groups compete (\u2192 contest) \u2022 ASTRONOMY The course of the sun or moon through the heavens (\u2192 trajectory). #1.2 Core: NAUTICAL A strong or rapid current (\u2192 flow). #1.3 Core: A groove, channel, or passage. \u2022 MECHANICS A water channel \u2022 Smooth groove or guide for balls (\u2192 indentation, conduit) \u2022 FARMING Fenced passageway in a stockyard (\u2192 route) \u2022 TEXTILES The channel along which the shuttle moves. #2.1 Core: ANTHROPOLOGY Division of humankind (\u2192 ethnic group). \u2022 The condition of belonging to a racial division or group \u2022 A group of people sharing the same culture, history, language \u2022 BIOLOGY A group of people descended from a common ancestor. #3.1 Core: BOTANY, FOOD A ginger root (\u2192 plant part).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "In the case of the ODE, def ODE (S) is generated from the definitions of the core sense and the subsenses of the entry S. Hypernymy (for nouns only) and domain labels, when available, are included in the respective sets hyper ODE (S) and domains ODE (S). For example, the first ODE sense of race#n is described as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "d ODE (race#n#1.1) = {competition#n, runner#n, horse#n, vehicle#n, . . . , heavens#n} \u222a {contest#n, trajectory#n} \u222a {SPORT#N, RACING#N, ASTRONOMY#N}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "Notice that, for every S, d D (S) is non-empty, as a definition is always provided by both dictionaries. This approach to sense descriptions is general enough to be applicable to any other dictionary with similar characteristics (e.g. the Longman Dictionary of Contemporary English in place of ODE).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constructing Sense Descriptions",
                "sec_num": "2.2"
            },
            {
                "text": "In order to produce a coarse-grained version of the WordNet inventory, we aim at defining an automatic mapping between WordNet and ODE, i.e. a function \u00b5 :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "Senses WN \u2192 Senses ODE \u222a { },",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "where Senses_D is the set of senses in the dictionary D and \u03b5 is a special element assigned when no plausible option is available for mapping (e.g. when the ODE encodes no entry corresponding to a WordNet sense).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "Given a WordNet sense S \u2208 Senses_WN(w), we define m\u0302(S), the best matching sense in the ODE, as: m\u0302(S) = arg max",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "S' \u2208 Senses_ODE(w) match(S, S')",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "where match :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "Senses_WN \u00d7 Senses_ODE \u2192 [0, 1",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "] is a function that measures the degree of matching between the sense descriptions of S and S'. We define the mapping \u00b5 as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "\u00b5(S) = m\u0302(S) if match(S, m\u0302(S)) \u2265 \u03b8, and \u00b5(S) = \u03b5 otherwise",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "where \u03b8 is a threshold below which a match between sense descriptions is considered unreliable. Finally, we define the clustering of senses c(w) of a word w as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "c(w) = {\u00b5\u207b\u00b9(S') : S' \u2208 Senses_ODE(w), \u00b5\u207b\u00b9(S') \u2260 \u2205} \u222a {{S} : S \u2208 Senses_WN(w), \u00b5(S) = \u03b5}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "where \u00b5\u207b\u00b9(S') is the group of WordNet senses mapped to the same sense S' of the ODE, while the second set includes singletons of WordNet senses for which no mapping can be provided according to the definition of \u00b5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
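            {
                "text": "In pseudocode, the construction of c(w) can be sketched as follows (an illustrative outline of the definitions above, not the authors' implementation): for each S in Senses_WN(w), compute the best-matching ODE sense over Senses_ODE(w) and map S to it if its match score reaches \u03b8, otherwise mark S as unmapped; then emit one cluster per ODE sense, containing all the WordNet senses mapped to it, plus a singleton cluster for each unmapped WordNet sense.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },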
            {
                "text": "For example, an ideal mapping between entries in Table 1 would be as follows:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 49,
                        "end": 56,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "\u00b5(race#n#1) = race#n#1.1, \u00b5(race#n#2) = race#n#2.1, \u00b5(race#n#3) = race#n#1.1, \u00b5(race#n#5) = race#n#2.1, \u00b5(race#n#4) = race#n#1.2, \u00b5(race#n#6) = race#n#1.3, resulting in the following clustering: c(race#n) = {{race#n#1, race#n#3}, {race#n#2, race#n#5}, {race#n#4}, {race#n#6}}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "In Sections 2.3.1 and 2.3.2 we describe two different choices for the match function, respectively based on the use of lexical and semantic information.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mapping Word Senses",
                "sec_num": "2.3"
            },
            {
                "text": "As a first approach, we adopted a purely lexical matching function based on the notion of lexical overlap (Lesk, 1986) . The function counts the number of lemmas that two sense descriptions of a word have in common (we neglect parts of speech), and is normalized by the minimum of the two description lengths:",
                "cite_spans": [
                    {
                        "start": 106,
                        "end": 118,
                        "text": "(Lesk, 1986)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical matching",
                "sec_num": "2.3.1"
            },
            {
                "text": "match_LESK(S, S') = |d_WN(S) \u2229 d_ODE(S')| / min{|d_WN(S)|, |d_ODE(S')|}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical matching",
                "sec_num": "2.3.1"
            },
            {
                "text": "where S \u2208 Senses_WN(w) and S' \u2208 Senses_ODE(w). For instance: match_LESK(race#n#1, race#n#1.1) = 3 / min{4, 20} = 3/4 = 0.75, and match_LESK(race#n#2, race#n#1.1) = 1/8 = 0.125. Notice that unrelated senses can get a positive score because of an overlap of the sense descriptions. In the example, group#n, the hypernym of race#n#2, is also present in the definition of race#n#1.1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical matching",
                "sec_num": "2.3.1"
            },
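            {
                "text": "In pseudocode (an illustrative sketch with hypothetical names, assuming the lemma sets of the two descriptions are precomputed): overlap := |lemmas(d_WN(S)) \u2229 lemmas(d_ODE(S'))|; match_LESK(S, S') := overlap / min(|d_WN(S)|, |d_ODE(S')|). With 3 shared lemmas and descriptions of sizes 4 and 20, this yields 3/4 = 0.75.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lexical matching",
                "sec_num": "2.3.1"
            },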
            {
                "text": "Unfortunately, the very same concept can be defined with entirely different words. To match definitions in a semantic manner we adopted a knowledge-based Word Sense Disambiguation algorithm, Structural Semantic Interconnections (SSI, Navigli and Velardi (2004)).",
                "cite_spans": [
                    {
                        "start": 234,
                        "end": 260,
                        "text": "Navigli and Velardi (2004)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "SSI 3 exploits an extensive lexical knowledge base, built upon the WordNet lexicon and enriched with collocation information representing semantic relatedness between sense pairs. Collocations are acquired from existing resources (like the Oxford Collocations, the Longman Language Activator, collocation web sites, etc.). Each collocation is mapped to the WordNet sense inventory in a semi-automatic manner and transformed into a relatedness edge (Navigli and Velardi, 2005).",
                "cite_spans": [
                    {
                        "start": 448,
                        "end": 475,
                        "text": "(Navigli and Velardi, 2005)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "Given a word context",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "C = {w_1, ..., w_n}, SSI builds a graph G = (V, E) such that V = \u22c3_{i=1}^{n}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "Senses_WN(w_i) and (S, S') \u2208 E if there is at least one semantic interconnection between S and S' in the lexical knowledge base. A semantic interconnection pattern is a relevant sequence of edges selected according to a manually-created context-free grammar, i.e. a path connecting a pair of word senses, possibly including a number of intermediate concepts. The grammar consists of a small number of rules, inspired by the notion of lexical chains (Morris and Hirst, 1991).",
                "cite_spans": [
                    {
                        "start": 449,
                        "end": 473,
                        "text": "(Morris and Hirst, 1991)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "SSI performs disambiguation in an iterative fashion, by maintaining a set C of senses as a semantic context. Initially, C = V (the entire set of senses of words in C). At each step, for each sense S in C, the algorithm calculates a score of the degree of connectivity between S and the other senses in C:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "3 Available online from: http://lcl.di.uniroma1.it/ssi",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "Score_SSI(S, C) = (\u03a3_{S' \u2208 C\\{S}} \u03a3_{i \u2208 IC(S, S')} 1/length(i)) / (\u03a3_{S' \u2208 C\\{S}} |IC(S, S')|)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "where IC(S, S') is the set of interconnections between senses S and S'. The contribution of a single interconnection is given by the reciprocal of its length, calculated as the number of edges connecting its ends. The overall degree of connectivity is then normalized by the number of contributing interconnections. The highest-ranking sense S of word w is chosen and the senses of w are removed from the semantic context C. The algorithm terminates when either C = \u2205 or there is no sense whose score exceeds a fixed threshold.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
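            {
                "text": "The iterative procedure can be summarized in pseudocode (a hedged sketch of the algorithm as described above, not of the released implementation): C := V; while C \u2260 \u2205, compute Score_SSI(S, C) for every S in C and select the sense S* with the highest score; if Score_SSI(S*, C) is below the fixed threshold, stop; otherwise fix S* as the interpretation of its word w and remove all the senses of w from C.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },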
            {
                "text": "Given a word w, semantic matching is performed in two steps. First, for each dictionary D \u2208 {WORDNET, ODE}, and for each sense S \u2208 Senses_D(w), the sense description of S is disambiguated by applying SSI to d_D(S). As a result, we obtain a semantic description as a bag of concepts d_D^sem(S). Notice that sense descriptions from both dictionaries are disambiguated with respect to the WordNet sense inventory.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "Second, given a WordNet sense S \u2208 Senses_WN(w) and an ODE sense S' \u2208 Senses_ODE(w), we define match_SSI(S, S') as a function of the direct relations connecting senses in d_WN^sem(S) and d_ODE^sem(S'):",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "match_SSI(S, S') = |{c \u2192 c' : c \u2208 d_WN^sem(S), c' \u2208 d_ODE^sem(S')}| / (|d_WN^sem(S)| \u2022 |d_ODE^sem(S')|)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "where c \u2192 c' denotes the existence of a relation edge in the lexical knowledge base between a concept c in the description of S and a concept c' in the description of S'. Edges include the WordNet relation set (synonymy, hypernymy, meronymy, antonymy, similarity, nominalization, etc.) and the relatedness edge mentioned above (we adopt only direct relations to maintain a high precision). For example, some of the relations found between concepts in d_WN^sem(race#n#3) and d_ODE^sem(race#n#1.1) are:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "speed#n#1 related-to \u2192 vehicle#n#1; race#n#3 related-to \u2192 compete#v#1; racing#n#1 kind-of \u2192 sport#n#1; race#n#3 kind-of \u2192 contest#n#1 (the left concept of each pair belongs to d_WN^sem(race#n#3), the right one to d_ODE^sem(race#n#1.1))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "contributing to the final value of the function on the two senses: match_SSI(race#n#3, race#n#1.1) = 0.41. Due to the normalization factor in the denominator, these values are generally low, but unrelated senses have values much closer to 0. We chose SSI for the semantic matching function as it has the best performance among untrained systems on unconstrained WSD (cf. Section 4.1).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Semantic matching",
                "sec_num": "2.3.2"
            },
            {
                "text": "We evaluated the accuracy of the mapping produced with the lexical and semantic methods described in Sections 2.3.1 and 2.3.2, respectively. We produced a gold-standard data set by manually mapping 5,077 WordNet senses of 763 randomly-selected words to the respective ODE entries (distributed as follows: 466 nouns, 231 verbs, 50 adjectives, 16 adverbs). The data set was created by two annotators and included only polysemous words. These words had 2,600 senses in the ODE. Overall, 4,599 out of the 5,077 WordNet senses had a corresponding sense in ODE (i.e. the ODE covered 90.58% of the WordNet senses in the data set), while 2,053 out of the 2,600 ODE senses had an analogous entry in WordNet (i.e. WordNet covered 78.69% of the ODE senses). The WordNet clustering induced by the manual mapping was 49.85% of the original size and the average degree of polysemy decreased from 6.65 to 3.32.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "The reliability of our data set is substantiated by a quantitative assessment: 548 WordNet senses of 60 words were mapped to ODE entries by both annotators, with a pairwise mapping agreement of 92.7%. The average Cohen's \u03ba agreement between the two annotators was 0.874.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "In Table 2 we report the precision and recall of the lexical and semantic functions in providing the appropriate association for the set of senses having a corresponding entry in ODE (i.e. excluding the cases where an \u03b5 was assigned by the manual annotators, cf. Section 2.3). We also report in the Table the accuracy of the two functions when we view the problem as a classification task: an automatic association is correct if it corresponds to the manual association provided by the annotators or if both assign no answer (equivalently, if both provide an \u03b5 label). All the differences between Lesk and SSI are statistically significant (p < 0.01).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 10,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "As a second experiment, we used two information-theoretic measures, namely entropy and purity (Zhao and Karypis, 2004) , to compare an automatic clustering c(w) (i.e. the sense groups acquired for word w) with a manual clustering \u0109(w). The entropy quantifies the distribution of the senses of a group over manually-defined groups, while the purity measures the extent to which a group contains senses primarily from one manual group.",
                "cite_spans": [
                    {
                        "start": 94,
                        "end": 118,
                        "text": "(Zhao and Karypis, 2004)",
                        "ref_id": "BIBREF27"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "Given a word w, and a sense group G \u2208 c(w), the entropy of G is defined as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "H(G) = \u2212(1 / log|\u0109(w)|) \u03a3_{\u011c \u2208 \u0109(w)} (|\u011c \u2229 G| / |G|) log(|\u011c \u2229 G| / |G|)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "i.e., the entropy 4 of the distribution of senses of group G over the groups of the manual clustering \u0109(w). The entropy of an entire clustering c(w) is defined as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "Entropy(c(w)) = \u03a3_{G \u2208 c(w)} (|G| / |Senses_WN(w)|) H(G)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "that is, the entropy of each group weighted by its size. The purity of a sense group G \u2208 c(w) is defined as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "Pu(G) = (1 / |G|) max_{\u011c \u2208 \u0109(w)} |\u011c \u2229 G|",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "i.e., the normalized size of the largest subset of G contained in a single group \u011c of the manual clustering. The overall purity of a clustering is obtained as a weighted sum of the individual cluster purities:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "Purity(c(w)) = \u03a3_{G \u2208 c(w)} (|G| / |Senses_WN(w)|) Pu(G)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
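            {
                "text": "As a worked example (with illustrative senses, not taken from the data set), let c(w) = {{S1, S2, S3}} and \u0109(w) = {{S1, S2}, {S3}}. The only group G = {S1, S2, S3} intersects the two manual groups in 2 and 1 senses respectively, so Pu(G) = (1/3) \u00b7 max{2, 1} = 2/3 and Purity(c(w)) = 2/3, while the entropy of the distribution (2/3, 1/3) gives H(G) \u2248 0.92, signaling that the automatic group merges senses from two distinct manual groups.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },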
            {
                "text": "We calculated the entropy and purity of the clustering produced automatically with the lexical and the semantic method, when compared to the grouping induced by our manual mapping (ODE), and to the grouping manually produced for the English all-words task at Senseval-2 (3,499 senses of 403 nouns). We excluded from both gold standards words having a single cluster. The figures are shown in Table 3 (good entropy and purity values should be close to 0 and 1 respectively). Table 3 shows that the quality of the clustering induced with a semantic function outperforms both lexical overlap and a random baseline. The baseline was computed averaging among 200 random clustering solutions for each word. Random clusterings were the result of a random mapping function between WordNet and ODE senses. As expected, the automatic clusterings have a lower purity when compared to the Senseval-2 noun grouping as the granularity of the latter is much finer than ODE (entropy is only partially affected by this difference, indicating that we are producing larger groups). Indeed, our gold standard (ODE), when compared to the Senseval groupings, obtains a low purity as well (0.75) and an entropy of 0.13.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 392,
                        "end": 399,
                        "text": "Table 3",
                        "ref_id": "TABREF2"
                    },
                    {
                        "start": 474,
                        "end": 481,
                        "text": "Table 3",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluating the Clustering",
                "sec_num": "3"
            },
            {
                "text": "The main reason for building a clustering of WordNet senses is to make Word Sense Disambiguation a feasible task, thus overcoming the obstacles that even humans encounter when annotating sentences with excessively fine-grained word senses. As the semantic method outperformed the lexical overlap in the evaluations of the previous Section, we decided to acquire a clustering of the entire WordNet sense inventory using this approach. As a result, we obtained a reduction of 33.54% in the number of entries (from 60,302 to 40,079 senses) and a decrease of the polysemy degree from 3.14 to 2.09. These figures exclude monosemous senses and derivatives in WordNet. As we are experimenting on an automatically-acquired clustering, all the figures are affected by the 22.06% error rate resulting from Table 2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating Coarse-Grained WSD",
                "sec_num": "4"
            },
            {
                "text": "As a first experiment, we assessed the effect of the automatic sense clustering on the English allwords task at Senseval-3 (Snyder and Palmer, 2004) . This task required WSD systems to provide a sense choice for 2,081 content words in a set of 301 sentences from the fiction, news story, and editorial domains.",
                "cite_spans": [
                    {
                        "start": 123,
                        "end": 148,
                        "text": "(Snyder and Palmer, 2004)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments on Senseval-3",
                "sec_num": "4.1"
            },
            {
                "text": "We considered the three best-ranking WSD systems -GAMBL (Decadt et al., 2004) , Sense-Learner (Mihalcea and Faruque, 2004) , and Koc University (Yuret, 2004) -and the best unsupervised system, namely IRST-DDD (Strapparava et al., 2004) . We also included SSI as it outperforms all the untrained systems (Navigli and Velardi, 2005) . To evaluate the performance of the five systems on our coarse clustering, we considered a fine-grained answer to be correct if it belongs to the same cluster as that of the correct answer. Table 4 reports the performance of the systems, together with the first sense and the random baseline (in the last column we report the performance on the original fine-grained test set).",
                "cite_spans": [
                    {
                        "start": 56,
                        "end": 77,
                        "text": "(Decadt et al., 2004)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 94,
                        "end": 122,
                        "text": "(Mihalcea and Faruque, 2004)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 144,
                        "end": 157,
                        "text": "(Yuret, 2004)",
                        "ref_id": "BIBREF26"
                    },
                    {
                        "start": 209,
                        "end": 235,
                        "text": "(Strapparava et al., 2004)",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 303,
                        "end": 330,
                        "text": "(Navigli and Velardi, 2005)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 522,
                        "end": 529,
                        "text": "Table 4",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Experiments on Senseval-3",
                "sec_num": "4.1"
            },
            {
                "text": "The best system, GAMBL, obtains almost 78% precision and recall, an interesting figure compared to its 65% performance in the fine-grained WSD task. Notably, the ranking across systems was maintained when moving from a fine-grained to a coarse-grained sense inventory, although two systems (SSI and IRST-DDD) show the largest improvement.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments on Senseval-3",
                "sec_num": "4.1"
            },
            {
                "text": "In order to show that the general improvement is the result of an appropriate clustering, we assessed the performance of GAMBL by averaging its results over 100 different randomly-generated clusterings. We excluded monosemous clusters from the test set (i.e. words with all of their senses mapped to the same ODE entry), so as to clarify the real impact of properly grouped clusters. As a result, the random setting obtained 64.56% average accuracy, while the performance when adopting our automatic clustering was 70.84% (1,025/1,447 items).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments on Senseval-3",
                "sec_num": "4.1"
            },
            {
                "text": "To make it clear that the performance improvement is not only due to polysemy reduction, we considered a subset of the Senseval-3 test set including only the incorrect answers given by the fine-grained version of GAMBL (623 items). In other words, on this data set GAMBL performs with 0% accuracy. We compared the performance of GAMBL when adopting our automatic clustering with the accuracy of the random baseline. The results were respectively 34% and 15.32% accuracy. These experiments prove that the performance in Table 4 is not due to chance, but to an effective way of clustering word senses. Furthermore, the systems in the Table are not taking advantage of the information given by the clustering (trained systems could be retrained on the coarse clustering). To assess this aspect, we performed a further experiment. We modified the sense inventory of the SSI lexical knowledge base by adopting the coarse inventory acquired automatically. To this end, we merged the semantic interconnections belonging to the same cluster. We also disabled the first sense baseline heuristic, that most of the systems use as a back-off when they have no information about the word at hand. We call this new setting SSI * (as opposed to SSI used in Table 4).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 519,
                        "end": 526,
                        "text": "Table 4",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 632,
                        "end": 641,
                        "text": "Table are",
                        "ref_id": null
                    },
                    {
                        "start": 1242,
                        "end": 1249,
                        "text": "Table 4",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Experiments on Senseval-3",
                "sec_num": "4.1"
            },
            {
                "text": "In Table 5 we report the results. The algorithm obtains an improvement of 9.8% recall and 3.1% precision (both statistically significant, p < 0.05). The increase in recall is mostly due to the fact that different senses belonging to the same cluster now contribute together to the choice of that cluster (rather than individually to the choice of a fine-grained sense). Dolan (1994) describes a method for clustering word senses with the use of information provided in the electronic version of LDOCE (textual definitions, semantic relations, domain labels, etc.). Unfortunately, the approach is not described in detail and no evaluation is provided.",
                "cite_spans": [
                    {
                        "start": 370,
                        "end": 382,
                        "text": "Dolan (1994)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 10,
                        "text": "Table 5",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Experiments on Senseval-3",
                "sec_num": "4.1"
            },
            {
                "text": "Most of the approaches in the literature make use of the WordNet structure to cluster its senses. Peters et al. (1998) exploit specific patterns in the WordNet hierarchy (e.g. sisters, autohyponymy, twins, etc.) to group word senses. They study semantic regularities or generalizations obtained and analyze the effect of clustering on the compatibility of language-specific wordnets. Mihalcea and Moldovan (2001) study the structure of WordNet for the identification of sense regularities: to this end, they provide a set of semantic and probabilistic rules. An evaluation of the heuristics provided leads to a polysemy reduction of 39% and an error rate of 5.6%. A different principle for clustering WordNet senses, based on the Minimum Description Length, is described by Tomuro (2001) . The clustering is evaluated against WordNet cousins and used for the study of inter-annotator disagreement. Another approach exploits the (dis)agreements of human annotators to derive coarse-grained sense clusters (Chklovski and Mihalcea, 2003) , where sense similarity is computed from confusion matrices. Agirre and Lopez (2003) analyze a set of methods to cluster WordNet senses based on the use of confusion matrices from the results of WSD systems, translation equivalences, and topic signatures (word co-occurrences extracted from the web). They assess the acquired clusterings against 20 words from the Senseval-2 sense groupings.",
                "cite_spans": [
                    {
                        "start": 98,
                        "end": 118,
                        "text": "Peters et al. (1998)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 774,
                        "end": 787,
                        "text": "Tomuro (2001)",
                        "ref_id": "BIBREF24"
                    },
                    {
                        "start": 1004,
                        "end": 1034,
                        "text": "(Chklovski and Mihalcea, 2003)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 1097,
                        "end": 1120,
                        "text": "Agirre and Lopez (2003)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "Finally, McCarthy (2006) proposes the use of ranked lists, based on distributionally nearest neighbours, to relate word senses. This softer notion of sense relatedness makes it possible to adopt the most appropriate granularity for a specific application.",
                "cite_spans": [
                    {
                        "start": 9,
                        "end": 24,
                        "text": "McCarthy (2006)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "Compared to our approach, most of these methods do not evaluate the clustering produced with respect to a gold-standard clustering. Indeed, such an evaluation would be difficult and time-consuming without a coarse sense inventory like that of ODE. A limited assessment of coarse WSD is performed by Fellbaum et al. (2001), who obtain a large improvement in the accuracy of a maximum-entropy system on clustered verbs.",
                "cite_spans": [
                    {
                        "start": 299,
                        "end": 321,
                        "text": "Fellbaum et al. (2001)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "5"
            },
            {
                "text": "In this paper, we presented a study on the construction of a coarse sense inventory for the WordNet lexicon and its effects on unrestricted WSD.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "6"
            },
            {
                "text": "A key feature in our approach is the use of a well-established dictionary encoding sense hierarchies. As remarked in Section 2.2, the method can employ any dictionary with a sufficiently structured inventory of senses, and can thus be applied to reduce the granularity of, e.g., wordnets of other languages. One could argue that the adoption of the ODE as a sense inventory for WSD would be a better solution. While we are not against this possibility, there are problems that cannot be solved at present: the ODE does not encode semantic relations and is not freely available. Also, most of the present research and standard data sets focus on WordNet.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "6"
            },
            {
                "text": "The fine granularity of the WordNet sense inventory is unsuitable for most applications, thus constituting an obstacle that must be overcome. We believe that the research topic analyzed in this paper is a first step towards making WSD a feasible task and enabling language-aware applications, like information retrieval, question answering, machine translation, etc. In future work, we plan to investigate the contribution of coarse disambiguation to such real-world applications. To this end, we aim to set up an Open Mind-like experiment for the validation of the entire mapping from WordNet to ODE, so that only a minimal error rate would affect the experiments to come.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "6"
            },
            {
                "text": "Finally, the method presented here could be useful for lexicographers in the comparison of the quality of dictionaries, and in the detection of missing word senses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "6"
            },
            {
                "text": "The ODE was kindly made available by Ken Litkowski (CL Research) in the context of a license agreement. In the following, we denote a WordNet sense with the convention w#p#i where w is a word, p a part of speech and i is a sense number; analogously, we denote an ODE sense with the convention w#p#h.k where h is the homonym number and k is the k-th polysemous entry under homonym h.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Notice that we are comparing clusterings against the manual clustering (rather than vice versa), as otherwise a completely unclustered solution would result in 1.0 entropy and 0.0 purity.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "This work is partially funded by the Interop NoE (508011), 6th European Union FP. We wish to thank Paola Velardi, Mirella Lapata and Samuel Brody for their useful comments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Clustering wordnet word senses",
                "authors": [
                    {
                        "first": "Eneko",
                        "middle": [],
                        "last": "Agirre",
                        "suffix": ""
                    },
                    {
                        "first": "Oier",
                        "middle": [],
                        "last": "Lopez",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proc. of Conf. on Recent Advances on Natural Language (RANLP)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eneko Agirre and Oier Lopez. 2003. Clustering wordnet word senses. In Proc. of Conf. on Recent Advances on Natural Language (RANLP). Borovets, Bulgaria.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Robust accurate statistical annotation of general text",
                "authors": [
                    {
                        "first": "Ted",
                        "middle": [],
                        "last": "Briscoe",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Carroll",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proc. of 3rd Conference on Language Resources and Evaluation",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ted Briscoe and John Carroll. 2002. Robust accurate statistical annotation of general text. In Proc. of 3rd Conference on Language Resources and Evaluation. Las Palmas, Gran Canaria.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Building a sense tagged corpus with open mind word expert",
                "authors": [
                    {
                        "first": "Tim",
                        "middle": [],
                        "last": "Chklovski",
                        "suffix": ""
                    },
                    {
                        "first": "Rada",
                        "middle": [],
                        "last": "Mihalcea",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proc. of ACL 2002 Workshop on WSD: Recent Successes and Future Directions",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tim Chklovski and Rada Mihalcea. 2002. Building a sense tagged corpus with open mind word expert. In Proc. of ACL 2002 Workshop on WSD: Recent Successes and Future Directions. Philadelphia, PA.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Exploiting agreement and disagreement of human annotators for word sense disambiguation",
                "authors": [
                    {
                        "first": "Tim",
                        "middle": [],
                        "last": "Chklovski",
                        "suffix": ""
                    },
                    {
                        "first": "Rada",
                        "middle": [],
                        "last": "Mihalcea",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proc. of Recent Advances In NLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tim Chklovski and Rada Mihalcea. 2003. Exploiting agreement and disagreement of human annotators for word sense disambiguation. In Proc. of Recent Advances In NLP (RANLP 2003). Borovets, Bulgaria.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Gambl, genetic algorithm optimization of memory-based wsd",
                "authors": [
                    {
                        "first": "Bart",
                        "middle": [],
                        "last": "Decadt",
                        "suffix": ""
                    },
                    {
                        "first": "V\u00e9ronique",
                        "middle": [],
                        "last": "Hoste",
                        "suffix": ""
                    },
                    {
                        "first": "Walter",
                        "middle": [],
                        "last": "Daelemans",
                        "suffix": ""
                    },
                    {
                        "first": "Antal",
                        "middle": [],
                        "last": "Van Den Bosch",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of ACL/SIGLEX Senseval-3",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bart Decadt, V\u00e9ronique Hoste, Walter Daelemans, and Antal van den Bosch. 2004. Gambl, genetic algorithm optimization of memory-based wsd. In Proc. of ACL/SIGLEX Senseval-3. Barcelona, Spain.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Word sense ambiguation: Clustering related senses",
                "authors": [
                    {
                        "first": "William",
                        "middle": [
                            "B"
                        ],
                        "last": "Dolan",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Proc. of 15th Conference on Computational Linguistics (COLING)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "William B. Dolan. 1994. Word sense ambiguation: Clustering related senses. In Proc. of 15th Conference on Computational Linguistics (COLING). Morristown, N.J.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Introduction to the special issue on evaluating word sense disambiguation systems",
                "authors": [
                    {
                        "first": "Philip",
                        "middle": [],
                        "last": "Edmonds",
                        "suffix": ""
                    },
                    {
                        "first": "Adam",
                        "middle": [],
                        "last": "Kilgarriff",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Journal of Natural Language Engineering",
                "volume": "8",
                "issue": "4",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philip Edmonds and Adam Kilgarriff. 1998. Introduction to the special issue on evaluating word sense disambiguation systems. Journal of Natural Language Engineering, 8(4).",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Manual and automatic semantic annotation with wordnet",
                "authors": [
                    {
                        "first": "Christiane",
                        "middle": [],
                        "last": "Fellbaum",
                        "suffix": ""
                    },
                    {
                        "first": "Martha",
                        "middle": [],
                        "last": "Palmer",
                        "suffix": ""
                    },
                    {
                        "first": "Hoa",
                        "middle": [
                            "Trang"
                        ],
                        "last": "Dang",
                        "suffix": ""
                    },
                    {
                        "first": "Lauren",
                        "middle": [],
                        "last": "Delfs",
                        "suffix": ""
                    },
                    {
                        "first": "Susanne",
                        "middle": [],
                        "last": "Wolf",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proc. of NAACL Workshop on WordNet and Other Lexical Resources",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christiane Fellbaum, Martha Palmer, Hoa Trang Dang, Lauren Delfs, and Susanne Wolf. 2001. Manual and automatic semantic annotation with wordnet. In Proc. of NAACL Workshop on WordNet and Other Lexical Resources. Pittsburgh, PA.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "WordNet: an Electronic Lexical Database",
                "authors": [],
                "year": 1998,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: an Electronic Lexical Database. MIT Press.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone",
                "authors": [
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Lesk",
                        "suffix": ""
                    }
                ],
                "year": 1986,
                "venue": "Proc. of 5th Conf. on Systems Documentation",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proc. of 5th Conf. on Systems Documentation. ACM Press.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Integrating subject field codes into wordnet",
                "authors": [
                    {
                        "first": "Bernardo",
                        "middle": [],
                        "last": "Magnini",
                        "suffix": ""
                    },
                    {
                        "first": "Gabriela",
                        "middle": [],
                        "last": "Cavagli\u00e0",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proc. of the 2nd Conference on Language Resources and Evaluation (LREC)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bernardo Magnini and Gabriela Cavagli\u00e0. 2000. Integrating subject field codes into wordnet. In Proc. of the 2nd Conference on Language Resources and Evaluation (LREC).",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Relating wordnet senses for word sense disambiguation",
                "authors": [
                    {
                        "first": "Diana",
                        "middle": [],
                        "last": "Mccarthy",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proc. of ACL Workshop on Making Sense of Sense",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diana McCarthy. 2006. Relating wordnet senses for word sense disambiguation. In Proc. of ACL Workshop on Making Sense of Sense. Trento, Italy.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Senselearner: Minimally supervised word sense disambiguation for all words in open text",
                "authors": [
                    {
                        "first": "Rada",
                        "middle": [],
                        "last": "Mihalcea",
                        "suffix": ""
                    },
                    {
                        "first": "Ehsanul",
                        "middle": [],
                        "last": "Faruque",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of ACL/SIGLEX Senseval-3",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rada Mihalcea and Ehsanul Faruque. 2004. Senselearner: Minimally supervised word sense disambiguation for all words in open text. In Proc. of ACL/SIGLEX Senseval-3. Barcelona, Spain.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Automatic generation of a coarse grained wordnet",
                "authors": [
                    {
                        "first": "Rada",
                        "middle": [],
                        "last": "Mihalcea",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Moldovan",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proc. of NAACL Workshop on WordNet and Other Lexical Resources",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rada Mihalcea and Dan Moldovan. 2001. Automatic generation of a coarse grained wordnet. In Proc. of NAACL Workshop on WordNet and Other Lexical Resources. Pittsburgh, PA.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Lexical cohesion computed by thesaural relations as an indicator of the structure of text",
                "authors": [
                    {
                        "first": "Jane",
                        "middle": [],
                        "last": "Morris",
                        "suffix": ""
                    },
                    {
                        "first": "Graeme",
                        "middle": [],
                        "last": "Hirst",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "Computational Linguistics",
                "volume": "",
                "issue": "1",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1).",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Learning domain ontologies from document warehouses and dedicated websites",
                "authors": [
                    {
                        "first": "Roberto",
                        "middle": [],
                        "last": "Navigli",
                        "suffix": ""
                    },
                    {
                        "first": "Paola",
                        "middle": [],
                        "last": "Velardi",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Computational Linguistics",
                "volume": "",
                "issue": "2",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Roberto Navigli and Paola Velardi. 2004. Learning domain ontologies from document warehouses and dedicated websites. Computational Linguistics, 30(2).",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Structural semantic interconnections: a knowledge-based approach to word sense disambiguation",
                "authors": [
                    {
                        "first": "Roberto",
                        "middle": [],
                        "last": "Navigli",
                        "suffix": ""
                    },
                    {
                        "first": "Paola",
                        "middle": [],
                        "last": "Velardi",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Roberto Navigli and Paola Velardi. 2005. Structural semantic interconnections: a knowledge-based approach to word sense disambiguation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 27(7).",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "A case study on the inter-annotator agreement for word sense disambiguation",
                "authors": [
                    {
                        "first": "Hwee",
                        "middle": [
                            "T"
                        ],
                        "last": "Ng",
                        "suffix": ""
                    },
                    {
                        "first": "Chung",
                        "middle": [
                            "Y"
                        ],
                        "last": "Lim",
                        "suffix": ""
                    },
                    {
                        "first": "Shou",
                        "middle": [
                            "K"
                        ],
                        "last": "Foo",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proc. of ACL Workshop: Standardizing Lexical Resources",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hwee T. Ng, Chung Y. Lim, and Shou K. Foo. 1999. A case study on the inter-annotator agreement for word sense disambiguation. In Proc. of ACL Workshop: Standardizing Lexical Resources. College Park, Maryland.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Automatic sense clustering in eurowordnet",
                "authors": [
                    {
                        "first": "Wim",
                        "middle": [],
                        "last": "Peters",
                        "suffix": ""
                    },
                    {
                        "first": "Ivonne",
                        "middle": [],
                        "last": "Peters",
                        "suffix": ""
                    },
                    {
                        "first": "Piek",
                        "middle": [],
                        "last": "Vossen",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proc. of the 1st Conference on Language Resources and Evaluation (LREC)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wim Peters, Ivonne Peters, and Piek Vossen. 1998. Automatic sense clustering in eurowordnet. In Proc. of the 1st Conference on Language Resources and Evaluation (LREC). Granada, Spain.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "The English all-words task",
                "authors": [
                    {
                        "first": "Benjamin",
                        "middle": [],
                        "last": "Snyder",
                        "suffix": ""
                    },
                    {
                        "first": "Martha",
                        "middle": [],
                        "last": "Palmer",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of ACL 2004 SENSEVAL-3 Workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Benjamin Snyder and Martha Palmer. 2004. The English all-words task. In Proc. of ACL 2004 SENSEVAL-3 Workshop. Barcelona, Spain.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Oxford Dictionary of English",
                "authors": [],
                "year": 2003,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Catherine Soanes and Angus Stevenson, editors. 2003. Oxford Dictionary of English. Oxford University Press.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Differentiating homonymy and polysemy in information retrieval",
                "authors": [
                    {
                        "first": "Christopher",
                        "middle": [],
                        "last": "Stokoe",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proc. of the Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christopher Stokoe. 2005. Differentiating homonymy and polysemy in information retrieval. In Proc. of the Conference on Empirical Methods in Natural Language Processing. Vancouver, Canada.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Pattern abstraction and term similarity for word sense disambiguation",
                "authors": [
                    {
                        "first": "Carlo",
                        "middle": [],
                        "last": "Strapparava",
                        "suffix": ""
                    },
                    {
                        "first": "Alfio",
                        "middle": [],
                        "last": "Gliozzo",
                        "suffix": ""
                    },
                    {
                        "first": "Claudio",
                        "middle": [],
                        "last": "Giuliano",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of ACL/SIGLEX Senseval-3",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Carlo Strapparava, Alfio Gliozzo, and Claudio Giuliano. 2004. Pattern abstraction and term similarity for word sense disambiguation. In Proc. of ACL/SIGLEX Senseval-3. Barcelona, Spain.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Tree-cut and a lexicon based on systematic polysemy",
                "authors": [
                    {
                        "first": "Noriko",
                        "middle": [],
                        "last": "Tomuro",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proc. of the Meeting of the NAACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Noriko Tomuro. 2001. Tree-cut and a lexicon based on systematic polysemy. In Proc. of the Meeting of the NAACL. Pittsburgh, USA.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Word sense disambiguation vs. statistical machine translation",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Vickrey",
                        "suffix": ""
                    },
                    {
                        "first": "Luke",
                        "middle": [],
                        "last": "Biewald",
                        "suffix": ""
                    },
                    {
                        "first": "Marc",
                        "middle": [],
                        "last": "Teyssier",
                        "suffix": ""
                    },
                    {
                        "first": "Daphne",
                        "middle": [],
                        "last": "Koller",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proc. of Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Vickrey, Luke Biewald, Marc Teyssier, and Daphne Koller. 2005. Word sense disambiguation vs. statistical machine translation. In Proc. of Conference on Empirical Methods in Natural Language Processing. Vancouver, Canada.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Some experiments with a Naive Bayes WSD system",
                "authors": [
                    {
                        "first": "Deniz",
                        "middle": [],
                        "last": "Yuret",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of ACL/SIGLEX Senseval-3",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Deniz Yuret. 2004. Some experiments with a Naive Bayes WSD system. In Proc. of ACL/SIGLEX Senseval-3. Barcelona, Spain.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "Empirical and theoretical comparisons of selected criterion functions for document clustering",
                "authors": [
                    {
                        "first": "Ying",
                        "middle": [],
                        "last": "Zhao",
                        "suffix": ""
                    },
                    {
                        "first": "George",
                        "middle": [],
                        "last": "Karypis",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Machine Learning",
                "volume": "",
                "issue": "3",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ying Zhao and George Karypis. 2004. Empirical and theoretical comparisons of selected criterion functions for document clustering. Machine Learning, 55(3).",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF0": {
                "text": "The sense inventory of race#n in WordNet and ODE (definitions are abridged, bullets (\u2022) indicate a subsense in the ODE, arrows (\u2192) indicate hypernymy, DOMAIN LABELS are in small caps).",
                "html": null,
                "content": "<table/>",
                "num": null,
                "type_str": "table"
            },
            "TABREF1": {
                "text": "Performance of the lexical and semantic mapping functions.",
                "html": null,
                "content": "<table><tr><td>Func.</td><td>Prec.</td><td>Recall</td><td>F1</td><td>Acc.</td></tr><tr><td>Lesk</td><td colspan=\"4\">84.74% 65.43% 73.84% 66.08%</td></tr><tr><td>SSI</td><td colspan=\"4\">86.87% 79.67% 83.11% 77.94%</td></tr></table>",
                "num": null,
                "type_str": "table"
            },
            "TABREF2": {
                "text": "Comparison with gold standards.",
                "html": null,
                "content": "<table><tr><td colspan=\"4\">Gold standard Method Entropy Purity</td></tr><tr><td>ODE</td><td>Lesk SSI</td><td>0.15 0.11</td><td>0.87 0.87</td></tr><tr><td/><td>Baseline</td><td>0.28</td><td>0.67</td></tr><tr><td>Senseval</td><td>Lesk SSI</td><td>0.17 0.16</td><td>0.71 0.69</td></tr><tr><td/><td>Baseline</td><td>0.27</td><td>0.57</td></tr></table>",
                "num": null,
                "type_str": "table"
            },
            "TABREF3": {
                "text": "Performance of WSD systems at Senseval-3 on coarse-grained sense inventories.",
                "html": null,
                "content": "<table><tr><td>System</td><td>Prec. Rec.</td><td>F1</td><td>F1 fine</td></tr><tr><td>Gambl</td><td colspan=\"3\">0.779 0.779 0.779 0.652</td></tr><tr><td colspan=\"4\">SenseLearner 0.769 0.769 0.769 0.646</td></tr><tr><td>KOC Univ.</td><td colspan=\"3\">0.768 0.768 0.768 0.641</td></tr><tr><td>SSI</td><td colspan=\"3\">0.758 0.758 0.758 0.612</td></tr><tr><td>IRST-DDD</td><td colspan=\"3\">0.721 0.719 0.720 0.583</td></tr><tr><td>FS baseline</td><td colspan=\"3\">0.769 0.769 0.769 0.624</td></tr><tr><td>Random BL</td><td colspan=\"3\">0.497 0.497 0.497 0.340</td></tr></table>",
                "num": null,
                "type_str": "table"
            },
            "TABREF4": {
                "text": "Performance of SSI on coarse inventories (SSI * uses a coarse-grained knowledge base).",
                "html": null,
                "content": "<table><tr><td>System</td><td colspan=\"2\">Prec. Recall F1</td></tr><tr><td colspan=\"2\">SSI + baseline 0.758 0.758</td><td>0.758</td></tr><tr><td>SSI</td><td>0.717 0.576</td><td>0.639</td></tr><tr><td>SSI  *</td><td>0.748 0.674</td><td>0.709</td></tr></table>",
                "num": null,
                "type_str": "table"
            }
        }
    }
}