{
    "paper_id": "U07-1018",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T03:08:49.875592Z"
    },
    "title": "Dictionary Alignment for Context-sensitive Word Glossing",
    "authors": [
        {
            "first": "Willy",
            "middle": [],
            "last": "Yap",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Melbourne",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Timothy",
            "middle": [],
            "last": "Baldwin",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Melbourne",
                "location": {}
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper proposes a method for automatically sense-to-sense aligning dictionaries in different languages (focusing on Japanese and English), based on structural data in the respective dictionaries. The basis of the proposed method is sentence similarity of the sense definition sentences, using a bilingual Japanese-to-English dictionary as a pivot during the alignment process. We experiment with various embellishments to the basic method, including term weighting, stemming/lemmatisation, and ontology expansion.",
    "pdf_parse": {
        "paper_id": "U07-1018",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper proposes a method for automatically sense-to-sense aligning dictionaries in different languages (focusing on Japanese and English), based on structural data in the respective dictionaries. The basis of the proposed method is sentence similarity of the sense definition sentences, using a bilingual Japanese-to-English dictionary as a pivot during the alignment process. We experiment with various embellishments to the basic method, including term weighting, stemming/lemmatisation, and ontology expansion.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "In a multi-lingual environment such as the Internet, users often stumble across webpages authored in an unfamiliar language which potentially contain information of interest. While users can consult dictionaries to help them understand the content of the webpages, the process of looking up words in unfamiliar languages is at best time-consuming, and at worst impossible due to a range of reasons. First, the writing system of the language may be unfamiliar to the user, e.g. the Cyrillic alphabet for a monolingual English speaker. Second, the user may not be familiar with the non-segmenting nature of languages such as Chinese and Japanese, and hence be incapable of delimiting the words to look up in the dictionary in the first place. Third, the user may be unable to lemmatise the word to determine the form in which it is listed in a dictionary.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "There are several alternatives to help decipher webpages in unfamiliar languages. The first one is to use an online machine translation system such as Altavista's Babel Fish 1 or Google Translate. 2 Figure 1 : Multiple translations for the Japanese word",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 199,
                        "end": 207,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "[ageru] produced by rikai.com. The correct translation in this context is \"to raise\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "While web-based machine translation services occasionally produce good translations for linguisticallysimilar languages such as English and French, they do not perform very well in translating languages which are removed from one another (Koehn, 2005) .",
                "cite_spans": [
                    {
                        "start": 238,
                        "end": 251,
                        "text": "(Koehn, 2005)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The second alternative is a pop-up glossing application. The application takes raw text or a URL, parses the words, and returns the pop-up translation of each word as the mouse hovers over it. Some example pop-up glossing applications for Japanese source text and English glosses are Rikai 3 and POPjisyo. 4 With the aid of these pop-up translations, the manual effort of segmenting words (if necessary) and looking up each can be avoided. This application is also useful as an educational aid for learners of that language.",
                "cite_spans": [
                    {
                        "start": 306,
                        "end": 307,
                        "text": "4",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The drawback with these applications is they display all possible translations of a given word irrespective of context. Faced with the task of determining the correct translation themselves, users frequently misinterpret words. An illustration of this situation is given in Figure 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 274,
                        "end": 282,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We propose a context-sensitive dictionary glossing application to enhance the utility of on-line glossing applications by sensitising the presented glosses to the context of use. The proposed method works by combining a monolingual word sense disambiguation (WSD) system (Baldwin et al., to appear) with an automatically induced cross-lingual sense alignment table. Based on the prediction(s) of the WSD system, our application presents the corresponding set of context-sensitive glosses to the user dictionary glossing by analysing the output of the alignment process.",
                "cite_spans": [
                    {
                        "start": 271,
                        "end": 298,
                        "text": "(Baldwin et al., to appear)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "This paper focuses on the cross-lingual sense alignment aspect of the application. We take separate sense inventories for two distinct languages (Japanese and English in our case) and align the senses between the two. The basis of the alignment process is overlap in sense definitions. By adjusting a threshold for the required level of match, we are able to adjust the precision and recall of the alignment. In preliminary experimentation, we achieve promising results.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The remainder of this paper is structured as follows. We review previous research on dictionary alignment in Section 2, and outline the various resources we utilise during the alignment process in Section 3. We then describe the proposed basic sense-to-sense alignment method, along with various enhancements (Section 4), and present our experimental method and the results of our experiments (Sections 5 and 6, respectively). Finally we discuss our results and future research in Section 7.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "There has been a significant amount of research on bilingual dictionary alignment using a third language as a pivot. For example, built Japanese-French and Japanese-Korean dictionaries using English as the pivot language. In other research, used English and Chinese as pivots to generate a Korean-Japanese dictionary: English because of the accessibility of Korean-English and Japanese-English dictionaries, and Chinese because of the high overlap in orthography between Korean and Japanese, based on Chinese hanzi.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Previous Research",
                "sec_num": "2"
            },
            {
                "text": "There have been numerous attempts to manually develop multilingual resources that include crosslingual sense alignments (Vossen, 1998; Stamou et al., 2002) , and the import of cross-lingual semantic alignment has been ably demonstrated by the high impact of these resources. Due to the high overhead in manually constructing such resources, there have been various attempts at automatic crosslingual sense alignment. The methods are predominantly corpus-driven, based either on cross-lingual distributional similarity in a comparable corpus (e.g. Ngai et al. (2002) ) or word alignment over a parallel corpus (e.g. Gliozzo et al. (2005) ).",
                "cite_spans": [
                    {
                        "start": 120,
                        "end": 134,
                        "text": "(Vossen, 1998;",
                        "ref_id": null
                    },
                    {
                        "start": 135,
                        "end": 155,
                        "text": "Stamou et al., 2002)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 547,
                        "end": 565,
                        "text": "Ngai et al. (2002)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 615,
                        "end": 636,
                        "text": "Gliozzo et al. (2005)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Previous Research",
                "sec_num": "2"
            },
            {
                "text": "There is a lesser amount of research on crosslingually aligning ontologies without using largescale corpus data, which we discuss in greater detail as it is more closely related to that proposed in this research. Asanoma (2001) aligned the Japanese Goi-Taikei ontology with WordNet by first translating a significant subset of the WordNet synonym sets (synsets) into Japanese, automatically matching these based on (monolingual Japanese) lexical overlap, and \"filling in the gaps\" for the remaining classes based on their hierarchical positioning relative to the aligned classes. Knight and Luk (1994) aligned Spanish and English senses based on: (1) overlap in sets of translations corresponding to each sense of a given Spanish word, with synsets in Word-Net; and (2) domain codes in the Spanish and English ontologies. They additionally aligned monolingual English dictionaries based on overlap in the definitions of each sense. The former cross-lingual case assumes a sense-discriminated bilingual dictionary, which we do not have access to. The latter case is similar to our research in that it compares definition sentences, but differs in that the definitions are in the same language. The most closely related work to our research is that of Nichols et al. (2005) , who aligned Lexeed senses with WordNet synsets as a by-product of the Lexeed ontology induction task (see Section 3.1), although they do not provide an explicit evaluation of the Lexeed-WordNet alignment for direct comparison.",
                "cite_spans": [
                    {
                        "start": 213,
                        "end": 227,
                        "text": "Asanoma (2001)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 580,
                        "end": 601,
                        "text": "Knight and Luk (1994)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 1250,
                        "end": 1271,
                        "text": "Nichols et al. (2005)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Previous Research",
                "sec_num": "2"
            },
            {
                "text": "[Figure 2: Example Lexeed entry for the word ryuu (POS: noun). Sense 1 (Lexical-type: noun-lex): \"An imaginary animal. Dragons are like enormous snakes with 4 legs and horns. Dragons live in the sea, lakes and ponds, and are said to form clouds and cause rain when they fly up into the sky.\" Hypernym: ANIMAL. Senses 2-4 omitted. Sense 5 (Lexical-type: noun-lex): \"In shogi, a promoted rook.\" Domain: SHOGI.]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Previous Research",
                "sec_num": "2"
            },
            {
                "text": "The Lexeed Semantic Database of Japanese is a machine-readable dictionary consisting of the most commonly-used words in Japanese (Kasahara et al., 2004) . In total, there are 28,000 words in Lexeed, and a total of 46,437 senses. Associated with each sense is a set of definition sentences, constructed entirely using the closed vocabulary of the 28,000 words found in Lexeed, such that 60% of the 28,000 words occur in the definition sentences (Tanaka et al., 2006) . In addition to the definition sentences, Lexeed also contains part of speech (POS), lexical relations between the senses (if any) and an example sentence, also based on the closed vocabulary of 28,000 words. All content words in the definition and example sentences are sense annotated.",
                "cite_spans": [
                    {
                        "start": 129,
                        "end": 152,
                        "text": "(Kasahara et al., 2004)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 444,
                        "end": 465,
                        "text": "(Tanaka et al., 2006)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Lexeed semantic database of Japanese",
                "sec_num": "3.1"
            },
            {
                "text": "Automatic ontology acquisition methods have been applied to Lexeed to induce lexical relations between sense pairs, based on the sense-annotated definition sentences (Nichols et al., 2005) and comparison with both the Goi-Taikei thesaurus and WordNet 2.0.",
                "cite_spans": [
                    {
                        "start": 166,
                        "end": 188,
                        "text": "(Nichols et al., 2005)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Lexeed semantic database of Japanese",
                "sec_num": "3.1"
            },
            {
                "text": "An example Lexeed entry for the word ryuu is given in Figure 2 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 54,
                        "end": 62,
                        "text": "Figure 2",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "The Lexeed semantic database of Japanese",
                "sec_num": "3.1"
            },
            {
                "text": "EDICT is a free machine-readable Japanese-to-English dictionary (Breen, 1995) . The project is highly active and has been extended to other target languages such as German, French and Russian.",
                "cite_spans": [
                    {
                        "start": 64,
                        "end": 77,
                        "text": "(Breen, 1995)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EDICT",
                "sec_num": "3.2"
            },
            {
                "text": "EDICT contains more than 170,000 Japanese entries, each of which is associated with one or more English glosses. It also optionally contains information such as the pronunciation of the entry, POS, and domain of application.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EDICT",
                "sec_num": "3.2"
            },
            {
                "text": "WordNet is an electronic semantic lexical database of English (Fellbaum, 1998) . It is made up of more than 100,000 synsets, with each synset representing a group of synonyms. Its entries are categorised into four POS categories: nouns, verbs, adjectives and adverbs. Each POS is described in a discrete lexical network.",
                "cite_spans": [
                    {
                        "start": 62,
                        "end": 78,
                        "text": "(Fellbaum, 1998)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WordNet",
                "sec_num": "3.3"
            },
            {
                "text": "Every synset in WordNet has a definition sentence, and sample sentence(s) are provided for most of the synsets; in combination, these are termed the WordNet gloss. Semantic relations connect one synset to another, and include relation types such as hypernym, hyponymy, antonymy and meronymy. The majority of these relations do not cross POS boundaries.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "WordNet",
                "sec_num": "3.3"
            },
            {
                "text": "Since we only experiment with hypernyms (and, symmetrically, hyponyms), we provide a simple review of this relation. A synset A is a hypernym of a synset B iff B is a kind of A. For example, vehicle is a hypernym of car, while perceive is a hypernym of hear, sight, touch, smell, taste. 5 Figure 3 : Example of normalisation of the translation string; we stop at \"rook\" as WordNet has a matching entry for it When building the baseline for our evaluation, we used the SemCor corpus-a subset of the Brown corpus annotated with WordNet senses-to derive the frequency counts of each WordNet synset (Landes et al., 1998) . Section 5 discusses this process in more detail.",
                "cite_spans": [
                    {
                        "start": 595,
                        "end": 616,
                        "text": "(Landes et al., 1998)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 289,
                        "end": 297,
                        "text": "Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "WordNet",
                "sec_num": "3.3"
            },
            {
                "text": "Our basic alignment method, along with various extensions, is outlined below.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Proposed Methods",
                "sec_num": "4"
            },
            {
                "text": "In this paper, we align a semantic database of Japanese (Lexeed) with a semantic network of English (WordNet) at the sense level. First, we use Lexeed to find all possible senses of a given word, and retrieve the definition sentences for each. Since all the definition sentences are in Japanese, we use EDICT as a pivot to convert Lexeed definition sentences into English. In this process, all possible translations of all Japanese words found in the definition sentences are returned, along with their POS classes. For every translation returned, we find entries in WordNet that match the translation and POS category. If there is no match for the given POS, we relax this constraint and search for entries in WordNet that match the translation but not the POS.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic alignment method using cosine similarity",
                "sec_num": "4.1"
            },
            {
                "text": "Problems arise when WordNet does not have a matching entry for the translation. This situation doesn't distinguish between hyponyms and troponyms, however, we treat the two identically.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic alignment method using cosine similarity",
                "sec_num": "4.1"
            },
            {
                "text": "usually happens when the translation returned by EDICT is comprised of more than one English word. For a Japanese verb, e.g., the English translation in EDICT almost always begins with the auxiliary to (e.g. nomu is translated as to drink). WordNet does not contain a verbal entry for to drink, but does contain an entry for drink. To handle this case of partial match, we locate the longest right word substring of the EDICT translation which is indexed in WordNet.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic alignment method using cosine similarity",
                "sec_num": "4.1"
            },
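The longest-right-word-substring heuristic can be sketched as follows; `in_wordnet` is a hypothetical predicate standing in for a real WordNet index lookup, and the toy `index` is an assumption for illustration only:

```python
def longest_right_substring(translation, in_wordnet):
    """Return the longest right word substring of an EDICT
    translation that is indexed in WordNet, or None."""
    words = translation.split()
    # Try progressively shorter suffixes: "to drink" -> "drink"
    for i in range(len(words)):
        candidate = " ".join(words[i:])
        if in_wordnet(candidate):
            return candidate
    return None

# Toy index standing in for WordNet's entry list (assumption).
index = {"drink", "dragon", "volcano"}
print(longest_right_substring("to drink", lambda w: w in index))  # drink
```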
            {
                "text": "A related problem is when the translation contains domain or collocational information in parentheses. For example, ryuu is translated as both dragon and promoted rook (shogi). The first translation has a matching entry in WordNet but the second translation does not. In this second case, there is no right word substring which matches in WordNet, as we end up with rook (shogi) and then (shogi), neither of which is contained in WordNet. In order to deal with this situation, we first normalise the translation strings by removing all the brackets and query WordNet with the normalised string. Should there be a matching entry, we stop here. If not, we then remove all strings between brackets, and apply the longest right word substring heuristic as above. An illustration of this process is given in Figure 3 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 803,
                        "end": 811,
                        "text": "Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Basic alignment method using cosine similarity",
                "sec_num": "4.1"
            },
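The two-stage normalisation described above (first strip the bracket characters and query WordNet, then drop the bracketed material entirely and back off to the longest right word substring) might look like this sketch, where the toy `index` stands in for WordNet:

```python
import re

def normalise_and_match(translation, in_wordnet):
    """Two-stage normalisation of an EDICT translation (sketch).
    1. Strip the bracket characters and query WordNet directly.
    2. Drop bracketed material and fall back to the longest
       right word substring indexed in WordNet."""
    no_brackets = re.sub(r"[()]", "", translation).strip()
    if in_wordnet(no_brackets):
        return no_brackets
    stripped = re.sub(r"\(.*?\)", "", translation).strip()
    words = stripped.split()
    for i in range(len(words)):
        candidate = " ".join(words[i:])
        if in_wordnet(candidate):
            return candidate
    return None  # worst case: discard the translation

index = {"dragon", "rook"}
print(normalise_and_match("promoted rook (shogi)", lambda w: w in index))  # rook
```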
            {
                "text": "In the worst case of WordNet not having a matching entry for any right word substring, we discard the translation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Basic alignment method using cosine similarity",
                "sec_num": "4.1"
            },
            {
                "text": "At this point, we have aligned a given Japanese word with (hopefully) one or more English words, but are still no closer to inducing sense alignment pairs. In order to produce the sense alignments, we generate all pairings of Lexeed senses with WordNet synsets for each WordNet-matched word translation. For each such pair, we compile out the Lexeed definition sentence(s) word-translated into English, and the WordNet glosses, and convert each into a simple vector of term frequencies. We then measure the similarity of each vector pair using cosine similarity. An overview of this alignment process is presented in Figure 4 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 617,
                        "end": 625,
                        "text": "Figure 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Basic alignment method using cosine similarity",
                "sec_num": "4.1"
            },
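The vector comparison step can be sketched as below, assuming the simple term-frequency vectors described; the example glosses echo the kinou/yesterday discussion but are illustrative only:

```python
import math
from collections import Counter

def cosine(a_tokens, b_tokens):
    """Cosine similarity between two simple term-frequency vectors."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

lexeed_gloss = "the recent past near past".split()
wordnet_gloss = "the recent past".split()
print(round(cosine(lexeed_gloss, wordnet_gloss), 3))  # 0.873
```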
            {
                "text": "The basic alignment method does not use any form of term weighting, and thus overemphasises common function words such as the, which and and, and downplays the impact of rare words. As we expect to have a large amount of noise in the word- Figure 4 : Overview of the Lexeed-WordNet sense alignment method translated Lexeed definition sentences, including spurious translations for Japanese function words such as ka, ga and no that have no literal translation in English, we predict that an appropriate form of term weighting should improve the performance of our method.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 240,
                        "end": 248,
                        "text": "Figure 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Weighting terms using TF-IDF mechanism",
                "sec_num": "4.2"
            },
            {
                "text": "As a first attempt at term weighting, we experimented with the classic SMART formulation of TF-IDF (Salton, 1971) , treating the vector associated with each definition sentence as a single document.",
                "cite_spans": [
                    {
                        "start": 99,
                        "end": 113,
                        "text": "(Salton, 1971)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Weighting terms using TF-IDF mechanism",
                "sec_num": "4.2"
            },
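The effect of such weighting can be illustrated with a generic TF-IDF sketch (raw term frequency times log inverse document frequency); this is an assumption for illustration, not necessarily the exact SMART variant the authors used:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Weight raw term frequencies by inverse document frequency,
    treating each definition sentence as one document (generic
    TF-IDF sketch, not the precise SMART formulation)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    return [
        {t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
        for doc in docs
    ]

docs = [
    "the act of washing your hair".split(),
    "detergent used to wash hair or fur".split(),
    "the recent past".split(),
]
vecs = tfidf_vectors(docs)
# "hair" occurs in two of the three documents, so it is down-weighted
# relative to "detergent", which occurs in only one.
```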
            {
                "text": "As mentioned in the previous section, commonlyoccurring semantically-bleached words are a source of noise in the naive cosine similarity scoring method. One conventional way of countering their impact is to filter them out of the vectors, based on a stop word list. For our experiments, we use the stop word list provided by the Snowball project. 6",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word stopping",
                "sec_num": "4.3"
            },
            {
                "text": "Another source of possible noise is the translations of Japanese function words. As all the Lexeed definition sentences are POS tagged, it is a relatively simple process to filter out all Japanese function words, focusing on prefixes, suffixes and particles.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "POS filtering",
                "sec_num": "4.4"
            },
            {
                "text": "In its basic form, our vector space model treats distinct word as a unique term, including ignoring the 6 http://snowball.tartarus.org/ obvious similarity between inflectional variants of the same word, such as dragon and dragons. To remove such inflectional variation, we experiment with lemmatising all words found in both the Lexeed and WordNet vectors, using morph (Minnen et al., 2001 ). For similar reasons, we also experiment with the Porter stemmer, noting that stemming will further reduce the set of terms but potential introduce spurious matches. As part of this process (with both lemmatisation and stemming), we remove all punctuation from the definition sentences.",
                "cite_spans": [
                    {
                        "start": 369,
                        "end": 389,
                        "text": "(Minnen et al., 2001",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Lemmatisation, stemming and normalisation",
                "sec_num": "4.5"
            },
            {
                "text": "Both the Lexeed and WordNet sense inventories are described in the form of hierarchies, making it possible to complement the sense definitions with those from neighbouring senses. The intuition behind this is that the sense granularity in the two sense inventories can vary greatly, such that a single sense in Lexeed is split across multiple WordNet synsets, which we can readily uncover by considering each sense as not a single point in WordNet but a semantic neighbourhood. For example, the second sense of the word kinou in Figure 5 , which literally means \"near past\", should be aligned with the second sense of yesterday, which is defined as \"the recent past\". This alignment is more self-evident, however, when we observe that the hypernym of each of the two senses is defined as \"past\".",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 529,
                        "end": 537,
                        "text": "Figure 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Lexical relations",
                "sec_num": "4.6"
            },
            {
                "text": "In our current experiments, we only look at the utility of hypernymy. For a given sense Lexeed- Figure 5 : The output of word-translating Japanese definition sentences to English WordNet sense pairing, we extract the hypernyms of the respective senses and expand the definition sentences with the definition sentences from the hypernyms. The term vectors are then based on this expanded term set, similar to query expansion in information retrieval.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 96,
                        "end": 104,
                        "text": "Figure 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Lexical relations",
                "sec_num": "4.6"
            },
            {
                "text": "To evaluate the performance of our system, we randomly selected 100 words from Lexeed, extracted out the Lexeed-WordNet sense pairings as described above, and manually selected the goldstandard alignments from amongst them. The 100 words were associated with a total of 268 Lexeed senses and 772 WordNet senses, creating a total of 206,896 possible alignment pairs. Of these, 259 alignments were selected as our gold-standard.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Gold-standard data",
                "sec_num": "5.1"
            },
            {
                "text": "We encountered a number of partial matches that were caused by the Japanese word being more specific than its English counterparts (as identified by our WordNet matching method). For example, kakkazan is translated as \"active volcano\". Since WordNet does not have any entry for active volcano, the longest right word substring that matches in WordNet is simply volcano. The definition sentences returned by Lexeed describe kakkazan as \"a volcano which still can erupt\" and \"a volcano that will soon erupt\", while volcano is described as \"a fissure in the earth's crust (or in the surface of some other planet) through which molten lava and gases erupt\" and \"a mountain formed by volcanic material\". Although there is some similarity between these definitions (namely key words such as erupt and volcano), we do not include this pairing in our gold-standard alignment data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Gold-standard data",
                "sec_num": "5.1"
            },
            {
                "text": "As a baseline, we take the most-frequent sense of each of the 100 random words from Lexeed, and match it with the synset with the highest SemCor frequency count out of all the candidate synsets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Baseline",
                "sec_num": "5.2"
            },
            {
                "text": "All our calculations are based on cosine similarity, which returns a similarity between 0 and 1, with 1 being an exact match. In its simplest form, we would identify the unique WordNet sense with highest similarity to each Lexeed sense, irrespective of the magnitude of the similarity. This has the dual disadvantage of allowing only one WordNet sense for each Lexeed sense, and potentially forcing alignments to be made on low similarity values. A more reasonable approach is to apply a threshold x, and treat all WordNet senses with similarity greater than x as being aligned with the Lexeed sense. Thresholding also gives us more flexibility in terms of tuning the performance of our method: at higher threshold values, we can hope to increase precision at the expense of recall, and at lower threshold values, we can hope to increase recall at the expense of precision.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Thresholding",
                "sec_num": "5.3"
            },
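The thresholding scheme amounts to a simple one-to-many filter; in this sketch the synset names and similarity scores are hypothetical:

```python
def align_with_threshold(similarities, x):
    """Treat every WordNet sense whose cosine similarity to the
    Lexeed sense exceeds the threshold x as aligned (one-to-many),
    rather than forcing a single best match regardless of magnitude."""
    return [sense for sense, sim in similarities if sim > x]

# Hypothetical similarity scores for one Lexeed sense.
scores = [("synset-a", 0.31), ("synset-b", 0.22), ("synset-c", 0.05)]
print(align_with_threshold(scores, 0.19))  # higher precision, lower recall
print(align_with_threshold(scores, 0.04))  # higher recall, lower precision
```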
            {
                "text": "To evaluate the performance of our system, we use precision, recall and F-score. In an alignment context, precision is defined as the proportion of correct alignments to all alignments returned by the system, and recall is defined as the proportion of the correct alignments returned by our system to all the align- Table 1 : Best system F-score of combination of all features using the basic model vs. the basic model with TF-IDF weighting ments in our gold-standard. F-score is the harmonic mean of precision and recall, and provides a single figure-of-merit rating of the balance between these two factors. We evaluate our system using unbiased (\u03b2 = 1) F-score.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 316,
                        "end": 323,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Evaluation metrics",
                "sec_num": "5.4"
            },
            {
                "text": "Throughout our experimentation, we evaluate relative to the 100 manually sense-aligned Japanese words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "Our baseline method predicts 100 alignments (as it is guaranteed to produce a unique alignment per source-language word), of which 60 are correct. Hence, the precision is 60 100 = 0.600, the recall is 60 259 = 0.231, and the F-score is 0.334.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
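The baseline figures above (60 correct out of 100 returned alignments, against 259 gold-standard alignments) can be checked against the metric definitions of Section 5.4 with a short sketch:

```python
def precision_recall_f(correct, returned, gold):
    """Precision, recall and unbiased (beta = 1) F-score
    for an alignment evaluation."""
    p = correct / returned
    r = correct / gold
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Baseline from Section 6: 100 alignments returned, 60 correct,
# 259 gold-standard alignments.
p, r, f = precision_recall_f(60, 100, 259)
# p = 0.6 and f rounds to 0.334, as reported; recall is 0.2317
# (reported truncated as 0.231).
```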
            {
                "text": "With the basic alignment model, the highest Fscore achieved with thresholding is 0.228 at a threshold value of 0.19, well below the baseline F-score. The recall and precision value at this threshold are 0.263 and 0.202, respectively.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "The basic model with TF-IDF weighting performed considerably better, scoring the highest Fscore of 0.292 (recall = 0.382 and precision = 0.236) at a threshold value of 0.04, but is still well below the baseline F-score. To confirm that TF-IDF term weighting is always beneficial to overall alignment performance, we took the unweighted model combined with each of the proposed extensions, and compared it with the same extension but with the inclusion of TF-IDF (without lexical relations at this point). The result of these experiments can be found in Table 1 . As we can see, TF-IDF weighting constantly improves alignment performance. Also note that, with the exception of simple (punctuation) normalisation, all extensions improve over the basic model both with and without TF-IDF weighting.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 553,
                        "end": 560,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "We extended our experiments by considering all possible combinations of 2 or more proposed extensions (excluding lexical relations for the time being) with TF-IDF weighting. Table 2 . The best result is achieved by combining all the proposed extensions, at an F-score of 0.364, which is significantly above baseline. It is also interesting to see that not all methods are fully complementary. By excluding stemming, e.g., the system actually performs better, producing a higher F-score of 0.372.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 174,
                        "end": 181,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "We then experimented with the addition of lexical relations to the different combinations of extensions explored above. The 5 top-performing combinations are presented in Table 3 . The best F-score of 0.408 is achieved with the combination of all the extensions proposed. When lexical relations are used exclusively or combined with less than three of the proposed extensions, the performance tends to decline.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 171,
                        "end": 178,
                        "text": "Table 3",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "In our best performing combination, we outperformed the baseline F-score by 22%. 349 alignments were returned for this F-score, of which 124 matched the gold-standard. The precision and recall scores are 0.355 and 0.478, respectively.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "We carried out more detailed analysis of the precision-recall trade-off. While we expect the pre- cision to go up to 1 as we increase our threshold, we found out that it is in fact not the case. The precision peaks at 0.625 at a threshold level of 0.265. At this level, there are 10 correct alignments out of 16 alignments returned. Upon investigating the six non-matching entries, we found that they all contain similar words but that the literal meaning of the senses are very different. Below, we present two of the six non-matching entries. The first one relates to a sense of the Japanese word shanpuu \"shampoo\". The definition sentences for this sense found in Lexeed are directly translated as \"shampoo medicine, drug, or dose; detergent or washing material that is used to wash hair or fur\". The corresponding match in WordNet is \"the act of washing your hair with shampoo\". We can see that there are similar terms in the two vectors, such as shampoo, washing and hair, but that the literal meaning of the two senses is quite different.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "6"
            },
            {
                "text": "The second example is very similar to the kakkazan example presented in Section 5. One sense of sengetsu (\"last month\") is defined as \"the previous month\", and is aligned to the WordNet synset of month (WordNet does not have an entry for last month). It does not help that the hypernym of sengetsu is tsuki which translates to \"month\", boosting the similarity of this alignment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Method",
                "sec_num": null
            },
            {
                "text": "In terms of F-score, the best-performing combination of extensions performed better than the baseline. However, the recall seems to be the dominant factor in the F-score calculations for the proposed method. This is in sharp contrast to what we have in our baseline, where precision dominates the F-score calculation. There are several reasons for the baseline scores. First, there are 259 alignments in our gold-standard for 100 random words, corresponding to approximately 2.6 alignment per word. Given how we created our baseline, with one alignment per word, the maximum recall that the baseline can achieve is 100 259 = 0.386. On the other hand, the first-sense basis of the baseline method leads to high precision, largely due to the design process for ontologies and dictionaries. Namely, there is usually good coverage of frequent word senses in ontologies and dictionaries, and additionally, the translations for a given word are generally selected to be highly biased towards common senses (i.e. even if a polysemous word is chosen as a translation, its predominant sense is almost always that which corresponds to the source language word, for obvious accessibility/usability reasons). For this reason, there is a very high probability that these frequent senses for each of the two languages align with each other.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "7"
            },
            {
                "text": "In this paper, we proposed a cross-lingual senseto-sense alignment method, based on similarity of definition sentences as calculated via a bilingual dictionary. We explored various extensions to a simple lexical overlap method, and achieved promising results in preliminary experiments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "7"
            },
            {
                "text": "In future work, we plan to exploit more lexical relations, such as synonymy and hyponymy. We also plan to experiment with weighting up alignments where both the sense pairing and the hypernym pairing match well. Nichols et al. (2005) linked Lexeed senses to WordNet in their evaluation on ontology induction. Comparison with their method would be very interesting and is an area for future research.",
                "cite_spans": [
                    {
                        "start": 212,
                        "end": 233,
                        "text": "Nichols et al. (2005)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "7"
            },
            {
                "text": "http://babelfish.altavista.com/ 2 http://www.google.com/translate t",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "http://www.rikai.com/perl/Home.pl 4 http://www.popjisyo.com Proceedings of the Australasian Language Technology Workshop 2007, pages 125-133",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "ResourcesIn this section, we review the key resources used in this research.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Strictly speaking, hear, etc. are troponyms of perceive, i.e. they denote specific ways of perceiving. Because WordNet",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "We thank Francis Bond for his insights and suggestions for our experiment and Sanae Fujita for her help with the data. We are also grateful to anonymous reviewers for their helpful comments. This research was supported by NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Alignment of ontologies: Word-Net and Goi-Taikei",
                "authors": [
                    {
                        "first": "Naoki",
                        "middle": [],
                        "last": "Asanoma",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proc. of the NAACL 2001 Workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations",
                "volume": "",
                "issue": "",
                "pages": "89--94",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Naoki Asanoma. 2001. Alignment of ontologies: Word- Net and Goi-Taikei. In Proc. of the NAACL 2001 Workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations, pages 89-94, Pittsburgh, USA.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "MRD-based word sense disambiguation: Further #2 extending #1 Lesk",
                "authors": [
                    {
                        "first": "Timothy",
                        "middle": [],
                        "last": "Baldwin",
                        "suffix": ""
                    },
                    {
                        "first": "Su",
                        "middle": [
                            "Nam"
                        ],
                        "last": "Kim",
                        "suffix": ""
                    },
                    {
                        "first": "Francis",
                        "middle": [],
                        "last": "Bond",
                        "suffix": ""
                    },
                    {
                        "first": "Sanae",
                        "middle": [],
                        "last": "Fujita",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Martinez",
                        "suffix": ""
                    },
                    {
                        "first": "Takaaki",
                        "middle": [],
                        "last": "Tanaka",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "Proc. of the Third International Joint Conference on Natural Language Prcoessing (IJCNLP 2008)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Timothy Baldwin, Su Nam Kim, Francis Bond, Sanae Fujita, David Martinez, and Takaaki Tanaka. to appear. MRD-based word sense disambiguation: Further #2 extending #1 Lesk. In Proc. of the Third International Joint Conference on Natural Language Prcoessing (IJCNLP 2008), Hyderabad, India.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Building an electronic Japanese-English dictionary",
                "authors": [
                    {
                        "first": "Jim",
                        "middle": [],
                        "last": "Breen",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Japanese Studies Association of Australia Conference",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jim Breen. 1995. Building an electronic Japanese- English dictionary. In Japanese Studies Association of Australia Conference.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "WordNet: An Electronic Lexical Database",
                "authors": [],
                "year": 1998,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. MIT Press, Cambridge, USA.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Crossing parallel corpora and multilingual lexical databases for WSD",
                "authors": [
                    {
                        "first": "Alfio",
                        "middle": [],
                        "last": "Massimiliano Gliozzo",
                        "suffix": ""
                    },
                    {
                        "first": "Marcello",
                        "middle": [],
                        "last": "Ranieri",
                        "suffix": ""
                    },
                    {
                        "first": "Carlo",
                        "middle": [],
                        "last": "Strapparava",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proc. of the 6th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2005)",
                "volume": "",
                "issue": "",
                "pages": "242--247",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alfio Massimiliano Gliozzo, Marcello Ranieri, and Carlo Strapparava. 2005. Crossing parallel corpora and multilingual lexical databases for WSD. In Proc. of the 6th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing- 2005), pages 242-5, Mexico City, Mexico.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Construction of a Japanese semantic lexicon: Lexeed",
                "authors": [
                    {
                        "first": "Kaname",
                        "middle": [],
                        "last": "Kasahara",
                        "suffix": ""
                    },
                    {
                        "first": "Hiroshi",
                        "middle": [],
                        "last": "Sato",
                        "suffix": ""
                    },
                    {
                        "first": "Francis",
                        "middle": [],
                        "last": "Bond",
                        "suffix": ""
                    },
                    {
                        "first": "Takaaki",
                        "middle": [],
                        "last": "Tanaka",
                        "suffix": ""
                    },
                    {
                        "first": "Sanae",
                        "middle": [],
                        "last": "Fujita",
                        "suffix": ""
                    },
                    {
                        "first": "Tomoko",
                        "middle": [],
                        "last": "Kasunagi",
                        "suffix": ""
                    },
                    {
                        "first": "Shigeaki",
                        "middle": [],
                        "last": "Amano",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of SIG NLC-159",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kaname Kasahara, Hiroshi Sato, Francis Bond, Takaaki Tanaka, Sanae Fujita, Tomoko Kasunagi, and Shigeaki Amano. 2004. Construction of a Japanese seman- tic lexicon: Lexeed. In Proc. of SIG NLC-159, IPSJ, Tokyo, Japan.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Building a largescale knowledge base for machine translation",
                "authors": [
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    },
                    {
                        "first": "Steve",
                        "middle": [
                            "K"
                        ],
                        "last": "Luk",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Proc. of the 12th Annual Conference on Artificial Intelligence (AAAI-94)",
                "volume": "",
                "issue": "",
                "pages": "773--781",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kevin Knight and Steve K. Luk. 1994. Building a large- scale knowledge base for machine translation. In Proc. of the 12th Annual Conference on Artificial Intelli- gence (AAAI-94), pages 773-8, Seattle, USA.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Europarl: A parallel corpus for statistical machine translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proc. of the Tenth Machine Translation Summit",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. of the Tenth Machine Translation Summit (MT Summit X), Phuket, Thailand.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Building semantic concordances",
                "authors": [
                    {
                        "first": "Shari",
                        "middle": [],
                        "last": "Landes",
                        "suffix": ""
                    },
                    {
                        "first": "Claudia",
                        "middle": [],
                        "last": "Leacock",
                        "suffix": ""
                    },
                    {
                        "first": "Randes",
                        "middle": [],
                        "last": "Tengi",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "WordNet: An Electronic Lexical Database",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shari Landes, Claudia Leacock, and Randes Tengi. 1998. Building semantic concordances. In Chris- tiane Fellbaum, editor, WordNet: An Electronic Lex- ical Database. MIT Press, Cambridge, USA.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Applied morphological processing of English",
                "authors": [
                    {
                        "first": "Guido",
                        "middle": [],
                        "last": "Minnen",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Carroll",
                        "suffix": ""
                    },
                    {
                        "first": "Darren",
                        "middle": [],
                        "last": "Pearce",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Natural Language Engineering",
                "volume": "7",
                "issue": "3",
                "pages": "207--230",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Guido Minnen, John Carroll, and Darren Pearce. 2001. Applied morphological processing of English. Natu- ral Language Engineering, 7(3):207-23.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Identifying concepts across languages: A first step towards a corpus-based approach to automatic ontology alignment",
                "authors": [
                    {
                        "first": "Grace",
                        "middle": [],
                        "last": "Ngai",
                        "suffix": ""
                    },
                    {
                        "first": "Marine",
                        "middle": [],
                        "last": "Carpuat",
                        "suffix": ""
                    },
                    {
                        "first": "Pascale",
                        "middle": [],
                        "last": "Fung",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proc. of the 19th International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Grace Ngai, Marine Carpuat, and Pascale Fung. 2002. Identifying concepts across languages: A first step to- wards a corpus-based approach to automatic ontology alignment. In Proc. of the 19th International Confer- ence on Computational Linguistics (COLING 2002), Taipei, Taiwan.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Robust ontology acquisition from machine-readable dictionaries",
                "authors": [
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Nichols",
                        "suffix": ""
                    },
                    {
                        "first": "Francis",
                        "middle": [],
                        "last": "Bond",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Flickinger",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proc. of the 19th International Join Conference on Artificial Intelligence (IJCAI-2005)",
                "volume": "",
                "issue": "",
                "pages": "1111--1117",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eric Nichols, Francis Bond, and Daniel Flickinger. 2005. Robust ontology acquisition from machine-readable dictionaries. In Proc. of the 19th International Join Conference on Artificial Intelligence (IJCAI-2005), pages 1111-6, Edinburgh, UK.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Using multiple pivots to align Korean and Japanese lexical resources",
                "authors": [
                    {
                        "first": "Kyonghee",
                        "middle": [],
                        "last": "Paik",
                        "suffix": ""
                    },
                    {
                        "first": "Francis",
                        "middle": [],
                        "last": "Bond",
                        "suffix": ""
                    },
                    {
                        "first": "Shirai",
                        "middle": [],
                        "last": "Satoshi",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proc. of the NLPRS-2001 Workshop on Language Resources in Asia",
                "volume": "",
                "issue": "",
                "pages": "63--70",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kyonghee Paik, Francis Bond, and Shirai Satoshi. 2001. Using multiple pivots to align Korean and Japanese lexical resources. In Proc. of the NLPRS-2001 Work- shop on Language Resources in Asia, pages 63-70, Tokyo.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "The SMART Retrieval System: Experiments in Automatic Document Processing",
                "authors": [
                    {
                        "first": "Gerald",
                        "middle": [],
                        "last": "Salton",
                        "suffix": ""
                    }
                ],
                "year": 1971,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gerald Salton. 1971. The SMART Retrieval Sys- tem: Experiments in Automatic Document Processing. Prentice-Hall.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Overlapping constraints of two step selection to generate a transfer dictionary",
                "authors": [
                    {
                        "first": "Satoshi",
                        "middle": [],
                        "last": "Shirai",
                        "suffix": ""
                    },
                    {
                        "first": "Kazuhide",
                        "middle": [],
                        "last": "Yamamoto",
                        "suffix": ""
                    },
                    {
                        "first": "Kyonghee",
                        "middle": [],
                        "last": "Paik",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "ICSP-2001",
                "volume": "",
                "issue": "",
                "pages": "731--736",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Satoshi Shirai, Kazuhide Yamamoto, and Kyonghee Paik. 2001. Overlapping constraints of two step selection to generate a transfer dictionary. In ICSP-2001, pages 731-736.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "BALKANET: A multilingual semantic network for the Balkan languages",
                "authors": [
                    {
                        "first": "Sofia",
                        "middle": [],
                        "last": "Stamou",
                        "suffix": ""
                    },
                    {
                        "first": "Kemal",
                        "middle": [],
                        "last": "Oflazer",
                        "suffix": ""
                    },
                    {
                        "first": "Karel",
                        "middle": [],
                        "last": "Pala",
                        "suffix": ""
                    },
                    {
                        "first": "Dimitris",
                        "middle": [],
                        "last": "Christoudoulakis",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Cristea",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Tufi\u015f",
                        "suffix": ""
                    },
                    {
                        "first": "Svetla",
                        "middle": [],
                        "last": "Koeva",
                        "suffix": ""
                    },
                    {
                        "first": "George",
                        "middle": [],
                        "last": "Totkov",
                        "suffix": ""
                    },
                    {
                        "first": "Dominique",
                        "middle": [],
                        "last": "Dutoit",
                        "suffix": ""
                    },
                    {
                        "first": "Maria",
                        "middle": [],
                        "last": "Grigoriadou",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proc. of the International Wordnet Conference",
                "volume": "",
                "issue": "",
                "pages": "12--16",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sofia Stamou, Kemal Oflazer, Karel Pala, Dimitris Chris- toudoulakis, Dan Cristea, Dan Tufi\u015f, Svetla Koeva, George Totkov, Dominique Dutoit, and Maria Grigo- riadou. 2002. BALKANET: A multilingual semantic network for the Balkan languages. In Proc. of the In- ternational Wordnet Conference, pages 12-4, Mysore, India.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "The hinoki sensebank -a large-scale word sense tagged corpus of Japanese",
                "authors": [
                    {
                        "first": "Takaaki",
                        "middle": [],
                        "last": "Tanaka",
                        "suffix": ""
                    },
                    {
                        "first": "Francis",
                        "middle": [],
                        "last": "Bond",
                        "suffix": ""
                    },
                    {
                        "first": "Sanae",
                        "middle": [],
                        "last": "Fujita",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora",
                "volume": "",
                "issue": "",
                "pages": "62--71",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Takaaki Tanaka, Francis Bond, and Sanae Fujita. 2006. The hinoki sensebank -a large-scale word sense tagged corpus of Japanese -. In Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006, pages 62-9, Sydney, Australia.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "EuroWordNet: A Multilingual Database with Lexical Semantic Networks",
                "authors": [],
                "year": 1998,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Piek Vossen, editor. 1998. EuroWordNet: A Multilingual Database with Lexical Semantic Networks. Kluwer Academic Publishers, Dordrecht, Netherlands.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "A partial view of the Lexeed entry for [ryuu] (with English glosses)",
                "num": null,
                "type_str": "figure",
                "uris": null
            },
            "FIGREF1": {
                "text": "2: Top-5 combinations of extensions, excluding lexical relations (WS = Word stopping, PF = POS filtering, L = Lemmatisation, S = Stemming, N = Normalisation) periment is to investigate whether the proposed extensions are complementary in improving alignment performance. The 5 top-performing combinations are presented in",
                "num": null,
                "type_str": "figure",
                "uris": null
            },
            "TABREF2": {
                "num": null,
                "text": "",
                "html": null,
                "type_str": "table",
                "content": "<table/>"
            },
            "TABREF4": {
                "num": null,
                "text": "",
                "html": null,
                "type_str": "table",
                "content": "<table><tr><td>: Top-5 performing combinations of exten-</td></tr><tr><td>sions, including lexical relations (WS = stopping, PF</td></tr><tr><td>= POS filtering, L = Lemmatisation, S = Stemming,</td></tr><tr><td>N = Normalisation, H = Hypernym)</td></tr></table>"
            }
        }
    }
}