{
    "paper_id": "P13-1040",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:33:26.005796Z"
    },
    "title": "Extracting bilingual terminologies from comparable corpora",
    "authors": [
        {
            "first": "Ahmet",
            "middle": [],
            "last": "Aker",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Sheffield",
                "location": {}
            },
            "email": "ahmet.aker@sheffield.ac.uk"
        },
        {
            "first": "Monica",
            "middle": [],
            "last": "Paramita",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Sheffield",
                "location": {}
            },
            "email": "m.paramita@sheffield.ac.uk"
        },
        {
            "first": "Robert",
            "middle": [],
            "last": "Gaizauskas",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Sheffield",
                "location": {}
            },
            "email": "r.gaizauskas@sheffield.ac.uk"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "In this paper we present a method for extracting bilingual terminologies from comparable corpora. In our approach we treat bilingual term extraction as a classification problem. For classification we use an SVM binary classifier and training data taken from the EUROVOC thesaurus. We test our approach on a held-out test set from EUROVOC and perform precision, recall and f-measure evaluations for 20 European language pairs. The performance of our classifier reaches the 100% precision level for many language pairs. We also perform manual evaluation on bilingual terms extracted from English-German term-tagged comparable corpora. The results of this manual evaluation showed 60-83% of the term pairs generated are exact translations and over 90% exact or partial translations.",
    "pdf_parse": {
        "paper_id": "P13-1040",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "In this paper we present a method for extracting bilingual terminologies from comparable corpora. In our approach we treat bilingual term extraction as a classification problem. For classification we use an SVM binary classifier and training data taken from the EUROVOC thesaurus. We test our approach on a held-out test set from EUROVOC and perform precision, recall and f-measure evaluations for 20 European language pairs. The performance of our classifier reaches the 100% precision level for many language pairs. We also perform manual evaluation on bilingual terms extracted from English-German term-tagged comparable corpora. The results of this manual evaluation showed 60-83% of the term pairs generated are exact translations and over 90% exact or partial translations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Bilingual terminologies are important for various applications of human language technologies, including cross-language information search and retrieval, statistical machine translation (SMT) in narrow domains and computer-aided assistance to human translators. Automatic construction of bilingual terminology mappings has been investigated in many earlier studies and various methods have been applied to this task. These methods may be distinguished by whether they work on parallel or comparable corpora, by whether they assume monolingual term recognition in source and target languages (what Moore (2003) calls symmetrical approaches) or only in the source (asymmetric approaches), and by the extent to which they rely on linguistic knowledge as opposed to simply statistical techniques.",
                "cite_spans": [
                    {
                        "start": 597,
                        "end": 609,
                        "text": "Moore (2003)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We focus on techniques for bilingual term extraction from comparable corpora -collections of source-target language document pairs that are not direct translations but are topically related. We choose to focus on comparable corpora because for many less widely spoken languages and for technical domains where new terminology is constantly being introduced, parallel corpora are simply not available. Techniques that can exploit such corpora to deliver bilingual terminologies are of significant practical interest in these cases.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The rest of the paper is structured as follows. In Section 2 we outline our method. In Section 3 we review related work on bilingual term extraction. Section 4 describes feature extraction for term pair classification. In Section 5 we present the data used in our evaluations and discuss our results. Section 6 concludes the paper.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The method we present below for bilingual term extraction is a symmetric approach, i.e. it assumes a method exists for monolingual term extraction in both source and target languages. We do not prescribe what a term must be. In particular we do not place any particular syntactic restrictions on what constitutes an allowable term, beyond the requirement that terms must be contiguous sequences of words in both source and target languages.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Method",
                "sec_num": "2"
            },
            {
                "text": "Our method works by first pairing each term extracted from a source language document S with each term extracted from a target language document T aligned with S in the comparable corpus. We then treat term alignment as a binary classification task, i.e. we extract features for each source-target language potential term pair and decide whether to classify the pair as a term equivalent or not. For classification purposes we use an SVM binary classifier. The training data for the classifier is derived from EUROVOC (Steinberger et al., 2002) , a term thesaurus covering the activities of the EU and the European Parliament. We have run our approach on the 21 official EU languages covered by EUROVOC, constructing 20 language pairs with English as the source language. Considering all these languages allows us to directly compare our method's performance on resource-rich (e.g. German, French, Spanish) and under-resourced languages (e.g. Latvian, Bulgarian, Estonian). We perform two different tests. First, we evaluate the performance of the classifier on a held-out term-pair list from EUROVOC using the standard measures of recall, precision and F-measure. We run this evaluation on all 20 language pairs. Secondly, we test the system's performance on obtaining bilingual terms from comparable corpora. This second test simulates the situation of using the term alignment system in a real world scenario. For this evaluation we collected English-German comparable corpora from Wikipedia, performed monolingual term tagging and ran our tool over the term tagged corpora to extract bilingual terms.",
                "cite_spans": [
                    {
                        "start": 518,
                        "end": 544,
                        "text": "(Steinberger et al., 2002)",
                        "ref_id": "BIBREF23"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Method",
                "sec_num": "2"
            },
            {
                "text": "Previous studies have investigated the extraction of bilingual terms from parallel and comparable corpora. For instance, Kupiec (1993) uses statistical techniques and extracts bilingual noun phrases from parallel corpora tagged with terms. Daille et al. (1994) , Fan et al. (2009) and Okita et al. (2010) also apply statistical methods to extract terms/phrases from parallel corpora. In addition to statistical methods Daille et al. use word translation information between two words within the extracted terms as a further indicator of the correct alignment. More recently, Bouamor et al. (2012) use vector space models to align terms. The entries in the vectors are co-occurrence statistics between the terms computed over the entire corpus.",
                "cite_spans": [
                    {
                        "start": 121,
                        "end": 134,
                        "text": "Kupiec (1993)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 240,
                        "end": 260,
                        "text": "Daille et al. (1994)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 263,
                        "end": 280,
                        "text": "Fan et al. (2009)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 285,
                        "end": 304,
                        "text": "Okita et al. (2010)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 575,
                        "end": 596,
                        "text": "Bouamor et al. (2012)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "3"
            },
            {
                "text": "Bilingual term alignment methods that work on comparable corpora use essentially three sorts of information: (1) cognate information, typically estimated using some sort of transliteration similarity measure (2) context congruence, a measure of the extent to which the words that the source term co-occurs with have the same sort of distribution and co-occur with words with the same sort distribution as do those words that co-occur with the candidate term and (3) translation of component words in the term and/or in context words, where some limited dictionary exists. For example, in Rapp (1995) , Fung and McKeown (1997), Morin et. al. (2007) , Cao and Li (2002) and Ismail and Manandhar (2010) the context of text units is used to identify term mappings. Transliteration and cognate-based information is exploited in Al-Onaizan and Knight (2002) , Graehl (1998), Udupa et. al. (2008) and Aswani and Gaizauskas (2010) .",
                "cite_spans": [
                    {
                        "start": 588,
                        "end": 599,
                        "text": "Rapp (1995)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 602,
                        "end": 610,
                        "text": "Fung and",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 611,
                        "end": 647,
                        "text": "McKeown (1997), Morin et. al. (2007)",
                        "ref_id": null
                    },
                    {
                        "start": 650,
                        "end": 667,
                        "text": "Cao and Li (2002)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 672,
                        "end": 699,
                        "text": "Ismail and Manandhar (2010)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 838,
                        "end": 851,
                        "text": "Knight (2002)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 854,
                        "end": 889,
                        "text": "Graehl (1998), Udupa et. al. (2008)",
                        "ref_id": null
                    },
                    {
                        "start": 894,
                        "end": 922,
                        "text": "Aswani and Gaizauskas (2010)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "3"
            },
            {
                "text": "Very few approaches have treated term alignment as a classification problem suitable for machine learning (ML) techniques. So far as we are aware, only Cao and Li (2002) , who treat only base noun phrase (NP) mapping, consider the problem this way. However, it naturally lends itself to being viewed as a classification task, assuming a symmetric approach, since the different information sources mentioned above can be treated as features and each source-target language potential term pairing can be treated as an instance to be fed to a binary classifier which decides whether to align them or not. Our work differs from that of Cao and Li (2002) in several ways. First they consider only terms consisting of noun-noun pairs. Secondly for a given source language term N1, N2, target language candidate terms are proposed by composing all translations (given by a bilingual dictionary) of N1 into the target language with all translations of N2. We remove both these restrictions. By considering all terms proposed by monolingual term extractors we consider terms that are syntactically much richer than noun-noun pairs. In addition, the term pairs we align are not constrained by an assumption that their component words must be translations of each other as found in a particular dictionary resource.",
                "cite_spans": [
                    {
                        "start": 152,
                        "end": 169,
                        "text": "Cao and Li (2002)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 632,
                        "end": 649,
                        "text": "Cao and Li (2002)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "3"
            },
            {
                "text": "To align or map source and target terms we use an SVM binary classifier (Joachims, 2002) with a linear kernel and the trade-off between training error and margin parameter c = 10. Within the classifier we use language dependent and independent features described in the following sections.",
                "cite_spans": [
                    {
                        "start": 72,
                        "end": 88,
                        "text": "(Joachims, 2002)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature extraction",
                "sec_num": "4"
            },
            {
                "text": "The dictionary based features are language dependent and are computed using bilingual dictionaries which are created with GIZA++ (Och and Ney, 2000; Och and Ney, 2003) . The DGT-TM parallel data (Steinberger et al., 2012) was input to GIZA++ to obtain the dictionaries. Dictionary entries have the form s, t_i, p_i, where s is a source word, t_i is the i-th translation of s in the dictionary and p_i is the probability that s is translated by t_i, the p_i's summing to 1 for each s in the dictionary. From the dictionaries we removed all entries with p_i < 0.05. In addition we also removed every entry from the dictionary where the source word was less than four characters and the target word more than five characters in length and vice versa. This step is performed to try to eliminate translation pairs where a stop word is translated into a non-stop word. After performing these filtering steps we use the dictionaries to extract the following language dependent features:",
                "cite_spans": [
                    {
                        "start": 129,
                        "end": 148,
                        "text": "(Och and Ney, 2000;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 149,
                        "end": 167,
                        "text": "Och and Ney, 2003)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 195,
                        "end": 221,
                        "text": "(Steinberger et al., 2012)",
                        "ref_id": "BIBREF24"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dictionary based features",
                "sec_num": "4.1"
            },
            {
                "text": "\u2022 isFirstWordTranslated is a binary feature indicating whether the first word in the source term is a translation of the first word in the target term. To address the issue of compounding, e.g. for languages like German where what is a multi-word term in English may be expressed as a single compound word, we check whether the compound source term has an initial prefix that matches the translation of the first target word, provided that translation is at least 5 characters in length. \u2022 isLastWordTranslated is a binary feature indicating whether the last word in the source term is a translation of the last word in the target term. As with the previous feature, in the case of compound terms we check whether the source term ends with the translation of the target last word. \u2022 percentageOfTranslatedWords returns the percentage of words in the source term which have their translations in the target term. To address compound terms we check for each source word translation whether it appears anywhere within the target term. \u2022 percentageOfNotTranslatedWords returns the percentage of words of the source term which have no translations in the target term.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dictionary based features",
                "sec_num": "4.1"
            },
            {
                "text": "the ratio of the number of words within the longest contiguous sequence of source words which has a translation in the target term to the length of the source term, expressed as a percentage. For compound terms we proceed as with percentageOfTranslatedWords. \u2022 longestNotTranslatedUnitInPercentage returns the percentage of the number of words within the longest sequence of source words which have no translations in the target term.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "\u2022 longestTranslatedUnitInPercentage returns",
                "sec_num": null
            },
            {
                "text": "These six features are direction-dependent and are computed in both directions, reversing which language is taken as the source and which as the target. We also compute another feature averagePercentageOfTranslatedWords, which averages the feature values of percentageOfTranslatedWords computed from source to target and from target to source. Thus in total we have 13 dictionary based features. Note that for non-compound terms, when we compare two words for equality we do not perform an exact string match but rather use the Levenshtein Distance (see Section 4.2) between the two words and treat them as equal if the Levenshtein Distance returns >= 0.95. This is performed to capture words with morphological differences. We set 0.95 experimentally.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "\u2022 longestTranslatedUnitInPercentage returns",
                "sec_num": null
            },
            {
                "text": "Dictionaries mostly fail to return translation entries for named entities (NEs) or specialized terminology. Because of this we also use cognate based methods to perform the mapping between source and target words or vice versa. Aker et al. (2012) have applied (1) Longest Common Subsequence Ratio, (2) Longest Common Substring Ratio, (3) Dice Similarity, (4) Needleman-Wunsch Distance and (5) Levenshtein Distance in order to extract parallel phrases from comparable corpora. We adopt these measures within our classifier. Each of them returns a score between 0 and 1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features",
                "sec_num": "4.2"
            },
            {
                "text": "\u2022 Longest Common Subsequence Ratio (LCSR): The longest common subsequence (LCS) measure measures the longest common non-consecutive sequence of characters between two strings. For instance, the words \"dollars\" and \"dolari\" share a sequence of 5 non-consecutive characters in the same ordering. We make use of dynamic programming (Cormen et al., 2001) to implement LCS, so that its computation is efficient and can be applied to a large number of possible term pairs quickly. We normalize relative to the length of the longest term:",
                "cite_spans": [
                    {
                        "start": 329,
                        "end": 350,
                        "text": "(Cormen et al., 2001)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features",
                "sec_num": "4.2"
            },
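The dynamic-programming computation of LCS and the LCSR ratio can be sketched as follows (a standard O(mn), two-row implementation; this is our own sketch, not the authors' code):

```python
def lcs_len(x, y):
    """Length of the longest common (not necessarily contiguous) subsequence."""
    prev = [0] * (len(y) + 1)
    for cx in x:
        cur = [0]
        for j, cy in enumerate(y, 1):
            # Extend a match diagonally, otherwise carry the best so far.
            cur.append(prev[j - 1] + 1 if cx == cy else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def lcsr(x, y):
    """LCSR(X, Y) = len[LCS(X, Y)] / max[len(X), len(Y)]."""
    return lcs_len(x, y) / max(len(x), len(y))
```

For the paper's example, `lcs_len("dollars", "dolari")` is 5 ("dolar"), giving an LCSR of 5/7.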
            {
                "text": "LCSR(X, Y ) = len[LCS(X, Y )] max[len(X), len(Y )]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features",
                "sec_num": "4.2"
            },
            {
                "text": "where LCS is the longest common subsequence between two strings and characters in this subsequence need not be contiguous. The shorthand len stands for length. \u2022 Longest Common Substring Ratio (LC-STR): The longest common substring (LCST) measure is similar to the LCS measure, but measures the longest common consecutive string of characters that two strings have in common. I.e. given two terms we need to find the longest character n-gram the terms share. The formula we use for the LCSTR measure is a ratio as in the previous measure:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features",
                "sec_num": "4.2"
            },
            {
                "text": "LCST R(X, Y ) = len[LCST (X, Y )] max[len(X), len(Y )]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features",
                "sec_num": "4.2"
            },
            {
                "text": "\u2022 Dice Similarity:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features",
                "sec_num": "4.2"
            },
            {
                "text": "dice = 2 * LCST len(X) + len(Y )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features",
                "sec_num": "4.2"
            },
            {
                "text": "\u2022 Needlemann Wunsch Distance (NWD):",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features",
                "sec_num": "4.2"
            },
            {
                "text": "N W D = LCST min[len(X) + len(Y )]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features",
                "sec_num": "4.2"
            },
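The three substring-based scores (LCSTR, Dice, NWD) can all be derived from a single longest-common-substring routine. A sketch (our own code; note the NWD denominator, the shorter of the two lengths, is our reading of the garbled formula in the source):

```python
def lcst_len(x, y):
    """Length of the longest common contiguous substring, via DP over
    suffix matches: cur[j] = length of common suffix of x[:i] and y[:j]."""
    best = 0
    prev = [0] * (len(y) + 1)
    for cx in x:
        cur = [0]
        for j, cy in enumerate(y, 1):
            cur.append(prev[j - 1] + 1 if cx == cy else 0)
        best = max(best, max(cur))
        prev = cur
    return best

def lcstr(x, y):
    return lcst_len(x, y) / max(len(x), len(y))

def dice(x, y):
    return 2 * lcst_len(x, y) / (len(x) + len(y))

def nwd(x, y):
    # Assumed denominator: the shorter string's length.
    return lcst_len(x, y) / min(len(x), len(y))
```

For "dollars" vs "dolari" the longest common substring has length 3 (e.g. "lar"), giving LCSTR = 3/7, Dice = 6/13 and NWD = 3/6.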
            {
                "text": "\u2022 Levenshtein Distance (LD): This method computes the minimum number of operations necessary to transform one string into another. The allowable operations are insertion, deletion, and substitution. Compared to the previous methods, which all return scores between 0 and 1, this method returns a score s that lies between 0 and n. The number n represents the maximum number of operations to convert an arbitrarily dissimilar string to a given string. To have a uniform score across all cognate methods we normalize s so that it lies between 0 and 1, subtracting from 1 to convert it from a distance measure to a similarty measure:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features",
                "sec_num": "4.2"
            },
            {
                "text": "LD normalized = 1 \u2212 LD max[len(X), len(Y )]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features",
                "sec_num": "4.2"
            },
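The Levenshtein computation and its conversion into a similarity can be sketched as a classic two-row DP (our own sketch, not the authors' code):

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def ld_normalized(x, y):
    """1 - LD / max(len) maps the distance into a similarity in [0, 1]."""
    return 1 - levenshtein(x, y) / max(len(x), len(y))
```

For example, "accelerator" vs "decelerator" differ by two substitutions, giving a normalized similarity of 1 - 2/11, roughly the 0.81 discussed in the error analysis of Section 5.4.2.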
            {
                "text": "The cognate methods assume that the source and target language strings being compared are drawn from the same character set and fail to capture the corresponding terms if this is not the case. For instance, the cognate methods are not directly applicable to the English-Bulgarian and English-Greek language pairs, as both the Bulgarian and Greek alphabets, which are Cyrillic-based, differ from the English Latin-based alphabet. However, the use of distinct alphabets is not the only problem when comparing source and target terms. Although most EU languages use the Latin alphabet, the occurrence of special characters and diacritics, as well spelling and phonetic variations, are further challenges which are faced by term or entity mapping methods, especially in determining the variants of the same mention of the entity (Snae, 2007; Karimi et al., 2011) . 1 We address this problem by mapping a source term to the target language writing system or vice versa. For mapping we use simple character mappings between the writing systems, such as \u03b1 \u2192 a, \u03c6 \u2192 ph, etc., from Greek to English. The rules allow one character on the lefthand side (source language) to map onto one or more characters on the righthand side (target language). We created our rules manually based on sound similarity between source and target language characters. We created mapping rules for 20 EU language pairs using primarily Wikipedia as a resource for describing phonetic mappings to English. After mapping a term from source to target language we apply the cognate metrics described in 4.2 to the resulting mapped term and the original term in the other language. Since we perform both target to source and source to target mapping, the number of cognate feature scores on the mapped terms is 10 -5 due to source to target mapping and 5 due to target to source mapping.",
                "cite_spans": [
                    {
                        "start": 825,
                        "end": 837,
                        "text": "(Snae, 2007;",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 838,
                        "end": 858,
                        "text": "Karimi et al., 2011)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 861,
                        "end": 862,
                        "text": "1",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cognate based features with term matching",
                "sec_num": "4.3"
            },
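The character-mapping step can be sketched as a simple rule table plus a substitution pass. The Greek-to-Latin rules below are a small illustrative subset chosen by us; the paper's hand-crafted rule set covers 20 EU language pairs and is not reproduced here.

```python
# Illustrative subset of sound-based Greek -> Latin mapping rules.
# One source character may map onto one or more target characters.
GREEK_TO_LATIN = {
    "\u03b1": "a", "\u03b2": "v", "\u03b3": "g", "\u03b4": "d",
    "\u03b5": "e", "\u03ba": "k", "\u03bb": "l", "\u03bc": "m",
    "\u03bd": "n", "\u03bf": "o", "\u03c1": "r", "\u03c2": "s",
    "\u03c3": "s", "\u03c4": "t", "\u03c6": "ph",
}

def map_term(term, rules):
    """Rewrite a term into the other writing system; unknown characters
    are passed through unchanged."""
    return "".join(rules.get(ch, ch) for ch in term)
```

After mapping, the five cognate metrics of Section 4.2 are applied between the mapped term and the original term in the other language.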
            {
                "text": "We also combined dictionary and cognate based features. The combined features are as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Combined features",
                "sec_num": "4.4"
            },
            {
                "text": "\u2022 isFirstWordCovered is a binary feature indicating whether the first word in the source term has a translation (i.e. has a translation entry in the dictionary regardless of the score) or transliteration (i.e. if one of the cognate metric scores is above 0.7 2 ) in the target term. The threshold 0.7 for transliteration similarity is set experimentally using the training data. To do this we iteratively ran feature extraction, trained the classifier and recorded precision on the training data using a threshold value chosen from the interval [0, 1] in steps of 0.1. We selected as final threshold value, the lowest value for which the precision score was the same as when the threshold value was set to 1. \u2022 isLastWordCovered is similar to the previous feature one but indicates whether the last word in the source term has a translation or transliteration in the target term. If this is the case, 1 is returned otherwise 0. \u2022 percentageOfCoverage returns the percentage of source term words which have a translation or transliteration in the target term. \u2022 percentageOfNonCoverage returns the percentage of source term words which have neither a translation nor transliteration in the target term.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Combined features",
                "sec_num": "4.4"
            },
            {
                "text": "returns the difference between the last two features.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "\u2022 difBetweenCoverageAndNonCoverage",
                "sec_num": null
            },
            {
                "text": "Like the dictionary based features, these five features are direction-dependent and are computed in both directions -source to target and target to source, resulting in 10 combined features.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "\u2022 difBetweenCoverageAndNonCoverage",
                "sec_num": null
            },
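The combined coverage features can be sketched as follows. This is our own illustration: the cognate score is a pluggable callable (e.g. the normalized Levenshtein similarity), and the 0.7 transliteration threshold is the experimentally set value from the text.

```python
def is_covered(word, target_words, translations, cognate_sim, threshold=0.7):
    """A source word is covered if it has a dictionary translation among the
    target words or a cognate score above the threshold."""
    cands = translations.get(word, set())
    return any(t in cands or cognate_sim(word, t) > threshold
               for t in target_words)

def coverage_features(source_words, target_words, translations, cognate_sim):
    """One direction of the five combined features."""
    flags = [is_covered(w, target_words, translations, cognate_sim)
             for w in source_words]
    pct = 100.0 * sum(flags) / len(flags)
    return {
        "isFirstWordCovered": int(flags[0]),
        "isLastWordCovered": int(flags[-1]),
        "percentageOfCoverage": pct,
        "percentageOfNonCoverage": 100.0 - pct,
        "difBetweenCoverageAndNonCoverage": pct - (100.0 - pct),
    }
```

Calling this twice, once per direction, yields the 10 combined features.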
            {
                "text": "In total we have 38 features -13 features based on dictionary translation as described in Section 4.1, 5 cognate related features as outlined in Section 4.2, 10 cognate related features derived from character mappings over terms as described in Section 4.3 and 10 combined features.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "\u2022 difBetweenCoverageAndNonCoverage",
                "sec_num": null
            },
            {
                "text": "In our experiments we use two different data resources: EUROVOC terms and comparable corpora collected from Wikipedia.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data Sources",
                "sec_num": "5.1"
            },
            {
                "text": "EUROVOC is a term thesaurus covering the activities of the EU and the European Parliament in particular. It contains 6797 term entries in 24 different languages including 22 EU languages and Croatian and Serbian (Steinberger et al., 2002) .",
                "cite_spans": [
                    {
                        "start": 212,
                        "end": 238,
                        "text": "(Steinberger et al., 2002)",
                        "ref_id": "BIBREF23"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EUROVOC terms",
                "sec_num": "5.1.1"
            },
            {
                "text": "We also built comparable corpora in the information technology (IT) and automotive domains by gathering documents from Wikipedia for the English-German language pair. First, we manually chose one seed document in English as a starting point for crawling in each domain 3 . We then identified all articles to which the seed document is linked and added them to the crawling queue. This process is performed recursively for each document in the queue. Since our aim is to build a comparable corpus, we only added English 3 http://en.wikipedia.org/wiki/Information technology for IT and http://en.wikipedia.org/wiki/Automotive industry for automotive domain. documents which have an inter-language link in Wikipedia to a German document. We set a maximum depth of 3 in the recursion to limit size of the crawling set, i.e. documents are crawled only if they are within 3 clicks of the seed documents. A score is then calculated to represent the importance of each document d i in this domain:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparable Corpora",
                "sec_num": "5.1.2"
            },
            {
                "text": "score d i = n j=1 f req d ij depth d j",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparable Corpora",
                "sec_num": "5.1.2"
            },
            {
                "text": "where n is the total number of documents in the queue, f req d ij is 1 if d i is linked to d j , or 0 otherwise, and depth d j is the number of clicks between d j and the seed document. After all documents in the queue were assigned a score, we gathered the top 1000 documents and used inter-language link information to extract the corresponding article in the target language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparable Corpora",
                "sec_num": "5.1.2"
            },
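The document-importance score can be sketched directly from the formula. This is our own sketch under two assumptions: the link structure and depths are given as plain dicts, and the seed itself (depth 0) is excluded from the sum to avoid division by zero.

```python
def importance_scores(links, depth):
    """score(d_i) = sum over j of freq(d_i, d_j) / depth(d_j), where freq is 1
    iff d_i links to d_j. `links` maps a document id to the set of ids it
    links to; `depth` gives each document's click distance from the seed."""
    return {
        d: sum(1.0 / depth[t] for t in targets if t in depth and depth[t] > 0)
        for d, targets in links.items()
    }

def top_documents(links, depth, k=1000):
    # Rank crawled documents by importance and keep the top k.
    scores = importance_scores(links, depth)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Documents that link to many pages close to the seed thus score highest.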
            {
                "text": "We pre-processed each Wikipedia article by performing monolingual term tagging using TWSC (Pinnis et al., 2012) . TWSC is a term extraction tool which identifies terms ranging from one to four tokens in length. First, it POS-tags each document. For German POS-tagging we use TreeTagger (Schmid, 1995) . Next, it uses term grammar rules, in the form of sequences of POS tags or non-stop words, to identify candidate terms. Finally, it filters the candidate terms using various statistical measures, such as pointwise mutual information and TF*IDF.",
                "cite_spans": [
                    {
                        "start": 90,
                        "end": 111,
                        "text": "(Pinnis et al., 2012)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 286,
                        "end": 300,
                        "text": "(Schmid, 1995)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparable Corpora",
                "sec_num": "5.1.2"
            },
            {
                "text": "To test the classifier's performance we evaluated it against a list of positive and negative examples of bilingual term pairs using the measures of precision, recall and F -measure. We used 21 EU official languages, including English, and paired each non-English language with English, leading to 20 language pairs. 4 In the evaluation we used 600 positive term pairs taken randomly from the EU-ROVOC term list. We also created around 1.3M negative term pairs by pairing a source term with 200 randomly chosen distinct target terms. We select such a large number to simulate the real application scenario where the classifier will be confronted with a huge number of negative cases and a relatively small number of positive pairs. The 600 positive examples contain 200 single term pairs (i.e. single word on both sides), 200 term pairs with a single word on only one side (either source or target) and 200 term pairs with more than one word on each side. For training we took the remaining 6200 positive term pairs from EU-ROVOC and constructed another 6200 term pairs as negative examples, leading to total of 12400 term pairs. To construct the 6200 negative examples we used the 6200 terms on the source side and paired each source term with an incorrect target term. Note that we ensure that in both training and testing the set of negative and positive examples do not overlap. Furthermore, we performed data selection for each language pair separately. This means that the same pairs found in, e.g., English-German are not necessarily the same as in English-Italian. The reason for this is that the translation lengths, in number of words, vary between language pairs. For instance adult education is translated into Erwachsenenbildung in German and contains just a single word (although compound). The same term is translated into istruzione degli adulti in Italian and contains three words. 
For this reason we carry out the data preparation process separately for each language pair in order to obtain the three term pair sets consisting of term pairs with only a single word on each side, term pairs with a single word on just one side and term pairs with multiple words on both sides.",
                "cite_spans": [
                    {
                        "start": 316,
                        "end": 317,
                        "text": "4",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Performance test of the classifier",
                "sec_num": "5.2"
            },
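The negative-example construction described above can be sketched as follows. This is our own illustration of the sampling scheme; the function name, the fixed random seed and the toy pairs are not from the paper.

```python
import random

def make_negative_pairs(term_pairs, n_per_source=200, seed=0):
    """Pair each source term with up to n distinct target terms other than
    its own gold translation, mirroring the paper's negative sampling."""
    rng = random.Random(seed)
    targets = [t for _, t in term_pairs]
    negatives = []
    for src, gold in term_pairs:
        wrong = [t for t in targets if t != gold]
        chosen = rng.sample(wrong, min(n_per_source, len(wrong)))
        negatives.extend((src, t) for t in chosen)
    return negatives
```

With 600 source terms and 200 wrong targets each, this yields roughly the 1.3M negative pairs used in testing (the positives are excluded by construction, since the gold target is never sampled).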
            {
                "text": "For this evaluation we used the Wikipedia comparable corpora collected for the English-German (EN-DE) language pair. For each pair of Wikipedia articles we used the terms tagged by TWSC and aligned each source term with every target term. This means if both source and target articles contain 100 terms then this leads to 10K term pairs. We extracted features for each pair of terms and ran the classifier to decide whether the pair is positive or negative. Table 1 shows the number of term pairs processed and the count of pairs classified as positive. Table 2 shows five positive term pairs extracted from the English-German comparable corpora for each of the IT and automotive domains. We manually assessed a subset of the positive examples. We asked human assessors to categorize each term pair into one of the following categories:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 458,
                        "end": 465,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    },
                    {
                        "start": 554,
                        "end": 561,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.3"
            },
            {
                "text": "1. Equivalence: The terms are exact translations/transliterations of each other.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.3"
            },
            {
                "text": "Not an exact translation/transliteration, but an exact translation/transliteration of one term is entirely contained within the term in the other language, e.g: \"F1 car racing\" vs \"Autorennen (car racing)\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inclusion:",
                "sec_num": "2."
            },
            {
                "text": "3. Overlap: Not category 1 or 2, but the terms share at least one translated/transliterated word, e.g: \"hybrid electric vehicles\" vs \"hybride bauteile (hybrid components)\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inclusion:",
                "sec_num": "2."
            },
            {
                "text": "No word in either term is a translation/transliteration of a word in the other.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unrelated:",
                "sec_num": "4."
            },
            {
                "text": "In the evaluation we randomly selected 300 pairs for each domain and showed them to two German native speakers who were fluent in English. We asked the assessors to place each of the term pair into one of the categories 1 to 4.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unrelated:",
                "sec_num": "4."
            },
            {
                "text": "The results of the classifier evaluation are shown in Table 3 . The results show that the overall performance of the classifier is very good. In many cases the precision scores reach 100%. The lowest precision score is obtained for Lithuanian (LT) with 67%. For this language we performed an error analysis. In total there are 221 negative examples classified as positive. All these terms are multi-term, i.e. each term pair contains at least two words on each side. For the majority of the misclassified terms -209 in total -50% or more of the words on one side are either translations or cognates of words on the other side. Of these, 187 contained 50% or more translation due to cognate words -examples of such cases are capital increase -kapitalo eksportas or Arab organisation -Arabu lyga with the cognates capital -kapitalo and Arab -Arabu respectively. For the remainder, 50% or more of the words on one side are dictionary translations of words on the other side. In order to understand the reason why the classifier treats such cases as positive we examined the ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 54,
                        "end": 61,
                        "text": "Table 3",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Performance test of the classifier",
                "sec_num": "5.4.1"
            },
            {
                "text": "Automotive chromatographic technique -chromatographie methode distribution infrastructure -versorgungsinfrastruktur electrolytic capacitor -elektrolytkondensatoren ambient temperature -au\u00dfenlufttemperatur natural user interfaces -nat\u00fcrliche benutzerschnittstellen higher cetane number -erh\u00f6hter cetanzahl anode voltage -anodenspannung fuel tank -kraftstoffpumpe digital subscriber loop -digitaler teilnehmeranschluss hydrogen powered vehicle -wasserstoff fahrzeug .67 .72 .82 .69 .81 .77 .78 .65 .82 .66 .66 .7 .77 .84 .72 .78 .69 .8 .78 .79 F .80 .83 .89 .81 .89 .86 .87 .78 .75 .79 .79 .82 .71 .91 .83 .87 .81 .88 .87 .88 training data and found 467 positive pairs which had the same characteristics as the negative examples in the testing set classified. We removed these 467 entries from the training set and re-trained the classifier. The results with the new classifier are 99% precision, 68% recall and 80% F score. In addition to Lithuanian, two further languages, Portuguese (PT) and Slovak (SK), also had substantially lower precision scores. For these languages we also removed positive entries falling into the same problem categories as the LT ones and trained new classifiers with the filtered training data. The precision results increased substantially for both PT and SK -95% precision, 76% recall, 84% F score for PT and 94% precision, 72% recall, 81% F score for SK. The recall scores are lower than the precision scores, ranging from 65% to 84%. We have investigated the recall problem for FI, which has the lowest recall score at 65%. We observed that all the missing term pairs were not cognates. Thus, the only way these terms could be recognized as positive is if they are found in the GIZA++ dictionaries. However, due to data sparsity in these dictionaries this did not happen in these cases. For these term pairs either the source or target terms were not found in the dictionaries. 
For instance, for the term pair offshoringuudelleensijoittautuminen the GIZA++ dictionary contains the entry offshoring but according to the dictionary it is not translated into uudelleensijoittautuminen, which is the matching term in EU-ROVOC.",
                "cite_spans": [
                    {
                        "start": 464,
                        "end": 623,
                        "text": ".67 .72 .82 .69 .81 .77 .78 .65 .82 .66 .66 .7 .77 .84 .72 .78 .69 .8 .78 .79 F .80 .83 .89 .81 .89 .86 .87 .78 .75 .79 .79 .82 .71 .91 .83 .87 .81 .88 .87 .88",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "IT",
                "sec_num": null
            },
            {
                "text": "The results of the manual evaluation are shown in Table 4 . From the results we can see that both assessors judge above 80% of the IT domain terms as category 1 -the category containing equivalent term pairs. Only a small proportion of the term pairs are judged as belonging to category 4 (3-7%) -the category containing unrelated term pairs. For the automotive domain the proportion of equivalent term pairs varies between 60 and 66%. For unrelated term pairs this is below 10% for both assessors.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 50,
                        "end": 57,
                        "text": "Table 4",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "We investigated the inter-annotator agreement. Across the four classes the percentage agreement was 83% for the automotive domain term pairs and 86% for the IT domain term pairs. The kappa statistic, \u03ba, was .69 for the automotive domain pairs and .52 for the IT domain. We also considered two class agreement where we treated term pairs within categories 2 and 3 as belonging to category 4 (i.e. as \"incorrect\" translations). In this case, for the automotive domain the percentage agreement was 90% and \u03ba = 0.72 and for the IT domain percentage agreement was 89% with \u03ba = 0.55. The agreement in the automotive domain is higher than in the IT one although both judges were computer scientists. We analyzed the differences and found that they differ in cases where the German and the English term are both in English. One of the annotators treated such cases as correct translation, whereas the other did not.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "We also checked to ensure our technique was not simply rediscovering our dictionaries. Since the GIZA++ dictionaries contain only single word-single word mappings, we examined the newly aligned term pairs that consisted of one word on both source and target sides. Taking both the IT and automotive domains together, our algorithm proposed 5021 term pairs of which 2751 (55%) were word-word term pairs. 462 of these (i.e. 17% of the word-word term pairs or 9% of the overall set of aligned term pairs) were already in either the EN-DE or DE-EN GIZA++ dictionaries. Thus, of our newly extracted term pairs a relatively small proportion are rediscovered dictionary entries. We also checked our evaluation data to see what proportion of the assessed term pairs were already to be found in the GIZA++ dictionaries. A total of 600 term pairs were put in front of the judges of which 198 (33%) were word-word term pairs. Of these 15 (less than 8% of the word-word pairs and less then 3% of the overall assessed set of assessed term pairs) were word-word pairs already in the dictionaries. We conclude that our evaluation results are not unduly affected by assessing term pairs which were given to the algorithm.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "Error analysis For both domains we performed an error analysis for the unrelated, i.e. category 4 term pairs. We found that in both domains the main source of errors is due to terms with different meanings but similar spellings such as the following example (1).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "(1) accelerator -decelerator For this example the cognate methods, e.g. the Levenshtein similarity measure, returns a score of 0.81. This problem could be addressed in different ways. First, it could be resolved by applying a very high threshold for the cognate methods. Any cognate score below that threshold could be regarded as zero -as we did for the combined features (cf. Section 4.4). However, setting a similarity threshold higher than 0.9 -to filter out cases as in (1) -will cause real cognates with greater variation in the spellings to be missed. This will, in particular, affect languages with a lot of inflection, such as Latvian. Another approach to address this problem would be to take the contextual or distributional properties of the terms into consideration. To achieve this, training data consisting of term pairs along with contextual information is required. However, such training data does not currently exist (i.e. resources like EUROVOC do not contain contextual information) and it would need to be collected as a first step towards applying this approach to the problem.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "Partial Translation The assessors assigned 6 -7% of the term pairs in the IT domain and 12 -16% in the automotive domain to categories 2 and 3. In both categories the term pairs share translations or cognates.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "Clearly, if humans such as professional translators are the end users of these terms, then it could be helpful for them to find some translation units within the terms. In category 2 this will be the entire translation of one term in the other such as the following examples. 5 (2) visible graphical interface -grafische benutzerschnittstelle",
                "cite_spans": [
                    {
                        "start": 276,
                        "end": 277,
                        "text": "5",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "(3) modern turbocharger systems - moderne turbolader",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "In example (2) the translation of the German term is found entirely within the English term, but the English term has the additional word visible, a translation of which is not found in the German term. In example (3), again the translation of the German term is entirely found in the English term, but as in the previous example, one of the English words - systems in this case - has no match within the German term. In category 3 there are only single-word translation overlaps between the terms, as shown in the following examples.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "(4) national standard language - niederl\u00e4ndischen standardsprache (5) thermoplastic material - thermoplastische elastomere",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "In example (4) standard language is translated to standardsprache, and in example (5) thermoplastic to thermoplastische. The other words within the terms are not translations of each other.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "Another application of the extracted term pairs is to use them to enhance existing parallel corpora used to train SMT systems. In this case, including the partially correct terms may introduce noise, especially for the terms within category 3. However, the usefulness of the terms in both these scenarios requires further investigation, which we aim to carry out in future work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Manual evaluation",
                "sec_num": "5.4.2"
            },
            {
                "text": "In this paper we presented an approach to align terms identified by a monolingual term extractor in bilingual comparable corpora using a binary classifier. We trained the classifier using data from the EUROVOC thesaurus. Each candidate term pair was pre-processed to extract various features, which are cognate-based or dictionary-based. We measured the performance of our classifier using Information Retrieval (IR) metrics and a manual evaluation. In the IR evaluation we tested the performance of the classifier on a held-out test set taken from EUROVOC. We used 20 EU language pairs, with English always the source language. The performance of our classifier in this evaluation reached the 100% precision level for many language pairs. In the manual evaluation we had our algorithm extract pairs of terms from Wikipedia articles forming comparable corpora in the IT and automotive domains, and asked native speakers to categorize a selection of the term pairs into categories reflecting the degree to which the terms are translations of each other. In the manual evaluation we used the English-German language pair and showed that over 80% of the extracted term pairs were exact translations in the IT domain and over 60% in the automotive domain. For both domains over 90% of the extracted term pairs were either exact or partial translations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "We also performed an error analysis and highlighted problem cases, which we plan to address in future work. Exploring ways to add contextual or distributional features to our term representations is another avenue for future work, though it would significantly complicate the approach, one of whose advantages is its simplicity. Furthermore, we aim to extend the existing dictionaries and possibly our training data with terms extracted from comparable corpora. Finally, we plan to investigate the usefulness of the terms in different application scenarios, including computer assisted translation and machine translation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "Assuming the terms are correctly spelled; otherwise the misspelling is a separate problem. 2 Note that we use the cognate scores obtained on the character-mapped terms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Note that we do not use the Maltese-English language pair, as for this pair we found that 5861 out of 6797 term pairs were identical, i.e. the English and the Maltese terms were the same. Excluding Maltese, the average number of identical terms between a non-English language and English in the EUROVOC data is 37.7 (out of a possible 6797).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "In our data it is always the case that either the target term is entirely translated within the English term or vice versa.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The research reported was funded by the TaaS project, European Union Seventh Framework Programme, grant agreement no. 296312. The authors would like to thank the manual annotators for their helpful contributions. We would also like to thank partners at Tilde SIA and at the University of Zagreb for supplying the TWSC term extraction tool, developed within the EU funded project ACCURAT.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Automatic bilingual phrase extraction from comparable corpora",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Aker",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Feng",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Gaizauskas",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "24th International Conference on Computational Linguistics (COLING 2012), IIT Bombay",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Aker, Y. Feng, and R. Gaizauskas. 2012. Automatic bilingual phrase extraction from comparable corpora. In 24th International Conference on Computational Linguistics (COLING 2012), IIT Bombay, Mumbai, India, 2012. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Machine transliteration of names in arabic text",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Al-Onaizan",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the ACL-02 workshop on Computational approaches to semitic languages",
                "volume": "",
                "issue": "",
                "pages": "1--13",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Y. Al-Onaizan and K. Knight. 2002. Machine transliteration of names in arabic text. In Proceedings of the ACL-02 workshop on Computational approaches to semitic languages, pages 1-13. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "English-hindi transliteration using multiple similarity metrics",
                "authors": [
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Aswani",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Gaizauskas",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "N. Aswani and R. Gaizauskas. 2010. English-hindi transliteration using multiple similarity metrics. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010), Valetta, Malta.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Identifying bilingual multi-word expressions for statistical machine translation",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Bouamor",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Semmar",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Zweigenbaum",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "LREC 2012, Eighth International Conference on Language Resources and Evaluation",
                "volume": "",
                "issue": "",
                "pages": "674--679",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "D. Bouamor, N. Semmar, and P. Zweigenbaum. 2012. Identifying bilingual multi-word expressions for statistical machine translation. In LREC 2012, Eighth International Conference on Language Resources and Evaluation, pages 674-679, Istanbul, Turkey, 2012. ELRA.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Base noun phrase translation using web data and the em algorithm",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Cao",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the 19th international conference on Computational linguistics",
                "volume": "1",
                "issue": "",
                "pages": "1--7",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Y. Cao and H. Li. 2002. Base noun phrase translation using web data and the em algorithm. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1-7. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Introduction to Algorithms",
                "authors": [
                    {
                        "first": "T",
                        "middle": [
                            "H"
                        ],
                        "last": "Cormen",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "E"
                        ],
                        "last": "Leiserson",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "L"
                        ],
                        "last": "Rivest",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Stein",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. 2001. Introduction to Algorithms. The MIT Press, 2nd revised edition, September.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Towards automatic extraction of monolingual and bilingual terminology",
                "authors": [
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Daille",
                        "suffix": ""
                    },
                    {
                        "first": "\u00c9",
                        "middle": [],
                        "last": "Gaussier",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "M"
                        ],
                        "last": "Lang\u00e9",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Proceedings of the 15th conference on Computational linguistics",
                "volume": "1",
                "issue": "",
                "pages": "515--521",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "B. Daille, \u00c9. Gaussier, and J.M. Lang\u00e9. 1994. Towards automatic extraction of monolingual and bilingual terminology. In Proceedings of the 15th conference on Computational linguistics-Volume 1, pages 515-521. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Automatic extraction of bilingual terms from a chinese-japanese parallel corpus",
                "authors": [
                    {
                        "first": "X",
                        "middle": [],
                        "last": "Fan",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Shimizu",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Nakagawa",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the 3rd International Universal Communication Symposium",
                "volume": "",
                "issue": "",
                "pages": "41--45",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "X. Fan, N. Shimizu, and H. Nakagawa. 2009. Automatic extraction of bilingual terms from a chinese-japanese parallel corpus. In Proceedings of the 3rd International Universal Communication Symposium, pages 41-45. ACM.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Finding terminology translations from non-parallel corpora",
                "authors": [
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Fung",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Mckeown",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Proceedings of the 5th Annual Workshop on Very Large Corpora",
                "volume": "",
                "issue": "",
                "pages": "192--202",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "P. Fung and K. McKeown. 1997. Finding terminology translations from non-parallel corpora. In Proceedings of the 5th Annual Workshop on Very Large Corpora, pages 192-202.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Bilingual lexicon extraction from comparable corpora using indomain terms",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Ismail",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Manandhar",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters",
                "volume": "",
                "issue": "",
                "pages": "481--489",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Ismail and S. Manandhar. 2010. Bilingual lexicon extraction from comparable corpora using in-domain terms. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 481-489. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Learning to classify text using support vector machines: Methods, theory and algorithms",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Joachims",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "",
                "volume": "186",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "T. Joachims. 2002. Learning to classify text using support vector machines: Methods, theory and algorithms, volume 186. Kluwer Academic Publishers, Norwell, MA, USA.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Machine transliteration survey",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Karimi",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Scholer",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Turpin",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "ACM Computing Surveys (CSUR)",
                "volume": "43",
                "issue": "3",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Karimi, F. Scholer, and A. Turpin. 2011. Machine transliteration survey. ACM Computing Surveys (CSUR), 43(3):17.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Machine transliteration",
                "authors": [
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Graehl",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Computational Linguistics",
                "volume": "24",
                "issue": "4",
                "pages": "599--612",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "K. Knight and J. Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4):599-612.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "An algorithm for finding noun phrase correspondences in bilingual corpora",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Kupiec",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Proceedings of the 31st annual meeting on Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "17--22",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proceedings of the 31st annual meeting on Association for Computational Linguistics, pages 17-22. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Learning translations of namedentity phrases from parallel corpora",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Moore",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. Moore. 2003. Learning translations of named-entity phrases from parallel corpora. In Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics-Volume 1, pages 259-266. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Bilingual terminology mining -using brain, not brawn comparable corpora",
                "authors": [
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Morin",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Daille",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Takeuchi",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Kageura",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "664--671",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "E. Morin, B. Daille, K. Takeuchi, and K. Kageura. 2007. Bilingual terminology mining - using brain, not brawn comparable corpora. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 664-671, Prague, Czech Republic, June. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "A comparison of alignment models for statistical machine translation",
                "authors": [
                    {
                        "first": "F",
                        "middle": [
                            "J"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the 18th conference on Computational linguistics",
                "volume": "",
                "issue": "",
                "pages": "1086--1090",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "F. J. Och and H. Ney. 2000. A comparison of alignment models for statistical machine translation. In Proceedings of the 18th conference on Computational linguistics, pages 1086-1090, Morristown, NJ, USA. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "A systematic comparison of various statistical alignment models",
                "authors": [
                    {
                        "first": "F",
                        "middle": [
                            "J"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Computational Linguistics",
                "volume": "29",
                "issue": "1",
                "pages": "19--51",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Multi-word expression-sensitive word alignment",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Okita",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [
                            "Maldonado"
                        ],
                        "last": "Guerra",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Graham",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Way",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "T. Okita, A. Maldonado Guerra, Y. Graham, and A. Way. 2010. Multi-word expression-sensitive word alignment. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Term extraction, tagging, and mapping tools for under-resourced languages",
                "authors": [
                    {
                        "first": "M\u0101rcis",
                        "middle": [],
                        "last": "Pinnis",
                        "suffix": ""
                    },
                    {
                        "first": "Nikola",
                        "middle": [],
                        "last": "Ljube\u0161i\u0107",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "\u015etef\u0103nescu",
                        "suffix": ""
                    },
                    {
                        "first": "Inguna",
                        "middle": [],
                        "last": "Skadi\u0146a",
                        "suffix": ""
                    },
                    {
                        "first": "Marko",
                        "middle": [],
                        "last": "Tadi\u0107",
                        "suffix": ""
                    },
                    {
                        "first": "Tatiana",
                        "middle": [],
                        "last": "Gornostay",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proc. of the 10th Conference on Terminology and Knowledge Engineering (TKE 2012)",
                "volume": "",
                "issue": "",
                "pages": "20--21",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M\u0101rcis Pinnis, Nikola Ljube\u0161i\u0107, Dan \u015etef\u0103nescu, Inguna Skadi\u0146a, Marko Tadi\u0107, and Tatiana Gornostay. 2012. Term extraction, tagging, and mapping tools for under-resourced languages. In Proc. of the 10th Conference on Terminology and Knowledge Engineering (TKE 2012), June, pages 20-21.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Identifying word translations in nonparallel texts",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Rapp",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Proceedings of the 33rd annual meeting on Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "320--322",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 320-322. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "TreeTagger - a language independent part-of-speech tagger",
                "authors": [
                    {
                        "first": "Helmut",
                        "middle": [],
                        "last": "Schmid",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Institut f\u00fcr Maschinelle Sprachverarbeitung",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Helmut Schmid. 1995. TreeTagger - a language independent part-of-speech tagger. Institut f\u00fcr Maschinelle Sprachverarbeitung, Universit\u00e4t Stuttgart, page 43.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "A comparison and analysis of name matching algorithms",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Snae",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "International Journal of Applied Science, Engineering and Technology",
                "volume": "4",
                "issue": "1",
                "pages": "252--257",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "C. Snae. 2007. A comparison and analysis of name matching algorithms. International Journal of Applied Science, Engineering and Technology, 4(1):252-257.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Cross-lingual document similarity calculation using the multilingual thesaurus Eurovoc",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Steinberger",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Pouliquen",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Hagman",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Computational Linguistics and Intelligent Text Processing",
                "volume": "",
                "issue": "",
                "pages": "101--121",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. Steinberger, B. Pouliquen, and J. Hagman. 2002. Cross-lingual document similarity calculation using the multilingual thesaurus Eurovoc. Computational Linguistics and Intelligent Text Processing, pages 101-121.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "DGT-TM: A freely available translation memory in 22 languages",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Steinberger",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Eisele",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Klocek",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Pilos",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Schl\u00fcter",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of LREC",
                "volume": "",
                "issue": "",
                "pages": "454--459",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. Steinberger, A. Eisele, S. Klocek, S. Pilos, and P. Schl\u00fcter. 2012. DGT-TM: A freely available translation memory in 22 languages. In Proceedings of LREC, pages 454-459.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Mining named entity transliteration equivalents from comparable corpora",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Udupa",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Saravanan",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Kumaran",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Jagarlamudi",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 17th ACM conference on Information and knowledge management",
                "volume": "",
                "issue": "",
                "pages": "1423--1424",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. Udupa, K. Saravanan, A. Kumaran, and J. Jagarlamudi. 2008. Mining named entity transliteration equivalents from comparable corpora. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 1423-1424. ACM.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF0": {
                "type_str": "table",
                "text": "Wikipedia term pairs processed and judged as positive by the classifier.",
                "html": null,
                "num": null,
                "content": "<table><tr><td></td><td>Processed</td><td>Positive</td></tr><tr><td>DE Automotive</td><td>11597K</td><td>3249</td></tr><tr><td>DE IT</td><td>12307K</td><td>1772</td></tr></table>"
            },
            "TABREF1": {
                "type_str": "table",
                "text": "Example positive pairs for English-German.",
                "html": null,
                "num": null,
                "content": "<table/>"
            },
            "TABREF2": {
                "type_str": "table",
                "text": "Classifier performance results on EUROVOC data (P stands for precision, R for recall and F for F-measure). Each language is paired with English. The test set contains 600 positive and 1359400 negative examples. Languages: ET, HU, NL, DA, SV, DE, LV, FI, PT, SL, FR, IT, LT, SK, CS, RO, PL, ES, EL, BG.",
                "html": null,
                "num": null,
                "content": "<table><tr><td>P R</td><td>1</td><td>1</td><td>.98 1</td><td>1</td><td>.98 1</td><td>1</td><td>.7</td><td>1</td><td>1</td><td>1</td><td>.67 .81 1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td></tr></table>"
            },
            "TABREF3": {
                "type_str": "table",
                "text": "Results of the EN-DE manual evaluation by two annotators. Numbers reported per category are percentages.",
                "html": null,
                "num": null,
                "content": "<table><tr><td>Domain IT Automotive</td><td>Ann. 1 P1 81 P2 83 P1 66 P2 60</td><td>2 6 7 12 15</td><td>3 6 7 16 16</td><td>4 7 3 6 9</td></tr></table>"
            }
        }
    }
}