{
    "paper_id": "P13-1002",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:33:42.506308Z"
    },
    "title": "Integrating Translation Memory into Phrase-Based Machine Translation during Decoding",
    "authors": [
        {
            "first": "Kun",
            "middle": [],
            "last": "Wang",
            "suffix": "",
            "affiliation": {
                "laboratory": "National Laboratory of Pattern Recognition",
                "institution": "Chinese Academy of Sciences",
                "location": {
                    "settlement": "Beijing",
                    "country": "China"
                }
            },
            "email": "kunwang@nlpr.ia.ac.cn"
        },
        {
            "first": "Chengqing",
            "middle": [],
            "last": "Zong",
            "suffix": "",
            "affiliation": {
                "laboratory": "National Laboratory of Pattern Recognition",
                "institution": "Chinese Academy of Sciences",
                "location": {
                    "settlement": "Beijing",
                    "country": "China"
                }
            },
            "email": "cqzong@nlpr.ia.ac.cn"
        },
        {
            "first": "Keh-Yih",
            "middle": [],
            "last": "Su",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Behavior Design Corporation",
                "location": {
                    "country": "Taiwan"
                }
            },
            "email": "kysu@bdc.com.tw"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Since statistical machine translation (SMT) and translation memory (TM) complement each other in matched and unmatched regions, integrated models are proposed in this paper to incorporate TM information into phrase-based SMT. Unlike previous multi-stage pipeline approaches, which directly merge TM result into the final output, the proposed models refer to the corresponding TM information associated with each phrase at SMT decoding. On a Chinese-English TM database, our experiments show that the proposed integrated Model-III is significantly better than either the SMT or the TM systems when the fuzzy match score is above 0.4. Furthermore, integrated Model-III achieves overall 3.48 BLEU points improvement and 2.62 TER points reduction in comparison with the pure SMT system. In addition, the proposed models also outperform previous approaches significantly.",
    "pdf_parse": {
        "paper_id": "P13-1002",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Since statistical machine translation (SMT) and translation memory (TM) complement each other in matched and unmatched regions, integrated models are proposed in this paper to incorporate TM information into phrase-based SMT. Unlike previous multi-stage pipeline approaches, which directly merge TM result into the final output, the proposed models refer to the corresponding TM information associated with each phrase at SMT decoding. On a Chinese-English TM database, our experiments show that the proposed integrated Model-III is significantly better than either the SMT or the TM systems when the fuzzy match score is above 0.4. Furthermore, integrated Model-III achieves overall 3.48 BLEU points improvement and 2.62 TER points reduction in comparison with the pure SMT system. In addition, the proposed models also outperform previous approaches significantly.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Statistical machine translation (SMT), especially the phrase-based model (Koehn et al., 2003) , has developed rapidly over the last decade. For certain language pairs and special applications, SMT output has reached an acceptable level, especially in the domains where abundant parallel corpora are available (He et al., 2010) . However, SMT is rarely applied to professional translation because its output quality is still far from satisfactory. In particular, there is no guarantee that an SMT system can produce translations in a consistent manner.",
                "cite_spans": [
                    {
                        "start": 73,
                        "end": 93,
                        "text": "(Koehn et al., 2003)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 309,
                        "end": 326,
                        "text": "(He et al., 2010)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In contrast, translation memory (TM), which uses the most similar translation sentence (usually above a certain fuzzy match threshold) in the database as the reference for post-editing, has been widely adopted in the professional translation field for many years (Lagoudaki, 2006) . TM is very useful for repetitive material such as updated product manuals, and can give high-quality, consistent translations when the fuzzy match similarity is high. Therefore, professional translators trust TM much more than SMT. However, high-similarity fuzzy matches are not available unless the material is very repetitive.",
                "cite_spans": [
                    {
                        "start": 263,
                        "end": 280,
                        "text": "(Lagoudaki, 2006)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In general, for those matched segments 1 , TM provides more reliable results than SMT does. One reason is that the results of TM have been revised by humans according to the global context, whereas SMT utilizes only local context. However, for those unmatched segments, SMT is more reliable. Since TM and SMT complement each other in matched and unmatched segments, the output quality is expected to improve significantly if the two can be combined to supplement each other.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In recent years, some previous works have incorporated TM matched segments into SMT in a pipelined manner (Koehn and Senellart, 2010; Zhechev and van Genabith, 2010) . All these pipeline approaches translate the sentence in two stages. They first determine whether the extracted TM sentence pair should be adopted or not. Most of them use the fuzzy match score as the threshold, while others use a classifier to make the judgment. Afterwards, they merge the relevant translations of matched segments into the source sentence, and then force the SMT system to translate only those unmatched segments at decoding.",
                "cite_spans": [
                    {
                        "start": 106,
                        "end": 133,
                        "text": "(Koehn and Senellart, 2010;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 134,
                        "end": 165,
                        "text": "Zhechev and van Genabith, 2010)",
                        "ref_id": "BIBREF28"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "There are three obvious drawbacks for the above pipeline approaches. Firstly, all of them determine whether those matched segments should be adopted or not at sentence level. That is, they are either all adopted or all abandoned regardless of their individual quality. Secondly, as several TM target phrases might be available for one given TM source phrase due to insertions, the incorrect selection made in the merging stage cannot be remedied in the following translation stage. For example, there are six possible corresponding TM target phrases for the given TM source phrase \"\u5173\u8054 4 \u7684 5 \u5bf9\u8c61 6 \" (as shown in Figure 1 ) such as \"object 2 that 3 is 4 associated 5 \", and \"an 1 object 2 that 3 is 4 associated 5 with 6 \", etc. It is hard to tell which one should be adopted in the merging stage. Thirdly, the pipeline approach does not utilize the SMT probabilistic information in deciding whether a matched TM phrase should be adopted or not, and which target phrase should be selected when we have multiple candidates. Therefore, the possible improvements resulting from those pipeline approaches are quite limited.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 611,
                        "end": 619,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "On the other hand, instead of directly merging TM matched phrases into the source sentence, some approaches (Bi\u00e7ici and Dymetman, 2008; Simard and Isabelle, 2009) simply add the longest matched pairs into the SMT phrase table, and then associate them with a fixed large probability value to favor the corresponding TM target phrase at SMT decoding. However, since only one aligned target phrase will be added for each matched source phrase, they share most drawbacks with the pipeline approaches mentioned above and merely achieve similar performance.",
                "cite_spans": [
                    {
                        "start": 108,
                        "end": 135,
                        "text": "(Bi\u00e7ici and Dymetman, 2008;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 136,
                        "end": 161,
                        "text": "Simard and Isabelle, 2009",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "To avoid the drawbacks of the pipeline approach (mainly due to making a hard decision before decoding), we propose several integrated models that make full use of TM information during decoding. For each TM source phrase, we keep all its possible corresponding target phrases (instead of keeping only one of them). The integrated models then consider all corresponding TM target phrases and the SMT preference during decoding. Therefore, the proposed integrated models combine SMT and TM at a deep level (versus the surface level at which the TM result is directly plugged in under previous pipeline approaches).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "On a TM database of Chinese-English computer technical documents, our experiments have shown that the proposed Model-III improves the translation quality significantly over either the pure phrase-based SMT or the TM system when the fuzzy match score is above 0.4. Compared with the pure SMT system, the proposed integrated Model-III achieves an overall improvement of 3.48 BLEU points and a reduction of 2.62 TER points. Furthermore, the proposed models significantly outperform previous pipeline approaches.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Compared with the standard phrase-based machine translation model, the translation problem is reformulated as follows (based only on the best TM sentence; the formulation is similar for multiple TM sentences):",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "(1)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "Where is the given source sentence to be translated, is the corresponding target sentence and is the final translation; are the associated information of the best TM sentence-pair; and denote the corresponding TM sentence pair;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "denotes its associated fuzzy match score (from 0.0 to 1.0); is the editing operations between and ; and denotes the word alignment between and . Let and denote the k-th associated source phrase and target phrase, respectively. Also, and denote the associated source phrase sequence and the target phrase sequence, respectively (total phrases without insertion). Then the above formula (1) can be decomposed as below:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "(2) Afterwards, for any given source phrase , we can find its corresponding TM source phrase and all possible TM target phrases (each of them is denoted by ) with the help of corresponding editing operations and word alignment . As mentioned above, we can have six different possible TM target phrases for the TM source phrase \"\u5173\u8054 4 \u7684 5 \u5bf9\u8c61 6 \". As shown in formula (2), we first segment the given source sentence into various phrases, and then translate the sentence based on those source phrases. Also, is replaced by , as they are actually the same segmentation sequence. Assuming that the segmentation probability is a uniform distribution, with the corresponding TM source and target phrases obtained above, this problem can be further simplified as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "(3)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "is the corresponding TM phrase matching status for , which is a vector consisting of various indicators (e.g., Target Phrase Content Matching Status, etc., to be defined later), and reflects the quality of the given candidate;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Where",
                "sec_num": null
            },
            {
                "text": "is the linking status vector of (the aligned source phrase of within ), and indicates the matching and linking status in the source side (which is closely related to the status in the target side); also, indicates the corresponding TM fuzzy match interval specified later.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Where",
                "sec_num": null
            },
            {
                "text": "In the second line of Equation 3, we convert the fuzzy match score into its corresponding interval , and incorporate all possible combinations of TM target phrases. Afterwards, we select the best one in the third line. Finally, in the fourth line, we introduce the source matching status and the target linking status (the detailed features will be defined later). Since we might have several possible TM target phrases , the one with the maximum score will be adopted during decoding.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Where",
                "sec_num": null
            },
            {
                "text": "The first factor in the above formula (3) is just the typical phrase-based SMT model, and the second factor (to be specified in Section 3) is the information derived from the TM sentence pair. Therefore, we can still keep the original phrase-based SMT model and only pay attention to how to extract useful information from the best TM sentence pair to guide SMT decoding.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Where",
                "sec_num": null
            },
            {
                "text": "Three integrated models are proposed to incorporate different features as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Proposed Models",
                "sec_num": "3"
            },
            {
                "text": "In this simplest model, we only consider Target Phrase Content Matching Status (TCM) for . For , we consider four different features at the same time: Source Phrase Content Matching Status (SCM), Number of Linking Neighbors (NLN), Source Phrase Length (SPL), and Sentence End Punctuation Indicator (SEP). Those features will be defined below.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model-I",
                "sec_num": "3.1"
            },
            {
                "text": "is then specified as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model-I",
                "sec_num": "3.1"
            },
            {
                "text": "All features incorporated in this model are specified as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model-I",
                "sec_num": "3.1"
            },
            {
                "text": "The fuzzy match score (FMS) between source sentence and TM source sentence indicates the reliability of the given TM sentence, and is defined as (Sikes, 2007) :",
                "cite_spans": [
                    {
                        "start": 145,
                        "end": 158,
                        "text": "(Sikes, 2007)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "TM Fuzzy Match Interval (z):",
                "sec_num": null
            },
            {
                "text": "is the word-based Levenshtein Distance (Levenshtein, 1966) between and . We equally divide FMS into ten fuzzy match intervals such as: [0.9, 1.0), [0.8, 0.9) etc., and the index specifies the corresponding interval. For example, since the fuzzy match score between and in Figure 1 is 0.667, then .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 272,
                        "end": 280,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Where",
                "sec_num": null
            },
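As a concrete illustration of the definitions above, the word-based Levenshtein distance and the resulting fuzzy match score can be sketched as follows. This is a minimal sketch: the paper's exact formula is elided in this text, and normalizing by the longer sentence length is an assumption following the common Sikes-style definition.

```python
def word_levenshtein(a, b):
    """Word-based Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i] + [0] * len(b)
        for j, wb in enumerate(b, 1):
            cur[j] = min(prev[j] + 1,                 # deletion
                         cur[j - 1] + 1,              # insertion
                         prev[j - 1] + (wa != wb))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_match_score(src, tm_src):
    """FMS in [0, 1]; normalization by the longer sentence is assumed."""
    return 1.0 - word_levenshtein(src, tm_src) / max(len(src), len(tm_src))
```

For two 3-word sentences differing in one word, FMS = 1 - 1/3 ≈ 0.667, which falls in the interval [0.6, 0.7), matching the worked example in the text.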
            {
                "text": "It indicates the content matching status between and , and reflects the quality of . Because is nearly perfect when FMS is high, if the similarity between and is high, it implies that the given is possibly a good candidate. It is a member of {Same, High, Low, NA (Not-Applicable)}, and is specified as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Target Phrase Content Matching Status (TCM):",
                "sec_num": null
            },
            {
                "text": "(1) If is not null: (a) if , ; (b) else if , ; (c) else, ; (2) If is null, ;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Target Phrase Content Matching Status (TCM):",
                "sec_num": null
            },
            {
                "text": "Here is null means that either there is no corresponding TM source phrase or there is no corresponding TM target phrase aligned with . In the example of Figure 1 , assume that the given is \"\u5173\u8054 5 \u7684 6 \u5bf9\u8c61 7 \" and is \"object that is associated\". If is \"object 2 that 3 is 4 associated 5 \", ; if is \"an 1 object 2 that 3 is 4 associated 5 \", .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 153,
                        "end": 161,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Target Phrase Content Matching Status (TCM):",
                "sec_num": null
            },
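The TCM decision rules above lost their symbols in this extraction, so the exact matching conditions are not recoverable here. A hedged sketch of the {Same, High, Low, NA} classification, using a crude word-overlap similarity and an assumed threshold of 0.5 as stand-ins for the paper's conditions, might look like:

```python
def overlap_sim(a, b):
    """Crude word-overlap similarity; a stand-in for the paper's
    (elided) phrase-similarity test."""
    if not a or not b:
        return 0.0
    return len(set(a) & set(b)) / max(len(a), len(b))

def tcm(tgt, tm_tgt, high=0.5):
    """Target Phrase Content Matching Status: Same/High/Low/NA.
    The threshold `high` is an assumed placeholder, not the paper's value."""
    if tm_tgt is None:            # no aligned TM target phrase
        return "NA"
    if tgt == tm_tgt:             # exact content match
        return "Same"
    return "High" if overlap_sim(tgt, tm_tgt) >= high else "Low"
```

With the Figure 1 phrases, "object that is associated" against "an object that is associated" overlaps in 4 of 5 words, so this sketch labels it High rather than Same.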
            {
                "text": "Source Phrase Content Matching Status (SCM): Which indicates the content matching status between and , and it affects the matching status of and greatly. The more similar is to , the more similar is to . It is a member of {Same, High, Low, NA} and is defined as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Target Phrase Content Matching Status (TCM):",
                "sec_num": null
            },
            {
                "text": "(1) If is not null: (a) if , ; (b) else if , ; (c) else, ; (2) If is null, ;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Target Phrase Content Matching Status (TCM):",
                "sec_num": null
            },
            {
                "text": "Here is null means that there is no corresponding TM source phrase for the given source phrase . Take the source phrase \"\u5173\u8054 5 \u7684 6 \u5bf9\u8c61 7 \" in Figure 1 for an example, since its corresponding is \"\u5173\u8054 4 \u7684 5 \u5bf9\u8c61 6 \", then .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 140,
                        "end": 148,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Target Phrase Content Matching Status (TCM):",
                "sec_num": null
            },
            {
                "text": "Usually, the context of a source phrase would affect its target translation. The more similar the context are, the more likely that the translations are the same. Therefore, this NLN feature reflects the number of matched neighbors (words) and it is a vector of <x, y>. Where \"x\" denotes the number of matched source neighbors; and \"y\" denotes how many those neighbors are also linked to target words (not null), which also affects the TM target phrase selection. This feature is a member of {<x, y>: <2, 2>, <2, 1>, <2, 0>, <1, 1>, <1, 0>, <0, 0>}. For the source phrase \"\u5173\u8054 5 \u7684 6 \u5bf9\u8c61 7 \" in Figure 1 , the corresponding TM source phrase is \"\u5173\u8054 4 \u7684 5 \u5bf9\u8c61 6 \" . As only their right neighbors \"\u3002 8 \" and \"\u3002 7 \" are matched, and \"\u3002 7 \" is aligned with \". 10 \", NLN will be <1, 1>.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 592,
                        "end": 600,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Number of Linking Neighbors (NLN):",
                "sec_num": null
            },
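Under simplified assumptions (phrase spans as half-open [start, end) token-index pairs, and the TM word alignment given as a map from TM source index to TM target index, with unaligned words absent or mapped to None), the <x, y> computation might be sketched as:

```python
def nln(src, tm_src, span, tm_span, tm_align):
    """Number of Linking Neighbors <x, y>.

    src / tm_src : token lists of the input and TM source sentences
    span / tm_span : (start, end) of the phrase in each sentence
    tm_align : dict mapping TM source index -> TM target index (or None)
    """
    x = y = 0
    # compare the left neighbors and the right neighbors of both phrases
    pairs = [(span[0] - 1, tm_span[0] - 1),   # left neighbors
             (span[1],     tm_span[1])]       # right neighbors
    for i, j in pairs:
        if 0 <= i < len(src) and 0 <= j < len(tm_src) and src[i] == tm_src[j]:
            x += 1                            # matched source neighbor
            if tm_align.get(j) is not None:
                y += 1                        # ... that is also aligned
    return (x, y)
```

Mirroring the Figure 1 case (only the right sentence-end punctuation neighbors match, and the TM one is aligned to "."), this sketch yields <1, 1>.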
            {
                "text": "Usually the longer the source phrase is, the more reliable the TM target phrase is. For example, the corresponding for the source phrase with 5 words would be more reliable than that with only one word. This feature denotes the number of words included in , and is a member of {1, 2, 3, 4, \u22655}. For the case \"\u5173\u8054 5 \u7684 6 \u5bf9\u8c61 7 \", SPL will be 3.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Source Phrase Length (SPL):",
                "sec_num": null
            },
            {
                "text": "Which indicates whether the current phrase is a punctuation at the end of the sentence, and is a member of {Yes, No}. For example, the SEP for \"\u5173\u8054 5 \u7684 6 \u5bf9\u8c61 7 \" will be \"No\". It is introduced because the SCM and TCM for a sentence-end-punctuation are always \"Same\" regardless of other features. Therefore, it is used to distinguish this special case from other cases.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sentence End Punctuation Indicator (SEP):",
                "sec_num": null
            },
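The two length- and punctuation-based features can be sketched directly; the set of sentence-end punctuation marks below is an assumption, not taken from the paper.

```python
SENT_END_PUNCT = {"\u3002", ".", "!", "?", "\uff01", "\uff1f"}  # assumed set

def spl(src_phrase):
    """Source Phrase Length, quantized into {1, 2, 3, 4, '>=5'}."""
    n = len(src_phrase)
    return n if n < 5 else ">=5"

def sep(src_phrase, src_sentence):
    """'Yes' iff the phrase is exactly the sentence-end punctuation token."""
    is_last = src_phrase == src_sentence[-len(src_phrase):]
    return "Yes" if (len(src_phrase) == 1 and is_last
                     and src_phrase[0] in SENT_END_PUNCT) else "No"
```

For the 3-word phrase in the running example, spl returns 3 and sep returns "No".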
            {
                "text": "As Model-I ignores the relationship among various possible TM target phrases, we add two features TM Candidate Set Status (CSS) and Longest TM Candidate Indicator (LTC) to incorporate this relationship among them. Since CSS is redundant after LTC is known, we thus ignore it for evaluating TCM probability in the following derivation:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model-II",
                "sec_num": "3.2"
            },
            {
                "text": "The two new features CSS and LTC adopted in Model-II are defined as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model-II",
                "sec_num": "3.2"
            },
            {
                "text": "TM Candidate Set Status (CSS): Which restricts the possible status of , and is a member of {Single, Left-Ext, Right-Ext, Both-Ext, NA}. Where \"Single\" means that there is only one candidate for the given source phrase ; \"Left-Ext\" means that there are multiple candidates, and all the candidates are generated by extending only the left boundary; \"Right-Ext\" means that there are multiple candidates, and all the candidates are generated by only extending to the right; \"Both-Ext\" means that there are multiple candidates, and the candidates are generated by extending to both sides; \"NA\" means that is null. For \"\u5173\u8054 4 \u7684 5 \u5bf9\u8c61 6 \" in Figure 1 , the linked TM target phrase is \"object 2 that 3 is 4 associated 5 \", and there are 5 other candidates by extending to both sides. Therefore, .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 633,
                        "end": 641,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Model-II",
                "sec_num": "3.2"
            },
            {
                "text": "Which indicates whether the given is the longest candidate or not, and is a member of {Original, Left-Longest, Right-Longest, Both-Longest, Medium, NA}. Where \"Original\" means that the given is the one without extension; \"Left-Longest\" means that the given is only extended to the left and is the longest one; \"Right-Longest\" means that the given is only extended to the right and is the longest one; \"Both-Longest\" means that the given is extended to both sides and is the longest one; \"Medium\" means that the given has been extended but not the longest one; \"NA\" means that is null. For \"object 2 that 3 is 4 associated 5 \" in Figure 1, ; for \"an 1 object 2 that 3 is 4 associated 5 \", ; for the longest \"an 1 object 2 that 3 is 4 associated 5 with 6 the 7 \", .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 629,
                        "end": 638,
                        "text": "Figure 1,",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Longest TM Candidate Indicator (LTC):",
                "sec_num": null
            },
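The CSS and LTC definitions above can be sketched over candidate target spans, assuming each candidate is a half-open (start, end) token-index pair in the TM target sentence and `original` is the unextended candidate:

```python
def css(candidates, original):
    """TM Candidate Set Status for a set of target spans (start, end)."""
    if not candidates:
        return "NA"
    if len(candidates) == 1:
        return "Single"
    left = any(s < original[0] for s, _ in candidates)
    right = any(e > original[1] for _, e in candidates)
    if left and right:
        return "Both-Ext"
    return "Left-Ext" if left else "Right-Ext"

def ltc(span, candidates, original):
    """Longest TM Candidate Indicator for one candidate span."""
    if span is None:
        return "NA"
    if span == original:
        return "Original"
    longest = max(candidates, key=lambda c: c[1] - c[0])
    if span != longest:
        return "Medium"
    if span[0] < original[0] and span[1] > original[1]:
        return "Both-Longest"
    return "Left-Longest" if span[0] < original[0] else "Right-Longest"
```

Encoding the Figure 1 example with "object that is associated" as tokens 2..5 (span (2, 6)) and five extensions up to "an object that is associated with the" (span (1, 8)) reproduces CSS = Both-Ext, LTC(original) = Original, LTC of a partial extension = Medium, and LTC of the longest candidate = Both-Longest.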
            {
                "text": "The abovementioned integrated models ignore the reordering information implied by TM. We assume that CPM is independent with SPL and SEP, because the length of source phrase would not affect reordering too much and SEP is used to distinguish the sentence end punctuation with other phrases. (2) If is null but is not null, then find the first which is not null ( starts from 2) 2 : (a) If is on the right of , ; (b) If is not on the right of : i. If there are cross parts between and , ; ii. Otherwise, .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model-III",
                "sec_num": "3.3"
            },
            {
                "text": "(3) If is null, .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model-III",
                "sec_num": "3.3"
            },
            {
                "text": "In Figure 1 , assume that , and are \"gets an\", \"object that is associated with\" and \"gets 0 an 1 \", respectively. For \"object 2 that 3 is 4 associated 5 \", because is on the right of and they are adjacent pair, and both boundary words (\"an\" and \"an 1 \"; \"object\" and \"object 2 \") are matched,",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 11,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Model-III",
                "sec_num": "3.3"
            },
            {
                "text": "; for \"an 1 object 2 that 3 is 4 associated 5 \", because there are cross parts \"an 1 \" between and , . On the other hand, assume that , and are \"gets\", \"object that is associated with\" and \"gets 0 \", respectively. For \"an 1 object 2 that 3 is 4 associated 5 \", because and are adjacent pair, but the left boundary words of and (\"object\" and \"an 1 \") are not matched, ; for \"object 2 that 3 is 4 associated 5 \", because is on the right of but they are not adjacent pair, therefore,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model-III",
                "sec_num": "3.3"
            },
            {
                "text": ". One more example, assume that , and are \"the annotation label\", \"object that is associated with\" and \"the 7 annotation 8 label 9 \", respectively. For \"an 1 object 2 that 3 is 4 associated 5 \", because is on the left of , and there are no cross parts, .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model-III",
                "sec_num": "3.3"
            },
            {
                "text": "Our TM database consists of computer domain Chinese-English translation sentence-pairs, which contains about 267k sentence-pairs. The average length of Chinese sentences is 13.85 words and that of English sentences is 13.86 words. We randomly selected a development set and a test set, and then the remaining sentence pairs are for training set. The detailed corpus statistics are shown in Table 1 . Furthermore, development set and test set are divided into various intervals according to their best fuzzy match scores. Corpus statistics for each interval in the test set are shown in Table 2 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 390,
                        "end": 397,
                        "text": "Table 1",
                        "ref_id": "TABREF4"
                    },
                    {
                        "start": 586,
                        "end": 593,
                        "text": "Table 2",
                        "ref_id": "TABREF5"
                    }
                ],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "4.1"
            },
            {
                "text": "For the phrase-based SMT system, we adopted the Moses toolkit (Koehn et al., 2007) . The system configurations are as follows: GIZA++ (Och and Ney, 2003) is used to obtain the bidirectional word alignments. Afterwards, \"intersection\" 3 refinement (Koehn et al., 2003 ) is adopted to extract phrase-pairs. We use the SRI Language Model toolkit (Stolcke, 2002) to train a 5-gram model with modified Kneser-Ney smoothing (Kneser and Ney, 1995; Chen and Goodman, 1998) on the target-side (English) training corpus. All the feature weights and the weight for each probability factor (3 factors for Model-III) are tuned on the development set with minimumerror-rate training (MERT) (Och, 2003) . The maximum phrase length is set to 7 in our experiments.",
                "cite_spans": [
                    {
                        "start": 62,
                        "end": 82,
                        "text": "(Koehn et al., 2007)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 134,
                        "end": 153,
                        "text": "(Och and Ney, 2003)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 247,
                        "end": 266,
                        "text": "(Koehn et al., 2003",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 343,
                        "end": 358,
                        "text": "(Stolcke, 2002)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 418,
                        "end": 440,
                        "text": "(Kneser and Ney, 1995;",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 441,
                        "end": 464,
                        "text": "Chen and Goodman, 1998)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 676,
                        "end": 687,
                        "text": "(Och, 2003)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "4.1"
            },
            {
                "text": "In this work, the translation performance is measured with case-insensitive BLEU-4 score (Papineni et al., 2002) and TER score (Snover et al., 2006) . Statistical significance test is conducted with re-sampling (1,000 times) approach (Koehn, 2004) in 95% confidence level.",
                "cite_spans": [
                    {
                        "start": 89,
                        "end": 112,
                        "text": "(Papineni et al., 2002)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 127,
                        "end": 148,
                        "text": "(Snover et al., 2006)",
                        "ref_id": "BIBREF24"
                    },
                    {
                        "start": 234,
                        "end": 247,
                        "text": "(Koehn, 2004)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "4.1"
            },
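The significance test can be sketched as Koehn (2004)-style paired bootstrap resampling over per-sentence scores. This is a simplification: real BLEU is computed at corpus level from resampled sufficient statistics, not by summing per-sentence scores, but the resampling logic is the same.

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=1000, seed=13):
    """Fraction of resamples in which system A beats system B.

    scores_a / scores_b: per-sentence quality scores for the same
    test sentences under the two systems.
    """
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        # resample sentence indices with replacement
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_resamples
```

System A is judged significantly better at the 95% confidence level when the returned fraction exceeds 0.95.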
            {
                "text": "To estimate the probabilities of proposed models, the corresponding phrase segmentations for bilingual sentences are required. As we want to check what actually happened during decoding in the real situation, cross-fold translation is used to obtain the corresponding phrase segmentations. We first extract 95% of the bilingual sentences as a new training corpus to train a SMT system. Afterwards, we generate the corresponding phrase segmentations for the remaining 5% bi-lingual sentences with Forced Decoding (Li et al., 2000; Zollmann et al., 2008; Auli et al., 2009; Wisniewski et al., 2010) , which searches the best phrase segmentation for the specified output. Having repeated the above steps 20 times 4 , we obtain the corresponding phrase segmentations for the SMT training data (which will then be used to train the integrated models).",
                "cite_spans": [
                    {
                        "start": 512,
                        "end": 529,
                        "text": "(Li et al., 2000;",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 530,
                        "end": 552,
                        "text": "Zollmann et al., 2008;",
                        "ref_id": "BIBREF29"
                    },
                    {
                        "start": 553,
                        "end": 571,
                        "text": "Auli et al., 2009;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 572,
                        "end": 596,
                        "text": "Wisniewski et al., 2010)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cross-Fold Translation",
                "sec_num": "4.2"
            },
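The 95%/5% rotation described above amounts to a plain 20-fold partition of the training corpus; the forced-decoding step itself is outside the scope of this sketch.

```python
def twenty_fold_splits(corpus, k=20):
    """Yield (train, held_out) pairs: each round holds out one 1/k
    slice (5% for k=20) for forced decoding and keeps the remaining
    portion (95%) for training."""
    n = len(corpus)
    for f in range(k):
        lo, hi = f * n // k, (f + 1) * n // k
        yield corpus[:lo] + corpus[hi:], corpus[lo:hi]
```

Iterating over all 20 folds covers every sentence pair exactly once as held-out data, which is how the full SMT training set receives phrase segmentations.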
            {
                "text": "Due to OOV words and insertion words, not all given source sentences can generate the desired results through forced decoding. Fortunately, in our work, 71.7% of the training bilingual sentences can generate the corresponding target results. The remaining 28.3% of the sentence pairs are thus not adopted for generating training samples. Furthermore, more than 90% obtained source phrases are observed to be less than 5 words, which explains why five different quantization levels are adopted for Source Phrase Length (SPL) in section 3.1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Cross-Fold Translation",
                "sec_num": "4.2"
            },
            {
                "text": "After obtaining all the training samples via crossfold translation, we use Factored Language Model toolkit (Kirchhoff et al., 2007) to estimate the probabilities of integrated models with Witten-Bell smoothing (Bell et al., 1990; Witten et al., 1991) Table 3 : Translation Results (BLEU%). Scores marked by \"*\" are significantly better (p < 0.05) than both TM and SMT systems, and those marked by \"#\" are significantly better (p < 0.05) than Koehn-10.",
                "cite_spans": [
                    {
                        "start": 107,
                        "end": 131,
                        "text": "(Kirchhoff et al., 2007)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 210,
                        "end": 229,
                        "text": "(Bell et al., 1990;",
                        "ref_id": null
                    },
                    {
                        "start": 230,
                        "end": 250,
                        "text": "Witten et al., 1991)",
                        "ref_id": "BIBREF27"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 251,
                        "end": 258,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Translation Results",
                "sec_num": "4.3"
            },
            {
                "text": "Intervals TM SMT Model-I Model-II Model-III 38.10 34.49 Table 4 : Translation Results (TER%). Scores marked by \"*\" are significantly better (p < 0.05) than both TM and SMT systems, and those marked by \"#\" are significantly better (p < 0.05) than conducted using the Moses phrase-based decoder (Koehn et al., 2007) . Table 3 and 4 give the translation results of TM, SMT, and three integrated models in the test set. In the tables, the best translation results (either in BLEU or TER) at each interval have been marked in bold. Scores marked by \"*\" are significantly better (p < 0.05) than both the TM and the SMT systems.",
                "cite_spans": [
                    {
                        "start": 293,
                        "end": 313,
                        "text": "(Koehn et al., 2007)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 56,
                        "end": 63,
                        "text": "Table 4",
                        "ref_id": null
                    },
                    {
                        "start": 316,
                        "end": 323,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Translation Results",
                "sec_num": "4.3"
            },
            {
                "text": "It can be seen that TM significantly exceeds SMT at the interval [0.9, 1.0) in TER score, which illustrates why professional translators prefer TM rather than SMT as their assistant tool. Compared with TM and SMT, Model-I is significantly better than the SMT system in either BLEU or TER when the fuzzy match score is above 0.7; Model-II significantly outperforms both the TM and the SMT systems in either BLEU or TER when the fuzzy match score is above 0.5; Model-III significantly exceeds both the TM and the SMT systems in either BLEU or TER when the fuzzy match score is above 0.4. All these improvements show that our integrated models have combined the strength of both TM and SMT.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation Results",
                "sec_num": "4.3"
            },
            {
                "text": "However, the improvements from integrated models get less when the fuzzy match score decreases. For example, Model-III outperforms SMT 8.03 BLEU points at interval [0.9, 1.0), while the advantage is only 2.97 BLEU points at interval [0.6, 0.7). This is because lower fuzzy match score means that there are more unmatched parts between and ; the output of TM is thus less reliable.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation Results",
                "sec_num": "4.3"
            },
            {
                "text": "Across all intervals (the last row in the table), Model-III not only achieves the best BLEU score (56.51), but also gets the best TER score (33.26). If intervals are evaluated separately, when the fuzzy match score is above 0.4, Model-III outperforms both Model-II and Model-I in either BLEU or TER. Model-II also exceeds Model-I in either BLEU or TER. The only exception is at interval [0.5, 0.6), in which Model-I achieves the best TER score. This might be due to that the optimization criterion for MERT is BLEU rather than TER in our work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation Results",
                "sec_num": "4.3"
            },
            {
                "text": "In order to compare our proposed models with previous work, we re-implement two XML-Markup approaches: (Koehn and Senellart, 2010) and , which are denoted as Koehn-10 and Ma-11, respectively. They are selected because they report superior performances in the literature. A brief description of them is as follows: Koehn et al. (2010) first find out the unmatched parts between the given source sentence and TM source sentence. Afterwards, for each unmatched phrase in the TM source sentence, they replace its corresponding translation in the TM target sentence by the corresponding source phrase in the input sentence, and then mark the substitution part. After replacing the corresponding translations of all unmatched source phrases in the TM target sentence, an XML input sentence (with mixed TM target phrases and marked input source phrases) is thus obtained. The SMT decoder then only translates the unmatched/marked source phrases and gets the desired results. Therefore, the inserted parts in the TM target sentence are automatically included. They use fuzzy match score to determine whether the current sentence should be marked or not; and their experiments show that this method is only effective when the fuzzy match score is above 0.8. think fuzzy match score is not reliable and use a discriminative learning method to decide whether the current sentence should be marked or not. Another difference between Ma-11 and Koehn-10 is how the XML input is constructed. In constructing the XML input sentence, Ma-11 replaces each matched source phrase in the given source sentence with the corresponding TM target phrase. Therefore, the inserted parts in the TM target sentence are not included. In Ma's another paper , more linguistic features for discriminative learning are also added. In our work, we only re-implement the XML-Markup method used in , but do not implement the discriminative learning method. 
This is because the features adopted in their discriminative learning are complicated and difficult to re-implement. However, the proposed Model-III even outperforms the upper bound of their methods, which will be discussed later. Table 3 and 4 give the translation results of Koehn-10 and Ma-11 (without the discriminator). Scores marked by \"#\" are significantly better (p < 0.05) than Koehn-10. Besides, the upper bound of is also given in the tables, which is denoted as Ma-11-U. We calculate this upper bound according to the method described in . Since only add more linguistic features to the discriminative learning method, the upper bound of is still the same with ; therefore, Ma-11-U applies for both cases.",
                "cite_spans": [
                    {
                        "start": 103,
                        "end": 130,
                        "text": "(Koehn and Senellart, 2010)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 314,
                        "end": 333,
                        "text": "Koehn et al. (2010)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 2150,
                        "end": 2157,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Comparison with Previous Work",
                "sec_num": "4.4"
            },
            {
                "text": "Source \u5982\u679c 0 \u7981\u7528 1 \u6b64 2 \u7b56\u7565 3 \u8bbe\u7f6e 4 \uff0c",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Previous Work",
                "sec_num": "4.4"
            },
            {
                "text": "It is observed that Model-III significantly exceeds Koehn-10 at all intervals. More importantly, the proposed models achieve much better TER score than the TM system does at interval [0.9, 1.0), but Koehn-10 does not even exceed the TM system at this interval. Furthermore, Model-III is much better than Ma-11-U at most intervals. Therefore, it can be concluded that the proposed models outperform the pipeline approaches significantly. Figure 2 gives an example at interval [0.9, 1.0), which shows the difference among different system outputs. It can be seen that \"you do\" is redundant for Koehn-10, because they are insertions and thus are kept in the XML input. However, SMT system still inserts another \"you\", regardless of \"you do\" has already existed. This problem does not occur at Ma-11, but it misses some words and adopts one wrong permutation. Besides, Model-I selects more right words than SMT does but still puts them in wrong positions due to ignoring TM reordering information. In this example, Model-II obtains the same results with Model-I because it also lacks reordering information. Last, since Model-III considers both TM content and TM position information, it gives a perfect translation.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 437,
                        "end": 445,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Comparison with Previous Work",
                "sec_num": "4.4"
            },
            {
                "text": "Unlike the previous pipeline approaches, which directly merge TM phrases into the final translation result, we integrate TM information of each source phrase into the phrase-based SMT at decoding. In addition, all possible TM target phrases are kept and the proposed models select the best one during decoding via referring SMT information. Besides, the integrated model considers the probability information of both SMT and TM factors.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Work",
                "sec_num": "5"
            },
            {
                "text": "The experiments show that the proposed Model-III significantly outperforms both the TM and the SMT systems (p < 0.05) in both BLEU and TER when the fuzzy match score is above 0.4. Compared with the pure SMT system, Model-III achieves an overall improvement of 3.48 BLEU points and a reduction of 2.62 TER points on a Chinese-English TM database. Furthermore, Model-III significantly exceeds all previous pipeline approaches. Similar improvements are also observed on the Hansards portion of LDC2004T08 (not shown in this paper due to space limitations). Since no language-dependent features are adopted, the proposed approaches can be easily adapted to other language pairs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Work",
                "sec_num": "5"
            },
            {
                "text": "Moreover, following the approaches of Koehn-10 and Ma-11 (to give a fair comparison), the training data for SMT and TM are the same in the current experiments. However, the TM is expected to play an even more important role when the SMT training set differs from the TM database, as additional phrase pairs that are unseen in the SMT phrase table can be extracted from the TM (and then dynamically added to the SMT phrase table at decoding time). Another study of ours has shown that the integrated model is even more effective when the TM database and the SMT training set come from different corpora in the same domain (not shown in this paper). In addition, more source phrases could be matched if a set of high-FMS sentences, instead of only the sentence with the highest FMS, were extracted and consulted at the same time, which could further improve performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Work",
                "sec_num": "5"
            },
            {
                "text": "Last, some related approaches (Smith and Clark, 2009; Phillips, 2011) combine SMT and example-based machine translation (EBMT) (Nagao, 1984). It would also be interesting to compare our integrated approach with theirs.",
                "cite_spans": [
                    {
                        "start": 30,
                        "end": 53,
                        "text": "(Smith and Clark, 2009;",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 54,
                        "end": 69,
                        "text": "Phillips, 2011)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 127,
                        "end": 140,
                        "text": "(Nagao, 1984)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Work",
                "sec_num": "5"
            },
            {
                "text": "We mean \"sub-sentential segments\" in this work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "It can be identified by simply memorizing the index of the nearest non-null element during search.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "\"grow-diag-final\" and \"grow-diag-final-and\" were also tested. However, \"intersection\" was the best option in our experiments, especially for the high fuzzy match intervals.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "This training process only took about 10 hours on our Ubuntu server (Intel 4-core Xeon 3.47GHz, 132 GB of RAM).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The research work has been funded by the Hi- The authors would like to thank the anonymous reviewers for their insightful comments and suggestions. Our sincere thanks are also extended to Dr. Yanjun Ma and Dr. Yifan He for their valuable discussions during this study.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "A systematic analysis of translation model search spaces",
                "authors": [
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Auli",
                        "suffix": ""
                    },
                    {
                        "first": "Adam",
                        "middle": [],
                        "last": "Lopez",
                        "suffix": ""
                    },
                    {
                        "first": "Hieu",
                        "middle": [],
                        "last": "Hoang",
                        "suffix": ""
                    },
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the Fourth Workshop on Statistical Machine Translation",
                "volume": "",
                "issue": "",
                "pages": "224--232",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michael Auli, Adam Lopez, Hieu Hoang and Philipp Koehn. 2009. A systematic analysis of translation model search spaces. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 224-232.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Dynamic translation memory: using statistical machine translation to improve translation memory fuzzy matches",
                "authors": [
                    {
                        "first": "Ergun",
                        "middle": [],
                        "last": "Bi\u00e7ici",
                        "suffix": ""
                    },
                    {
                        "first": "Marc",
                        "middle": [],
                        "last": "Dymetman",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 9th International Conference on Intelligent Text Processing and Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "454--465",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ergun Bi\u00e7ici and Marc Dymetman. 2008. Dynamic translation memory: using statistical machine translation to improve translation memory fuzzy matches. In Proceedings of the 9th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2008), pages 454-465.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "An empirical study of smoothing techniques for language modeling",
                "authors": [
                    {
                        "first": "Stanley",
                        "middle": [
                            "F."
                        ],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Joshua",
                        "middle": [],
                        "last": "Goodman",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University Center for Research in Computing Technology.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Bridging SMT and TM with translation recommendation",
                "authors": [
                    {
                        "first": "Yifan",
                        "middle": [],
                        "last": "He",
                        "suffix": ""
                    },
                    {
                        "first": "Yanjun",
                        "middle": [],
                        "last": "Ma",
                        "suffix": ""
                    },
                    {
                        "first": "Josef",
                        "middle": [],
                        "last": "Van Genabith",
                        "suffix": ""
                    },
                    {
                        "first": "Andy",
                        "middle": [],
                        "last": "Way",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL)",
                "volume": "",
                "issue": "",
                "pages": "622--630",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yifan He, Yanjun Ma, Josef van Genabith and Andy Way. 2010. Bridging SMT and TM with translation recommendation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 622-630.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Rich linguistic features for translation memory-inspired consistent translation",
                "authors": [
                    {
                        "first": "Yifan",
                        "middle": [],
                        "last": "He",
                        "suffix": ""
                    },
                    {
                        "first": "Yanjun",
                        "middle": [],
                        "last": "Ma",
                        "suffix": ""
                    },
                    {
                        "first": "Andy",
                        "middle": [],
                        "last": "Way",
                        "suffix": ""
                    },
                    {
                        "first": "Josef",
                        "middle": [],
                        "last": "Van Genabith",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of the Thirteenth Machine Translation Summit",
                "volume": "",
                "issue": "",
                "pages": "456--463",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yifan He, Yanjun Ma, Andy Way and Josef van Genabith. 2011. Rich linguistic features for translation memory-inspired consistent translation. In Proceedings of the Thirteenth Machine Translation Summit, pages 456-463.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Improved backing-off for m-gram language modeling",
                "authors": [
                    {
                        "first": "Reinhard",
                        "middle": [],
                        "last": "Kneser",
                        "suffix": ""
                    },
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing",
                "volume": "",
                "issue": "",
                "pages": "181--184",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 181-184.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Factored language models tutorial",
                "authors": [
                    {
                        "first": "Katrin",
                        "middle": [],
                        "last": "Kirchhoff",
                        "suffix": ""
                    },
                    {
                        "first": "Jeff",
                        "middle": [
                            "A."
                        ],
                        "last": "Bilmes",
                        "suffix": ""
                    },
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Duh",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Katrin Kirchhoff, Jeff A. Bilmes and Kevin Duh. 2007. Factored language models tutorial. Technical report, Department of Electrical Engineering, University of Washington, Seattle, Washington, USA.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Statistical significance tests for machine translation evaluation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
                "volume": "",
                "issue": "",
                "pages": "388--395",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 388-395, Barcelona, Spain.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Moses: Open source toolkit for statistical machine translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    },
                    {
                        "first": "Hieu",
                        "middle": [],
                        "last": "Hoang",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandra",
                        "middle": [],
                        "last": "Birch",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Callison-Burch",
                        "suffix": ""
                    },
                    {
                        "first": "Marcello",
                        "middle": [],
                        "last": "Federico",
                        "suffix": ""
                    },
                    {
                        "first": "Nicola",
                        "middle": [],
                        "last": "Bertoldi",
                        "suffix": ""
                    },
                    {
                        "first": "Brooke",
                        "middle": [],
                        "last": "Cowan",
                        "suffix": ""
                    },
                    {
                        "first": "Wade",
                        "middle": [],
                        "last": "Shen",
                        "suffix": ""
                    },
                    {
                        "first": "Christine",
                        "middle": [],
                        "last": "Moran",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Zens",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Dyer",
                        "suffix": ""
                    },
                    {
                        "first": "Ond\u0159ej",
                        "middle": [],
                        "last": "Bojar",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the ACL 2007 Demo and Poster Sessions",
                "volume": "",
                "issue": "",
                "pages": "177--180",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer and Ond\u0159ej Bojar. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the ACL 2007 Demo and Poster Sessions, pages 177-180.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Statistical phrase-based translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    },
                    {
                        "first": "Franz",
                        "middle": [
                            "Josef"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Marcu",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
                "volume": "",
                "issue": "",
                "pages": "48--54",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn, Franz Josef Och and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 48-54.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Convergence of translation memory and statistical machine translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    },
                    {
                        "first": "Jean",
                        "middle": [],
                        "last": "Senellart",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "AMTA Workshop on MT Research and the Translation Industry",
                "volume": "",
                "issue": "",
                "pages": "21--31",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn and Jean Senellart. 2010. Convergence of translation memory and statistical machine translation. In AMTA Workshop on MT Research and the Translation Industry, pages 21-31.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Translation memories survey 2006: Users' perceptions around tm use",
                "authors": [
                    {
                        "first": "Elina",
                        "middle": [],
                        "last": "Lagoudaki",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the ASLIB International Conference Translating and the Computer",
                "volume": "28",
                "issue": "",
                "pages": "1--29",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Elina Lagoudaki. 2006. Translation memories survey 2006: Users' perceptions around tm use. In Proceedings of the ASLIB International Conference Translating and the Computer 28, pages 1-29.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Automatic verbal information verification for user authentication",
                "authors": [
                    {
                        "first": "Qi",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Biing-Hwang",
                        "middle": [],
                        "last": "Juang",
                        "suffix": ""
                    },
                    {
                        "first": "Qiru",
                        "middle": [],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "Chin-Hui",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "IEEE transactions on speech and audio processing",
                "volume": "8",
                "issue": "5",
                "pages": "1063--6676",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Qi Li, Biing-Hwang Juang, Qiru Zhou, and Chin-Hui Lee. 2000. Automatic verbal information verification for user authentication. IEEE transactions on speech and audio processing, Vol. 8, No. 5, pages 1063-6676.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Binary codes capable of correcting deletions, insertions, and reversals",
                "authors": [
                    {
                        "first": "Vladimir",
                        "middle": [
                            "Iosifovich"
                        ],
                        "last": "Levenshtein",
                        "suffix": ""
                    }
                ],
                "year": 1966,
                "venue": "Soviet Physics Doklady",
                "volume": "",
                "issue": "8",
                "pages": "707--710",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Vladimir Iosifovich Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10 (8), pages 707-710.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Consistent translation using discriminative learning: a translation memory-inspired approach",
                "authors": [
                    {
                        "first": "Yanjun",
                        "middle": [],
                        "last": "Ma",
                        "suffix": ""
                    },
                    {
                        "first": "Yifan",
                        "middle": [],
                        "last": "He",
                        "suffix": ""
                    },
                    {
                        "first": "Andy",
                        "middle": [],
                        "last": "Way",
                        "suffix": ""
                    },
                    {
                        "first": "Josef",
                        "middle": [],
                        "last": "Van Genabith",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "1239--1248",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yanjun Ma, Yifan He, Andy Way and Josef van Genabith. 2011. Consistent translation using discriminative learning: a translation memory-inspired approach. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1239-1248, Portland, Oregon.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "A framework of a mechanical translation between Japanese and English by analogy principle",
                "authors": [
                    {
                        "first": "Makoto",
                        "middle": [],
                        "last": "Nagao",
                        "suffix": ""
                    }
                ],
                "year": 1984,
                "venue": "Artificial and Human Intelligence: Edited Review Papers Presented at the International NATO Symposium on Artificial and Human Intelligence",
                "volume": "",
                "issue": "",
                "pages": "173--180",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Makoto Nagao. 1984. A framework of a mechanical translation between Japanese and English by analogy principle. In Alick Elithorn and Ranan Banerji (eds.), Artificial and Human Intelligence: Edited Review Papers Presented at the International NATO Symposium on Artificial and Human Intelligence. North-Holland, Amsterdam, pages 173-180.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Minimum error rate training in statistical machine translation",
                "authors": [
                    {
                        "first": "Franz Josef",
                        "middle": [],
                        "last": "Och",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "160--167",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160-167.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "A systematic comparison of various statistical alignment models",
                "authors": [
                    {
                        "first": "Franz",
                        "middle": [
                            "Josef"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Computational Linguistics",
                "volume": "",
                "issue": "1",
                "pages": "19--51",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29 (1), pages 19-51.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "BLEU: a method for automatic evaluation of machine translation",
                "authors": [
                    {
                        "first": "Kishore",
                        "middle": [],
                        "last": "Papineni",
                        "suffix": ""
                    },
                    {
                        "first": "Salim",
                        "middle": [],
                        "last": "Roukos",
                        "suffix": ""
                    },
                    {
                        "first": "Todd",
                        "middle": [],
                        "last": "Ward",
                        "suffix": ""
                    },
                    {
                        "first": "Wei-Jing",
                        "middle": [],
                        "last": "Zhu",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)",
                "volume": "",
                "issue": "",
                "pages": "311--318",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311-318.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Cunei: open-source machine translation with relevance-based models of each translation instance",
                "authors": [
                    {
                        "first": "Aaron",
                        "middle": [
                            "B"
                        ],
                        "last": "Phillips",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Machine Translation",
                "volume": "25",
                "issue": "2",
                "pages": "166--177",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Aaron B. Phillips. 2011. Cunei: open-source machine translation with relevance-based models of each translation instance. Machine Translation, 25(2), pages 166-177.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Fuzzy matching in theory and practice",
                "authors": [
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Sikes",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Multilingual",
                "volume": "18",
                "issue": "6",
                "pages": "39--43",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Richard Sikes. 2007. Fuzzy matching in theory and practice. Multilingual, 18(6):39-43.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Phrase-based machine translation in a computer-assisted translation environment",
                "authors": [
                    {
                        "first": "Michel",
                        "middle": [],
                        "last": "Simard",
                        "suffix": ""
                    },
                    {
                        "first": "Pierre",
                        "middle": [],
                        "last": "Isabelle",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the Twelfth Machine Translation Summit (MT Summit XII)",
                "volume": "",
                "issue": "",
                "pages": "120--127",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michel Simard and Pierre Isabelle. 2009. Phrase-based machine translation in a computer-assisted translation environment. In Proceedings of the Twelfth Machine Translation Summit (MT Summit XII), pages 120-127.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "EBMT for SMT: a new EBMT-SMT hybrid",
                "authors": [
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Smith",
                        "suffix": ""
                    },
                    {
                        "first": "Stephen",
                        "middle": [],
                        "last": "Clark",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the 3rd International Workshop on Example-Based Machine Translation (EBMT'09)",
                "volume": "",
                "issue": "",
                "pages": "3--10",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "James Smith and Stephen Clark. 2009. EBMT for SMT: a new EBMT-SMT hybrid. In Proceedings of the 3rd International Workshop on Example-Based Machine Translation (EBMT'09), pages 3-10, Dublin, Ireland.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "A study of translation edit rate with targeted human annotation",
                "authors": [
                    {
                        "first": "Matthew",
                        "middle": [],
                        "last": "Snover",
                        "suffix": ""
                    },
                    {
                        "first": "Bonnie",
                        "middle": [],
                        "last": "Dorr",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Schwartz",
                        "suffix": ""
                    },
                    {
                        "first": "Linnea",
                        "middle": [],
                        "last": "Micciulla",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Makhoul",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of Association for Machine Translation in the Americas (AMTA-2006)",
                "volume": "",
                "issue": "",
                "pages": "223--231",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas (AMTA-2006), pages 223-231.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "SRILM - an extensible language modeling toolkit",
                "authors": [
                    {
                        "first": "Andreas",
                        "middle": [],
                        "last": "Stolcke",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the International Conference on Spoken Language Processing",
                "volume": "",
                "issue": "",
                "pages": "311--318",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing, pages 311-318.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Assessing phrase-based translation models with oracle decoding",
                "authors": [
                    {
                        "first": "Guillaume",
                        "middle": [],
                        "last": "Wisniewski",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandre",
                        "middle": [],
                        "last": "Allauzen",
                        "suffix": ""
                    },
                    {
                        "first": "Fran\u00e7ois",
                        "middle": [],
                        "last": "Yvon",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "933--943",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Guillaume Wisniewski, Alexandre Allauzen and Fran\u00e7ois Yvon. 2010. Assessing phrase-based translation models with oracle decoding. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 933-943.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "The zero-frequency problem: estimating the probabilities of novel events in adaptive text compression",
                "authors": [
                    {
                        "first": "Ian",
                        "middle": [
                            "H"
                        ],
                        "last": "Witten",
                        "suffix": ""
                    },
                    {
                        "first": "Timothy",
                        "middle": [
                            "C"
                        ],
                        "last": "Bell",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "IEEE Transactions on Information Theory",
                "volume": "37",
                "issue": "4",
                "pages": "1085--1094",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ian H. Witten and Timothy C. Bell. 1991. The zero-frequency problem: estimating the probabilities of novel events in adaptive text compression. IEEE Transactions on Information Theory, 37(4): 1085-1094, July.",
                "links": null
            },
            "BIBREF28": {
                "ref_id": "b28",
                "title": "Seeding statistical machine translation with translation memory output through tree-based structural alignment",
                "authors": [
                    {
                        "first": "Ventsislav",
                        "middle": [],
                        "last": "Zhechev",
                        "suffix": ""
                    },
                    {
                        "first": "Josef",
                        "middle": [],
                        "last": "Van Genabith",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 4th Workshop on Syntax and Structure in Statistical Translation",
                "volume": "",
                "issue": "",
                "pages": "43--51",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ventsislav Zhechev and Josef van Genabith. 2010. Seeding statistical machine translation with translation memory output through tree-based structural alignment. In Proceedings of the 4th Workshop on Syntax and Structure in Statistical Translation, pages 43-51.",
                "links": null
            },
            "BIBREF29": {
                "ref_id": "b29",
                "title": "A systematic comparison of phrase-based, hierarchical and syntax-augmented statistical MT",
                "authors": [
                    {
                        "first": "Andreas",
                        "middle": [],
                        "last": "Zollmann",
                        "suffix": ""
                    },
                    {
                        "first": "Ashish",
                        "middle": [],
                        "last": "Venugopal",
                        "suffix": ""
                    },
                    {
                        "first": "Franz",
                        "middle": [
                            "Josef"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "Jay",
                        "middle": [],
                        "last": "Ponte",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "1145--1152",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andreas Zollmann, Ashish Venugopal, Franz Josef Och and Jay Ponte. 2008. A systematic comparison of phrase-based, hierarchical and syntax-augmented statistical MT. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 1145-1152.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF2": {
                "num": null,
                "content": "<table><tr><td/><td/><td/><td/><td/><td colspan=\"2\">boundary words of and</td><td>are</td></tr><tr><td/><td/><td/><td/><td/><td>the same,</td><td>;</td></tr><tr><td/><td/><td/><td/><td/><td>ii. Otherwise,</td><td>;</td></tr><tr><td/><td/><td/><td/><td/><td>(b) If</td><td>is on the right of</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">but they are not adjacent to each other,</td></tr><tr><td/><td/><td/><td/><td/><td/><td>;</td></tr><tr><td/><td/><td/><td/><td/><td>(c) If</td><td>is not on the right of</td></tr><tr><td/><td/><td/><td/><td/><td>:</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">i. If there are cross parts between</td></tr><tr><td/><td/><td/><td/><td/><td>and</td><td>,</td><td>;</td></tr><tr><td/><td/><td/><td/><td/><td>ii. Otherwise,</td><td>;</td></tr><tr><td/><td/><td/><td/><td/><td>indi-</td></tr><tr><td colspan=\"6\">cates the matching status between the relative</td></tr><tr><td>position of</td><td/><td colspan=\"4\">and the relative position of</td></tr><tr><td/><td/><td colspan=\"3\">. It checks if</td><td>are</td></tr><tr><td>positioned</td><td>in</td><td>the</td><td>same</td><td>order</td><td>with</td></tr><tr><td/><td/><td colspan=\"4\">, and reflects the quality of</td></tr><tr><td colspan=\"6\">ordering the given target candidate . It is a</td></tr><tr><td colspan=\"6\">member of {Adjacent-Same, Adjacent-Substitute,</td></tr><tr><td colspan=\"2\">Linked-Interleaved,</td><td colspan=\"3\">Linked-Cross,</td><td>Linked-</td></tr><tr><td colspan=\"6\">Reversed, Skip-Forward, Skip-Cross, Skip-</td></tr><tr><td colspan=\"6\">Reversed, NA}. 
Recall that is always right ad-</td></tr><tr><td>jacent to</td><td colspan=\"5\">, then various cases are defined as</td></tr><tr><td>follows:</td><td/><td/><td/><td/><td/></tr><tr><td>(1) If both</td><td/><td>and</td><td/><td colspan=\"2\">are not null:</td></tr><tr><td>(a) If</td><td colspan=\"4\">is on the right of</td><td/></tr><tr><td colspan=\"6\">and they are also adjacent to each other:</td></tr><tr><td colspan=\"5\">i. If the right boundary words of</td><td>and</td></tr><tr><td/><td/><td colspan=\"4\">are the same, and the left</td></tr></table>",
                "text": "The new feature CPM adopted in Model-III is defined",
                "type_str": "table",
                "html": null
            },
            "TABREF3": {
                "num": null,
                "content": "<table><tr><td/><td>Train</td><td>Develop</td><td>Test</td></tr><tr><td>#Sentences</td><td>261,906</td><td>2,569</td><td>2,576</td></tr><tr><td>#Chn. Words</td><td>3,623,516</td><td>38,585</td><td>38,648</td></tr><tr><td>#Chn. VOC.</td><td>43,112</td><td>3,287</td><td>3,460</td></tr><tr><td>#Eng. Words</td><td>3,627,028</td><td>38,329</td><td>38,510</td></tr><tr><td>#Eng. VOC.</td><td>44,221</td><td>3,993</td><td>4,046</td></tr></table>",
                "text": "and Back-off method. Afterwards, we incorporate the TM information for each phrase at decoding. All experiments are",
                "type_str": "table",
                "html": null
            },
            "TABREF4": {
                "num": null,
                "content": "<table><tr><td>Intervals</td><td>#Sentences</td><td>#Words</td><td>W/S</td></tr><tr><td>[0.9, 1.0)</td><td>269</td><td>4,468</td><td>16.6</td></tr><tr><td>[0.8, 0.9)</td><td>362</td><td>5,004</td><td>13.8</td></tr><tr><td>[0.7, 0.8)</td><td>290</td><td>4,046</td><td>14.0</td></tr><tr><td>[0.6, 0.7)</td><td>379</td><td>4,998</td><td>13.2</td></tr><tr><td>[0.5, 0.6)</td><td>472</td><td>6,073</td><td>12.9</td></tr><tr><td>[0.4, 0.5)</td><td>401</td><td>5,921</td><td>14.8</td></tr><tr><td>[0.3, 0.4)</td><td>305</td><td>5,499</td><td>18.0</td></tr><tr><td>(0.0, 0.3)</td><td>98</td><td>2,639</td><td>26.9</td></tr><tr><td>(0.0, 1.0)</td><td>2,576</td><td>38,648</td><td>15.0</td></tr></table>",
                "text": "Corpus Statistics",
                "type_str": "table",
                "html": null
            },
            "TABREF5": {
                "num": null,
                "content": "<table><tr><td>Intervals</td><td>TM</td><td>SMT Model-I</td><td>Model-II</td><td>Model-III</td><td>Koehn-10</td><td>Ma-11</td><td>Ma-11-U</td></tr><tr><td>[0.9, 1.0)</td><td colspan=\"2\">81.31 81.38 85.44 *</td><td>86.47 *#</td><td>89.41 *#</td><td>82.79</td><td>77.72</td><td>82.78</td></tr><tr><td>[0.8, 0.9)</td><td colspan=\"2\">73.25 76.16 79.97 *</td><td>80.89 *</td><td>84.04 *#</td><td>79.74 *</td><td>73.00</td><td>77.66</td></tr><tr><td>[0.7, 0.8)</td><td colspan=\"2\">63.62 67.71 71.65 *</td><td>72.39 *</td><td>74.73 *#</td><td>71.02 *</td><td>66.54</td><td>69.78</td></tr><tr><td>[0.6, 0.7)</td><td colspan=\"2\">43.64 54.56 54.88 #</td><td>55.88 *#</td><td>57.53 *#</td><td>53.06</td><td>54.00</td><td>56.37</td></tr><tr><td>[0.5, 0.6)</td><td colspan=\"2\">27.37 46.32 47.32 *#</td><td>47.45 *#</td><td>47.54 *#</td><td>39.31</td><td>46.06</td><td>47.73</td></tr><tr><td>[0.4, 0.5)</td><td colspan=\"2\">15.43 37.18 37.25 #</td><td>37.60 #</td><td>38.18 *#</td><td>28.99</td><td>36.23</td><td>37.93</td></tr><tr><td>[0.3, 0.4)</td><td>8.24</td><td>29.27 29.52 #</td><td>29.38 #</td><td>29.15 #</td><td>23.58</td><td>29.40</td><td>30.20</td></tr><tr><td>(0.0, 0.3)</td><td>4.13</td><td>26.38 25.61 #</td><td>25.32 #</td><td>25.57 #</td><td>18.56</td><td>26.30</td><td>26.92</td></tr><tr><td>(0.0, 1.0)</td><td colspan=\"2\">40.17 53.03 54.57 *#</td><td>55.10 *#</td><td>56.51 *#</td><td>50.31</td><td>51.98</td><td>54.32</td></tr></table>",
                "text": "Corpus Statistics for Test-Set",
                "type_str": "table",
                "html": null
            },
            "TABREF6": {
                "num": null,
                "content": "<table><tr><td colspan=\"2\">TM Target if 0 you 1 do 2 not 3 configure 4 this 5 policy 6 setting 7 TM 0-0 1-3 2-4 3-5 4-6 5-7 6-8 7-9 8-10 9-11 11-15 13-21 14-19 15-17 16-18 17-22 18-23 19-24</td></tr><tr><td>Alignment</td><td>21-26 22-27 23-29 24-31</td></tr><tr><td/><td>if you disable this policy setting , internet explorer does not prompt users to install internet for</td></tr><tr><td>SMT</td><td>new versions of the browser . [Miss 7 target words: 9~12, 20~21, 28; Has one wrong permuta-</td></tr><tr><td/><td>tion]</td></tr><tr><td/><td>if you do you disable this policy setting , internet explorer does not check the internet for new</td></tr><tr><td>Koehn-10</td><td>versions of the browser , so does not prompt users to install them . [Insert two spurious target</td></tr><tr><td/><td>words]</td></tr><tr><td/><td>if you disable this policy setting , internet explorer does not prompt users to install internet for</td></tr><tr><td>Ma-11</td><td>new versions of the browser . [Miss 7 target words: 9~12, 20~21, 28; Has one wrong permuta-</td></tr><tr><td/><td>tion]</td></tr><tr><td/><td>if you disable this policy setting , internet explorer does not prompt users to install new ver-</td></tr><tr><td>Model-I</td><td>sions of the browser , so does not check the internet . [Miss 2 target words: 14, 28; Has one</td></tr><tr><td/><td>wrong permutation]</td></tr><tr><td/><td>if you disable this policy setting , internet explorer does not prompt users to install new ver-</td></tr><tr><td>Model-II</td><td>sions of the browser , so does not check the internet . [Miss 2 target words: 14, 28; Has one</td></tr><tr><td/><td>wrong permutation]</td></tr><tr><td>Model-III</td><td>if you disable this policy setting , internet explorer does not check the internet for new versions of the browser , so does not prompt users to install them . 
[Exactly the same as the reference]</td></tr><tr><td/><td>Figure 2: A Translation Example at Interval [0.9, 1.0] (with FMS=0.920)</td></tr></table>",
                "text": "5 internet 6 explorer 7 \u4e0d 8 \u641c\u7d22 9 internet 10 \u67e5\u627e 11 \u6d4f\u89c8\u5668 12 \u7684 13 \u65b0 14 \u7248\u672c 15 \uff0c 16 \u56e0\u6b64 17 \u4e0d 18 \u4f1a 19 \u63d0\u793a 20 \u7528\u6237 21 \u5b89\u88c5 22 \u3002 23 Reference if 0 you 1 disable 2 this 3 policy 4 setting 5 , 6 internet 7 explorer 8 does 9 not 10 check 11 the 12 internet 13 for 14 new 15 versions 16 of 17 the 18 browser 19 , 20 so 21 does 22 not 23 prompt 24 users 25 to 26 install 27 them 28 . 29 TM Source \u5982\u679c 0 \u4e0d 1 \u914d\u7f6e 2 \u6b64 3 \u7b56\u7565 4 \u8bbe\u7f6e 5 \uff0c 6 internet 7 explorer 8 \u4e0d 9 \u641c\u7d22 10 internet 11 \u67e5\u627e 12 \u6d4f\u89c8 \u5668 13 \u7684 14 \u65b0 15 \u7248\u672c 16 \uff0c 17 \u56e0\u6b64 18 \u4e0d 19 \u4f1a 20 \u63d0\u793a 21 \u7528\u6237 22 \u5b89\u88c5 23 \u3002 24 , 8 internet 9 explorer 10 does 11 not 12 check 13 the 14 internet 15 for 16 new 17 versions 18 of 19 the 20 browser 21 , 22 so 23 does 24 not 25 prompt 26 users 27 to 28 install 29 them 30 . 31",
                "type_str": "table",
                "html": null
            }
        }
    }
}