{
    "paper_id": "P09-1018",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:55:04.008639Z"
    },
    "title": "Revisiting Pivot Language Approach for Machine Translation",
    "authors": [
        {
            "first": "Hua",
            "middle": [],
            "last": "Wu",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Oriental Plaza",
                "location": {
                    "postCode": "W2, 100738",
                    "settlement": "Tower, Beijing",
                    "country": "China"
                }
            },
            "email": "wuhua@rdc.toshiba.com.cn"
        },
        {
            "first": "Haifeng",
            "middle": [],
            "last": "Wang",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Oriental Plaza",
                "location": {
                    "postCode": "W2, 100738",
                    "settlement": "Tower, Beijing",
                    "country": "China"
                }
            },
            "email": "wanghaifeng@rdc.toshiba.com.cn"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper revisits the pivot language approach for machine translation. First, we investigate three different methods for pivot translation. Then we employ a hybrid method combining RBMT and SMT systems to fill up the data gap for pivot translation, where the sourcepivot and pivot-target corpora are independent. Experimental results on spoken language translation show that this hybrid method significantly improves the translation quality, which outperforms the method using a source-target corpus of the same size. In addition, we propose a system combination approach to select better translations from those produced by various pivot translation methods. This method regards system combination as a translation evaluation problem and formalizes it with a regression learning model. Experimental results indicate that our method achieves consistent and significant improvement over individual translation outputs.",
    "pdf_parse": {
        "paper_id": "P09-1018",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper revisits the pivot language approach for machine translation. First, we investigate three different methods for pivot translation. Then we employ a hybrid method combining RBMT and SMT systems to fill up the data gap for pivot translation, where the sourcepivot and pivot-target corpora are independent. Experimental results on spoken language translation show that this hybrid method significantly improves the translation quality, which outperforms the method using a source-target corpus of the same size. In addition, we propose a system combination approach to select better translations from those produced by various pivot translation methods. This method regards system combination as a translation evaluation problem and formalizes it with a regression learning model. Experimental results indicate that our method achieves consistent and significant improvement over individual translation outputs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Current statistical machine translation (SMT) systems rely on large parallel and monolingual training corpora to produce translations of relatively higher quality. Unfortunately, large quantities of parallel data are not readily available for some languages pairs, therefore limiting the potential use of current SMT systems. In particular, for speech translation, the translation task often focuses on a specific domain such as the travel domain. It is especially difficult to obtain such a domain-specific corpus for some language pairs such as Chinese to Spanish translation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "To circumvent the data bottleneck, some researchers have investigated to use a pivot language approach (Cohn and Lapata, 2007; Utiyama and Isahara, 2007; Bertoldi et al., 2008) . This approach introduces a third language, named the pivot language, for which there exist large source-pivot and pivot-target bilingual corpora. A pivot task was also designed for spoken language translation in the evaluation campaign of IWSLT 2008 (Paul, 2008) , where English is used as a pivot language for Chinese to Spanish translation.",
                "cite_spans": [
                    {
                        "start": 103,
                        "end": 126,
                        "text": "(Cohn and Lapata, 2007;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 127,
                        "end": 153,
                        "text": "Utiyama and Isahara, 2007;",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 154,
                        "end": 176,
                        "text": "Bertoldi et al., 2008)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 429,
                        "end": 441,
                        "text": "(Paul, 2008)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Three different pivot strategies have been investigated in the literature. The first is based on phrase table multiplication (Cohn and Lapata 2007; . It multiples corresponding translation probabilities and lexical weights in source-pivot and pivot-target translation models to induce a new source-target phrase table. We name it the triangulation method. The second is the sentence translation strategy, which first translates the source sentence to the pivot sentence, and then to the target sentence (Utiyama and Isahara, 2007; Khalilov et al., 2008) . We name it the transfer method. The third is to use existing models to build a synthetic source-target corpus, from which a source-target model can be trained (Bertoldi et al., 2008) . For example, we can obtain a source-pivot corpus by translating the pivot sentence in the source-pivot corpus into the target language with pivot-target translation models. We name it the synthetic method.",
                "cite_spans": [
                    {
                        "start": 125,
                        "end": 147,
                        "text": "(Cohn and Lapata 2007;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 503,
                        "end": 530,
                        "text": "(Utiyama and Isahara, 2007;",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 531,
                        "end": 553,
                        "text": "Khalilov et al., 2008)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 715,
                        "end": 738,
                        "text": "(Bertoldi et al., 2008)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The working condition with the pivot language approach is that the source-pivot and pivot-target parallel corpora are independent, in the sense that they are not derived from the same set of sentences, namely independently sourced corpora. Thus, some linguistic phenomena in the sourcepivot corpus will lost if they do not exist in the pivot-target corpus, and vice versa. In order to fill up this data gap, we make use of rule-based machine translation (RBMT) systems to translate the pivot sentences in the source-pivot or pivot-target corpus into target or source sentences. As a result, we can build a synthetic multilingual corpus, which can be used to improve the translation quality. The idea of using RBMT systems to improve the translation quality of SMT sysems has been explored in Hu et al. (2007) . Here, we re-examine the hybrid method to fill up the data gap for pivot translation.",
                "cite_spans": [
                    {
                        "start": 792,
                        "end": 808,
                        "text": "Hu et al. (2007)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Although previous studies proposed several pivot translation methods, there are no studies to combine different pivot methods for translation quality improvement. In this paper, we first compare the individual pivot methods and then investigate to improve pivot translation quality by combining the outputs produced by different systems. We propose to regard system combination as a translation evaluation problem. For translations from one of the systems, this method uses the outputs from other translation systems as pseudo references. A regression learning method is used to infer a function that maps a feature vector (which measures the similarity of a translation to the pseudo references) to a score that indicates the quality of the translation. Scores are first generated independently for each translation, then the translations are ranked by their respective scores. The candidate with the highest score is selected as the final translation. This is achieved by optimizing the regression learning model's output to correlate against a set of training examples, where the source sentences are provided with several reference translations, instead of manually labeling the translations produced by various systems with quantitative assessments as described in (Albrecht and Hwa, 2007; Duh, 2008) . The advantage of our method is that we do not need to manually label the translations produced by each translation system, therefore enabling our method suitable for translation selection among any systems without additional manual work.",
                "cite_spans": [
                    {
                        "start": 1270,
                        "end": 1294,
                        "text": "(Albrecht and Hwa, 2007;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 1295,
                        "end": 1305,
                        "text": "Duh, 2008)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
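The sentence-level selection scheme described above (score each candidate by its similarity to the other systems' outputs used as pseudo references, then pick the top-scoring one) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the feature set is reduced to averaged 1- to 3-gram overlaps, and `weights` stands in for the coefficients a trained regression model would supply.

```python
def ngram_overlap(hyp, ref, n):
    """Fraction of hyp's n-grams that also occur in ref."""
    h = [tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1)]
    r = set(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    return sum(g in r for g in h) / len(h) if h else 0.0

def select_translation(candidates, weights):
    """Score each candidate (tokenized sentence) against the other
    candidates treated as pseudo references, and return the best one.
    `weights` plays the role of learned regression coefficients."""
    def features(cand):
        refs = [c for c in candidates if c is not cand]
        # One feature per n-gram order, averaged over pseudo references.
        return [sum(ngram_overlap(cand, r, n) for r in refs) / len(refs)
                for n in (1, 2, 3)]

    def score(cand):
        return sum(w * f for w, f in zip(weights, features(cand)))

    return max(candidates, key=score)

cands = [["we", "go", "home"], ["we", "go", "house"], ["go", "we", "home"]]
print(select_translation(cands, [1.0, 1.0, 1.0]))
```

With uniform weights the candidate sharing the most n-grams with the other outputs wins, which mirrors the intuition that a translation close to all pseudo references is likely the best.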
            {
                "text": "We conducted experiments for spoken language translation on the pivot task in the IWSLT 2008 evaluation campaign, where Chinese sentences in travel domain need to be translated into Spanish, with English as the pivot language. Experimental results show that (1) the performances of the three pivot methods are comparable when only SMT systems are used. However, the triangulation method and the transfer method significantly outperform the synthetic method when RBMT systems are used to improve the translation qual-ity; (2) The hybrid method combining SMT and RBMT system for pivot translation greatly improves the translation quality. And this translation quality is higher than that of those produced by the system trained with a real Chinese-Spanish corpus; (3) Our sentence-level translation selection method consistently and significantly improves the translation quality over individual translation outputs in all of our experiments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Section 2 briefly introduces the three pivot translation methods. Section 3 presents the hybrid method combining SMT and RBMT systems. Section 4 describes the translation selection method. Experimental results are presented in Section 5, followed by a discussion in Section 6. The last section draws conclusions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "2 Pivot Methods for Phrase-based SMT",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Following the method described in , we train the source-pivot and pivot-target translation models using the source-pivot and pivot-target corpora, respectively. Based on these two models, we induce a source-target translation model, in which two important elements need to be induced: phrase translation probability and lexical weight.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Triangulation Method",
                "sec_num": "2.1"
            },
            {
                "text": "Phrase Translation Probability We induce the phrase translation probability by assuming the independence between the source and target phrases when given the pivot phrase.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Triangulation Method",
                "sec_num": "2.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u03c6(s|t) = p \u03c6(s|p)\u03c6(p|t)",
                        "eq_num": "(1)"
                    }
                ],
                "section": "Triangulation Method",
                "sec_num": "2.1"
            },
            {
                "text": "Wheres,p andt represent the phrases in the languages L s , L p and L t , respectively. Lexical Weight According to the method described in Koehn et al. (2003) , there are two important elements in the lexical weight: word alignment information a in a phrase pair (s,t) and lexical translation probability w(s|t).",
                "cite_spans": [
                    {
                        "start": 139,
                        "end": 158,
                        "text": "Koehn et al. (2003)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Triangulation Method",
                "sec_num": "2.1"
            },
            {
                "text": "Let a 1 and a 2 represent the word alignment information inside the phrase pairs (s,p) and (p,t) respectively, then the alignment information inside (s,t) can be obtained as shown in Eq. (2).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Triangulation Method",
                "sec_num": "2.1"
            },
            {
                "text": "a = {(s, t)|\u2203p : (s, p) \u2208 a 1 & (p, t) \u2208 a 2 } (2)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Triangulation Method",
                "sec_num": "2.1"
            },
            {
                "text": "Based on the the induced word alignment information, we estimate the co-occurring frequencies of word pairs directly from the induced phrase pairs. Then we estimate the lexical translation probability as shown in Eq. 3.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Triangulation Method",
                "sec_num": "2.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "w(s|t) = count(s, t) s count(s , t)",
                        "eq_num": "(3)"
                    }
                ],
                "section": "Triangulation Method",
                "sec_num": "2.1"
            },
            {
                "text": "Where count(s, t) represents the co-occurring frequency of the word pair (s, t).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Triangulation Method",
                "sec_num": "2.1"
            },
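The triangulation step of Eq. (1), marginalising over the shared pivot phrase, can be sketched as follows. This is a minimal illustration under the assumption that phrase tables are stored as nested dictionaries; `triangulate` and the data layout are hypothetical, not the paper's implementation.

```python
from collections import defaultdict

def triangulate(sp_table, pt_table):
    """Induce a source-target phrase table from source-pivot and
    pivot-target tables via Eq. (1): phi(s|t) = sum_p phi(s|p) * phi(p|t).

    sp_table: {pivot phrase: {source phrase: phi(s|p)}}
    pt_table: {target phrase: {pivot phrase: phi(p|t)}}
    (Illustrative layout; real phrase tables also carry lexical weights.)
    """
    st_table = defaultdict(dict)
    for t, pivots in pt_table.items():
        for p, phi_pt in pivots.items():
            for s, phi_sp in sp_table.get(p, {}).items():
                # Sum over every pivot phrase p linking s and t.
                st_table[t][s] = st_table[t].get(s, 0.0) + phi_sp * phi_pt
    return st_table

# Toy example: Chinese "wo" -> English "I" -> Spanish "yo".
sp = {"I": {"wo": 0.9}}
pt = {"yo": {"I": 0.8}}
print(round(triangulate(sp, pt)["yo"]["wo"], 2))  # 0.72
```

The same composition over shared pivot phrases, applied to the word alignments of Eq. (2), yields the induced alignment needed for the lexical weight of Eq. (3).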
            {
                "text": "The transfer method first translates from the source language to the pivot language using a source-pivot model, and then from the pivot language to the target language using a pivot-target model. Given a source sentence s, we can translate it into n pivot sentences p 1 , p 2 , ..., p n using a source-pivot translation system. Each p i can be translated into m target sentences t i1 , t i2 , ..., t im .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transfer Method",
                "sec_num": "2.2"
            },
            {
                "text": "We rescore all the n \u00d7 m candidates using both the source-pivot and pivot-target translation scores following the method described in Utiyama and Isahara (2007) . If we use h f p and h pt to denote the features in the source-pivot and pivot-target systems, respectively, we get the optimal target translation according to the following formula.",
                "cite_spans": [
                    {
                        "start": 134,
                        "end": 160,
                        "text": "Utiyama and Isahara (2007)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transfer Method",
                "sec_num": "2.2"
            },
            {
                "text": "t = argmax t L k=1 (\u03bb sp k h sp k (s, p)+\u03bb pt k h pt k (p, t)) (4)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transfer Method",
                "sec_num": "2.2"
            },
            {
                "text": "Where L is the number of features used in SMT systems. \u03bb sp and \u03bb pt are feature weights set by performing minimum error rate training as described in Och (2003) .",
                "cite_spans": [
                    {
                        "start": 151,
                        "end": 161,
                        "text": "Och (2003)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Transfer Method",
                "sec_num": "2.2"
            },
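The rescoring of Eq. (4) reduces to computing a weighted feature sum for each of the n \u00d7 m (pivot, target) candidates and taking the argmax. A minimal sketch, assuming each candidate carries its own source-pivot and pivot-target feature vectors; the function name and data layout are illustrative, not the paper's code.

```python
def best_transfer_candidate(candidates, w_sp, w_pt):
    """Pick the target translation maximising Eq. (4):
    sum_k (w_sp[k] * h_sp[k](s, p) + w_pt[k] * h_pt[k](p, t)).

    candidates: list of (pivot, target, h_sp, h_pt) tuples, where h_sp and
    h_pt are the per-candidate feature values of the two SMT systems.
    """
    def score(cand):
        _, _, h_sp, h_pt = cand
        return (sum(w * h for w, h in zip(w_sp, h_sp)) +
                sum(w * h for w, h in zip(w_pt, h_pt)))

    _, target, _, _ = max(candidates, key=score)
    return target

cands = [
    ("p1", "t11", [0.2, 0.5], [0.1, 0.1]),  # score 0.9 with unit weights
    ("p2", "t21", [0.3, 0.1], [0.6, 0.2]),  # score 1.2 with unit weights
]
print(best_transfer_candidate(cands, [1.0, 1.0], [1.0, 1.0]))  # t21
```

In practice the weights w_sp and w_pt would come from minimum error rate training, as the following paragraph notes.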
            {
                "text": "There are two possible methods to obtain a sourcetarget corpus using the source-pivot and pivottarget corpora. One is to obtain target translations for the source sentences in the source-pivot corpus. This can be achieved by translating the pivot sentences in source-pivot corpus to target sentences with the pivot-target SMT system. The other is to obtain source translations for the target sentences in the pivot-target corpus using the pivot-source SMT system. And we can combine these two source-target corpora to produced a final synthetic corpus. Given a pivot sentence, we can translate it into n source or target sentences. These n translations together with their source or target sentences are used to create a synthetic bilingual corpus. Then we build a source-target translation model using this corpus.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Synthetic Method",
                "sec_num": "2.3"
            },
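The two directions of the synthetic method can be sketched together: translate the pivot side of each corpus with an existing system and pair the results with the untouched side. A hedged illustration only; `pt_translate` and `ps_translate` are assumed n-best translation callables standing in for the pivot-target and pivot-source SMT systems.

```python
def build_synthetic_corpus(sp_corpus, pt_corpus, pt_translate, ps_translate, n=1):
    """Build source-target pairs from two independent parallel corpora.

    sp_corpus: list of (source, pivot) sentence pairs
    pt_corpus: list of (pivot, target) sentence pairs
    pt_translate(pivot, n) / ps_translate(pivot, n): n-best translators.
    """
    synthetic = []
    for src, piv in sp_corpus:
        # Direction 1: translate the pivot side into the target language.
        for tgt in pt_translate(piv, n):
            synthetic.append((src, tgt))
    for piv, tgt in pt_corpus:
        # Direction 2: translate the pivot side into the source language.
        for src in ps_translate(piv, n):
            synthetic.append((src, tgt))
    return synthetic

# Toy stand-in translators (uppercase/lowercase in place of real MT output).
pt_mt = lambda p, n: [p.upper()] * n
ps_mt = lambda p, n: [p.lower()] * n
pairs = build_synthetic_corpus([("s1", "p1")], [("P2", "t2")], pt_mt, ps_mt)
print(pairs)  # [('s1', 'P1'), ('p2', 't2')]
```

The combined list is then used as ordinary training data for a source-target model, exactly as the paragraph above describes.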
            {
                "text": "Since the source-pivot and pivot-target parallel corpora are independent, the pivot sentences in the two corpora are distinct from each other. Thus, some linguistic phenomena in the source-pivot corpus will lost if they do not exist in the pivottarget corpus, and vice versa. Here we use RBMT systems to fill up this data gap. For many sourcetarget language pairs, the commercial pivot-source and/or pivot-target RBMT systems are available on markets. For example, for Chinese to Spanish translation, English to Chinese and English to Spanish RBMT systems are available.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Using RBMT Systems for Pivot Translation",
                "sec_num": "3"
            },
            {
                "text": "With the RBMT systems, we can create a synthetic multilingual source-pivot-target corpus by translating the pivot sentences in the pivot-source or pivot-target corpus. The source-target pairs extracted from this synthetic multilingual corpus can be used to build a source-target translation model. Another way to use the synthetic multilingual corpus is to add the source-pivot or pivot-target sentence pairs in this corpus to the training data to rebuild the source-pivot or pivot-target SMT model. The rebuilt models can be applied to the triangulation method and the transfer method as described in Section 2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Using RBMT Systems for Pivot Translation",
                "sec_num": "3"
            },
            {
                "text": "Moreover, the RBMT systems can also be used to enlarge the size of bilingual training data. Since it is easy to obtain monolingual corpora than bilingual corpora, we use RBMT systems to translate the available monolingual corpora to obtain synthetic bilingual corpus, which are added to the training data to improve the performance of SMT systems. Even if no monolingual corpus is available, we can also use RBMT systems to translate the sentences in the bilingual corpus to obtain alternative translations. For example, we can use source-pivot RBMT systems to provide alternative translations for the source sentences in the sourcepivot corpus.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Using RBMT Systems for Pivot Translation",
                "sec_num": "3"
            },
            {
                "text": "In addition to translating training data, the source-pivot RBMT system can be used to translate the test set into the pivot language, which can be further translated into the target language with the pivot-target RBMT system. The translated test set can be added to the training data to further improve translation quality. The advantage of this method is that the RBMT system can provide translations for sentences in the test set and cover some out-of-vocabulary words in the test set that are uncovered by the training data. It can also change the distribution of some phrase pairs and reinforce some phrase pairs relative to the test set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Using RBMT Systems for Pivot Translation",
                "sec_num": "3"
            },
            {
                "text": "We propose a method to select the optimal translation from those produced by various translation systems. We regard sentence-level translation selection as a machine translation (MT) evaluation problem and formalize this problem with a regression learning model. For each translation, this method uses the outputs from other translation systems as pseudo references. The regression objective is to infer a function that maps a feature vector (which measures the similarity of a translation from one system to the pseudo references) to a score that indicates the quality of the translation. Scores are first generated independently for each translation, then the translations are ranked by their respective scores. The candidate with the highest score is selected.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation Selection",
                "sec_num": "4"
            },
            {
                "text": "The similar ideas have been explored in previous studies. Albrecht and Hwa (2007) proposed a method to evaluate MT outputs with pseudo references using support vector regression as the learner to evaluate translations. Duh (2008) proposed a ranking method to compare the translations proposed by several systems. These two methods require quantitative quality assessments by human judges for the translations produced by various systems in the training set. When we apply such methods to translation selection, the relative values of the scores assigned by the subject systems are important. In different data conditions, the relative values of the scores assigned by the subject systems may change. In order to train a reliable learner, we need to prepare a balanced training set, where the translations produced by different systems under different conditions are required to be manually evaluated. In extreme cases, we need to relabel the training data to obtain better performance. In this paper, we modify the method in Albrecht and Hwa (2007) to only prepare human reference translations for the training examples, and then evaluate the translations produced by the subject systems against the references using BLEU score (Papineni et al., 2002) . We use smoothed sentence-level BLEU score to replace the human assessments, where we use additive smoothing to avoid zero BLEU scores when we calculate the n-gram precisions. 
In this case, we ID Description 1-4 n-gram precisions against pseudo references (1 \u2264 n \u2264 4) 5-6 PER and WER 7-8 precision, recall, fragmentation from METEOR (Lavie and Agarwal, 2007) 9-12 precisions and recalls of nonconsecutive bigrams with a gap size of m (1 \u2264 m \u2264 2) 13-14 longest common subsequences 15-19 n-gram precision against a target corpus (1 \u2264 n \u2264 5) In regression learning, we infer a function f that maps a multi-dimensional input vector x to a continuous real value y, such that the error over a set of m training examples,",
                "cite_spans": [
                    {
                        "start": 58,
                        "end": 81,
                        "text": "Albrecht and Hwa (2007)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 219,
                        "end": 229,
                        "text": "Duh (2008)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 1025,
                        "end": 1048,
                        "text": "Albrecht and Hwa (2007)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 1228,
                        "end": 1251,
                        "text": "(Papineni et al., 2002)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation Selection",
                "sec_num": "4"
            },
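The smoothed sentence-level BLEU used as the regression target can be sketched as below. This is a minimal illustration under an assumption the paper leaves unspecified: the additive smoothing constant is taken to be 1, and the brevity penalty uses the closest reference length.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_bleu(candidate, references, max_n=4):
    """Sentence-level BLEU with additive (add-one) smoothing of the n-gram
    counts, so zero n-gram matches never zero out the whole score."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        # Clip candidate counts against the maximum reference counts.
        max_ref = Counter()
        for r in refs:
            for g, c in ngrams(r, n).items():
                max_ref[g] = max(max_ref[g], c)
        matches = sum(min(c, max_ref[g]) for g, c in cand_ngrams.items())
        total = sum(cand_ngrams.values())
        # Additive smoothing avoids log(0) for higher-order n-grams.
        log_prec += math.log((matches + 1) / (total + 1)) / max_n
    # Brevity penalty against the closest reference length (ties -> shorter).
    c_len = len(cand)
    r_len = min((abs(len(r) - c_len), len(r)) for r in refs)[1]
    bp = 1.0 if c_len >= r_len else math.exp(1 - r_len / c_len)
    return bp * math.exp(log_prec)
```

A perfect match scores 1.0, and partial matches degrade smoothly instead of collapsing to 0 when a higher-order n-gram is missing.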
            {
                "text": "(x 1 , y 1 ), (x 2 , y 2 ), ..., (x m , y m )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation Selection",
                "sec_num": "4"
            },
            {
                "text": ", is minimized according to a loss function. In the context of translation selection, y is assigned as the smoothed BLEU score. The function f represents a mathematic model of the automatic evaluation metrics. The input sentence is represented as a feature vector x, which are extracted from the input sentence and the comparisons against the pseudo references. We use the features as shown in Table 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 394,
                        "end": 401,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Translation Selection",
                "sec_num": "4"
            },
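The selection step itself is simple once the regression function is learned: score each candidate independently and keep the highest-scoring one. The sketch below assumes a linear model (as learned by SVR with a linear kernel); the weights and feature vectors are hypothetical placeholders for the trained model and the 19 features of Table 1.

```python
# Sketch of translation selection via a learned linear scoring function.
# The weights/bias stand in for a trained linear-kernel SVR model, and the
# feature vectors stand in for the Table 1 features of each candidate.

def predict(weights, bias, features):
    # Linear model f(x) = w . x + b.
    return sum(w * x for w, x in zip(weights, features)) + bias

def select_translation(candidates, feature_vectors, weights, bias):
    """Score each candidate translation independently, then pick the
    candidate with the highest predicted quality score."""
    scores = [predict(weights, bias, fv) for fv in feature_vectors]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores[best]
```

Because each candidate is scored on its own feature vector, adding or removing systems only changes the pool of pseudo references, not the learned function.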
            {
                "text": "We performed experiments on spoken language translation for the pivot task of IWSLT 2008. This task translates Chinese to Spanish using English as the pivot language. BTEC CE2. BTEC CE1 was distributed for the pivot task in IWSLT 2008 while BTEC CE2 was for the BTEC CE task, which is parallel to the BTEC ES corpus. For Chinese-English translation, we mainly used BTEC CE1 corpus. We used the BTEC CE2 corpus and the HIT Olympic corpus for comparison experiments only. We used the English parts of the BTEC CE1 corpus, the BTEC ES corpus, and the HIT Olympic corpus (if involved) to train a 5-gram English language model (LM) with interpolated Kneser-Ney smoothing. For English-Spanish translation, we selected 400k sentence pairs from the Europarl corpus that are close to the English parts of both the BTEC CE corpus and the BTEC ES corpus. Then we built a Spanish LM by interpolating an out-of-domain LM trained on the Spanish part of this selected corpus with the in-domain LM trained with the BTEC corpus.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "5.1"
            },
            {
                "text": "For Chinese-English-Spanish translation, we used the development set (devset3) released for the pivot task as the test set, which contains 506 source sentences, with 7 reference translations in English and Spanish. To be capable of tuning parameters on our systems, we created a development set of 1,000 sentences taken from the training sets, with 3 reference translations in both English and Spanish. This development set is also used to train the regression learning model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "5.1"
            },
            {
                "text": "We used two commercial RBMT systems in our experiments: System A for Chinese-English bidirectional translation and System B for English-Chinese and English-Spanish translation. For phrase-based SMT translation, we used the Moses decoder (Koehn et al., 2007) and its support training scripts. We ran the decoder with its default settings and then used Moses' implementation of minimum error rate training (Och, 2003) to tune the feature weights on the development set.",
                "cite_spans": [
                    {
                        "start": 237,
                        "end": 257,
                        "text": "(Koehn et al., 2007)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 404,
                        "end": 415,
                        "text": "(Och, 2003)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Systems and Evaluation Method",
                "sec_num": "5.2"
            },
            {
                "text": "To select translation among outputs produced by different pivot translation systems, we used SVM-light (Joachins, 1999) to perform support vector regression with the linear kernel.",
                "cite_spans": [
                    {
                        "start": 103,
                        "end": 119,
                        "text": "(Joachins, 1999)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Systems and Evaluation Method",
                "sec_num": "5.2"
            },
            {
                "text": "Translation quality was evaluated using both the BLEU score proposed by Papineni et al. (2002) and also the modified BLEU (BLEU-Fix) score 3 used in the IWSLT 2008 evaluation campaign, where the brevity calculation is modified to use closest reference length instead of shortest reference length.",
                "cite_spans": [
                    {
                        "start": 72,
                        "end": 94,
                        "text": "Papineni et al. (2002)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Systems and Evaluation Method",
                "sec_num": "5.2"
            },
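The difference between the two brevity penalty variants can be made concrete. This is a minimal sketch, not the IWSLT evaluation code; it follows the description above, where the baseline uses the shortest reference length and BLEU-Fix uses the reference length closest to the candidate length.

```python
import math

def brevity_penalty(cand_len, ref_lens, use_closest=True):
    """BLEU brevity penalty. With use_closest=True (the BLEU-Fix behavior
    described in the text), the effective reference length is the one
    closest to the candidate length (ties broken toward the shorter
    reference); otherwise the shortest reference length is used."""
    if use_closest:
        r = min(ref_lens, key=lambda n: (abs(n - cand_len), n))
    else:
        r = min(ref_lens)
    return 1.0 if cand_len >= r else math.exp(1.0 - r / cand_len)
```

For an 11-token candidate with references of lengths 8 and 12, the shortest-length variant applies no penalty, while the closest-length variant penalizes against the 12-token reference.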
            {
                "text": "We conducted the pivot translation experiments using the BTEC CE1 and BTEC ES described in Section 5.1. We used the three methods described in Section 2 for pivot translation. For the transfer method, we selected the optimal translations among 10 \u00d7 10 candidates. For the synthetic method, we used the ES translation model to translate the English part of the CE corpus to Spanish to construct a synthetic corpus. And we also used the BTEC CE1 corpus to build a EC translation model to translate the English part of ES corpus into Chinese. Then we combined these two synthetic corpora to build a Chinese-Spanish translation model. In our experiments, only 1-best Chinese or Spanish translation was used since using n-best results did not greatly improve the translation quality. We used the method described in Section 4 to select translations from the translations produced by the three systems. For each system, we used three different alignment heuristics (grow, grow-diag, grow-diag-final 4 ) to obtain the final alignment results, and then constructed three different phrase tables. Thus, for each system, we can get three different translations for each input. These different translations can serve as pseudo references for the outputs of other systems. In our case, for each sentence, we have 6 pseudo reference translations. In addition, we found out that the grow heuristic performed the best for all the systems. Thus, for an individual system, we used the translation results produced using the grow alignment heuristic.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results by Using SMT Systems",
                "sec_num": "5.3"
            },
            {
                "text": "The translation results are shown in Table 3 : CRR/ASR translation results by using SMT systems nition and correct recognition result, respectively. Here, we used the 1-best ASR result. From the translation results, it can be seen that three methods achieved comparable translation quality on both ASR and CRR inputs, with the translation results on CRR inputs are much better than those on ASR inputs because of the errors in the ASR inputs. The results also show that our translation selection method is very effective, which achieved absolute improvements of about 4 and 1 BLEU scores on CRR and ASR inputs, respectively.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 37,
                        "end": 44,
                        "text": "Table 3",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Results by Using SMT Systems",
                "sec_num": "5.3"
            },
            {
                "text": "In order to fill up the data gap as discussed in Section 3, we used the RBMT System A to translate the English sentences in the ES corpus into Chinese. As described in Section 3, this corpus can be used by the three pivot translation methods. First, the synthetic Chinese-Spanish corpus can be combined with those produced by the EC and ES SMT systems, which were used in the synthetic method. Second, the synthetic Chinese-English corpus can be added into the BTEC CE1 corpus to build the CE translation model. In this way, the intersected English phrases in the CE corpus and ES corpus becomes more, which enables the Chinese-Spanish translation model induced using the triangulation method to cover more phrase pairs. For the transfer method, the CE translation quality can be also improved, which would result in the improvement of the Spanish translation quality. The translation results are shown in the columns under \"EC RBMT\" in Table 4 . As compared with those in Table 3 , the translation quality was greatly improved, with absolute improvements of at least 5.1 and 3.9 BLEU scores on CRR and ASR inputs for system combination results. The above results indicate that RBMT systems indeed can be used to fill up the data gap for pivot translation.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 937,
                        "end": 944,
                        "text": "Table 4",
                        "ref_id": null
                    },
                    {
                        "start": 973,
                        "end": 980,
                        "text": "Table 3",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Results by Using both RBMT and SMT Systems",
                "sec_num": "5.4"
            },
            {
                "text": "In our experiments, we also used a CE RBMT system to enlarge the size of training data by pro- Table 4 . From the translation results, it can be seen that, enlarging the size of training data with RBMT systems can further improve the translation quality.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 95,
                        "end": 102,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results by Using both RBMT and SMT Systems",
                "sec_num": "5.4"
            },
            {
                "text": "In addition to translating the training data, the CE RBMT system can be also used to translate the test set into English, which can be further translated into Spanish with the ES RBMT system B. 56 The translated test set can be further added to the training data to improve translation quality. The columns under \"+Test Set\" in Table 4 describes the translation results. The results show that translating the test set using RBMT systems greatly improved the translation result, with further improvements of about 2 and 1.5 BLEU scores on CRR and ASR inputs, respectively.",
                "cite_spans": [
                    {
                        "start": 194,
                        "end": 196,
                        "text": "56",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 328,
                        "end": 335,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results by Using both RBMT and SMT Systems",
                "sec_num": "5.4"
            },
            {
                "text": "The results also indicate that both the triangulation method and the transfer method greatly outperformed the synthetic method when we combined both RBMT and SMT systems in our experiments. Further analysis shows that the synthetic method contributed little to system combination. The selection results are almost the same as those selected from the translations produced by the triangulation and transfer methods.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results by Using both RBMT and SMT Systems",
                "sec_num": "5.4"
            },
            {
                "text": "In order to further analyze the translation results, we evaluated the above systems by examining the coverage of the phrase tables over the test phrases. We took the triangulation method as a case study, the results of which are shown in Fig Table 4 : CRR/ASR translation results by using RBMT and SMT systems",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 238,
                        "end": 241,
                        "text": "Fig",
                        "ref_id": null
                    },
                    {
                        "start": 242,
                        "end": 249,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results by Using both RBMT and SMT Systems",
                "sec_num": "5.4"
            },
            {
                "text": "BLEU Table 5 : CRR/ASR translation results by using additional monolingual corpora ure 1. It can be seen that using RBMT systems to translate the training and/or test data can cover more source phrases in the test set, which results in translation quality improvement.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 5,
                        "end": 12,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Method",
                "sec_num": null
            },
            {
                "text": "In addition to translating the limited bilingual corpus, we also translated additional monolingual corpus to further enlarge the size of the training data. We assume that it is easier to obtain a monolingual pivot corpus than to obtain a monolingual source or target corpus. Thus, we translated the English part of the HIT Olympic corpus into Chinese and Spanish using EC and ES RBMT systems. The generated synthetic corpus was added to the training data to train EC and ES SMT systems. Here, we used the synthetic CE Olympic corpus to train a model, which was interpolated with the CE model trained with both the BTEC CE1 corpus and the synthetic BTEC corpus to obtain an interpolated CE translation model. Similarly, we obtained an interpolated ES translation model. Table 5 describes the translation results. 7 The results indicate that translating monolingual corpus using the RBMT system further improved the translation quality as compared with those in Table 4 .",
                "cite_spans": [
                    {
                        "start": 812,
                        "end": 813,
                        "text": "7",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 960,
                        "end": 967,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results by Using Monolingual Corpus",
                "sec_num": "5.5"
            },
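The model interpolation used above can be sketched as a weighted combination of the two phrase tables' translation probabilities. This is a simplified illustration (real phrase tables carry several scores per entry, and the interpolation weight used in the paper is not reported; 0.5 is assumed here).

```python
# Sketch of linearly interpolating two phrase translation models, e.g. one
# trained on BTEC data and one on the synthetic Olympic corpus. Each model
# maps (source_phrase, target_phrase) -> P(target|source); the weight `lam`
# is a hypothetical value.

def interpolate_models(model_a, model_b, lam=0.5):
    """P(t|s) = lam * P_a(t|s) + (1 - lam) * P_b(t|s); entries missing
    from one table are treated as having probability 0 there."""
    merged = {}
    for key in set(model_a) | set(model_b):
        merged[key] = lam * model_a.get(key, 0.0) + (1 - lam) * model_b.get(key, 0.0)
    return merged
```

The union over both key sets is what lets phrase pairs seen only in the synthetic corpus survive into the interpolated model.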
            {
                "text": "In this section, we compare the effects of two commercial RBMT systems with different transla- 7 Here we excluded the synthetic method since it greatly falls behind the other two methods.",
                "cite_spans": [
                    {
                        "start": 95,
                        "end": 96,
                        "text": "7",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Effects of Different RBMT Systems",
                "sec_num": "6.1"
            },
            {
                "text": "Sys Table 6 : CRR translation results (BLEU scores) by using different RBMT systems tion accuracy on spoken language translation. The goals are (1) to investigate whether a RBMT system can improve pivot translation quality even if its translation accuracy is not high, and (2) to compare the effects of RBMT system with different translation accuracy on pivot translation. Besides the EC RBMT system A used in the above section, we also used the EC RBMT system B for this experiment. We used the two systems to translate the test set from English to Chinese, and then evaluated the translation quality against Chinese references obtained from the IWSLT 2008 evaluation campaign. The BLEU scores are 43.90 and 29.77 for System A and System B, respectively. This shows that the translation quality of System B on spoken language corpus is much lower than that of System A. Then we applied these two different RBMT systems to translate the English part of the BTEC ES corpus into Chinese as described in Section 5.4. The translation results on CRR inputs are shown in Table 6 . 8 We replicated some of the results in Table 4 for the convenience of comparison. The results indicate that the higher the translation accuracy of the RBMT system is, the better the pivot translation is. If we compare the results with those only using SMT systems as described in Table 3 , the translation quality was greatly improved by at least 3 BLEU scores, even if the translation ac- Table 7 : CRR translation results by using multilingual corpus. \"/\" separates the BLEU and BLEUfix scores.",
                "cite_spans": [
                    {
                        "start": 1075,
                        "end": 1076,
                        "text": "8",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 4,
                        "end": 11,
                        "text": "Table 6",
                        "ref_id": null
                    },
                    {
                        "start": 1065,
                        "end": 1072,
                        "text": "Table 6",
                        "ref_id": null
                    },
                    {
                        "start": 1114,
                        "end": 1121,
                        "text": "Table 4",
                        "ref_id": null
                    },
                    {
                        "start": 1355,
                        "end": 1362,
                        "text": "Table 3",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 1465,
                        "end": 1472,
                        "text": "Table 7",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Method",
                "sec_num": null
            },
            {
                "text": "curacy of System B is not so high. Combining two RBMT systems further improved the translation quality, which indicates that the two systems complement each other.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Method",
                "sec_num": null
            },
            {
                "text": "In this section, we compare the translation results by using a multilingual corpus with those by using independently sourced corpora. BTEC CE2 and BTEC ES are from the same source sentences, which can be taken as a multilingual corpus. The two corpora were employed to build CE and ES SMT models, which were used in the triangulation method and the transfer method. We also extracted the Chinese-Spanish (CS) corpus to build a standard CS translation system, which is denoted as Standard. The comparison results are shown in Table 7 . The translation quality produced by the systems using a multilingual corpus is much higher than that produced by using independently sourced corpora as described in Table 3 , with an absolute improvement of about 5.6 BLEU scores. If we used the EC RBMT system, the translation quality of those in Table 4 is comparable to that by using the multilingual corpus, which indicates that our method using RBMT systems to fill up the data gap is effective. The results also indicate that our translation selection method for pivot translation outperforms the method using only a real sourcetarget corpus. For comparison purpose, we added BTEC CE1 into the training data. The translation quality was improved by only 1 BLEU score. This again proves that our method to fill up the data gap is more effective than that to increase the size of the independently sourced corpus.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 525,
                        "end": 532,
                        "text": "Table 7",
                        "ref_id": null
                    },
                    {
                        "start": 700,
                        "end": 707,
                        "text": "Table 3",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 832,
                        "end": 839,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results by Using Multilingual Corpus",
                "sec_num": "6.2"
            },
            {
                "text": "In IWSLT 2008, the best result for the pivot task is achieved by Wang et al. (2008) . In order to compare the results, we added the bilingual HIT Olympic corpus into the CE training data. 9 We also compared our translation selection method with that proposed in (Wang et al., 2008) that is based on the target sentence average length (TSAL). The translation results are shown in Table 8. \"Wang\" represents the results in Wang et al. (2008) . \"TSAL\" represents the translation selection method proposed in Wang et al. (2008) , which is applied to our experiment. From the results, it can be seen that our method outperforms the best system in IWSLT 2008 and that our translation selection method outperforms the method based on target sentence average length.",
                "cite_spans": [
                    {
                        "start": 65,
                        "end": 83,
                        "text": "Wang et al. (2008)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 188,
                        "end": 189,
                        "text": "9",
                        "ref_id": null
                    },
                    {
                        "start": 262,
                        "end": 281,
                        "text": "(Wang et al., 2008)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 421,
                        "end": 439,
                        "text": "Wang et al. (2008)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 505,
                        "end": 523,
                        "text": "Wang et al. (2008)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Comparison with Related Work",
                "sec_num": "6.3"
            },
            {
                "text": "In this paper, we have compared three different pivot translation methods for spoken language translation. Experimental results indicated that the triangulation method and the transfer method generally outperform the synthetic method. Then we showed that the hybrid method combining RBMT and SMT systems can be used to fill up the data gap between the source-pivot and pivot-target corpora. By translating the pivot sentences in independent corpora, the hybrid method can produce translations whose quality is higher than those produced by the method using a source-target corpus of the same size. We also showed that even if the translation quality of the RBMT system is low, it still greatly improved the translation quality.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "In addition, we proposed a system combination method to select better translations from outputs produced by different pivot methods. This method is developed through regression learning, where only a small size of training examples with reference translations are required. Experimental results indicate that this method can consistently and significantly improve translation quality over individual translation outputs. And our system outperforms the best system for the pivot task in the IWSLT 2008 evaluation campaign.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "http://www.chineseldc.org/EN/purchasing.htm 2 http://www.statmt.org/europarl/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://www.slc.atr.jp/Corpus/IWSLT08/eval/IWSLT08 auto eval.tgz 4 A description of the alignment heuristics can be found at http://www.statmt.org/jhuws/?n=FactoredTraining.Training Parameters",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Although using the ES RBMT system B to translate the training data did not improve the translation quality, it improved the translation quality by translating the test set.6 The RBMT systems achieved a BLEU score of 24.36 on the test set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We omitted the ASR translation results since the trends are the same as those for CRR inputs. And we only showed BLEU scores since the trend for BLEU-Fix scores is similar.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We used about 70k sentence pairs for CE model training, whileWang et al. (2008) used about 100k sentence pairs, a CE translation dictionary and more monolingual corpora for model training.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Regression for Sentence-Level MT Evaluation with Pseudo References",
                "authors": [
                    {
                        "first": "Joshua",
                        "middle": [
                            "S"
                        ],
                        "last": "Albrecht",
                        "suffix": ""
                    },
                    {
                        "first": "Rebecca",
                        "middle": [],
                        "last": "Hwa",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the 45th Annual Meeting of the Accosiation of Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "296--303",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Joshua S. Albrecht and Rebecca Hwa. 2007. Regres- sion for Sentence-Level MT Evaluation with Pseudo References. In Proceedings of the 45th Annual Meeting of the Accosiation of Computational Lin- guistics, pages 296-303.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Phrase-Based Statistical Machine Translation with Pivot Languages",
                "authors": [
                    {
                        "first": "Nicola",
                        "middle": [],
                        "last": "Bertoldi",
                        "suffix": ""
                    },
                    {
                        "first": "Madalina",
                        "middle": [],
                        "last": "Barbaiani",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the International Workshop on Spoken Language Translation",
                "volume": "",
                "issue": "",
                "pages": "143--149",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nicola Bertoldi, Madalina Barbaiani, Marcello Fed- erico, and Roldano Cattoni. 2008. Phrase-Based Statistical Machine Translation with Pivot Lan- guages. In Proceedings of the International Work- shop on Spoken Language Translation, pages 143- 149.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Machine Translation by Triangulation: Making Effective Use of Multi-Parallel Corpora",
                "authors": [
                    {
                        "first": "Tevor",
                        "middle": [],
                        "last": "Cohn",
                        "suffix": ""
                    },
                    {
                        "first": "Mirella",
                        "middle": [],
                        "last": "Lapata",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "348--355",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tevor Cohn and Mirella Lapata. 2007. Machine Trans- lation by Triangulation: Making Effective Use of Multi-Parallel Corpora. In Proceedings of the 45th Annual Meeting of the Association for Computa- tional Linguistics, pages 348-355.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Ranking vs. Regression in Machine Translation Evaluation",
                "authors": [
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Duh",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the Third Workshop on Statistical Machine Translation",
                "volume": "",
                "issue": "",
                "pages": "191--194",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kevin Duh. 2008. Ranking vs. Regression in Machine Translation Evaluation. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 191-194.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Using RBMT Systems to Produce Bilingual Corpus for SMT",
                "authors": [
                    {
                        "first": "Xiaoguang",
                        "middle": [],
                        "last": "Hu",
                        "suffix": ""
                    },
                    {
                        "first": "Haifeng",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Hua",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
                "volume": "",
                "issue": "",
                "pages": "287--295",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Xiaoguang Hu, Haifeng Wang, and Hua Wu. 2007. Using RBMT Systems to Produce Bilingual Corpus for SMT. In Proceedings of the 2007 Joint Con- ference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 287-295.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Making Large-Scale SVM Learning Practical",
                "authors": [
                    {
                        "first": "Thorsten",
                        "middle": [],
                        "last": "Joachims",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Advances in Kernel Methods -Support Vector Learning",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Thorsten Joachims. 1999. Making Large-Scale SVM Learning Practical. In Bernhard Sch\u00f6elkopf, Christopher Burges, and Alexander Smola, edi- tors, Advances in Kernel Methods -Support Vector Learning. MIT Press.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "The TALP & I2R SMT Systems for IWSLT",
                "authors": [
                    {
                        "first": "Maxim",
                        "middle": [],
                        "last": "Khalilov",
                        "suffix": ""
                    },
                    {
                        "first": "Marta",
                        "middle": [
                            "R"
                        ],
                        "last": "Costa-Juss\u00e0",
                        "suffix": ""
                    },
                    {
                        "first": "Carlos",
                        "middle": [
                            "A"
                        ],
                        "last": "Henr\u00edquez",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [
                            "R"
                        ],
                        "last": "Jos\u00e9",
                        "suffix": ""
                    },
                    {
                        "first": "Adolfo",
                        "middle": [],
                        "last": "Fonollosa",
                        "suffix": ""
                    },
                    {
                        "first": "Jos\u00e9",
                        "middle": [
                            "B"
                        ],
                        "last": "Hern\u00e1ndez",
                        "suffix": ""
                    },
                    {
                        "first": "Rafael",
                        "middle": [
                            "E"
                        ],
                        "last": "Mari\u00f1o",
                        "suffix": ""
                    },
                    {
                        "first": "Chen",
                        "middle": [],
                        "last": "Banchs",
                        "suffix": ""
                    },
                    {
                        "first": "Min",
                        "middle": [],
                        "last": "Boxing",
                        "suffix": ""
                    },
                    {
                        "first": "Aiti",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Haizhou",
                        "middle": [],
                        "last": "Aw",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the International Workshop on Spoken Language Translation",
                "volume": "",
                "issue": "",
                "pages": "116--123",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Maxim Khalilov, Marta R. Costa-Juss\u00e0, Carlos A. Henr\u00edquez, Jos\u00e9 A.R. Fonollosa, Adolfo Hern\u00e1ndez, Jos\u00e9 B. Mari\u00f1o, Rafael E. Banchs, Chen Boxing, Min Zhang, Aiti Aw, and Haizhou Li. 2008. The TALP & I2R SMT Systems for IWSLT 2008. In Proceedings of the International Workshop on Spo- ken Language Translation, pages 116-123.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Statistical phrase-based translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    },
                    {
                        "first": "Franz",
                        "middle": [
                            "J"
                        ],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Marcu",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "HLT-NAACL: Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "127--133",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In HLT- NAACL: Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 127-133.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Moses: Open Source Toolkit for Statistical Machine Translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    },
                    {
                        "first": "Hieu",
                        "middle": [],
                        "last": "Hoang",
                        "suffix": ""
                    },
                    {
                        "first": "Alexanda",
                        "middle": [],
                        "last": "Birch",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Callison-Burch",
                        "suffix": ""
                    },
                    {
                        "first": "Marcello",
                        "middle": [],
                        "last": "Federico",
                        "suffix": ""
                    },
                    {
                        "first": "Nicola",
                        "middle": [],
                        "last": "Bertoldi",
                        "suffix": ""
                    },
                    {
                        "first": "Brooke",
                        "middle": [],
                        "last": "Cowan",
                        "suffix": ""
                    },
                    {
                        "first": "Wade",
                        "middle": [],
                        "last": "Shen",
                        "suffix": ""
                    },
                    {
                        "first": "Christine",
                        "middle": [],
                        "last": "Moran",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Zens",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Dyer",
                        "suffix": ""
                    },
                    {
                        "first": "Ondrej",
                        "middle": [],
                        "last": "Bojar",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandra",
                        "middle": [],
                        "last": "Constantin",
                        "suffix": ""
                    },
                    {
                        "first": "Evan",
                        "middle": [],
                        "last": "Herbst",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the 45th Annual Meeting of the Associa-tion for Computational Linguistics, demonstration session",
                "volume": "",
                "issue": "",
                "pages": "177--180",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn, Hieu Hoang, Alexanda Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Associa-tion for Computational Linguistics, demon- stration session, pages 177-180.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "METEOR: An Automatic Metric for MT Evaluation with High Levels of Correlation with Human Judgments",
                "authors": [
                    {
                        "first": "Alon",
                        "middle": [],
                        "last": "Lavie",
                        "suffix": ""
                    },
                    {
                        "first": "Abhaya",
                        "middle": [],
                        "last": "Agarwal",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of Workshop on Statistical Machine Translation at the 45th Annual Meeting of the Association of Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "228--231",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alon Lavie and Abhaya Agarwal. 2007. METEOR: An Automatic Metric for MT Evaluation with High Levels of Correlation with Human Judgments. In Proceedings of Workshop on Statistical Machine Translation at the 45th Annual Meeting of the As- sociation of Computational Linguistics, pages 228- 231.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Minimum Error Rate Training in Statistical Machine Translation",
                "authors": [
                    {
                        "first": "Franz",
                        "middle": [
                            "J"
                        ],
                        "last": "Och",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "160--167",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Franz J. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160-167.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "BLEU: a Method for Automatic Evaluation of Machine Translation",
                "authors": [
                    {
                        "first": "Kishore",
                        "middle": [],
                        "last": "Papineni",
                        "suffix": ""
                    },
                    {
                        "first": "Salim",
                        "middle": [],
                        "last": "Roukos",
                        "suffix": ""
                    },
                    {
                        "first": "Todd",
                        "middle": [],
                        "last": "Ward",
                        "suffix": ""
                    },
                    {
                        "first": "Wei-Jing",
                        "middle": [],
                        "last": "Zhu",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "311--318",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Overview of the IWSLT 2008 Evaluation Campaign",
                "authors": [
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Paul",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the International Workshop on Spoken Language Translation",
                "volume": "",
                "issue": "",
                "pages": "1--17",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michael Paul. 2008. Overview of the IWSLT 2008 Evaluation Campaign. In Proceedings of the In- ternational Workshop on Spoken Language Trans- lation, pages 1-17.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "A Comparison of Pivot Methods for Phrase-Based Statistical Machine Translation",
                "authors": [
                    {
                        "first": "Masao",
                        "middle": [],
                        "last": "Utiyama",
                        "suffix": ""
                    },
                    {
                        "first": "Hitoshi",
                        "middle": [],
                        "last": "Isahara",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of human language technology: the Conference of the North American Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "484--491",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Masao Utiyama and Hitoshi Isahara. 2007. A Com- parison of Pivot Methods for Phrase-Based Statisti- cal Machine Translation. In Proceedings of human language technology: the Conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 484-491.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "The TCH Machine Translation System for IWSLT",
                "authors": [
                    {
                        "first": "Haifeng",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Hua",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "Xiaoguang",
                        "middle": [],
                        "last": "Hu",
                        "suffix": ""
                    },
                    {
                        "first": "Zhanyi",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Jianfeng",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Dengjun",
                        "middle": [],
                        "last": "Ren",
                        "suffix": ""
                    },
                    {
                        "first": "Zhengyu",
                        "middle": [],
                        "last": "Niu",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the International Workshop on Spoken Language Translation",
                "volume": "",
                "issue": "",
                "pages": "124--131",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Haifeng Wang, Hua Wu, Xiaoguang Hu, Zhanyi Liu, Jianfeng Li, Dengjun Ren, and Zhengyu Niu. 2008. The TCH Machine Translation System for IWSLT 2008. In Proceedings of the International Workshop on Spoken Language Translation, pages 124-131.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Pivot Language Approach for Phrase-Based Statistical Machine Translation",
                "authors": [
                    {
                        "first": "Hua",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "Haifeng",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of 45th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "856--863",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hua Wu and Haifeng Wang. 2007. Pivot Lan- guage Approach for Phrase-Based Statistical Ma- chine Translation. In Proceedings of 45th Annual Meeting of the Association for Computational Lin- guistics, pages 856-863.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "uris": null,
                "type_str": "figure",
                "text": "Coverage on test source phrases viding alternative English translations for the Chinese part of the CE corpus. The translation results are shown in the columns under \"+CE RBMT\" in"
            },
            "TABREF0": {
                "content": "<table/>",
                "text": "",
                "num": null,
                "type_str": "table",
                "html": null
            },
            "TABREF1": {
                "content": "<table><tr><td>Corpus</td><td>Size</td><td>SW</td><td>TW</td></tr><tr><td>BTEC CE1</td><td>20,000</td><td>164K</td><td>182K</td></tr><tr><td>BTEC CE2</td><td>18,972</td><td>177K</td><td>182K</td></tr><tr><td>HIT CE</td><td>51,791</td><td>490K</td><td>502K</td></tr><tr><td>BTEC ES</td><td>19,972</td><td>182K</td><td>185K</td></tr><tr><td colspan=\"4\">Europarl ES 400,000 8,485K 8,219K</td></tr></table>",
                "text": "describes the data used for model training in this paper, including the BTEC (Basic Travel Expression Corpus) Chinese-English (CE) corpus and the BTEC English-Spanish (ES) corpus provided by IWSLT 2008 organizers, the HIT olympic CE corpus (2004-863-008) 1 and the Europarl ES corpus 2 . There are two kinds of BTEC CE corpus: BTEC CE1 and",
                "num": null,
                "type_str": "table",
                "html": null
            },
            "TABREF2": {
                "content": "<table/>",
                "text": "Training data. SW and TW represent source words and target words, respectively.",
                "num": null,
                "type_str": "table",
                "html": null
            },
            "TABREF3": {
                "content": "<table><tr><td>Method</td><td>BLEU</td><td>BLEU-Fix</td></tr></table>",
                "text": "ASR and CRR represent different input conditions, namely the result of automatic speech recog-Triangulation 33.70/27.46 31.59/25.02 Transfer 33.52/28.34 31.36/26.20 Synthetic 34.35/27.21 32.00/26.07 Combination 38.14/29.32 34.76/27.39",
                "num": null,
                "type_str": "table",
                "html": null
            },
            "TABREF7": {
                "content": "<table><tr><td/><td colspan=\"3\">Ours Wang TSAL</td></tr><tr><td>BLEU</td><td>49.57</td><td>-</td><td>48.25</td></tr><tr><td>BLEU-</td><td/><td/><td/></tr></table>",
                "text": "Fix 46.74 45.10 45.27    Table 8: Comparison with related work",
                "num": null,
                "type_str": "table",
                "html": null
            }
        }
    }
}