{
    "paper_id": "P13-1016",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:35:23.719150Z"
    },
    "title": "Distortion Model Considering Rich Context for Statistical Machine Translation",
    "authors": [
        {
            "first": "Isao",
            "middle": [],
            "last": "Goto",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "National Institute of Information and Communications Technology \u2021 Kyoto University",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Masao",
            "middle": [],
            "last": "Utiyama",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "National Institute of Information and Communications Technology \u2021 Kyoto University",
                "location": {}
            },
            "email": "mutiyama@nict.go.jp"
        },
        {
            "first": "Eiichiro",
            "middle": [],
            "last": "Sumita",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "National Institute of Information and Communications Technology \u2021 Kyoto University",
                "location": {}
            },
            "email": "eiichiro.sumita@nict.go.jp"
        },
        {
            "first": "Akihiro",
            "middle": [],
            "last": "Tamura",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "National Institute of Information and Communications Technology \u2021 Kyoto University",
                "location": {}
            },
            "email": "akihiro.tamura@nict.go.jp"
        },
        {
            "first": "Sadao",
            "middle": [],
            "last": "Kurohashi",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "National Institute of Information and Communications Technology \u2021 Kyoto University",
                "location": {}
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper proposes new distortion models for phrase-based SMT. In decoding, a distortion model estimates the source word position to be translated next (NP) given the last translated source word position (CP). We propose a distortion model that can consider the word at the CP, a word at an NP candidate, and the context of the CP and the NP candidate simultaneously. Moreover, we propose a further improved model that considers richer context by discriminating label sequences that specify spans from the CP to NP candidates. It enables our model to learn the effect of relative word order among NP candidates as well as to learn the effect of distances from the training data. In our experiments, our model improved 2.9 BLEU points for Japanese-English and 2.6 BLEU points for Chinese-English translation compared to the lexical reordering models.",
    "pdf_parse": {
        "paper_id": "P13-1016",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper proposes new distortion models for phrase-based SMT. In decoding, a distortion model estimates the source word position to be translated next (NP) given the last translated source word position (CP). We propose a distortion model that can consider the word at the CP, a word at an NP candidate, and the context of the CP and the NP candidate simultaneously. Moreover, we propose a further improved model that considers richer context by discriminating label sequences that specify spans from the CP to NP candidates. It enables our model to learn the effect of relative word order among NP candidates as well as to learn the effect of distances from the training data. In our experiments, our model improved 2.9 BLEU points for Japanese-English and 2.6 BLEU points for Chinese-English translation compared to the lexical reordering models.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Estimating appropriate word order in a target language is one of the most difficult problems for statistical machine translation (SMT). This is particularly true when translating between languages with widely different word orders.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "To address this problem, there has been a lot of research done into word reordering: lexical reordering model (Tillman, 2004) , which is one of the distortion models, reordering constraint (Zens et al., 2004) , pre-ordering (Xia and Mc-Cord, 2004) , hierarchical phrase-based SMT (Chiang, 2007) , and syntax-based SMT (Yamada and Knight, 2001) .",
                "cite_spans": [
                    {
                        "start": 110,
                        "end": 125,
                        "text": "(Tillman, 2004)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 189,
                        "end": 208,
                        "text": "(Zens et al., 2004)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 224,
                        "end": 247,
                        "text": "(Xia and Mc-Cord, 2004)",
                        "ref_id": null
                    },
                    {
                        "start": 280,
                        "end": 294,
                        "text": "(Chiang, 2007)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 318,
                        "end": 343,
                        "text": "(Yamada and Knight, 2001)",
                        "ref_id": "BIBREF23"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In general, source language syntax is useful for handling long distance word reordering. However, obtaining syntax requires a syntactic parser, which is not available for many languages. Phrase-based SMT (Koehn et al., 2007) is a widely used SMT method that does not use a parser.",
                "cite_spans": [
                    {
                        "start": 204,
                        "end": 224,
                        "text": "(Koehn et al., 2007)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Phrase-based SMT mainly 1 estimates word reordering using distortion models 2 . Therefore, distortion models are one of the most important components for phrase-based SMT. On the other hand, there are methods other than distortion models for improving word reordering for phrase-based SMT, such as pre-ordering or reordering constraints. However, these methods also use distortion models when translating by phrase-based SMT. Therefore, distortion models do not compete against these methods and are commonly used with them. If there is a good distortion model, it will improve the translation quality of phrase-based SMT and benefit to the methods using distortion models.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, we propose two distortion models for phrase-based SMT. In decoding, a distortion model estimates the source word position to be translated next (NP) given the last translated source word position (CP). The proposed models are the pair model and the sequence model. The pair model utilizes the word at the CP, a word at an NP candidate site, and the words surrounding the CP and the NP candidates (context) simultaneously. In addition, the sequence model, which is the further improved model, considers richer context by identifying the label sequence that specify the span from the CP to the NP. It enables our model to learn the effect of relative word order among NP candidates as well as to learn the effect of distances from the training data. Our model learns the preference relations among NP candidates. Our model consists of one probabilistic model and does not require a parser. Experiments confirmed the effectiveness of our method for Japanese-English and Chinese-English translation, using NTCIR-9 Patent Machine Translation Task data sets (Goto et al., 2011) .",
                "cite_spans": [
                    {
                        "start": 1067,
                        "end": 1086,
                        "text": "(Goto et al., 2011)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "A Moses-style phrase-based SMT generates target hypotheses sequentially from left to right. Therefore, the role of the distortion model is to estimate the source phrase position to be translated next whose target side phrase will be located immediately to the right of the already generated hypotheses. An example is shown in Figure 1 . In Figure 1 , we assume that only the kare wa (English side: \"he\") has been translated. The target word to be generated next will be \"bought\" and the source word to be selected next will be its corresponding Japanese word katta. Thus, a distortion model should estimate phrases including katta as a source phrase position to be translated next.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 326,
                        "end": 334,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    },
                    {
                        "start": 340,
                        "end": 348,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "To explain the distortion model task in more detail, we need to redefine more precisely two terms, the current position (CP) and next position (NP) in the source sentence. CP is the source sentence position corresponding to the rightmost aligned target word in the generated target word sequence. NP is the source sentence position corresponding to the leftmost aligned target word in the target phrase to be generated next. The task of the distortion model is to estimate the NP 3 from NP candidates (NPCs) for each CP in the source sentence. 4",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "3 NP is not always one position, because there may be multiple correct hypotheses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "4 This definition is slightly different from that of existing methods such as Moses and (Green et al., 2010) . In existing methods, CP is the rightmost position of the last translated source phrase and NP is the leftmost position of the source phrase to be translated next. Note that existing methods do kinou 1 kare 2 wa 3 pari 4 de 5 hon 6 wo 7 katta 8 he bought books in Paris yesterday (a) kinou 1 kare 2 wa 3 pari 4 de 5 ni 6 satsu 7 hon 8 wo 9 katta 10 he bought two books in Paris yesterday (b) kinou 1 kare 2 wa 3 hon 4 wo 5 karita 6 ga 7 kanojo 8 wa 9 katta 10 he borrowed books yesterday but she bought (c) kinou 1 kare 2 wa 3 kanojo 4 ga 5 katta 6 hon 7 wo 8 karita 9",
                "cite_spans": [
                    {
                        "start": 88,
                        "end": 108,
                        "text": "(Green et al., 2010)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "yesterday he borrowed the books that she bought (e) kinou 1 kare 2 wa 3 hon 4 wo 5 katta 6 ga 7 kanojo 8 wa 9 karita 10 he bought books yesterday but she borrowed Estimating NP is a difficult task. Figure 2 shows some examples. The superscript numbers indicate the word position in the source sentence.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 198,
                        "end": 206,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "In Figure 2 (a), the NP is 8. However, in Figure 2 (b) , the word (kare) at the CP is the same as (a), but the NP is different (the NP is 10). From these examples, we see that distance is not the essential factor in deciding an NP. And it also turns out that the word at the CP alone is not enough to estimate the NP. Thus, not only the word at the CP but also the word at a NP candidate (NPC) should be considered simultaneously.",
                "cite_spans": [
                    {
                        "start": 42,
                        "end": 54,
                        "text": "Figure 2 (b)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 11,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "In (c) and (d) in Figure 2 , the word (kare) at the CP is the same and karita (borrowed) and katta (bought) are at the NPCs. Karita is the word at the NP and katta is not the word at the NP for (c), while katta is the word at the NP and karita is not the word at the NP for (d). From these examples, considering what the word is at the NP not consider word-level correspondences.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 18,
                        "end": 26,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "is not enough to estimate the NP. One of the reasons for this difference is the relative word order between words. Thus, considering relative word order is important.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "In (d) and (e) in Figure 2 , the word (kare) at the CP and the word order between katta and karita are the same. However, the word at the NP for (d) and the word at the NP for (e) are different. From these examples, we can see that selecting a nearby word is not always correct. The difference is caused by the words surrounding the NPCs (context), the CP context, and the words between the CP and the NPC. Thus, these should be considered when estimating the NP.",
                "cite_spans": [
                    {
                        "start": 3,
                        "end": 6,
                        "text": "(d)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 18,
                        "end": 26,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "In summary, in order to estimate the NP, the following should be considered simultaneously: the word at the NP, the word at the CP, the relative word order among the NPCs, the words surrounding NP and CP (context), and the words between the CP and the NPC.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "There are distortion models that do not require a parser for phrase-based SMT. The linear distortion cost model used in Moses (Koehn et al., 2007) , whose costs are linearly proportional to the reordering distance, always gives a high cost to long distance reordering, even if the reordering is correct. The MSD lexical reordering model (Tillman, 2004; Koehn et al., 2005; Galley and Manning, 2008) only calculates probabilities for the three kinds of phrase reorderings (monotone, swap, and discontinuous), and does not consider relative word order or words between the CP and the NPC. Thus, these models are not sufficient for long distance word reordering.",
                "cite_spans": [
                    {
                        "start": 126,
                        "end": 146,
                        "text": "(Koehn et al., 2007)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 337,
                        "end": 352,
                        "text": "(Tillman, 2004;",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 353,
                        "end": 372,
                        "text": "Koehn et al., 2005;",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 373,
                        "end": 398,
                        "text": "Galley and Manning, 2008)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "Al-Onaizan and Papineni (2006) proposed a distortion model that used the word at the CP and the word at an NPC. However, their model did not use context, relative word order, or words between the CP and the NPC. Ni et al. (2009) proposed a method that adjusts the linear distortion cost using the word at the CP and its context. Their model does not simultaneously consider both the word specified at the CP and the word specified at the NPCs. Green et al. (2010) proposed distortion models that used context. Their model (the outbound model) estimates how far the NP should be from the CP using the word at the CP and its context. 5 Their model does not simultaneously con- 5 They also proposed another model (the inbound model) sider both the word specified at the CP and the word specified at an NPC. For example, the outbound model considers the word specified at the CP, but does not consider the word specified at an NPC. Their models also do not consider relative word order.",
                "cite_spans": [
                    {
                        "start": 15,
                        "end": 30,
                        "text": "Papineni (2006)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 212,
                        "end": 228,
                        "text": "Ni et al. (2009)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 444,
                        "end": 463,
                        "text": "Green et al. (2010)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "In contrast, our distortion model solves the aforementioned problems. Our distortion models utilize the word specified at the CP, the word specified at an NPC, and also the context of the CP and the NPC simultaneously. Furthermore, our sequence model considers richer context including the relative word order among NPCs and also including all the words between the CP and the NPC. In addition, unlike previous methods, our models learn the preference relations among NPCs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model for Phrase-Based SMT",
                "sec_num": "2"
            },
            {
                "text": "In this section, we first define our distortion model and explain our learning strategy. Then, we describe two proposed models: the pair model and the sequence model that is the further improved model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Proposed Method",
                "sec_num": "3"
            },
            {
                "text": "First, we define our distortion model. Let i be a CP, j be an NPC, S be a source sentence, and X be the random variable of the NP. In this paper, distortion probability is defined as P (X = j|i, S), which is the probability of an NPC j being the NP. Our distortion model is defined as the model calculating the distortion probability.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model and Learning Strategy",
                "sec_num": "3.1"
            },
            {
                "text": "Next, we explain the learning strategy for our distortion model. We train this model as a discriminative model that discriminates the NP from NPCs. Let J be a set of word positions in S other than i. We train the distortion model subject to",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model and Learning Strategy",
                "sec_num": "3.1"
            },
            {
                "text": "\u2211 j\u2208J P (X = j|i, S) = 1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model and Learning Strategy",
                "sec_num": "3.1"
            },
            {
                "text": "The model parameters are learned to maximize the distortion probability of the NP among all of the NPCs J in each source sentence. This learning strategy is a kind of preference relation learning (Evgniou and Pontil, 2002) . In this learning, the that estimates reverse direction distance. Each NPC is regarded as an NP, and the inbound model estimates how far the corresponding CP should be from the NP using the word at the NP and its context. distortion probability of the actual NP will be relatively higher than those of all the other NPCs J.",
                "cite_spans": [
                    {
                        "start": 196,
                        "end": 222,
                        "text": "(Evgniou and Pontil, 2002)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model and Learning Strategy",
                "sec_num": "3.1"
            },
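A minimal sketch may make this per-sentence preference objective concrete. This is not the authors' code: the function names (distortion_probs, sgd_step), the string-keyed binary features, and plain SGD on the maximum-entropy objective are all illustrative assumptions. The point it shows is that the softmax is taken over the NPCs of one sentence, so raising the log-probability of the actual NP lowers the probabilities of the other candidates of that same sentence:

```python
import math

def distortion_probs(weights, feats_per_cand):
    """Softmax over the NP candidates of one (sentence, CP) instance.

    weights: dict mapping feature name -> learned weight.
    feats_per_cand: dict mapping candidate position j -> set of active
    binary feature names. Returns {j: P(X = j | i, S)}.
    """
    scores = {j: sum(weights.get(f, 0.0) for f in feats)
              for j, feats in feats_per_cand.items()}
    m = max(scores.values())                      # stabilize exp()
    exps = {j: math.exp(s - m) for j, s in scores.items()}
    z = sum(exps.values())
    return {j: e / z for j, e in exps.items()}

def sgd_step(weights, feats_per_cand, true_np, lr=0.1):
    """One gradient step on log P(true NP): the actual NP's probability
    rises relative to all other NPCs in the same sentence."""
    probs = distortion_probs(weights, feats_per_cand)
    for j, feats in feats_per_cand.items():
        grad = (1.0 if j == true_np else 0.0) - probs[j]
        for f in feats:
            weights[f] = weights.get(f, 0.0) + lr * grad

# toy usage: two NPCs with one feature each; training makes the true NP win
w = {}
cands = {6: {"o=0|d=1"}, 10: {"o=0|d=2"}}
for _ in range(200):
    sgd_step(w, cands, true_np=6)
assert distortion_probs(w, cands)[6] > 0.9
```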
            {
                "text": "This learning strategy is different from that of (Al-Onaizan and Papineni, 2006; Green et al., 2010) . For example, Green et al. (2010) trained their outbound model subject to \u2211 c\u2208C P (Y = c|i, S) = 1, where C is the set of the nine distortion classes 6 and Y is the random variable of the correct distortion class that the correct distortion is classified into. Distortion is defined as j \u2212 i \u2212 1. Namely, the model probabilities that they learned were the probabilities of distortion classes in all of the training data, not the relative preferences among the NPCs in each source sentence.",
                "cite_spans": [
                    {
                        "start": 65,
                        "end": 80,
                        "text": "Papineni, 2006;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 81,
                        "end": 100,
                        "text": "Green et al., 2010)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 116,
                        "end": 135,
                        "text": "Green et al. (2010)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Distortion Model and Learning Strategy",
                "sec_num": "3.1"
            },
            {
                "text": "The pair model utilizes the word at the CP, the word at an NPC, and the context of the CP and the NPC simultaneously to estimate the NP. This can be done by our distortion model definition and the learning strategy described in the previous section.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pair Model",
                "sec_num": "3.2"
            },
            {
                "text": "In this work, we use the maximum entropy method (Berger et al., 1996) as a discriminative machine learning method. The reason for this is that a model based on the maximum entropy method can calculate probabilities. However, if we use scores as an approximation of the distortion probabilities, various discriminative machine learning methods can be applied to build the distortion model. Let s be a source word and s n 1 = s 1 s 2 ...s n be a source sentence. We add a beginning of sentence (BOS) marker to the head of the source sentence and an end of sentence (EOS) marker to the end, so the source sentence S is expressed as s n+1 0 (s 0 = BOS, s n+1 = EOS). Our distortion model calculates the distortion probability for an NPC",
                "cite_spans": [
                    {
                        "start": 48,
                        "end": 69,
                        "text": "(Berger et al., 1996)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pair Model",
                "sec_num": "3.2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "j \u2208 {j|1 \u2264 j \u2264 n + 1 \u2227 j \u0338 = i} for each CP i \u2208 {i|0 \u2264 i \u2264 n} P (X = j|i, S) = 1 Z i exp ( w T f (i, j, S, o, d) )",
                        "eq_num": "(1)"
                    }
                ],
                "section": "Pair Model",
                "sec_num": "3.2"
            },
            {
                "text": "where [4, 6] , and [7, \u221e). In (Green et al., 2010) , \u22121 was used as one of distortion classes. However, \u22121 represents the CP in our definition, and CP is not an NPC. Thus, we shifted all of the distortion classes for negative distortions by \u22121.",
                "cite_spans": [
                    {
                        "start": 6,
                        "end": 9,
                        "text": "[4,",
                        "ref_id": null
                    },
                    {
                        "start": 10,
                        "end": 12,
                        "text": "6]",
                        "ref_id": null
                    },
                    {
                        "start": 30,
                        "end": 50,
                        "text": "(Green et al., 2010)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pair Model",
                "sec_num": "3.2"
            },
            {
                "text": "o = { 0 (i < j) 1 (i > j) , d = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 0 (|j \u2212 i| = 1) 1 (2 \u2264 |j \u2212 i| \u2264 5) 2 (6 \u2264 |j \u2212 i|) , 6 (\u2212\u221e, \u22128], [\u22127, \u22125], [\u22124, \u22123], \u22122, 0, 1, [2, 3],",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pair Model",
                "sec_num": "3.2"
            },
            {
                "text": "Template \u27e8o\u27e9, \u27e8o, sp\u27e9 1 , \u27e8o, ti\u27e9, \u27e8o, tj\u27e9, \u27e8o, d\u27e9, \u27e8o, sp, sq\u27e9 2 , \u27e8o, ti, tj\u27e9, \u27e8o, ti\u22121, ti, tj\u27e9, \u27e8o, ti, ti+1, tj\u27e9, \u27e8o, ti, tj\u22121, tj\u27e9, \u27e8o, ti, tj, tj+1\u27e9, \u27e8o, si, ti, tj\u27e9, \u27e8o, sj, ti, tj\u27e9 The binary feature function that constitutes an element of f (\u2022) returns 1 when its feature is matched and if else, returns 0. Table 1 shows the feature templates used to produce the features. A feature is an instance of a feature template.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 317,
                        "end": 324,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Pair Model",
                "sec_num": "3.2"
            },
            {
                "text": "1 p \u2208 {p|i \u2212 2 \u2264 p \u2264 i + 2 \u2228 j \u2212 2 \u2264 p \u2264 j + 2} 2 (p, q) \u2208 {(p, q)|i \u2212 2 \u2264 p \u2264 i + 2 \u2227 j \u2212 2 \u2264 q \u2264 j + 2 \u2227 (|p \u2212 i| \u2264 1 \u2228 |q \u2212 j| \u2264 1)}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pair Model",
                "sec_num": "3.2"
            },
            {
                "text": "In Equation 1, i, j, and S are used by the feature functions. Thus, Equation 1 can utilize features consisting of both s i , which is the word specified at i, and s j , which is the word specified at j, or both the context of i and the context of j simultaneously. Distance is considered using the distance class d. Distortion is represented by distance and orientation. The pair model considers distortion using six joint classes of d and o.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Pair Model",
                "sec_num": "3.2"
            },
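A minimal sketch of how such feature templates could be instantiated follows. It is hypothetical: the template subset, the string encoding of features, and the helper name pair_features are not from the paper. It mirrors Table 1's ingredients, though: orientation o, distance class d, and words s and POS tags t around the CP i and the NPC j, over a sentence padded with BOS and EOS as defined in Section 3.2:

```python
def pair_features(words, tags, i, j):
    """Instantiate a few of the Table 1 templates for CP i and NPC j.

    words/tags are lists padded with BOS at index 0 and EOS at the end.
    Only a subset of the templates is shown; the full model instantiates
    all of Table 1.
    """
    o = 0 if i < j else 1                             # orientation
    dist = abs(j - i)
    d = 0 if dist == 1 else (1 if dist <= 5 else 2)   # distance class
    feats = {
        f"o={o}",
        f"o={o}|d={d}",
        f"o={o}|ti={tags[i]}",
        f"o={o}|tj={tags[j]}",
        f"o={o}|ti={tags[i]}|tj={tags[j]}",
        f"o={o}|si={words[i]}|ti={tags[i]}|tj={tags[j]}",
    }
    # <o, s_p>: words in a +/-2 window around either the CP or the NPC
    for p in list(range(i - 2, i + 3)) + list(range(j - 2, j + 3)):
        if 0 <= p < len(words):
            feats.add(f"o={o}|sp={words[p]}")
    return feats
```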
            {
                "text": "The pair model does not consider relative word order among NPCs or all the words between the CP and an NPC. In this section, we propose a further improved model, the sequence model, which considers richer context including relative word order among NPCs and also including all the words between the CP and an NPC.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequence Model",
                "sec_num": "3.3"
            },
            {
                "text": "In (c) and (d) in Figure 2 , karita (borrowed) and katta (bought) occur in the source sentences. The pair model considers the effect of distances using only the distance class d. If these positions are in the same distance class, the pair model cannot consider the differences in distances. In this case, these are conflict instances during training and it is difficult to distinguish the NP for translation. Now to explain how to consider the relative word order by the sequence model. The sequence model considers the relative word order by discriminating the label sequence corresponding to the NP from the label sequences corresponding to Label Description C A position is the CP. I A position is a position between the CP and the NPC. N A position is the NPC. each NPC in each sentence. Each label sequence corresponds to one NPC. Therefore, if we identify the label sequence that corresponds to the NP, we can obtain the NP. The label sequences specify the spans from the CP to each NPC using three kinds of labels indicating the type of word positions in the spans. The three kinds of labels, \"C, I, and N,\" are shown in Table 2 . Figure 3 shows examples of the label sequences for the case of Figure 2 (c) . In Figure 3 , the label sequences are represented by boxes and the elements of the sequences are labels. The NPC is used as the label sequence ID for each label sequence. The label sequence can treat relative word order. For example, the label sequence ID of 10 in Figure 3 knows that karita exists to the left of the NPC of 10. This is because karita 6 carries a label I while katta 10 carries a label N, and a position with label I is defined as relatively closer to the CP than a position with label N. By utilizing the label sequence and corresponding words, the model can reflect the effect of karita existing between the CP and the NPC of 10 on the probability. For the sequence model, karita (borrowed) and katta (bought) in (c) and (d) in Figure 2 are not conflict instances in training, whereas they are conflict instances in training for the pair model. The reason is as follows. In order to make the probability of the NPC of 10 smaller than the NPC of 6, instead of making the weight parameters for the features with respect to the word at the position of 10 with label N smaller than the weight parameters for the features with respect to the word at the position of 6 with label N, the sequence model can give negative weight parameters for the features with respect to the word at the position of 6 with label I.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 18,
                        "end": 26,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 1128,
                        "end": 1135,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 1138,
                        "end": 1146,
                        "text": "Figure 3",
                        "ref_id": "FIGREF2"
                    },
                    {
                        "start": 1201,
                        "end": 1213,
                        "text": "Figure 2 (c)",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 1219,
                        "end": 1227,
                        "text": "Figure 3",
                        "ref_id": "FIGREF2"
                    },
                    {
                        "start": 1481,
                        "end": 1489,
                        "text": "Figure 3",
                        "ref_id": "FIGREF2"
                    },
                    {
                        "start": 1963,
                        "end": 1971,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Sequence Model",
                "sec_num": "3.3"
            },
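The span labeling of Table 2 is simple to state in code. The sketch below is illustrative (the helper name label_sequence and the dict representation are assumptions, not the paper's implementation); it builds one C/I/N label sequence per NPC, as in Figure 3:

```python
def label_sequence(i, j):
    """Assign "C"/"I"/"N" labels (Table 2) to the span from CP i to NPC j.

    Positions outside the span get no label, mirroring the paper's
    partial-CRF setting where unlabeled positions contribute no features.
    """
    lo, hi = (i, j) if i < j else (j, i)
    return {k: ("C" if k == i else "N" if k == j else "I")
            for k in range(lo, hi + 1)}

# Figure 2 (c) with CP = 2 (kare) and NPCs at 6 (karita) and 10 (katta):
# label_sequence(2, 6)  -> {2: 'C', 3: 'I', 4: 'I', 5: 'I', 6: 'N'}
# label_sequence(2, 10) -> {2: 'C', 3: 'I', ..., 9: 'I', 10: 'N'}
# The sequence with ID 10 contains karita (position 6) with label I,
# i.e., karita lies between the CP and that NPC.
```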
            {
                "text": "We use a sequence discrimination technique based on CRF (Lafferty et al., 2001) to identify the label sequence that corresponds to the NP. There are two differences between our task and the CRF task. One difference is that CRF discriminates label sequences that consist of labels from all of the label candidates, whereas we constrain the label sequences to sequences where the label at the CP is C, the label at an NPC is N, and the labels between the CP and the NPC are I. The other difference is that CRF is designed for discriminating label sequences corresponding to the same object sequence, whereas we do not assign labels to words outside the spans from the CP to each NPC. However, when we assume that another label such as E has been assigned to the words outside the spans and there are no features involving label E, CRF with our label constraints can be applied to our task. In this paper, the method designed to discriminate label sequences corresponding to the different word sequence lengths is called partial CRF.",
                "cite_spans": [
                    {
                        "start": 56,
                        "end": 79,
                        "text": "(Lafferty et al., 2001)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequence Model",
                "sec_num": "3.3"
            },
            {
                "text": "The sequence model based on partial CRF is derived by extending the pair model. We introduce the label l and extend the pair model to discriminating the label sequences. There are two extensions to the pair model. One extension uses labels. We suppose that label sequences specify the spans from the CP to each NPC. We conjoined all the feature templates in Table 1 with an additional feature template \u27e8l i , l j \u27e9 to include the labels into features where l i is the label corresponding to the position of i. The other extension uses sequence. In the pair model, the position pair of (i, j) is used to derive features. In contrast, to descriminate label sequences in the sequence model, the position pairs of (i, k), k \u2208 {k|i < k \u2264 j \u2228 j \u2264 k < i} and (k, j), k \u2208 {k|i \u2264 k < j \u2228 j < k \u2264 i} are used to derive features. Note that in the feature templates in Table 1 , i and j are used to specify two positions. When features are used for the sequence model, one of the positions is regarded as k.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 358,
                        "end": 365,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    },
                    {
                        "start": 857,
                        "end": 864,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Sequence Model",
                "sec_num": "3.3"
            },
            {
                "text": "The distortion probability for an NPC j being the NP given a CP i and a source sentence S is calculated as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequence Model",
                "sec_num": "3.3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "P (X = j|i, S) = 1 Z i exp ( \u2211 k\u2208M \u222a{j} w T f (i, k, S, o, d, l i , l k ) + \u2211 k\u2208M \u222a{i} w T f (k, j, S, o, d, l k , l j ) )",
                        "eq_num": "(2)"
                    }
                ],
                "section": "Sequence Model",
                "sec_num": "3.3"
            },
            {
                "text": "where",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequence Model",
                "sec_num": "3.3"
            },
            {
                "text": "M = { {m|i < m < j} (i < j) {m|j < m < i} (i > j)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequence Model",
                "sec_num": "3.3"
            },
            {
                "text": "and Z i = \u2211 j\u2208{j|1\u2264j\u2264n+1 \u2227 j\u0338 =i} (numerator of Equation 2) is a normalization factor. Since j is used as the label sequence ID, discriminating j also means discriminating label sequence IDs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequence Model",
                "sec_num": "3.3"
            },
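            {
                "text": "A minimal sketch of Equation 2 under the definitions above (score_pair is a dummy stand-in for w^T f(\u00b7), and candidates would be the positions 1 to n+1):\n\n```python\nimport math\n\ndef distortion_probs(i, candidates, score_pair):\n    # score_pair(a, b) stands in for w^T f(a, b, S, o, d, l_a, l_b).\n    def raw(j):\n        m = list(range(min(i, j) + 1, max(i, j)))  # M: strictly between i and j\n        s = sum(score_pair(i, k) for k in m + [j])\n        s += sum(score_pair(k, j) for k in m + [i])\n        return math.exp(s)\n    z = sum(raw(j) for j in candidates if j != i)  # Z_i\n    return {j: raw(j) / z for j in candidates if j != i}\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequence Model",
                "sec_num": "3.3"
            },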
            {
                "text": "The first term in exp(\u2022) in Equation 2 considers all of the word pairs located at i and other positions in the sequence, and also their context. The second term in exp(\u2022) in Equation 2 considers all of the word pairs located at j and other positions in the sequence, and also their context. By designing our model to discriminate among different length label sequences, our model can naturally handle the effect of distances. Many features are derived from a long label sequence because it will contain many labels between the CP and the NPC. On the other hand, fewer features are derived from a short label sequence because a short label sequence will contain fewer labels between the CP and the NPC. The bias from these differences provides important clues for learning the effect of distances. 7 ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Sequence Model",
                "sec_num": "3.3"
            },
            {
                "text": "To train our discriminative distortion model, supervised training data is needed. The training data is built from a parallel corpus and word alignments between corresponding source words and target words. Figure 4 shows examples of training data. We select the target words aligned to the source words sequentially from left to right (target side arrows). Then, the order of the source words in the target word order is decided (source side arrows). The source sentence and the source side arrows are the training data.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 205,
                        "end": 213,
                        "text": "Figure 4",
                        "ref_id": "FIGREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Training Data for Discriminative Distortion Model",
                "sec_num": "3.4"
            },
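            {
                "text": "A minimal sketch of this construction, assuming one-to-one word alignments given as (source position, target position) pairs (names hypothetical): visiting target positions from left to right yields the source word order, and consecutive positions in that order form the (CP, NP) training pairs.\n\n```python\ndef build_training_pairs(alignments):\n    # alignments: iterable of (src_pos, tgt_pos) pairs, assumed 1:1.\n    # Sorting by target position gives the order in which source\n    # words are translated (the source-side arrows in Figure 4).\n    order = [s for s, _ in sorted(alignments, key=lambda a: a[1])]\n    return list(zip(order, order[1:]))  # consecutive (CP, NP) pairs\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training Data for Discriminative Distortion Model",
                "sec_num": "3.4"
            },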
            {
                "text": "In order to confirm the effects of our distortion model, we conducted a series of Japanese to English (JE) and Chinese to English (CE) translation experiments. 8",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "4"
            },
            {
                "text": "We used the patent data for the Japanese to English and Chinese to English translation subtasks from the NTCIR-9 Patent Machine Translation Task (Goto et al., 2011) . There were 2,000 sentences for the test data and 2,000 sentences for the development data. Mecab 9 was used for the Japanese morphological analysis. The Stanford segmenter 10 and tagger 11 were used for Chinese segmentation and POS tagging. The translation model was trained using sentences of 40 words or less from the training data. So approximately 2.05 million sentence pairs consisting of approximately 54 million Japanese tokens whose lexicon size was 134k and 50 million English tokens whose lexicon size was 213k were used for JE. And approximately 0.49 million sentence pairs consisting of 14.9 million Chinese tokens whose lexicon size was 169k and 16.3 million English tokens whose lexicon size was 240k were used for CE. GIZA++ and growdiag-final-and heuristics were used to obtain word alignments. In order to reduce word alignment errors, we removed articles {a, an, the} in English and particles {ga, wo, wa} in Japanese before performing word alignments because these function words do not correspond to any words in the other languages. After word alignment, we restored the removed words and shifted the word alignment positions to the original word positions. We used 5gram language models that were trained using the English side of each set of bilingual training data.",
                "cite_spans": [
                    {
                        "start": 145,
                        "end": 164,
                        "text": "(Goto et al., 2011)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common Settings",
                "sec_num": "4.1"
            },
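            {
                "text": "A minimal sketch of the pre- and post-processing around word alignment described above (running the aligner itself, e.g. GIZA++, is outside the sketch): function words are removed while an index map is kept, and the aligned positions are then shifted back to the original word positions.\n\n```python\nFUNCTION_WORDS = {'a', 'an', 'the'}  # {ga, wo, wa} on the Japanese side\n\ndef strip_function_words(tokens):\n    kept, index_map = [], []\n    for pos, tok in enumerate(tokens):\n        if tok.lower() not in FUNCTION_WORDS:\n            index_map.append(pos)  # stripped position -> original position\n            kept.append(tok)\n    return kept, index_map\n\ndef restore_positions(alignment, src_map, tgt_map):\n    # alignment holds positions in the stripped sentences; map them\n    # back to positions in the original sentences.\n    return [(src_map[s], tgt_map[t]) for s, t in alignment]\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common Settings",
                "sec_num": "4.1"
            },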
            {
                "text": "We used an in-house standard phrase-based SMT system compatible with the Moses decoder (Koehn et al., 2007) . The SMT weighting parameters were tuned by MERT (Och, 2003) using the development data. To stabilize the MERT results, we tuned three times by MERT using the first half of the development data and we selected the SMT weighting parameter set that performed the best on the second half of the development data based on the BLEU scores from the three SMT weighting parameter sets.",
                "cite_spans": [
                    {
                        "start": 87,
                        "end": 107,
                        "text": "(Koehn et al., 2007)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 158,
                        "end": 169,
                        "text": "(Och, 2003)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common Settings",
                "sec_num": "4.1"
            },
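            {
                "text": "A minimal sketch of this selection step (dev2_bleu is a hypothetical scorer returning the BLEU of a weight set on the second half of the development data):\n\n```python\ndef select_weight_set(mert_runs, dev2_bleu):\n    # mert_runs: the three weight sets tuned on the first half of\n    # the development data; keep the one that scores best on the\n    # second half.\n    return max(mert_runs, key=dev2_bleu)\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common Settings",
                "sec_num": "4.1"
            },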
            {
                "text": "We compared systems that used a common SMT feature set from standard SMT features and different distortion model features. The common SMT feature set consists of: four translation model features, phrase penalty, word penalty, and a language model feature. The compared different distortion model features are: the linear distortion cost model feature (LINEAR), the linear distortion cost model feature and the six MSD bidirectional lexical distortion model (Koehn et al., 2005) features (LINEAR+LEX), the outbound and inbound distortion model features discriminating nine distortion classes (Green et al., 2010 ) (9-CLASS), the proposed pair model feature (PAIR), and the proposed sequence model feature (SEQUENCE).",
                "cite_spans": [
                    {
                        "start": 457,
                        "end": 477,
                        "text": "(Koehn et al., 2005)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 591,
                        "end": 610,
                        "text": "(Green et al., 2010",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Common Settings",
                "sec_num": "4.1"
            },
            {
                "text": "Our distortion model was trained as follows: We used 0.2 million sentence pairs and their word alignments from the data used to build the translation model as the training data for our distortion models. The features that were selected and used were the ones that had been counted 12 , using the feature templates in Table 1 , at least four times for all of the (i, j) position pairs in the training sentences. We conjoined the features with three types of label pairs \u27e8C, I\u27e9, \u27e8I, N\u27e9, or \u27e8C, N\u27e9 as instances of the feature template \u27e8l i , l j \u27e9 to produce features for SEQUENCE. The L-BFGS method (Liu and Nocedal, 1989) was used to estimate the weight parameters of maximum entropy models. The Gaussian prior (Chen and Rosenfeld, 1999) was used for smoothing.",
                "cite_spans": [
                    {
                        "start": 597,
                        "end": 620,
                        "text": "(Liu and Nocedal, 1989)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 710,
                        "end": 736,
                        "text": "(Chen and Rosenfeld, 1999)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 317,
                        "end": 324,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Training for the Proposed Models",
                "sec_num": "4.2"
            },
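            {
                "text": "A minimal sketch of the feature selection and the label-pair conjunction (the feature tuples are hypothetical stand-ins for instances of the templates in Table 1):\n\n```python\nfrom collections import Counter\n\nLABEL_PAIRS = [('C', 'I'), ('I', 'N'), ('C', 'N')]\n\ndef select_features(instances, min_count=4):\n    # instances: feature tuples extracted from all (i, j) position\n    # pairs in the training sentences.\n    counts = Counter(instances)\n    return {f for f, c in counts.items() if c >= min_count}\n\ndef conjoin_for_sequence(features):\n    # Conjoin each selected feature with the three label-pair\n    # instances of <l_i, l_j> used for SEQUENCE.\n    return {(f, lp) for f in features for lp in LABEL_PAIRS}\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training for the Proposed Models",
                "sec_num": "4.2"
            },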
            {
                "text": "For 9-CLASS, we used the same training data as for our distortion models. Let t i be the part of speech of s i . We used the following feature templates to produce features for the outbound model:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training for the Compared Models",
                "sec_num": "4.3"
            },
            {
                "text": "\u27e8si\u22122\u27e9, \u27e8si\u22121\u27e9, \u27e8si\u27e9, \u27e8si+1\u27e9, \u27e8si+2\u27e9, \u27e8ti\u27e9, \u27e8ti\u22121, ti\u27e9,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training for the Compared Models",
                "sec_num": "4.3"
            },
            {
                "text": "\u27e8ti, ti+1\u27e9, and \u27e8si, ti\u27e9. These feature templates correspond to the components of the feature templates of our distortion models. In addition to these features, we used a feature consisting of the relative source sentence position as the feature used by (Green et al., 2010) . The relative source sentence position is discretized into five bins, one for each quintile of the sentence. For the inbound model 13 , i of the feature templates was changed to j. Features occurring four or more times in the training sentences were used. The maximum entropy method with Gaussian prior smoothing was used to estimate the model parameters.",
                "cite_spans": [
                    {
                        "start": 254,
                        "end": 274,
                        "text": "(Green et al., 2010)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training for the Compared Models",
                "sec_num": "4.3"
            },
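            {
                "text": "A minimal sketch of the relative-position feature (positions assumed 1-based): position i in a sentence of length n is discretized into one bin per quintile.\n\n```python\ndef position_quintile(i, n, bins=5):\n    # Discretize the relative source position into one bin per\n    # quintile of the sentence; i is 1-based, n is sentence length.\n    return min(bins - 1, (i - 1) * bins // n)\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training for the Compared Models",
                "sec_num": "4.3"
            },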
            {
                "text": "The MSD bidirectional lexical distortion model was built using all of the data used to build the translation model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training for the Compared Models",
                "sec_num": "4.3"
            },
            {
                "text": "We evaluated translation quality based on the caseinsensitive automatic evaluation score BLEU-4 (Papineni et al., 2002) . We used distortion limits of 10, 20, 30, and unlimited (\u221e), which limited the number of words for word reordering to a maximum number. Table 3 presents our main results. The proposed SEQUENCE outperformed the baselines for both Japanese to English and Chinese to English translation. This demonstrates the effectiveness of the proposed SEQUENCE. The scores of the proposed SEQUENCE were higher than those Table 3 : Evaluation results for each method. The values are case-insensitive BLEU scores. Bold numbers indicate no significant difference from the best result in each language pair using the bootstrap resampling test at a significance level \u03b1 = 0.01 (Koehn, 2004) .",
                "cite_spans": [
                    {
                        "start": 96,
                        "end": 119,
                        "text": "(Papineni et al., 2002)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 778,
                        "end": 791,
                        "text": "(Koehn, 2004)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 257,
                        "end": 264,
                        "text": "Table 3",
                        "ref_id": null
                    },
                    {
                        "start": 527,
                        "end": 534,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.4"
            },
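            {
                "text": "A minimal sketch of the bootstrap resampling test of Koehn (2004) used here (corpus_bleu is a stand-in for a corpus-level BLEU scorer; the function returns the fraction of resamples on which system A beats system B):\n\n```python\nimport random\n\ndef bootstrap_wins(hyps_a, hyps_b, refs, corpus_bleu, trials=1000):\n    # Resample the test set with replacement and count how often\n    # system A scores higher than system B.\n    n, wins = len(refs), 0\n    for _ in range(trials):\n        idx = [random.randrange(n) for _ in range(n)]\n        a = corpus_bleu([hyps_a[k] for k in idx], [refs[k] for k in idx])\n        b = corpus_bleu([hyps_b[k] for k in idx], [refs[k] for k in idx])\n        wins += a > b\n    return wins / trials\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.4"
            },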
            {
                "text": "Japanese-English Chinese-English HIER 30.47 32.66 Table 4 : Evaluation results for hierarchical phrasebased SMT.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 50,
                        "end": 57,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.4"
            },
            {
                "text": "of the proposed PAIR. This confirms the effectiveness for considering relative word order and words between the CP and an NPC. The proposed PAIR outperformed 9-CLASS, confirming that considering both the word specified at the CP and the word specified at the NPC simultaneously was more effective than that of 9-CLASS. For translating between languages with widely different word orders such as Japanese and English, a small distortion limit is undesirable because there are cases where correct translations cannot be produced with a small distortion limit, since the distortion limit prunes the search space that does not meet the constraint. Therefore, a large distortion limit is required to translate correctly. For JE translation, our SEQUENCE achieved significantly better results at distortion limits of 20 and 30 than that at a distortion limit of 10, while the baseline systems of LINEAR, LINEAR+LEX, and 9-CLASS did not achieve this. This indicate that SEQUENCE could treat long distance reordering candidates more appropriately than the compared methods.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.4"
            },
            {
                "text": "We also tested hierarchical phrase-based SMT (Chiang, 2007) (HIER) using the Moses implementation. The common data was used to train HIER. We used unlimited max-chart-span for the system setting. Results are given in Table 4 . Our SEQUENCE outperformed HIER. The gain for JE was large but the gain for CE was modest. Since phrase-based SMT is generally faster in decoding speed than hierarchical phrase-based SMT, achieving better or comparable scores is worth-Distortion Probability Figure 5 : Average probabilities for large distortion for Japanese-English translation.",
                "cite_spans": [
                    {
                        "start": 45,
                        "end": 59,
                        "text": "(Chiang, 2007)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 217,
                        "end": 224,
                        "text": "Table 4",
                        "ref_id": null
                    },
                    {
                        "start": 484,
                        "end": 492,
                        "text": "Figure 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.4"
            },
            {
                "text": "To investigate the tolerance for sparsity of the training data, we reduced the training data for the sequence model to 20,000 sentences for JE translation. 14 SEQUENCE using this model with a distortion limit of 30 achieved a BLEU score of 32.22. 15 Although the score is lower than the score of SEQUENCE with a distortion limit of 30 in Table 3 , the score was still higher than those of LINEAR, LINEAR+LEX, and 9-CLASS for JE in Table 3 . This indicates that the sequence model also works even when the training data is not large. This is because the sequence model considers not only the word at the CP and the word at an NPC but also rich context, and rich context would be effective even for a smaller set of training data.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 338,
                        "end": 345,
                        "text": "Table 3",
                        "ref_id": null
                    },
                    {
                        "start": 431,
                        "end": 438,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "while.",
                "sec_num": null
            },
            {
                "text": "To investigate how well SEQUENCE learns the effect of distance, we checked the average distortion probabilities for large distortions of j \u2212 i \u2212 1. Figure 5 shows three kinds of probabilities for distortions from 3 to 20 for Japanese-English translation. One is the average distortion probabilities in the Japanese test sentences for each distortion for SEQUENCE, and another is this for PAIR. The third (CORPUS) is the probabilities for the actual distortions in the training data that were obtained from the word alignments used to build the translation model. The probability for a distortion for CORPUS was calculated by the number of the distortion divided by the total number of distortions in the training data. Figure 5 shows that when a distance class feature used in the model was the same (e.g., distortions from 5 to 20 were the same distance class feature), PAIR produced average distortion probabilities that were almost the same. In contrast, the average distortion probabilities for SEQUENCE decreased when the lengths of the distortions increased, even if the distance class feature was the same, and this behavior was the same as that of CORPUS. This confirms that the proposed SEQUENCE could learn the effect of distances appropriately from the training data. 16",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 148,
                        "end": 156,
                        "text": "Figure 5",
                        "ref_id": null
                    },
                    {
                        "start": 719,
                        "end": 727,
                        "text": "Figure 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "while.",
                "sec_num": null
            },
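            {
                "text": "A minimal sketch of the CORPUS probabilities (training_pairs holds the (CP, NP) pairs read off the word alignments): the probability of a distortion is its count divided by the total number of distortions.\n\n```python\nfrom collections import Counter\n\ndef corpus_distortion_probs(training_pairs):\n    # The distortion of a (CP i, NP j) pair is j - i - 1.\n    counts = Counter(j - i - 1 for i, j in training_pairs)\n    total = sum(counts.values())\n    return {d: c / total for d, c in counts.items()}\n```",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "4.4"
            },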
            {
                "text": "We discuss related works other than discussed in Section 2. Xiong et al. (2012) proposed a model predicting the orientation of an argument with respect to its verb using a parser. Syntactic structures and predicate-argument structures are useful for reordering. However, orientations do not handle distances. Thus, our distortion model does not compete against the methods predicting orientations using a parser and would assist them if used 16 We also checked the average distortion probabilities for the 9-CLASS outbound model in the Japanese test sentences for Japanese-English translation. We averaged the average probabilities for distortions in a distortion span of [4, 6] and also averaged those in a distortion span of [7, 20] , where the distortions in each span are in the same distortion class. The average probability for [4, 6] was 0.058 and that for [7, 20] was 0.165. From CORPUS, the average probabilities in the training data for each distortion in [4, 6] were higher than those for each distortion in [7, 20] . However, the converse was true for the comparison between the two average probabilities for the outbound model. This is because the sum of probabilities for distortions from 7 and above was larger than the sum of probabilities for distortions from 4 to 6 in the training data. This comparison indicates that the 9-CLASS outbound model could not appropriately learn the effects of large distances for JE translation.",
                "cite_spans": [
                    {
                        "start": 60,
                        "end": 79,
                        "text": "Xiong et al. (2012)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 442,
                        "end": 444,
                        "text": "16",
                        "ref_id": null
                    },
                    {
                        "start": 672,
                        "end": 675,
                        "text": "[4,",
                        "ref_id": null
                    },
                    {
                        "start": 676,
                        "end": 678,
                        "text": "6]",
                        "ref_id": null
                    },
                    {
                        "start": 727,
                        "end": 730,
                        "text": "[7,",
                        "ref_id": null
                    },
                    {
                        "start": 731,
                        "end": 734,
                        "text": "20]",
                        "ref_id": null
                    },
                    {
                        "start": 834,
                        "end": 837,
                        "text": "[4,",
                        "ref_id": null
                    },
                    {
                        "start": 838,
                        "end": 840,
                        "text": "6]",
                        "ref_id": null
                    },
                    {
                        "start": 864,
                        "end": 867,
                        "text": "[7,",
                        "ref_id": null
                    },
                    {
                        "start": 868,
                        "end": 871,
                        "text": "20]",
                        "ref_id": null
                    },
                    {
                        "start": 1019,
                        "end": 1022,
                        "text": "[7,",
                        "ref_id": null
                    },
                    {
                        "start": 1023,
                        "end": 1026,
                        "text": "20]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Works",
                "sec_num": "5"
            },
            {
                "text": "There are word reordering constraint methods using ITG (Wu, 1997) for phrase-based SMT (Zens et al., 2004; Yamamoto et al., 2008; Feng et al., 2010 ). These methods consider sentence level consistency with respect to ITG. The ITG constraint does not consider distances of reordering and was used with other distortion models. Our distortion model does not consider sentence level consistency, so our distortion model and ITG constraint methods are thought to be complementary.",
                "cite_spans": [
                    {
                        "start": 55,
                        "end": 65,
                        "text": "(Wu, 1997)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 87,
                        "end": 106,
                        "text": "(Zens et al., 2004;",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 107,
                        "end": 129,
                        "text": "Yamamoto et al., 2008;",
                        "ref_id": "BIBREF24"
                    },
                    {
                        "start": 130,
                        "end": 147,
                        "text": "Feng et al., 2010",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Works",
                "sec_num": "5"
            },
            {
                "text": "There are tree-based SMT methods (Chiang, 2007; Galley et al., 2004; Liu et al., 2006) . In many cases, tree-based SMT methods do not use the distortion models that consider reordering distance apart from translation rules because it is not trivial to use distortion scores considering the distances for decoders that do not generate hypotheses from left to right. If it could be applied to these methods, our distortion model might contribute to tree-based SMT methods. Investigating the effects will be for future work.",
                "cite_spans": [
                    {
                        "start": 33,
                        "end": 47,
                        "text": "(Chiang, 2007;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 48,
                        "end": 68,
                        "text": "Galley et al., 2004;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 69,
                        "end": 86,
                        "text": "Liu et al., 2006)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Works",
                "sec_num": "5"
            },
            {
                "text": "This paper described our distortion models for phrase-based SMT. Our sequence model simply consists of only one probabilistic model, but it can consider rich context. Experiments indicate that our models achieved better performance and the sequence model could learn the effect of distances appropriately. Since our models do not require a parser, they can be applied to many languages. Future work includes application to other language pairs, incorporation into ITG constraint methods and other reordering methods, and application to tree-based SMT methods.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "A language model also supports the estimation. 2 In this paper, reordering models for phrase-based SMT, which are intended to estimate the source word position to be translated next in decoding, are called distortion models. This estimation is used to produce a hypothesis in the target language word order sequentially from left to right.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Note that the sequence model does not only consider larger context than the pair model, but that it also considers labels. The pair model does not discriminate labels, whereas the sequence model uses label N and label I for the positions except for the CP, depending on each situation. For example, inFigure 3, at position 6, label N is used in the label sequence ID of 6, but label I is used in the label sequence IDs of 7 to 11. Namely, even if they are at the same position, the labels in the label sequences are different. The sequence model discriminates the label differences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We conducted JE and CE translation as examples of language pairs with different word orders and of languages where there is a great need for translation into English. 9 http://mecab.sourceforge.net/ 10 http://nlp.stanford.edu/software/segmenter.shtml 11 http://nlp.stanford.edu/software/tagger.shtml",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "When we counted features for selection, we only counted features that were from the feature templates of \u27e8si, sj\u27e9, \u27e8ti, tj\u27e9, \u27e8si, ti, tj\u27e9, and \u27e8sj, ti, tj\u27e9 inTable 1when j was not the NP, in order to avoid increasing the number of features.13 The inbound model is explained in footnote 5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We did not conduct experiments using larger training data because there would have been a very high computational cost to build models using the L-BFGS method.15 To avoid effects from differences in the SMT weighting parameters, we used the same SMT weighting parameters for SEQUENCE, with a distortion limit of 30, inTable 3.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Distortion models for statistical machine translation",
                "authors": [
                    {
                        "first": "Yaser",
                        "middle": [],
                        "last": "Al-Onaizan",
                        "suffix": ""
                    },
                    {
                        "first": "Kishore",
                        "middle": [],
                        "last": "Papineni",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "529--536",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yaser Al-Onaizan and Kishore Papineni. 2006. Dis- tortion models for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meet- ing of the Association for Computational Linguis- tics, pages 529-536, Sydney, Australia, July. Asso- ciation for Computational Linguistics.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "A maximum entropy approach to natural language processing",
                "authors": [
                    {
                        "first": "Adam",
                        "middle": [
                            "L"
                        ],
                        "last": "Berger",
                        "suffix": ""
                    },
                    {
                        "first": "Vincent",
                        "middle": [
                            "J"
                        ],
                        "last": "Della Pietra",
                        "suffix": ""
                    },
                    {
                        "first": "Stephen",
                        "middle": [
                            "A"
                        ],
                        "last": "Della Pietra",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Comput. Linguist",
                "volume": "22",
                "issue": "1",
                "pages": "39--71",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy ap- proach to natural language processing. Comput. Linguist., 22(1):39-71, March.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "A gaussian prior for smoothing maximum entropy models",
                "authors": [
                    {
                        "first": "Stanley",
                        "middle": [
                            "F"
                        ],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Ronald",
                        "middle": [],
                        "last": "Rosenfeld",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stanley F. Chen and Ronald Rosenfeld. 1999. A gaus- sian prior for smoothing maximum entropy models. Technical report.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Hierarchical phrase-based translation",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Chiang",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Computational Linguistics",
                "volume": "33",
                "issue": "2",
                "pages": "201--228",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Chiang. 2007. Hierarchical phrase-based trans- lation. Computational Linguistics, 33(2):201-228.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Learning preference relations from data",
                "authors": [
                    {
                        "first": "Theodoros",
                        "middle": [],
                        "last": "Evgniou",
                        "suffix": ""
                    },
                    {
                        "first": "Massimiliano",
                        "middle": [],
                        "last": "Pontil",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Neural Nets Lecture Notes in Computer Science",
                "volume": "2486",
                "issue": "",
                "pages": "23--32",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Theodoros Evgniou and Massimiliano Pontil. 2002. Learning preference relations from data. Neural Nets Lecture Notes in Computer Science, 2486:23- 32.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "An efficient shift-reduce decoding algorithm for phrased-based machine translation",
                "authors": [
                    {
                        "first": "Yang",
                        "middle": [],
                        "last": "Feng",
                        "suffix": ""
                    },
                    {
                        "first": "Haitao",
                        "middle": [],
                        "last": "Mi",
                        "suffix": ""
                    },
                    {
                        "first": "Yang",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Qun",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Coling 2010: Posters",
                "volume": "",
                "issue": "",
                "pages": "285--293",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yang Feng, Haitao Mi, Yang Liu, and Qun Liu. 2010. An efficient shift-reduce decoding algorithm for phrased-based machine translation. In Coling 2010: Posters, pages 285-293, Beijing, China, August. Coling 2010 Organizing Committee.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "A simple and effective hierarchical phrase reordering model",
                "authors": [
                    {
                        "first": "Michel",
                        "middle": [],
                        "last": "Galley",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher",
                        "middle": [
                            "D"
                        ],
                        "last": "Manning",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "848--856",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michel Galley and Christopher D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Process- ing, pages 848-856, Honolulu, Hawaii, October. As- sociation for Computational Linguistics.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "What's in a translation rule",
                "authors": [
                    {
                        "first": "Michel",
                        "middle": [],
                        "last": "Galley",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Hopkins",
                        "suffix": ""
                    },
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Marcu",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "HLT-NAACL 2004: Main Proceedings",
                "volume": "",
                "issue": "",
                "pages": "273--280",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceed- ings, pages 273-280, Boston, Massachusetts, USA, May 2 -May 7. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Overview of the patent machine translation task at the NTCIR-9 workshop",
                "authors": [
                    {
                        "first": "Isao",
                        "middle": [],
                        "last": "Goto",
                        "suffix": ""
                    },
                    {
                        "first": "Bin",
                        "middle": [],
                        "last": "Lu",
                        "suffix": ""
                    },
                    {
                        "first": "Ka",
                        "middle": [
                            "Po"
                        ],
                        "last": "Chow",
                        "suffix": ""
                    },
                    {
                        "first": "Eiichiro",
                        "middle": [],
                        "last": "Sumita",
                        "suffix": ""
                    },
                    {
                        "first": "Benjamin",
                        "middle": [
                            "K"
                        ],
                        "last": "Tsou",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of NTCIR-9",
                "volume": "",
                "issue": "",
                "pages": "559--578",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Isao Goto, Bin Lu, Ka Po Chow, Eiichiro Sumita, and Benjamin K. Tsou. 2011. Overview of the patent machine translation task at the NTCIR-9 workshop. In Proceedings of NTCIR-9, pages 559-578.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Improved models of distortion cost for statistical machine translation",
                "authors": [
                    {
                        "first": "Spence",
                        "middle": [],
                        "last": "Green",
                        "suffix": ""
                    },
                    {
                        "first": "Michel",
                        "middle": [],
                        "last": "Galley",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher",
                        "middle": [
                            "D"
                        ],
                        "last": "Manning",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "867--875",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Spence Green, Michel Galley, and Christopher D. Man- ning. 2010. Improved models of distortion cost for statistical machine translation. In Human Lan- guage Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 867-875, Los Angeles, California, June. Association for Compu- tational Linguistics.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    },
                    {
                        "first": "Amittai",
                        "middle": [],
                        "last": "Axelrod",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandra",
                        "middle": [
                            "Birch"
                        ],
                        "last": "Mayne",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Callison-Burch",
                        "suffix": ""
                    },
                    {
                        "first": "Miles",
                        "middle": [],
                        "last": "Osborne",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Talbot",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of the International Workshop on Spoken Language Translation",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh System Descrip- tion for the 2005 IWSLT Speech Translation Evalu- ation. In Proceedings of the International Workshop on Spoken Language Translation.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Moses: Open source toolkit for statistical machine translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    },
                    {
                        "first": "Hieu",
                        "middle": [],
                        "last": "Hoang",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandra",
                        "middle": [],
                        "last": "Birch",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Callison-Burch",
                        "suffix": ""
                    },
                    {
                        "first": "Marcello",
                        "middle": [],
                        "last": "Federico",
                        "suffix": ""
                    },
                    {
                        "first": "Nicola",
                        "middle": [],
                        "last": "Bertoldi",
                        "suffix": ""
                    },
                    {
                        "first": "Brooke",
                        "middle": [],
                        "last": "Cowan",
                        "suffix": ""
                    },
                    {
                        "first": "Wade",
                        "middle": [],
                        "last": "Shen",
                        "suffix": ""
                    },
                    {
                        "first": "Christine",
                        "middle": [],
                        "last": "Moran",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Zens",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Dyer",
                        "suffix": ""
                    },
                    {
                        "first": "Ondrej",
                        "middle": [],
                        "last": "Bojar",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandra",
                        "middle": [],
                        "last": "Constantin",
                        "suffix": ""
                    },
                    {
                        "first": "Evan",
                        "middle": [],
                        "last": "Herbst",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
                "volume": "",
                "issue": "",
                "pages": "177--180",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexan- dra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine transla- tion. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Com- panion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic, June. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Statistical significance tests for machine translation evaluation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of EMNLP 2004",
                "volume": "",
                "issue": "",
                "pages": "388--395",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 388-395, Barcelona, Spain, July. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Lafferty",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Mccallum",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Pereira",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proceedings of 18th International Conference on Machine Learning",
                "volume": "",
                "issue": "",
                "pages": "282--289",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Con- ditional random fields: Probabilistic models for seg- menting and labeling sequence data. In Proceedings of 18th International Conference on Machine Learn- ing, pages 282-289.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "On the limited memory method for large scale optimization",
                "authors": [
                    {
                        "first": "D",
                        "middle": [
                            "C"
                        ],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "Mathematical Programming B",
                "volume": "45",
                "issue": "3",
                "pages": "503--528",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "D.C. Liu and J. Nocedal. 1989. On the limited memory method for large scale optimization. Mathematical Programming B, 45(3):503-528.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Treeto-string alignment template for statistical machine translation",
                "authors": [
                    {
                        "first": "Yang",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Qun",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Shouxun",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "609--616",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree- to-string alignment template for statistical machine translation. In Proceedings of the 21st Interna- tional Conference on Computational Linguistics and 44th Annual Meeting of the Association for Compu- tational Linguistics, pages 609-616, Sydney, Aus- tralia, July. Association for Computational Linguis- tics.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Handling phrase reorderings for machine translation",
                "authors": [
                    {
                        "first": "Yizhao",
                        "middle": [],
                        "last": "Ni",
                        "suffix": ""
                    },
                    {
                        "first": "Craig",
                        "middle": [],
                        "last": "Saunders",
                        "suffix": ""
                    },
                    {
                        "first": "Sandor",
                        "middle": [],
                        "last": "Szedmak",
                        "suffix": ""
                    },
                    {
                        "first": "Mahesan",
                        "middle": [],
                        "last": "Niranjan",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers",
                "volume": "",
                "issue": "",
                "pages": "241--244",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yizhao Ni, Craig Saunders, Sandor Szedmak, and Ma- hesan Niranjan. 2009. Handling phrase reorder- ings for machine translation. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 241-244, Suntec, Singapore, August. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Minimum error rate training in statistical machine translation",
                "authors": [
                    {
                        "first": "Franz Josef",
                        "middle": [],
                        "last": "Och",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "160--167",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Franz Josef Och. 2003. Minimum error rate train- ing in statistical machine translation. In Proceed- ings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160-167, Sap- poro, Japan, July. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Bleu: a Method for Automatic Evaluation of Machine Translation",
                "authors": [
                    {
                        "first": "Kishore",
                        "middle": [],
                        "last": "Papineni",
                        "suffix": ""
                    },
                    {
                        "first": "Salim",
                        "middle": [],
                        "last": "Roukos",
                        "suffix": ""
                    },
                    {
                        "first": "Todd",
                        "middle": [],
                        "last": "Ward",
                        "suffix": ""
                    },
                    {
                        "first": "Wei-Jing",
                        "middle": [],
                        "last": "Zhu",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "311--318",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "A unigram orientation model for statistical machine translation",
                "authors": [
                    {
                        "first": "Christoph",
                        "middle": [],
                        "last": "Tillman",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "HLT-NAACL 2004: Short Papers",
                "volume": "",
                "issue": "",
                "pages": "101--104",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christoph Tillman. 2004. A unigram orienta- tion model for statistical machine translation. In Daniel Marcu Susan Dumais and Salim Roukos, ed- itors, HLT-NAACL 2004: Short Papers, pages 101- 104, Boston, Massachusetts, USA, May 2 -May 7. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
                "authors": [
                    {
                        "first": "Dekai",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Computational Linguistics",
                "volume": "23",
                "issue": "3",
                "pages": "377--403",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Improving a statistical mt system with automatically learned rewrite patterns",
                "authors": [
                    {
                        "first": "Fei",
                        "middle": [],
                        "last": "Xia",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Mccord",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of Coling",
                "volume": "",
                "issue": "",
                "pages": "508--514",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fei Xia and Michael McCord. 2004. Improving a sta- tistical mt system with automatically learned rewrite patterns. In Proceedings of Coling 2004, pages 508- 514, Geneva, Switzerland, Aug 23-Aug 27. COL- ING.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Modeling the translation of predicate-argument structure for smt",
                "authors": [
                    {
                        "first": "Deyi",
                        "middle": [],
                        "last": "Xiong",
                        "suffix": ""
                    },
                    {
                        "first": "Min",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Haizhou",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "902--911",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Deyi Xiong, Min Zhang, and Haizhou Li. 2012. Mod- eling the translation of predicate-argument structure for smt. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 902-911, Jeju Island, Korea, July. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "A syntaxbased statistical translation model",
                "authors": [
                    {
                        "first": "Kenji",
                        "middle": [],
                        "last": "Yamada",
                        "suffix": ""
                    },
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proceedings of 39th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "523--530",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax- based statistical translation model. In Proceedings of 39th Annual Meeting of the Association for Com- putational Linguistics, pages 523-530, Toulouse, France, July. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Imposing constraints from the source tree on ITG constraints for SMT",
                "authors": [
                    {
                        "first": "Hirofumi",
                        "middle": [],
                        "last": "Yamamoto",
                        "suffix": ""
                    },
                    {
                        "first": "Hideo",
                        "middle": [],
                        "last": "Okuma",
                        "suffix": ""
                    },
                    {
                        "first": "Eiichiro",
                        "middle": [],
                        "last": "Sumita",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the ACL-08: HLT Second Workshop on Syntax and Structure in Statistical Translation (SSST-2)",
                "volume": "",
                "issue": "",
                "pages": "1--9",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hirofumi Yamamoto, Hideo Okuma, and Eiichiro Sumita. 2008. Imposing constraints from the source tree on ITG constraints for SMT. In Proceedings of the ACL-08: HLT Second Workshop on Syntax and Structure in Statistical Translation (SSST-2), pages 1-9, Columbus, Ohio, June. Association for Com- putational Linguistics.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Reordering constraints for phrase-based statistical machine translation",
                "authors": [
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Zens",
                        "suffix": ""
                    },
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    },
                    {
                        "first": "Taro",
                        "middle": [],
                        "last": "Watanabe",
                        "suffix": ""
                    },
                    {
                        "first": "Eiichiro",
                        "middle": [],
                        "last": "Sumita",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of Coling",
                "volume": "",
                "issue": "",
                "pages": "205--211",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Richard Zens, Hermann Ney, Taro Watanabe, and Ei- ichiro Sumita. 2004. Reordering constraints for phrase-based statistical machine translation. In Pro- ceedings of Coling 2004, pages 205-211, Geneva, Switzerland, Aug 23-Aug 27. COLING.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "An example of left-to-right translation for Japanese-English. Boxes represent phrases and arrows indicate the translation order of the phrases.",
                "uris": null,
                "type_str": "figure",
                "num": null
            },
            "FIGREF1": {
                "text": "Examples of CP and NP for Japanese-English translation. The upper sentence is the source sentence and the sentence underneath is a target hypothesis for each example. The NP is in bold, and the CP is in bold italics. The point of an arrow with a \u00d7 mark indicates a wrong NP candidate.",
                "uris": null,
                "type_str": "figure",
                "num": null
            },
            "FIGREF2": {
                "text": "Example of label sequences that specify spans from the CP to each NPC for the case ofFigure 2(c). The labels (C, I, and N) in the boxes are the label sequences.",
                "uris": null,
                "type_str": "figure",
                "num": null
            },
            "FIGREF4": {
                "text": "Examples of supervised training data. The lines represent word alignments. The English side arrows point to the nearest word aligned on the right.",
                "uris": null,
                "type_str": "figure",
                "num": null
            },
            "TABREF0": {
                "type_str": "table",
                "num": null,
                "text": "Feature templates. t is the part of speech of s.w is a weight parameter vector, each element of f (\u2022) is a binary feature function, and Z i = \u2211 j\u2208{j|1\u2264j\u2264n+1 \u2227 j\u0338 =i} (numerator of Equation 1) is a normalization factor. o is an orientation of i to j and d is a distance class.",
                "content": "<table/>",
                "html": null
            },
            "TABREF1": {
                "type_str": "table",
                "num": null,
                "text": "The \"C, I, and N\" label set.",
                "content": "<table><tr><td/><td>1</td><td colspan=\"2\">N C</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Label sequence ID</td><td>3 4 5 6 7 8 9 10</td><td/><td colspan=\"2\">C N C I C I C I C I C I C I C I</td><td>N I I I I I I</td><td>N I I I I I</td><td>N I I I I</td><td>N I I I</td><td>N I I</td><td>N I</td><td>N</td></tr><tr><td/><td>11</td><td/><td>C</td><td>I</td><td>I</td><td>I</td><td>I</td><td>I</td><td>I</td><td>I</td><td>I</td><td>N</td></tr><tr><td/><td>BOS 0</td><td>kinou 1</td><td>kare 2</td><td>wa 3</td><td>hon 4</td><td>wo 5</td><td>karita 6</td><td>ga 7</td><td>kanojo 8</td><td>wa 9</td><td>katta 10</td><td>EOS 11</td></tr><tr><td/><td/><td>(yesterday)</td><td>(he)</td><td/><td>(book)</td><td/><td>(borrowed)</td><td/><td>(she)</td><td/><td>(bought)</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"4\">Source sentence</td><td/><td/><td/></tr></table>",
                "html": null
            }
        }
    }
}