{
    "paper_id": "P07-1003",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:49:31.877376Z"
    },
    "title": "Tailoring Word Alignments to Syntactic Machine Translation",
    "authors": [
        {
            "first": "John",
            "middle": [],
            "last": "DeNero",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of California",
                "location": {
                    "settlement": "Berkeley"
                }
            },
            "email": "denero@berkeley.edu"
        },
        {
            "first": "Dan",
            "middle": [],
            "last": "Klein",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of California",
                "location": {
                    "settlement": "Berkeley"
                }
            },
            "email": "klein@cs.berkeley.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Extracting tree transducer rules for syntactic MT systems can be hindered by word alignment errors that violate syntactic correspondences. We propose a novel model for unsupervised word alignment which explicitly takes into account target language constituent structure, while retaining the robustness and efficiency of the HMM alignment model. Our model's predictions improve the yield of a tree transducer extraction system, without sacrificing alignment quality. We also discuss the impact of various posterior-based methods of reconciling bidirectional alignments.",
    "pdf_parse": {
        "paper_id": "P07-1003",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Extracting tree transducer rules for syntactic MT systems can be hindered by word alignment errors that violate syntactic correspondences. We propose a novel model for unsupervised word alignment which explicitly takes into account target language constituent structure, while retaining the robustness and efficiency of the HMM alignment model. Our model's predictions improve the yield of a tree transducer extraction system, without sacrificing alignment quality. We also discuss the impact of various posterior-based methods of reconciling bidirectional alignments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Syntactic methods are an increasingly promising approach to statistical machine translation, being both algorithmically appealing (Melamed, 2004; Wu, 1997) and empirically successful (Chiang, 2005; Galley et al., 2006) . However, despite recent progress, almost all syntactic MT systems, indeed statistical MT systems in general, build upon crude legacy models of word alignment. This dependence runs deep; for example, Galley et al. (2006) requires word alignments to project trees from the target language to the source, while Chiang (2005) requires alignments to induce grammar rules.",
                "cite_spans": [
                    {
                        "start": 130,
                        "end": 145,
                        "text": "(Melamed, 2004;",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 146,
                        "end": 155,
                        "text": "Wu, 1997)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 183,
                        "end": 197,
                        "text": "(Chiang, 2005;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 198,
                        "end": 218,
                        "text": "Galley et al., 2006)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 420,
                        "end": 440,
                        "text": "Galley et al. (2006)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 529,
                        "end": 542,
                        "text": "Chiang (2005)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Word alignment models have not stood still in recent years. Unsupervised methods have seen substantial reductions in alignment error (Liang et al., 2006) as measured by the now much-maligned AER metric. A host of discriminative methods have been introduced (Taskar et al., 2005; Moore, 2005; Ayan and Dorr, 2006) . However, few of these methods have explicitly addressed the tension between word alignments and the syntactic processes that employ them (Cherry and Lin, 2006; Daum\u00e9 III and Marcu, 2005; Lopez and Resnik, 2005) .",
                "cite_spans": [
                    {
                        "start": 133,
                        "end": 153,
                        "text": "(Liang et al., 2006)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 257,
                        "end": 278,
                        "text": "(Taskar et al., 2005;",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 279,
                        "end": 291,
                        "text": "Moore, 2005;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 292,
                        "end": 312,
                        "text": "Ayan and Dorr, 2006)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 452,
                        "end": 474,
                        "text": "(Cherry and Lin, 2006;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 475,
                        "end": 501,
                        "text": "Daum\u00e9 III and Marcu, 2005;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 502,
                        "end": 525,
                        "text": "Lopez and Resnik, 2005)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We are particularly motivated by systems like the one described in Galley et al. (2006) , which constructs translations using tree-to-string transducer rules. These rules are extracted from a bitext annotated with both English (target side) parses and word alignments. Rules are extracted from target side constituents that can be projected onto contiguous spans of the source sentence via the word alignment. Constituents that project onto non-contiguous spans of the source sentence do not yield transducer rules themselves, and can only be incorporated into larger transducer rules. Thus, if the word alignment of a sentence pair does not respect the constituent structure of the target sentence, then the minimal translation units must span large tree fragments, which do not generalize well.",
                "cite_spans": [
                    {
                        "start": 67,
                        "end": 87,
                        "text": "Galley et al. (2006)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We present and evaluate an unsupervised word alignment model similar in character and computation to the HMM model (Ney and Vogel, 1996) , but which incorporates a novel, syntax-aware distortion component which conditions on target language parse trees. These trees, while automatically generated and therefore imperfect, are nonetheless (1) a useful source of structural bias and (2) the same trees which constrain future stages of processing anyway. In our model, the trees do not rule out any alignments, but rather softly influence the probability of transitioning between alignment positions. In particular, transition probabilities condition upon paths through the target parse tree, allowing the model to prefer distortions which respect the tree structure.",
                "cite_spans": [
                    {
                        "start": 115,
                        "end": 136,
                        "text": "(Ney and Vogel, 1996)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Our model generates word alignments that better respect the parse trees upon which they are conditioned, without sacrificing alignment quality. Using the joint training technique of Liang et al. (2006) to initialize the model parameters, we achieve an AER superior to the GIZA++ implementation of IBM model 4 (Och and Ney, 2003) and a reduction of 56.3% in aligned interior nodes, a measure of agreement between alignments and parses. As a result, our alignments yield more rules, which better match those we would extract had we used manual alignments.",
                "cite_spans": [
                    {
                        "start": 182,
                        "end": 201,
                        "text": "Liang et al. (2006)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 318,
                        "end": 328,
                        "text": "Ney, 2003)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In a tree transducer system, as in phrase-based systems, the coverage and generality of the transducer inventory is strongly related to the effectiveness of the translation model (Galley et al., 2006) . We will demonstrate that this coverage, in turn, is related to the degree to which initial word alignments respect syntactic correspondences. Galley et al. (2004) proposes a method for extracting tree transducer rules from a parallel corpus. Given a source language sentence s, a target language parse tree t of its translation, and a word-level alignment, their algorithm identifies the constituents in t which map onto contiguous substrings of s via the alignment. The root nodes of such constituents -denoted frontier nodes -serve as the roots and leaves of tree fragments that form minimal transducer rules.",
                "cite_spans": [
                    {
                        "start": 179,
                        "end": 200,
                        "text": "(Galley et al., 2006)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 345,
                        "end": 365,
                        "text": "Galley et al. (2004)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation with Tree Transducers",
                "sec_num": "2"
            },
            {
                "text": "Frontier nodes are distinguished by their compatibility with the word alignment. For a constituent c of t, we consider the set of source words s_c that are aligned to c. If none of the source words in the linear closure s*_c (the words between the leftmost and rightmost members of s_c) aligns to a target word outside of c, then the root of c is a frontier node. The remaining interior nodes do not generate rules, but can play a secondary role in a translation system. 1 The roots of null-aligned constituents are not frontier nodes, but can attach productively to multiple minimal rules. Two transducer rules, t_1 \u2192 s_1 and t_2 \u2192 s_2, can be combined to form larger translation units by composing t_1 and t_2 at a shared frontier node and appropriately concatenating s_1 and s_2. However, no technique has yet been shown to robustly extract smaller component rules from a large transducer rule. Thus, for the purpose of maximizing the coverage of the extracted translation model, we prefer to extract many small, minimal rules and generate larger rules via composition. Maximizing the number of frontier nodes supports this goal, while inducing many aligned interior nodes hinders it.",
                "cite_spans": [
                    {
                        "start": 473,
                        "end": 474,
                        "text": "1",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Rule Extraction",
                "sec_num": "2.1"
            },
            {
                "text": "We now turn to the interaction between word alignments and the transducer extraction algorithm. Consider the example sentence in figure 1A , which demonstrates how a particular type of alignment error prevents the extraction of many useful transducer rules. The mistaken link [la \u21d2 the] intervenes between ax\u00e9s and carri\u00e8re, which both align within an English adjective phrase, while la aligns to a distant subspan of the English parse tree. In this way, the alignment violates the constituent structure of the English parse.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 129,
                        "end": 138,
                        "text": "figure 1A",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Word Alignment Interactions",
                "sec_num": "2.2"
            },
            {
                "text": "While alignment errors are undesirable in general, this error is particularly problematic for a syntax-based translation system. In a phrase-based system, this link would block extraction of the phrases [ax\u00e9s sur la carri\u00e8re \u21d2 career oriented] and [les emplois \u21d2 the jobs] because the error overlaps with both. However, the intervening phrase [emplois sont \u21d2 jobs are] would still be extracted, at least capturing the transfer of subject-verb agreement. By contrast, the tree transducer extraction method fails to extract any of these fragments: the alignment error causes all non-terminal nodes in the parse tree to be interior nodes, excluding preterminals and the root. Figure 1B exposes the consequences: a wide array of desired rules are lost during extraction.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 672,
                        "end": 681,
                        "text": "Figure 1B",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Word Alignment Interactions",
                "sec_num": "2.2"
            },
            {
                "text": "The degree to which a word alignment respects the constituent structure of a parse tree can be quantified by the frequency of interior nodes, which indicate alignment patterns that cross constituent boundaries. To achieve maximum coverage of the translation model, we hope to infer tree-violating alignments only when syntactic structures truly diverge.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word Alignment Interactions",
                "sec_num": "2.2"
            },
            {
                "text": "Correct proposed word alignment consistent with human annotation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Legend",
                "sec_num": null
            },
            {
                "text": "Proposed word alignment error inconsistent with human annotation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Legend",
                "sec_num": null
            },
            {
                "text": "Word alignment constellation that renders the root of the relevant constituent to be an interior node.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Legend",
                "sec_num": null
            },
            {
                "text": "Word alignment constellation that would allow a phrase extraction in a phrase-based translation system, but which does not correspond to an English constituent.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Legend",
                "sec_num": null
            },
            {
                "text": "Frontier node (agrees with alignment)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Italic",
                "sec_num": null
            },
            {
                "text": "Interior node (inconsistent with alignment)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Italic",
                "sec_num": null
            },
            {
                "text": "(S (NP (DT[0] NNS[1]) (VP AUX[2] (ADJV NN[3] VBN[4]) .[5]) \u2192 [0] [1] [2] [3] [4] [5]; (S (NP (DT[0] (NNS jobs)) (VP AUX[1] (ADJV NN[2] VBN[3]) .[4]) \u2192 [0] sont [1] [2] [3] [4]; (S (NP (DT[0] (NNS jobs)) (VP (AUX are) (ADJV NN[1] VBN[2]) .[3]) \u2192 [0] emplois sont [1] [2] [3]; (S NP[0] VP[1] .[2]) \u2192 [0] [1] [2]; (S (NP (DT[0] NNS[1]) VP[2] .[3]) \u2192 [0] [1] [2] [3]; (S (NP (DT[0] (NNS jobs)) VP[2] .[3]) \u2192 [0] emplois [2] [3]; (S (NP (DT[0] (NNS jobs)) (VP AUX[1] ADJV[2]) .[3]) \u2192 [0] emplois [1] [2] [3]; (S (NP (DT[0] (NNS jobs)) (VP (AUX are) ADJV[1]) .[2]) \u2192 [0] emplois sont [1] [2]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Italic",
                "sec_num": null
            },
            {
                "text": "Figure 1: In this transducer extraction example, (A) shows a proposed alignment from our test set with an alignment error that violates the constituent structure of the English sentence. The resulting frontier nodes are printed in bold; all nodes would be frontier nodes under a correct alignment. (B) shows a small sample of the rules extracted under the proposed alignment, (ii), and the correct alignment, (i) and (ii). The single alignment error prevents the extraction of all rules in (i) and many more. This alignment pattern was observed in our test set and corrected by our model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Italic",
                "sec_num": null
            },
            {
                "text": "To allow for this preference, we present a novel conditional alignment model of a foreign (source) sentence f = {f_1, ..., f_J} given an English (target) sentence e = {e_1, ..., e_I} and a target tree structure t. Like the classic IBM models (Brown et al., 1994) , our model will introduce a latent alignment vector a = {a_1, ..., a_J} that specifies the position of an aligned target word for each source word. Formally, our model describes p(a, f|e, t), but otherwise borrows heavily from the HMM alignment model of Ney and Vogel (1996) .",
                "cite_spans": [
                    {
                        "start": 246,
                        "end": 266,
                        "text": "(Brown et al., 1994)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 524,
                        "end": 544,
                        "text": "Ney and Vogel (1996)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Word Alignment",
                "sec_num": "3"
            },
            {
                "text": "The HMM model captures the intuition that the alignment vector a will in general progress across the sentence e in a pattern which is mostly local, perhaps with a few large jumps. That is, alignments are locally monotonic more often than not. Formally, the HMM model factors as:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Word Alignment",
                "sec_num": "3"
            },
            {
                "text": "p(a, f|e) = \u220f_{j=1}^{J} p_d(a_j | a_{j\u2212}, j) p_t(f_j | e_{a_j})",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Word Alignment",
                "sec_num": "3"
            },
            {
                "text": "where j\u2212 is the position of the last non-null-aligned source word before position j, p_t is a lexical transfer model, and p_d is a local distortion model. As in all such models, the lexical component p_t is a collection of unsmoothed multinomial distributions over foreign words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Word Alignment",
                "sec_num": "3"
            },
            {
                "text": "The distortion model p_d(a_j | a_{j\u2212}, j)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Word Alignment",
                "sec_num": "3"
            },
            {
                "text": "is a distribution over the signed distance a j \u2212 a j \u2212 , typically parameterized as a multinomial, Gaussian or exponential distribution. The implementation that serves as our baseline uses a multinomial distribution with separate parameters for j = 1, j = J and shared parameters for all 1 < j < J. Null alignments have fixed probability at any position. Inference over a requires only the standard forward-backward algorithm.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Unsupervised Word Alignment",
                "sec_num": "3"
            },
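The factorization and jump-based distortion above can be sketched numerically. A minimal toy scorer, assuming illustrative (not learned) lexical and distortion tables and a fixed null-alignment probability:

```python
# Toy scorer for the HMM factorization:
#   p(a, f | e) = prod_j p_d(a_j | a_{j-}, j) * p(f_j | e_{a_j})
# All probability tables here are illustrative assumptions, not learned values.

def hmm_score(alignment, f, e, p_lex, p_dist, p_null=0.1):
    """alignment: one English position (1-based) per foreign word; 0 = null."""
    score = 1.0
    prev = None  # j-: position of the last non-null-aligned word
    for j, a_j in enumerate(alignment):
        if a_j == 0:
            score *= p_null          # null alignments have fixed probability
            continue
        jump = a_j - prev if prev is not None else a_j  # signed distance
        score *= p_dist.get(jump, 1e-6) * p_lex.get((f[j], e[a_j - 1]), 1e-6)
        prev = a_j
    return score

p_lex = {("la", "the"): 0.8, ("maison", "house"): 0.7}
p_dist = {1: 0.6, 2: 0.2, -1: 0.1}
score = hmm_score([1, 2], ["la", "maison"], ["the", "house"], p_lex, p_dist)
```

A real implementation sums over all alignment vectors with forward-backward; this only scores one fixed vector a.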
            {
                "text": "The broad and robust success of the HMM alignment model underscores the utility of its assumptions: that word-level translations can be usefully modeled via first-degree Markov transitions and independent lexical productions. However, its distortion model considers only string distance, disregarding the constituent structure of the English sentence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "To allow syntax-sensitive distortion, we consider a new distortion model of the form",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "p d (a j |a j \u2212 , j, t).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "We condition on t via a generative process that transitions between two English positions by traversing the unique shortest path \u03c1 (a j \u2212 ,a j ,t) through t from a j \u2212 to a j . We constrain ourselves to this shortest path using a staged generative process.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "Stage 1 (POP(n), STOP(n)): Starting in the leaf node at a j \u2212 , we choose whether to STOP or POP from child to parent, conditioning on the type of the parent noden. Upon choosing STOP, we transition to stage 2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "Stage 2 (MOVE(n, d)): Again, conditioning on the type of the parentn of the current node n, we choose a siblingn based on the signed distance d = \u03c6n(n) \u2212 \u03c6n(n), where \u03c6n(n) is the index of n in the child list ofn. Zero distance moves are disallowed. After exactly one MOVE, we transition to stage 3.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "Stage 3 (PUSH(n, \u03c6 n (n))): Given the current node n, we select one of its childrenn, conditioning on the type of n and the position of the child \u03c6 n (n). We continue to PUSH until reaching a leaf.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "This process is a first-degree Markov walk through the tree, conditioning on the current node and its immediate surroundings at each step. We enforce the property that \u03c1 (a j \u2212 ,a j ,t) be unique by staging the process and disallowing zero distance moves in stage 2. Figure 2 gives an example sequence of tree transitions for a small parse tree. The parameterization of this distortion model follows directly from its generative process. Given a path \u03c1 (a j \u2212 ,a j ,t) with r = k + m + 3 nodes including the two leaves, the nearest common ancestor, k intervening nodes on the ascent and m on the descent, we express it as a triple of staged tree transitions that include k POPs, a STOP, a MOVE, and m PUSHes:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 267,
                        "end": 275,
                        "text": "Figure 2",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "\uf8eb \uf8ed {POP(n 2 ), ..., POP(n k+1 ), STOP(n k+2 )} {MOVE (n k+2 , \u03c6(n k+3 ) \u2212 \u03c6(n k+1 ))} {PUSH (n k+3 , \u03c6(n k+4 )) , ..., PUSH (n r\u22121 , \u03c6(n r ))} \uf8f6 \uf8f8",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
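The decomposition above can be sketched in code. A minimal version, assuming each tree node records its parent and its index among its siblings (the node names and tree representation are hypothetical):

```python
# Sketch: decompose the unique leaf-to-leaf tree path into the staged
# POP / STOP / MOVE / PUSH transitions. A tree is a dict mapping each node
# to (parent, child_index), with None at the root; leaves must be distinct.

def root_path(node, parent):
    path = [node]
    while parent[node] is not None:
        node = parent[node][0]
        path.append(node)
    return path[::-1]  # root ... leaf

def staged_transitions(leaf_a, leaf_b, parent):
    pa, pb = root_path(leaf_a, parent), root_path(leaf_b, parent)
    k = 0
    while pa[k] == pb[k]:
        k += 1                      # pa[k], pb[k]: siblings under the NCA
    moves = [("POP", n) for n in reversed(pa[k + 1:])]   # Stage 1: ascend
    moves.append(("STOP", pa[k]))
    d = parent[pb[k]][1] - parent[pa[k]][1]              # Stage 2: one MOVE
    moves.append(("MOVE", pb[k], d))
    moves += [("PUSH", n) for n in pb[k + 1:]]           # Stage 3: descend
    return moves

tree = {"S": None, "NP": ("S", 0), "VP": ("S", 1),
        "the": ("NP", 0), "dog": ("NP", 1), "barks": ("VP", 0)}
path = staged_transitions("dog", "barks", tree)
```

Because the two root paths share exactly the prefix up to the nearest common ancestor, the decomposition (and hence the path) is unique, mirroring the uniqueness argument in the text.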
            {
                "text": "Next, we assign probabilities to each tree transition in each stage. In selecting these distributions, we aim to maintain the original HMM's sensitivity to target word order:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "\u2022 Selecting POP or STOP is a simple Bernoulli distribution conditioned upon a node type.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "\u2022 We model both MOVE and PUSH as multinomial distributions over the signed distance in positions (assuming a starting position of 0 for PUSH), echoing the parameterization popular in implementations of the HMM model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "This model reduces to the classic HMM distortion model given minimal English trees of only uniformly labeled pre-terminals and a root node. The classic 0-distance distortion would correspond to the 20 For instance, the short path from relieve to on gives a high transition likelihood.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "STOP probability of the pre-terminal label; all other distances would correspond to MOVE probabilities conditioned on the root label, and the probability of transitioning to the terminal state would correspond to the POP probability of the root label. As in a multinomial-distortion implementation of the classic HMM model, we must sometimes artificially normalize these distributions in the deficient case that certain jumps extend beyond the ends of the local rules. For this reason, MOVE and PUSH are actually parameterized by three values: a node type, a signed distance, and a range of options that dictates a normalization adjustment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "Once each tree transition generates a score, their product gives the probability of the entire path, and thereby the cost of the transition between string positions. Figure 3 shows an example learned distribution that reflects the structure of the given parse.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 166,
                        "end": 174,
                        "text": "Figure 3",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "With these derivation steps in place, we must address a handful of special cases to complete the generative model. We require that the Markov walk from leaf to leaf of the English tree must start and end at the root, using the following assumptions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "1. Given no previous alignment, we forego stages 1 and 2 and begin with a series of PUSHes from the root of the tree to the desired leaf.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "2. Given no subsequent alignments, we skip stages 2 and 3 after a series of POPs including a pop conditioned on the root node.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "3. If the first choice in stage 1 is to STOP at the current leaf, then stage 2 and 3 are unnecessary. Hence, a choice to STOP immediately is a choice to emit another foreign word from the current English word.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "4. We flatten unary transitions from the tree when computing distortion probabilities.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "5. Null alignments are treated just as in the HMM model, incurring a fixed cost from any position.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "This model can be simplified by removing all conditioning on node types. However, we found this variant to slightly underperform the full model described above. Intuitively, types carry information about cross-linguistic ordering preferences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntax-Sensitive Distortion",
                "sec_num": "3.1"
            },
            {
                "text": "Because our model largely mirrors the generative process and structure of the original HMM model, we apply a nearly identical training procedure to fit the parameters to the training data via the Expectation-Maximization algorithm. Och and Ney (2003) gives a detailed exposition of the technique.",
                "cite_spans": [
                    {
                        "start": 232,
                        "end": 250,
                        "text": "Och and Ney (2003)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training Approach",
                "sec_num": "3.2"
            },
            {
                "text": "In the E-step, we employ the forward-backward algorithm and current parameters to find expected counts for each potential pair of links in each training pair. In this familiar dynamic programming approach, we must compute the distortion probabilities for each pair of English positions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training Approach",
                "sec_num": "3.2"
            },
            {
                "text": "The minimal path between two leaves in a tree can be computed efficiently by first finding the path from the root to each leaf, then comparing those paths to find the nearest common ancestor and a path through it -requiring time linear in the height of the tree. Computing distortion costs independently for each pair of words in the sentence imposed a computational overhead of roughly 50% over the original HMM model. The bulk of this increase arises from the fact that distortion probabilities in this model must be computed for each unique tree, in contrast to the original HMM which has the same distortion probabilities for all sentences of a given length.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training Approach",
                "sec_num": "3.2"
            },
            {
                "text": "In the M-step, we re-estimate the parameters of the model using the expected counts collected during the E-step. All of the component distributions of our lexical and distortion models are multinomials. Thus, upon assuming these expectations as values for the hidden alignment vectors, we maximize likelihood of the training data simply by computing relative frequencies for each component multinomial. For the distortion model, an expected count c(a j , a j \u2212 ) is allocated to all tree transitions along the path \u03c1 (a j \u2212 ,a j ,t) . These allocations are summed and normalized for each tree transition type to complete re-estimation. The method of re-estimating the lexical model remains unchanged.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training Approach",
                "sec_num": "3.2"
            },
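The relative-frequency M-step can be sketched directly; the conditioning contexts and outcomes below are illustrative stand-ins for node types and tree transitions:

```python
# Sketch of the M-step for a collection of multinomials: normalize expected
# counts within each conditioning context. Keys here are illustrative.
from collections import defaultdict

def m_step(expected_counts):
    """expected_counts: dict mapping (condition, outcome) -> expected count."""
    totals = defaultdict(float)
    for (cond, _), c in expected_counts.items():
        totals[cond] += c
    return {(cond, out): c / totals[cond]
            for (cond, out), c in expected_counts.items()}

probs = m_step({("NP", "MOVE+1"): 3.0, ("NP", "MOVE-1"): 1.0,
                ("VP", "STOP"): 2.0})
```

Each conditioning context sums to one afterward, which is exactly the maximum-likelihood estimate for a multinomial given the expected counts.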
            {
                "text": "Initialization of the lexical model affects performance dramatically. Using the simple but effective joint training technique of Liang et al. (2006) , we initialized the model with lexical parameters from a jointly trained implementation of IBM Model 1. Liang et al. (2006) shows that thresholding the posterior probabilities of alignments improves AER relative to computing Viterbi alignments. That is, we choose a threshold \u03c4 (typically \u03c4 = 0.5), and take a = {(i, j) : p(a j = i|f, e) > \u03c4 }.",
                "cite_spans": [
                    {
                        "start": 129,
                        "end": 148,
                        "text": "Liang et al. (2006)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 254,
                        "end": 273,
                        "text": "Liang et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training Approach",
                "sec_num": "3.2"
            },
            {
                "text": "Posterior thresholding provides computationally convenient ways to combine multiple alignments, and bidirectional combination often corrects for errors in individual directional alignment models. Liang et al. (2006) suggests a soft intersection of a model m with a reverse model r (foreign to English) that thresholds the product of their posteriors at each position:",
                "cite_spans": [
                    {
                        "start": 196,
                        "end": 215,
                        "text": "Liang et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Improved Posterior Inference",
                "sec_num": "3.3"
            },
            {
                "text": "a = {(i, j) : p m (a j = i|f, e) \u2022 p r (a i = j|f, e) > \u03c4 } .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Improved Posterior Inference",
                "sec_num": "3.3"
            },
            {
                "text": "These intersected alignments can be quite sparse, boosting precision at the expense of recall. We explore a generalized version to this approach by varying the function c that combines p m and p r :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Improved Posterior Inference",
                "sec_num": "3.3"
            },
            {
                "text": "a = {(i, j) : c(p m , p r ) > \u03c4 }.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Improved Posterior Inference",
                "sec_num": "3.3"
            },
            {
                "text": "If c is the max function, we recover the (hard) union of the forward and reverse posterior alignments. If c is the min function, we recover the (hard) intersection. A novel, high performing alternative is the soft union, which we evaluate in the next section:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Improved Posterior Inference",
                "sec_num": "3.3"
            },
            {
                "text": "c(p m , p r ) = p m (a j = i|f, e) + p r (a i = j|f, e) 2 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Improved Posterior Inference",
                "sec_num": "3.3"
            },
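The hard union, hard intersection, and soft union can all be sketched with one decoder and a pluggable combiner; the posterior matrices below are illustrative, with p_m indexed [j][i] and p_r indexed [i][j]:

```python
# Posterior decoding with a pluggable combiner c(p_m, p_r), thresholded at tau.
# p_m[j][i] ~ p_m(a_j = i | f, e); p_r[i][j] ~ p_r(a_i = j | f, e).

def decode(p_m, p_r, c, tau=0.5):
    links = set()
    for j, row in enumerate(p_m):
        for i, p in enumerate(row):
            if c(p, p_r[i][j]) > tau:
                links.add((i, j))
    return links

soft_union = lambda pm, pr: (pm + pr) / 2  # the soft union above
hard_intersection = min                    # c = min recovers the intersection
hard_union = max                           # c = max recovers the union

p_m = [[0.9, 0.2], [0.1, 0.7]]
p_r = [[0.8, 0.3], [0.0, 0.9]]
links = decode(p_m, p_r, soft_union)
```

Since min <= mean <= max pointwise, the soft union always lies between the hard intersection and the hard union for a fixed threshold.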
            {
                "text": "Syntax-alignment compatibility can be further promoted with a simple posterior decoding heuristic we call competitive thresholding. Given a threshold and a matrix c of combined weights for each possible link in an alignment, we include a link (i, j) only if its weight c ij is above-threshold and it is connected to the maximum weighted link in both row i and column j. That is, only the maximum in each column and row and a contiguous enclosing span of above-threshold links are included in the alignment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Improved Posterior Inference",
                "sec_num": "3.3"
            },
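A simplified sketch of this heuristic, keeping only above-threshold links that are maximal in their row or column (the full heuristic in the text additionally keeps a contiguous enclosing span of above-threshold neighbors):

```python
# Simplified competitive thresholding over a combined-weight matrix c[i][j]:
# keep a link only if it beats the threshold and is the maximum of its row
# or of its column. (This omits the contiguous-span extension described in
# the text.)

def competitive_threshold(c, tau=0.5):
    links = set()
    for i, row in enumerate(c):
        for j, w in enumerate(row):
            col_max = max(r[j] for r in c)
            if w > tau and (w == max(row) or w == col_max):
                links.add((i, j))
    return links

c = [[0.9, 0.6], [0.1, 0.8]]
links = competitive_threshold(c)
```

Here (0, 1) is above threshold but dominated in both its row and its column, so it is pruned.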
            {
                "text": "This proposed model is not the first variant of the HMM model that incorporates syntax-based distortion. Lopez and Resnik (2005) considers a simpler tree distance distortion model. Daum\u00e9 III and Marcu (2005) employs a syntax-aware distortion model for aligning summaries to documents, but condition upon the roots of the constituents that are jumped over during a transition, instead of those that are visited during a walk through the tree. In the case of syntactic machine translation, we want to condition on crossing constituent boundaries, even if no constituents are skipped in the process.",
                "cite_spans": [
                    {
                        "start": 105,
                        "end": 128,
                        "text": "Lopez and Resnik (2005)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 181,
                        "end": 207,
                        "text": "Daum\u00e9 III and Marcu (2005)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "3.4"
            },
            {
                "text": "To understand the behavior of this model, we computed the standard alignment error rate (AER) performance metric. 2 We also investigated extractionspecific metrics: the frequency of interior nodes -a measure of how often the alignments violate the constituent structure of English parses -and a variant of the CPER metric of Ayan and Dorr (2006) .",
                "cite_spans": [
                    {
                        "start": 114,
                        "end": 115,
                        "text": "2",
                        "ref_id": null
                    },
                    {
                        "start": 325,
                        "end": 345,
                        "text": "Ayan and Dorr (2006)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Results",
                "sec_num": "4"
            },
            {
                "text": "We evaluated the performance of our model on both French-English and Chinese-English manually aligned data sets. For Chinese, we trained on the FBIS corpus and the LDC bilingual dictionary, then tested on 491 hand-aligned sentences from the 2002 2The hand-aligned test data has been annotated with both sure alignments S and possible alignments P , with S \u2286 P , according to the specifications described in Och and Ney (2003) . With these alignments, we compute AER for a proposed alignment A as: NIST MT evaluation set. For French, we used the Hansards data from the NAACL 2003 Shared Task. 3 We trained on 100k sentences for each language.",
                "cite_spans": [
                    {
                        "start": 407,
                        "end": 425,
                        "text": "Och and Ney (2003)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 592,
                        "end": 593,
                        "text": "3",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Results",
                "sec_num": "4"
            },
            {
                "text": "\" 1 \u2212 |A\u2229S|+|A\u2229P | |A|+|S| \" \u00d7 100%.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Results",
                "sec_num": "4"
            },
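The AER formula above translates directly into a small helper over link sets (with S \u2286 P per the annotation scheme; the link sets below are toy examples):

```python
# AER from a proposed alignment A, sure links S, and possible links P:
#   AER = (1 - (|A & S| + |A & P|) / (|A| + |S|)) * 100

def aer(A, S, P):
    A, S, P = set(A), set(S), set(P)
    return (1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))) * 100.0

# A proposal covering all sure links and only possible links scores 0.0:
score = aer({(0, 0), (1, 1)}, {(0, 0)}, {(0, 0), (1, 1)})
```

Lower is better: proposing only links outside P drives the metric toward 100.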
            {
                "text": "We compared our model to the original HMM model, identical in implementation to our syntactic HMM model save the distortion component. Both models were initialized using the same jointly trained Model 1 parameters (5 iterations), then trained independently for 5 iterations. Both models were then combined with an independently trained HMM model in the opposite direction: f \u2192 e. 4 Table 1 summarizes the results; the two models perform similarly. The main benefit of our model is the effect on rule extraction, discussed below. We also compared our French results to the public baseline GIZA++ using the script published for the NAACL 2006 Machine Translation Workshop Shared Task. 5 Similarly, we compared our Chinese results to the GIZA++ results in Ayan and Dorr (2006) . Our models substantially outperform GIZA++, confirming results in Liang et al. (2006) . Table 2 shows the effect on AER of competitive thresholding and different combination functions.",
                "cite_spans": [
                    {
                        "start": 753,
                        "end": 773,
                        "text": "Ayan and Dorr (2006)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 842,
                        "end": 861,
                        "text": "Liang et al. (2006)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 864,
                        "end": 871,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Alignment Error Rate",
                "sec_num": "4.1"
            },
            {
                "text": "3 Following previous work, we developed our system on the 37 provided validation sentences and the first 100 sentences of the corpus test set. We used the remainder as a test set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Alignment Error Rate",
                "sec_num": "4.1"
            },
            {
                "text": "4 Null emission probabilities were fixed to 1 |e| , inversely proportional to the length of the English sentence. The decoding threshold was held fixed at \u03c4 = 0.5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Alignment Error Rate",
                "sec_num": "4.1"
            },
            {
                "text": "5 Training includes 16 iterations of various IBM models and a fixed null emission probability of .01. The output of running GIZA++ in both directions was combined via intersection. The most dramatic effect of competitive thresholding is to improve alignment quality for hard unions. It also impacts rule extraction substantially.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Alignment Error Rate",
                "sec_num": "4.1"
            },
            {
                "text": "While its competitive AER certainly speaks to the potential utility of our syntactic distortion model, we proposed the model for a different purpose: to minimize the particularly troubling alignment errors that cross constituent boundaries and violate the structure of English parse trees. We found that while the HMM and Syntactic models have very similar AER, they make substantially different errors.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Rule Extraction Results",
                "sec_num": "4.2"
            },
            {
                "text": "To investigate the differences, we measured the degree to which each set of alignments violated the supplied parse trees, by counting the frequency of interior nodes that are not null aligned. Figure 4 summarizes the results of the experiment for French: the Syntactic distortion with competitive thresholding reduces tree violations substantially. Interior node frequency is reduced by 56% overall, with the most dramatic improvement observed for clausal constituents. We observed a similar 50% reduction for the Chinese data.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 193,
                        "end": 201,
                        "text": "Figure 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Rule Extraction Results",
                "sec_num": "4.2"
            },
            {
                "text": "Additionally, we evaluated our model with the transducer analog to the consistent phrase error rate (CPER) metric of Ayan and Dorr (2006) . This evaluation computes precision, recall, and F1 of the rules extracted under a proposed alignment, relative to the rules extracted under the gold-standard sure alignments. Figure 4 : The syntactic distortion model with competitive thresholding decreases the frequency of interior nodes for each type and the whole corpus. the syntactic HMM model and competitive thresholding together. Individually, each of these changes contributes substantially to this increase. Together, their benefits are partially, but not fully, additive.",
                "cite_spans": [
                    {
                        "start": 117,
                        "end": 137,
                        "text": "Ayan and Dorr (2006)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 315,
                        "end": 323,
                        "text": "Figure 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Rule Extraction Results",
                "sec_num": "4.2"
            },
            {
                "text": "In light of the need to reconcile word alignments with phrase structure trees for syntactic MT, we have proposed an HMM-like model whose distortion is sensitive to such trees. Our model substantially reduces the number of interior nodes in the aligned corpus and improves rule extraction while nearly retaining the speed and alignment accuracy of the HMM model. While it remains to be seen whether these improvements impact final translation accuracy, it is reasonable to hope that, all else equal, alignments which better respect syntactic correspondences will be superior for syntactic MT.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "5"
            },
            {
                "text": "Interior nodes can be used, for instance, in evaluating syntax-based language models. They also serve to differentiate transducer rules that have the same frontier nodes but different internal structure.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Going beyond aer: An extensive analysis of word alignments and their impact on mt",
                "authors": [
                    {
                        "first": "Bonnie",
                        "middle": [
                            "J"
                        ],
                        "last": "Necip Fazil Ayan",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Dorr",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Necip Fazil Ayan and Bonnie J. Dorr. 2006. Going beyond aer: An extensive analysis of word alignments and their impact on mt. In ACL.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "The mathematics of statistical machine translation: Parameter estimation",
                "authors": [
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Peter",
                        "suffix": ""
                    },
                    {
                        "first": "Stephen",
                        "middle": [
                            "A Della"
                        ],
                        "last": "Brown",
                        "suffix": ""
                    },
                    {
                        "first": "Vincent",
                        "middle": [
                            "J"
                        ],
                        "last": "Pietra",
                        "suffix": ""
                    },
                    {
                        "first": "Robert",
                        "middle": [
                            "L"
                        ],
                        "last": "Della Pietra",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Mercer",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Computational Linguistics",
                "volume": "19",
                "issue": "",
                "pages": "263--311",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1994. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19:263-311.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Soft syntactic constraints for word alignment through discriminative training",
                "authors": [
                    {
                        "first": "Colin",
                        "middle": [],
                        "last": "Cherry",
                        "suffix": ""
                    },
                    {
                        "first": "Dekang",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Colin Cherry and Dekang Lin. 2006. Soft syntactic constraints for word alignment through discriminative training. In ACL.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "A hierarchical phrase-based model for statistical machine translation",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Chiang",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In ACL.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Induction of word and phrase alignments for automatic document summarization",
                "authors": [
                    {
                        "first": "Hal",
                        "middle": [],
                        "last": "Daum\u00e9",
                        "suffix": ""
                    },
                    {
                        "first": "Iii",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Marcu",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Computational Linguistics",
                "volume": "31",
                "issue": "4",
                "pages": "505--530",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2005. Induction of word and phrase alignments for automatic document summarization. Computational Linguistics, 31(4):505-530, December. French Prec. Recall F1",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Relative to the classic HMM baseline, our syntactic distortion model with competitive thresholding improves the tradeoff between precision and recall of extracted transducer rules",
                "authors": [],
                "year": null,
                "venue": "Table",
                "volume": "3",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Table 3: Relative to the classic HMM baseline, our syntactic distortion model with competitive thresh- olding improves the tradeoff between precision and recall of extracted transducer rules. Both French aligners were decoded using the best-performing soft union combiner. For Chinese, we show aligners under both soft and hard union combiners. * Denotes relative change from the second line to the third line.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "What's in a translation rule",
                "authors": [
                    {
                        "first": "Michel",
                        "middle": [],
                        "last": "Galley",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Hopkins",
                        "suffix": ""
                    },
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Marcu",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "HLT-NAACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In HLT-NAACL.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Scalable inference and training of context-rich syntactic translation models",
                "authors": [
                    {
                        "first": "Michel",
                        "middle": [],
                        "last": "Galley",
                        "suffix": ""
                    },
                    {
                        "first": "Jonathan",
                        "middle": [],
                        "last": "Graehl",
                        "suffix": ""
                    },
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Marcu",
                        "suffix": ""
                    },
                    {
                        "first": "Steve",
                        "middle": [],
                        "last": "Deneefe",
                        "suffix": ""
                    },
                    {
                        "first": "Wei",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Ignacio",
                        "middle": [],
                        "last": "Thayer",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scal- able inference and training of context-rich syntactic transla- tion models. In ACL.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Alignment by agreement",
                "authors": [
                    {
                        "first": "Percy",
                        "middle": [],
                        "last": "Liang",
                        "suffix": ""
                    },
                    {
                        "first": "Ben",
                        "middle": [],
                        "last": "Taskar",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Klein",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "HLT-NAACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In HLT-NAACL.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Improved hmm alignment models for languages with scarce resources",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Lopez",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Resnik",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "ACL WPT-05",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Lopez and P. Resnik. 2005. Improved hmm alignment mod- els for languages with scarce resources. In ACL WPT-05.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Algorithms for syntax-aware statistical machine translation",
                "authors": [
                    {
                        "first": "I",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Melamed",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the Conference on Theoretical and Methodological Issues in Machine Translation",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "I. Dan Melamed. 2004. Algorithms for syntax-aware statistical machine translation. In Proceedings of the Conference on Theoretical and Methodological Issues in Machine Transla- tion.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "A discriminative framework for bilingual word alignment",
                "authors": [
                    {
                        "first": "Robert",
                        "middle": [
                            "C"
                        ],
                        "last": "Moore",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Robert C. Moore. 2005. A discriminative framework for bilin- gual word alignment. In EMNLP.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Hmm-based word alignment in statistical translation",
                "authors": [
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    },
                    {
                        "first": "Stephan",
                        "middle": [],
                        "last": "Vogel",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "COLING",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hermann Ney and Stephan Vogel. 1996. Hmm-based word alignment in statistical translation. In COLING.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "A systematic comparison of various statistical alignment models",
                "authors": [
                    {
                        "first": "Josef",
                        "middle": [],
                        "last": "Franz",
                        "suffix": ""
                    },
                    {
                        "first": "Hermann",
                        "middle": [],
                        "last": "Och",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Computational Linguistics",
                "volume": "29",
                "issue": "",
                "pages": "19--51",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic com- parison of various statistical alignment models. Computa- tional Linguistics, 29:19-51.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "A discriminative matching approach to word alignment",
                "authors": [
                    {
                        "first": "Ben",
                        "middle": [],
                        "last": "Taskar",
                        "suffix": ""
                    },
                    {
                        "first": "Simon",
                        "middle": [],
                        "last": "Lacoste-Julien",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Klein",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ben Taskar, Simon Lacoste-Julien, and Dan Klein. 2005. A discriminative matching approach to word alignment. In EMNLP.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
                "authors": [
                    {
                        "first": "Dekai",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Computational Linguistics",
                "volume": "23",
                "issue": "",
                "pages": "377--404",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23:377-404.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF1": {
                "text": "{ Pop(VBN), Pop(ADJP), Pop(VP), Stop(S) } Stage 2: { Move(S, -1) } Stage 3: { Push(NP, 1), Push(DT, 1",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF2": {
                "text": "An example sequence of staged tree transitions implied by the unique shortest path from the word oriented (a j \u2212 = 5) to the word the (a j = 1).",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "FIGREF3": {
                "text": "For this example sentence, the learned dis-tortion distribution of p d (a j |a j \u2212 , j, t) resembles its counterpart p d (a j |a j \u2212 , j) of the HMM model but reflects the constituent structure of the English tree t.",
                "uris": null,
                "num": null,
                "type_str": "figure"
            },
            "TABREF2": {
                "text": "",
                "num": null,
                "type_str": "table",
                "content": "<table><tr><td>: Alignment error rates (AER) by decoding</td></tr><tr><td>method for the syntactic HMM model. The compet-</td></tr><tr><td>itive thresholding heuristic (CT) is particularly help-</td></tr><tr><td>ful for the hard union combination method.</td></tr></table>",
                "html": null
            },
            "TABREF3": {
                "text": "shows improvements in F1 by using",
                "num": null,
                "type_str": "table",
                "content": "<table><tr><td/><td/><td>Reduction 0.0 5.0 10.0 15.0 20.0 25.0 30.0 (percent) Interior Node Frequency</td><td>NP</td><td colspan=\"2\">VP HMM Model PP</td><td>S</td><td colspan=\"2\">SBAR Syntactic Model + CT Terminals All Non-</td></tr><tr><td/><td/><td>(percent)</td><td>54.1</td><td>46.3</td><td>52.4</td><td>77.5</td><td>58.0</td><td>53.1</td><td>56.3</td></tr><tr><td/><td/><td>Corpus Frequency</td><td>14.6</td><td>10.3</td><td>6.3</td><td>4.8</td><td>1.9</td><td>41.1</td><td>100.0</td></tr><tr><td>2</td><td>45.3</td><td colspan=\"2\">54.8</td><td/><td>59.7</td><td/><td>43.7</td><td>45.1</td></tr><tr><td>3</td><td>6.3</td><td>4.8</td><td/><td/><td>1.9</td><td/><td>41.1</td><td>100</td></tr></table>",
                "html": null
            }
        }
    }
}