{
    "paper_id": "P93-1002",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:52:17.428598Z"
    },
    "title": "ALIGNING SENTENCES IN BILINGUAL CORPORA USING LEXICAL INFORMATION",
    "authors": [
        {
            "first": "Stanley",
            "middle": [
                "F"
            ],
            "last": "Chen",
            "suffix": "",
            "affiliation": {
                "laboratory": "Aiken Computation Laboratory",
                "institution": "Harvard University Cambridge",
                "location": {
                    "postCode": "02138",
                    "region": "MA"
                }
            },
            "email": "sfc@calliope.harvard.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "In this paper, we describe a fast algorithm for aligning sentences with their translations in a bilingual corpus. Existing efficient algorithms ignore word identities and only consider sentence length (Brown et al., 1991b; Gale and Church, 1991). Our algorithm constructs a simple statistical word-to-word translation model on the fly during alignment. We find the alignment that maximizes the probability of generating the corpus with this translation model. We have achieved an error rate of approximately 0.4% on Canadian Hansard data, which is a significant improvement over previous results. The algorithm is language independent.",
    "pdf_parse": {
        "paper_id": "P93-1002",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "In this paper, we describe a fast algorithm for aligning sentences with their translations in a bilingual corpus. Existing efficient algorithms ignore word identities and only consider sentence length (Brown et al., 1991b; Gale and Church, 1991). Our algorithm constructs a simple statistical word-to-word translation model on the fly during alignment. We find the alignment that maximizes the probability of generating the corpus with this translation model. We have achieved an error rate of approximately 0.4% on Canadian Hansard data, which is a significant improvement over previous results. The algorithm is language independent.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "In this paper, we describe an algorithm for aligning sentences with their translations in a bilingual corpus. Aligned bilingual corpora have proved useful in many tasks, including machine translation (Brown et al., 1990; Sadler, 1989) , sense disambiguation (Brown et al., 1991a; Dagan et al., 1991; Gale et al., 1992) , and bilingual lexicography (Klavans and Tzoukermann, 1990; Warwick and Russell, 1990) .",
                "cite_spans": [
                    {
                        "start": 200,
                        "end": 220,
                        "text": "(Brown et al., 1990;",
                        "ref_id": null
                    },
                    {
                        "start": 221,
                        "end": 234,
                        "text": "Sadler, 1989)",
                        "ref_id": null
                    },
                    {
                        "start": 258,
                        "end": 279,
                        "text": "(Brown et al., 1991a;",
                        "ref_id": null
                    },
                    {
                        "start": 280,
                        "end": 299,
                        "text": "Dagan et al., 1991;",
                        "ref_id": null
                    },
                    {
                        "start": 300,
                        "end": 318,
                        "text": "Gale et al., 1992)",
                        "ref_id": null
                    },
                    {
                        "start": 348,
                        "end": 379,
                        "text": "(Klavans and Tzoukermann, 1990;",
                        "ref_id": null
                    },
                    {
                        "start": 380,
                        "end": 406,
                        "text": "Warwick and Russell, 1990)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The task is difficult because sentences frequently do not align one-to-one. Sometimes sentences align many-to-one, and often there are deletions in one of the supposedly parallel corpora of a bilingual corpus. These deletions can be substantial; in the Canadian Hansard corpus, there are many deletions of several thousand sentences and one deletion of over 90,000 sentences. (*The author wishes to thank Peter Brown, Stephen DellaPietra, Vincent DellaPietra, and Robert Mercer for their suggestions, support, and relentless taunting. The author also wishes to thank Jan Hajic and Meredith Goldsmith as well as the aforementioned for checking the alignments produced by the implementation.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Previous work includes (Brown et al., 1991b) and (Gale and Church, 1991) . In Brown, alignment is based solely on the number of words in each sentence; the actual identities of words are ignored. The general idea is that the closer in length two sentences are, the more likely they align. To perform the search for the best alignment, dynamic programming (Bellman, 1957) is used. Because dynamic programming requires time quadratic in the length of the text aligned, it is not practical to align a large corpus as a single unit. The computation required is drastically reduced if the bilingual corpus can be subdivided into smaller chunks. Brown uses anchors to perform this subdivision. An anchor is a piece of text likely to be present at the same location in both of the parallel corpora of a bilingual corpus. Dynamic programming is used to align anchors, and then dynamic programming is used again to align the text between anchors.",
                "cite_spans": [
                    {
                        "start": 23,
                        "end": 44,
                        "text": "(Brown et al., 1991b)",
                        "ref_id": null
                    },
                    {
                        "start": 49,
                        "end": 72,
                        "text": "(Gale and Church, 1991)",
                        "ref_id": null
                    },
                    {
                        "start": 355,
                        "end": 370,
                        "text": "(Bellman, 1957)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The Gale algorithm is similar to the Brown algorithm except that instead of basing alignment on the number of words in sentences, alignment is based on the number of characters in sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Dynamic programming is also used to search for the best alignment. Large corpora are assumed to be already subdivided into smaller chunks.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "While these algorithms have achieved remarkably good performance, there is definite room for improvement. These algorithms are not robust with respect to non-literal translations and small deletions; they can easily misalign small passages",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "because they ignore word identities. For example, the type of passage depicted in Figure 1 occurs in the Hansard corpus. With length-based alignment algorithms, these passages may well be misaligned by an even number of sentences if one of the corpora contains a deletion. In addition, with length-based algorithms it is difficult to automatically recover from large deletions. In Brown, anchors are used to deal with this issue, but the selection of anchors requires manual inspection of the corpus to be aligned. Gale does not discuss this issue. (Figure 1: A Bilingual Corpus Fragment.)",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 82,
                        "end": 90,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": ":",
                "sec_num": null
            },
            {
                "text": "Alignment algorithms that use lexical information offer a potential for higher accuracy. Previous work includes (Kay, 1991) and (Catizone et al., 1989) . However, to date lexically-based algorithms have not proved efficient enough to be suitable for large corpora.",
                "cite_spans": [
                    {
                        "start": 112,
                        "end": 123,
                        "text": "(Kay, 1991)",
                        "ref_id": null
                    },
                    {
                        "start": 128,
                        "end": 151,
                        "text": "(Catizone et al., 1989)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": ":",
                "sec_num": null
            },
            {
                "text": "In this paper, we describe a fast algorithm for sentence alignment that uses lexical information. The algorithm constructs a simple statistical word-to-word translation model on the fly during sentence alignment. We find the alignment that maximizes the probability of generating the corpus with this translation model. The search strategy used is dynamic programming with thresholding. Because of thresholding, the search is linear in the length of the corpus so that a corpus need not be subdivided into smaller chunks. The search strategy is robust with respect to large deletions; lexical information allows us to confidently identify the beginning and end of deletions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": ":",
                "sec_num": null
            },
            {
                "text": "We use an example to introduce our framework for alignment. Consider the bilingual corpus (E, F) displayed in Figure 2. Assume that we have constructed a model for English-to-French translation, i.e., for all E and F_p we have an estimate for P(F_p | E), the probability that the English sentence E translates to the French passage F_p. Then, we can assign a probability to the English corpus E translating to the French corpus F with a particular alignment. For example, consider the alignment A_1 where sentence E1 corresponds to sentence F1 and sentence E2 corresponds to sentences F2 and F3. We get P(F, A_1 | E) = P(F1 | E1) P(F2, F3 | E2), assuming that successive sentences translate independently of each other. This value should be relatively large, since F1 is a good translation of E1 and (F2, F3) is a good translation of E2. Another possible alignment A_2 is one where E1 maps to nothing and E2 maps to F1, F2, and F3. We get P(F, A_2 | E) = P(\\epsilon | E1) P(F1, F2, F3 | E2). This value should be fairly low, since the alignment does not map the English sentences to their translations. Hence, if our translation model is accurate we will have P(F, A_1 | E) >> P(F, A_2 | E).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 110,
                        "end": 118,
                        "text": "Figure 2",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "The Alignment Framework",
                "sec_num": "2.1"
            },
            {
                "text": "In general, the more sentences that are mapped to their translations in an alignment A, the higher the value of P(F, A | E). We can extend this idea to produce an alignment algorithm given a translation model. In particular, we take the alignment of a corpus (E, F) to be the alignment A that maximizes P(F, A | E). The more accurate the translation model, the more accurate the resulting alignment will be.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Alignment Framework",
                "sec_num": "2.1"
            },
            {
                "text": "However, because the parameters are all of the form P(FplE ) where E is a sentence, the above framework is not amenable to the situation where a French sentence corresponds to no English sentences. Hence, we use a slightly different framework. We view a bilingual corpus as a sequence of sentence beads (Brown et al., 1991b) , where a sentence bead corresponds to an irreducible group of sentences that align with each other. For example, the correct alignment of the bilingual corpus in Figure 2 consists ",
                "cite_spans": [
                    {
                        "start": 303,
                        "end": 324,
                        "text": "(Brown et al., 1991b)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 488,
                        "end": 505,
                        "text": "Figure 2 consists",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "The Alignment Framework",
                "sec_num": "2.1"
            },
            {
                "text": "For our translation model, we desire the simplest model that incorporates lexical information effectively. We describe our model in terms of a series of increasingly complex models. In this section, we only consider the generation of sentence beads containing a single English sentence E = e_1 ... e_n and single French sentence F = f_1 ... f_m. As a starting point, consider a model that assumes that all individual words are independent. We take",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "P([E; F]) = p(n) p(m) \\prod_{i=1}^{n} p(e_i) \\prod_{j=1}^{m} p(f_j)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "where p(n) is the probability that an English sentence is n words long, p(m) is the probability that a French sentence is m words long, p(e_i) is the frequency of the word e_i in English, and p(f_j) is the frequency of the word f_j in French.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "To capture the dependence between individual English words and individual French words, we generate English and French words in pairs in addition to singly. For two words e and f that are mutual translations, instead of having the two terms p(e) and p(f) in the above equation we would like a single term p(e, f) that is substantially larger than p(e)p(f). To this end, we introduce the concept of a word bead. A word bead is either a single English word, a single French word, or a single English word and a single French word. We refer to these as 1:0, 0:1, and 1:1 word beads, respectively. Instead of generating a pair of sentences word by word, we generate sentences bead by bead, using the 1:1 word beads to capture the dependence between English and French words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "As a first cut, consider the following \"model\":",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "P^{*}(B) = p(l) \\prod_{i=1}^{l} p(b_i)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "where B = {b_1, ..., b_l} is a multiset of word beads, p(l) is the probability that an English sentence and a French sentence contain l word beads, and p(b_i) denotes the frequency of the word bead b_i.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "This simple model captures lexical dependencies between English and French sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "However, this \"model\" does not satisfy the constraint that \\sum_B P^{*}(B) = 1; because beadings B are unordered multisets, the sum is substantially less than one. To force this model to sum to one, we simply normalize by a constant, so that we retain the qualitative aspects of the model. We take",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "P(B) = \\frac{p(l)}{N_l} \\prod_{i=1}^{l} p(b_i)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "While a beading B describes an unordered multiset of English and French words, sentences are in actuality ordered sequences of words. We need to model word ordering, and ideally the probability of a sentence bead should depend on the ordering of its component words. For example, the sentence John ate Fido should have a higher probability of aligning with the sentence Jean a mangé Fido than with the sentence Fido a mangé Jean.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "However, modeling word order under translation is notoriously difficult (Brown et al., 1993) , and it is unclear how much improvement in accuracy a good model of word order would provide. Hence, we model word order using a uniform distribution; we take",
                "cite_spans": [
                    {
                        "start": 72,
                        "end": 92,
                        "text": "(Brown et al., 1993)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "P([E; F], B) = \\frac{p(l)}{N_l\\, n!\\, m!} \\prod_{i=1}^{l} p(b_i), which gives us P([E; F]) = \\sum_B \\frac{p(l(B))}{N_{l(B)}\\, n!\\, m!} \\prod_{i=1}^{l(B)} p(b_i)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "where B ranges over beadings consistent with [E; F] and l(B) denotes the number of beads in B. Recall that n is the length of the English sentence and m is the length of the French sentence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Basic Translation Model",
                "sec_num": "2.2"
            },
            {
                "text": "In this section, we extend the translation model to other types of sentence beads. For simplicity, we only consider sentence beads consisting of one English sentence, one French sentence, one English sentence and one French sentence, two English sentences and one French sentence, and one English sentence and two French sentences. We refer to these as 1:0, 0:1, 1:1, 2:1, and 1:2 sentence beads, respectively.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Complete Translation Model",
                "sec_num": "2.3"
            },
            {
                "text": "For 1:1 sentence beads, we take",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Complete Translation Model",
                "sec_num": "2.3"
            },
            {
                "text": "P([E; F]) = p_{1:1} \\sum_B \\frac{p_{1:1}(l(B))}{N_{l(B)}\\, n!\\, m!} \\prod_{i=1}^{l(B)} p(b_i)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Complete Translation Model",
                "sec_num": "2.3"
            },
            {
                "text": "where B ranges over beadings consistent with [E;F] and where p_{1:1} is the probability of generating a 1:1 sentence bead.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Complete Translation Model",
                "sec_num": "2.3"
            },
            {
                "text": "To model 1:0 sentence beads, we use a similar equation except that we only use 1:0 word beads, and we do not need to sum over beadings since there is only one word beading consistent with a 1:0 sentence bead. We take P([E]) = p_{1:0} \\frac{p_{1:0}(l)}{N_l\\, n!} \\prod_{i=1}^{l} p(b_i). Notice that n = l. We use an analogous equation for 0:1 sentence beads.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Complete Translation Model",
                "sec_num": "2.3"
            },
            {
                "text": "For 2:1 sentence beads, we take",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Complete Translation Model",
                "sec_num": "2.3"
            },
            {
                "text": "P([E_1, E_2; F]) = p_{2:1} \\sum_B \\frac{p_{2:1}(l(B))}{N_{l(B)}\\, n_1!\\, n_2!\\, m!} \\prod_{i=1}^{l(B)} p(b_i)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Complete Translation Model",
                "sec_num": "2.3"
            },
            {
                "text": "where the sum ranges over beadings B consistent with the sentence bead. We use an analogous equation for 1:2 sentence beads.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Complete Translation Model",
                "sec_num": "2.3"
            },
            {
                "text": "Due to space limitations, we cannot describe the implementation in full detail. We present its most significant characteristics in this section; for a more complete discussion please refer to (Chen, 1993) .",
                "cite_spans": [
                    {
                        "start": 192,
                        "end": 204,
                        "text": "(Chen, 1993)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementation",
                "sec_num": "3"
            },
            {
                "text": "We chose to model sentence length using a Poisson distribution, i.e., we took p_{1:0}(l) = \\frac{\\lambda_{1:0}^{l}}{l!} e^{-\\lambda_{1:0}} for some \\lambda_{1:0}, and analogously for the other types of sentence beads. At first, we tried to estimate each \\lambda parameter independently, but we found that after training one or two \\lambda would be unnaturally small or large in order to specifically model very short or very long sentences. To prevent this phenomenon, we tied the \\lambda values for the different types of sentence beads together. We took",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameterization",
                "sec_num": "3.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "Al:0=A0:l---~--- 3 - 3",
                        "eq_num": "(1)"
                    }
                ],
                "section": "Parameterization",
                "sec_num": "3.1"
            },
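            {
The tied Poisson length model can be sketched in a few lines; this is an illustrative sketch (the function names and the choice of \lambda_{1:0} are ours, not the paper's), encoding the tying of equation (1).

```python
import math

def poisson(l, lam):
    # p(l) = lam^l / l! * e^(-lam), the Poisson length distribution
    return (lam ** l) / math.factorial(l) * math.exp(-lam)

def tied_lambdas(lam_1_0):
    # Tying from equation (1): lambda_{1:0} = lambda_{0:1}
    # = lambda_{1:1}/2 = lambda_{2:1}/3 = lambda_{1:2}/3,
    # so a single free parameter determines all five distributions.
    return {
        "1:0": lam_1_0,
        "0:1": lam_1_0,
        "1:1": 2 * lam_1_0,
        "2:1": 3 * lam_1_0,
        "1:2": 3 * lam_1_0,
    }
```

With this tying, estimating \lambda_{1:0} (as in Section 3.3) fixes the length distribution for every sentence-bead type.
            },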
            {
                "text": "To model the parameters p(L) representing the probability that the bilingual corpus is L sentence beads in length, we assumed a uniform distribution, z This allows us to ignore this term, since length will not influence the probability of an alignment. We felt this was reasonable becattse",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameterization",
                "sec_num": "3.1"
            },
            {
                "text": "it is unclear what a priori information we have on the length of a corpus.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameterization",
                "sec_num": "3.1"
            },
            {
                "text": "In modeling the frequency of word beads, notice that there are five distinct distributions we need to model: the distribution of 1:0 word beads in 1:0 sentence beads, the distribution of 0:1 word beads in 0:1 sentence beads, and the distribution of all word beads in 1:1, 2:1, and 1:2 sentence beads. To reduce the number of independent parameters we need to estimate, we tied these distributions together. We assumed that the distribution of word beads in 1:1, 2:1, and 1:2 sentence beads are identical. We took the distribution of word beads in 1:0 and 0:1 sentence beads to be identical as well except restricted to the relevant subset of word beads and normalized appropriately, i.e., we took pb(e) for e E Be pc(e) :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameterization",
                "sec_num": "3.1"
            },
            {
                "text": "pb (e') and Pb(f) for f E By P:(f) = ~'~.: 'eB, Pb(f') where Pe refers to the distribution of word beads in 1:0 sentence beads, pf refers to the distribution of word beads in 0:1 sentence beads, pb refers to the distribution of word beads in 1:1, 2:1, and 1:2 sentence beads, and Be and B I refer to the sets of 1:0 and 0:1 word beads in the vocabulary, respectively.",
                "cite_spans": [
                    {
                        "start": 12,
                        "end": 17,
                        "text": "Pb(f)",
                        "ref_id": null
                    },
                    {
                        "start": 43,
                        "end": 54,
                        "text": "'eB, Pb(f')",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 7,
                        "text": "(e')",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Parameterization",
                "sec_num": "3.1"
            },
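            {
The restriction-and-renormalization just described is a one-liner in practice; here is a minimal sketch (the function name and probability table are illustrative, not from the paper).

```python
def restrict(p_b, support):
    # p_e(e) = p_b(e) / sum_{e' in B_e} p_b(e'), for e in B_e:
    # keep only beads in the relevant subset, then renormalize
    # so the restricted distribution sums to one.
    z = sum(p_b[b] for b in support)
    return {b: p_b[b] / z for b in support}
```

Restricting p_b to the 1:0 word beads yields p_e; restricting it to the 0:1 word beads yields p_f.
            },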
            {
                "text": "The probability of generating a 0:1 or 1:0 sentence bead can be calculated efficiently using the equation given in Section 2.3. To evaluate the probabilities of the other sentence beads requires a sum over an exponential number of word beadings. We make the gross approximation that this sum is roughly equal to the maximum term in the sum. Even with this approximation, the calculation of P([E; F]) is still intractable since it requires a search for the most probable beading. We use a greedy heuristic to perform this search; we are not guaranteed to find the most probable beading. We begin with every word in its own bead. We then find the 0:1 bead and 1:0 bead that, when replaced with a 1:1 word bead, results in the greatest increase in probability. We repeat this process until we can no longer find a 0:1 and 1:0 bead pair that when replaced would increase the probability of the beading.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluating the Probability of a Sentence Bead",
                "sec_num": "3.2"
            },
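            {
The greedy merge heuristic can be sketched as follows. This is a simplified sketch: p_bead is a hypothetical table of word-bead probabilities, and only the product of bead probabilities is scored (the length term of the full model is ignored).

```python
import math

def greedy_beading(e_words, f_words, p_bead, floor=1e-9):
    # Begin with every word in its own 1:0 or 0:1 bead.
    e_beads = list(e_words)   # English words still in 1:0 beads
    f_beads = list(f_words)   # French words still in 0:1 beads
    pairs = []                # 1:1 word beads formed so far

    def lp(bead):             # log-probability of a single bead
        return math.log(p_bead.get(bead, floor))

    while True:
        best_gain, best = 0.0, None
        for i, e in enumerate(e_beads):
            for j, f in enumerate(f_beads):
                # Gain from replacing the 1:0 bead (e, None) and the
                # 0:1 bead (None, f) with the 1:1 bead (e, f).
                gain = lp((e, f)) - lp((e, None)) - lp((None, f))
                if gain > best_gain:
                    best_gain, best = gain, (i, j)
        if best is None:      # no merge increases the probability
            return pairs, e_beads, f_beads
        i, j = best
        pairs.append((e_beads.pop(i), f_beads.pop(j)))
```

Each iteration performs the single most beneficial merge, so the search is greedy and is not guaranteed to find the most probable beading.
            },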
            {
                "text": "We estimate parameters by using a variation of the Viterbi version of the expectation-maximization (EM) algorithm (Dempster et al., 1977) . The Viterbi version is used to reduce computational complexity. We use an incremental variation of the algorithm to reduce the number of passes through the corpus required.",
                "cite_spans": [
                    {
                        "start": 114,
                        "end": 137,
                        "text": "(Dempster et al., 1977)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter Estimation",
                "sec_num": "3.3"
            },
            {
                "text": "In the EM algorithm, an expectation phase,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter Estimation",
                "sec_num": "3.3"
            },
            {
                "text": "where counts on the corpus are taken using the current estimates of the parameters, is alternated with a maximization phase, where parameters are re-estimated based on the counts just taken. Improved parameters lead to improved counts which lead to even more accurate parameters. In the incremental version of the EM algorithm we use, instead of re-estimating parameters after each complete pass through the corpus, we re-estimate parameters after each sentence. By re-estimating parameters continually as we take counts on the corpus, we can align later sections of the corpus more reliably based on alignments of earlier sections. We can align a corpus with only a single pass, simultaneously producing alignments and updating the model as we proceed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter Estimation",
                "sec_num": "3.3"
            },
            {
                "text": "More specifically, we initialize parameters by taking counts on a small body of previously aligned data. To estimate word bead frequencies, we maintain a count c(b) for each word bead that records the number of times the word bead b occurs in the most probable word beading of a sentence bead. We take c(b)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter Estimation",
                "sec_num": "3.3"
            },
            {
                "text": "We initialize the counts c(b) to 1 for 0:1 and 1:0 word beads, so that these beads can occur in beadings with nonzero probability. To enable 1:1 word beads to occur in beadings with nonzero probability, we initialize their counts to a small value whenever we see the corresponding 0:1 and 1:0 word beads occur in the most probable word beading of a sentence bead.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "pb(b) -Eb, c(V)",
                "sec_num": null
            },
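            {
The count bookkeeping described here might look like the following sketch (the class and method names are ours): singleton beads get an initial count so they have nonzero probability, and counts are updated incrementally as sentence beads are aligned.

```python
from collections import defaultdict

class BeadCounts:
    # Viterbi-style counts: c(b) records how often word bead b appears
    # in the most probable word beading of a sentence bead.
    def __init__(self):
        self.c = defaultdict(float)
        self.total = 0.0

    def init_bead(self, bead, count=1.0):
        # 0:1 and 1:0 beads start at count 1 (and 1:1 beads at a small
        # value) so they can occur in beadings with nonzero probability.
        if bead not in self.c:
            self.c[bead] = count
            self.total += count

    def observe(self, beading):
        # Incremental EM: counts are updated after every sentence bead,
        # not after a full pass through the corpus.
        for bead in beading:
            self.c[bead] += 1.0
            self.total += 1.0

    def p(self, bead):
        # p_b(b) = c(b) / sum_{b'} c(b')
        return self.c[bead] / self.total if self.total else 0.0
```
            },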
            {
                "text": "To estimate the sentence length parameters ,~, we divide the number of word beads in the most probable beading of the initial training data by the total number of sentences. This gives us an estimate for hi:0, and the other ~ parameters can be calculated using equation (1).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "pb(b) -Eb, c(V)",
                "sec_num": null
            },
            {
                "text": "We have found that one hundred sentence pairs are sufficient to train the model to a state where it can align adequately. At this point, we can process unaligned text and use the alignments we produce to further train the model. We update parameters based on the newly aligned text in the same way that we update parameters based on the initial ]3 training data. 2",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "pb(b) -Eb, c(V)",
                "sec_num": null
            },
            {
                "text": "To align a corpus in a single pass the model must be fairly accurate before starting or else the beginning of the corpus will be poorly aligned. Hence, after bootstrapping the model on one hundred sentence pairs, we train the algorithm on a chunk of the unaligned target bilingual corpus, typically 20,000 sentence pairs, before making one pass through the entire corpus to produce the actual alignment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "pb(b) -Eb, c(V)",
                "sec_num": null
            },
            {
                "text": "It is natural to use dynamic programming to search for the best alignment; one can find the most probable of an exponential number of alignments using quadratic time and memory. Alignment can be viewed as a \"shortest distance\" problem, where the \"distance\" associated with a sentence bead is the negative logarithm of its probability. The probability of an alignment is inversely related to the sum of the distances associated with its component sentence beads.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Search",
                "sec_num": "3.4"
            },
            {
                "text": "Given the size of existing bilingual corpora and the computation necessary to evaluate the probability of a sentence bead, a quadratic algorithm is still too profligate. However, most alignments are one-to-one, so we can reap great benefits through intelligent thresholding. By considering only a subset of all possible alignments, we reduce the computation to a linear one.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Search",
                "sec_num": "3.4"
            },
            {
                "text": "Dynamic programming consists of incrementally finding the best alignment of longer and longer prefixes of the bilingual corpus. We prune all alignment prefixes that have a substantially lower probability than the most probable alignment prefix of the same length.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Search",
                "sec_num": "3.4"
            },
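            {
The pruned dynamic program can be sketched as follows; bead_logprob is a hypothetical scorer for one sentence bead, and the beam width is an illustrative constant. Prefixes of the same total length are compared, and those scoring far below the best are simply not expanded.

```python
import math

# Sentence-bead types considered: (number of English sentences,
# number of French sentences).
SHAPES = [(1, 0), (0, 1), (1, 1), (2, 1), (1, 2)]

def align(e_sents, f_sents, bead_logprob, beam=20.0):
    # chart[(i, j)] = (best log-probability of aligning the prefixes
    # e_sents[:i] and f_sents[:j], back-pointer bead shape).
    E, F = len(e_sents), len(f_sents)
    chart = {(0, 0): (0.0, None)}
    for d in range(E + F):
        # Thresholding: drop alignment prefixes of total length d that
        # score far below the best prefix of the same length.
        layer = [s for s in chart if s[0] + s[1] == d]
        if layer:
            best = max(chart[s][0] for s in layer)
            layer = [s for s in layer if chart[s][0] >= best - beam]
        for (i, j) in layer:
            score = chart[(i, j)][0]
            for de, df in SHAPES:
                ni, nj = i + de, j + df
                if ni > E or nj > F:
                    continue
                s = score + bead_logprob(e_sents[i:ni], f_sents[j:nj])
                if s > chart.get((ni, nj), (float("-inf"), None))[0]:
                    chart[(ni, nj)] = (s, (de, df))
    # Recover the alignment by following back-pointers from (E, F).
    path, state = [], (E, F)
    while state != (0, 0):
        de, df = chart[state][1]
        path.append((de, df))
        state = (state[0] - de, state[1] - df)
    return path[::-1]
```

With an aggressive beam, only a narrow band of the quadratic chart is ever expanded, which is what reduces the computation to roughly linear time in practice.
            },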
            {
                "text": "2 In theory, one cannot decide whether a particular sentence bead belongs to the best alignment of a corpus until the whole corpus has been processed. In practice, some partial alignments will have much higher probabilities than all other ahgnments, and it is desirable to train on these partial alignments to aid in aligning later sections of the corpus. To decide when it is reasonably safe to train on a particular sentence bead, we take advantage of the thresholding described in Section 3.4, where improbable partial alignments are discarded. At a given point in time in aligning a corpus, all undiscarded partial alignments will have some sentence beads in common. When a sentence bead is common to all active partial alignments, we consider it to he safe to train on.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Search",
                "sec_num": "3.4"
            },
            {
                "text": "Deletions are automatically handled within the standard dynamic programming framework. However, because of thresholding, we must handle large deletions using a separate mechanism.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Deletion Identification",
                "sec_num": "3.5"
            },
            {
                "text": "Because lexical information is used, correct alignments receive vastly greater probabilities than incorrect alignments. Consequently, thresholding is generally very aggressive and our search beam in the dynamic programming array is narrow. However, when there is a large deletion in one of the parallel corpora, consistent lexical correspondences disappear so no one alignment has a much higher probability than the others and our search beam becomes wide. When the search beam reaches a certain width, we take this to indicate the beginning of a deletion.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Deletion Identification",
                "sec_num": "3.5"
            },
            {
                "text": "To identify the end of a deletion, we search linearly through both corpora simultaneously. All occurrences of words whose frequency is below a certain value are recorded in a hash table. Whenever we notice the occurrence of a rare word in one corpus and its translation in the other, we take this as a candidate location for the end of the deletion. For each candidate location, we examine the forty sentences following the occurrence of the rare word in each of the two parallel corpora. We use dynamic programming to find the probability of the best alignment of these two blocks of sentences. If this probability is sufficiently high we take the candidate location to be the end of the deletion. Because it is extremely unlikely that there are two very similar sets of forty sentences in a corpus, this deletion identification algorithm is robust. In addition, because we key off of rare words in considering ending points, deletion identification requires time linear in the length of the deletion.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Deletion Identification",
                "sec_num": "3.5"
            },
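            {
The rare-word scan for candidate deletion end points might look like this sketch; translations is a hypothetical rare-word bilingual lexicon, and a real implementation would then verify each candidate by aligning the forty-sentence blocks that follow it, as described above.

```python
from collections import Counter

def candidate_deletion_ends(e_words, f_words, translations, max_freq=5):
    # Record positions of rare words (frequency at or below a
    # threshold) in the French corpus, keyed by word.
    e_freq, f_freq = Counter(e_words), Counter(f_words)
    f_pos = {}
    for j, f in enumerate(f_words):
        if f_freq[f] <= max_freq:
            f_pos.setdefault(f, []).append(j)
    # A rare English word whose translation also occurs is a
    # candidate location for the end of the deletion.
    candidates = []
    for i, e in enumerate(e_words):
        if e_freq[e] > max_freq:
            continue
        for j in f_pos.get(translations.get(e), []):
            candidates.append((i, j))
    return candidates
```

Because only rare words generate candidates, the number of candidate locations (and hence the verification work) grows linearly with the length of the deletion.
            },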
            {
                "text": "Using this algorithm, we have aligned three large English/French corpora. We have aligned a corpus of 3,000,000 sentences (of both English and French) of the Canadian Hansards, a corpus of 1,000,000 sentences of newer Hansard proceedings, and a corpus of 2,000,000 sentences of proceedings from the European Economic Community. In each case, we first bootstrapped the translation model by training on 100 previously aligned sentence pairs. We then trained the model further on 20,000 sentences of the target corpus. Note that these 20,000 sentences were not previously aligned.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4"
            },
            {
                "text": "Because of the very low error rates involved, instead of direct sampling we decided to estimate the error of the old Hansard corpus through comparison with the alignment found by Brown of the same corpus. We manually inspected over 500 locations where the two alignments differed to estimate our error rate on the alignments disagreed upon. Taking the error rate of the Brown alignment to be 0.6%, we estimated the overall error rate of our alignment to be 0.4%.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4"
            },
            {
                "text": "In addition, in the Brown alignment approximately 10% of the corpus was discarded because of indications that it would be difficult to align. Their error rate of 0.6% holds on the remaining sentences. Our error rate of 0.4% holds on the entire corpus. Gale reports an approximate error rate of 2% on a different body of Hansard data with no discarding, and an error rate of 0.4% if 20% of the sentences can be discarded.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4"
            },
            {
                "text": "Hence, with our algorithm we can achieve at least as high accuracy as the Brown and Gale algorithms without discarding any data. This is especially significant since, presumably, the sentences discarded by the Brown and Gale algorithms are those sentences most difficult to align.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4"
            },
            {
                "text": "In addition, the errors made by our algorithm are generally of a fairly trivial nature. We randomly sampled 300 alignments from the newer Hansard corpus. The two errors we found are displayed in Figures 3 and 4 . In the first error, E1 was aligned with F1 and E2 was aligned with /'2. The correct alignment maps E1 and E2 to F1 and F2 to nothing. In the second error, E1 was aligned with F1 and F2 was aligned to nothing. Both of these errors could have been avoided with improved sentence boundary detection. Because length-based alignment algorithms ignore lexical information, their errors can be of a more spectacular nature.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 195,
                        "end": 210,
                        "text": "Figures 3 and 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4"
            },
            {
                "text": "The rate of alignment ranged from 2,000 to 5,000 sentences of both English and French per hour on an IBM RS/6000 530H workstation. The alignment algorithm lends itself well to parallelization; we can use the deletion identification mechanism to automatically identify locations where we can subdivide a bilingual corpus. While it required on the order of 500 machine-hours to align the newer Hansard corpus, it took only 1.5 days of real time to complete the job on fifteen machines.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4"
            },
            {
                "text": "We have described an accurate, robust, and fast algorithm for sentence alignment. The algorithm can handle large deletions in text, it is language independent, and it is parallelizable. It requires a minimum of human intervention; for each language pair 100 sentences need to be aligned by hand to bootstrap the translation model. The use of lexical information requires a great computational cost. Even with numerous approximations, this algorithm is tens of times slower than the Brown and Gale algorithms. This is acceptable given that alignment is a one-time cost and given available computing power. It is unclear, though, how much further it is worthwhile to proceed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "5"
            },
            {
                "text": "The natural next step in sentence alignment is to account for word ordering in the translation model, e.g., the models described in (Brown et al., 1993 ) could be used. However, substantially greater computing power is required before these approaches can become practical, and there is not much room for further improvements in accuracy.",
                "cite_spans": [
                    {
                        "start": 132,
                        "end": 151,
                        "text": "(Brown et al., 1993",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "5"
            },
            {
                "text": "To be precise, we assumed a uniform distribution over some arbitrarily large finite range, as one cannot have a uniform distribution over a countably infinite set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "If there is some evidence that it ... and I will see that it does.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "E1",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Dynamic Programming",
                "authors": [
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Bellman",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Bellman",
                        "suffix": ""
                    }
                ],
                "year": 1957,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Bellman, 1957) Richard Bellman. Dynamic Pro- gramming. Princeton University Press, Princeton N.J., 1957.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "The mathematics of machine translation: Parameter estimation",
                "authors": [
                    {
                        "first": "Paul",
                        "middle": [
                            "S"
                        ],
                        "last": "Mercer",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Roossin",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Brown",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "Proceedings 29th Annual Meeting of the ACL",
                "volume": "16",
                "issue": "",
                "pages": "169--176",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mercer, and Paul S. Roossin. A statistical ap- proach to machine translation. Computational Linguistics, 16(2):79-85, June 1990. (Brown et al., 1991a) Peter F. Brown, Stephen A. DellaPietra, Vincent J. DellaPietra, and Ro- bert L. Mercer. Word sense disambiguation using statistical methods. In Proceedings 29th Annu- al Meeting of the ACL, pages 265-270, Berkeley, CA, June 1991. (Brown et al., 1991b) Peter F. Brown, Jennifer C. Lai, and Robert L. Mercer. Aligning sentences in parallel corpora. In Proceedings 29th Annual Meeting of the ACL, pages 169-176, Berkeley, CA, June 1991. (Brown et al., 1993) Peter F. Brown, Stephen A. Del- laPietra, Vincent J. DellaPietra, and Robert L. Mercer. The mathematics of machine transla- tion: Parameter estimation. Computational Lin- guistics, 1993. To appear. (Catizone et al., 1989) Roberta Catizone, Graham Russell, and Susan Warwick. Deriving transla- tion data from bilingual texts. In Proceedings",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "A Bilingual Corpus P(FpIE ) we express the translation model as a distribution P([Ep; Fp]) over sentence beads. The alignment problem becomes discovering the alignment A that maximizes the joint distribution P(\u00a3,2\",.A). Assuming that successive sentence beads are generated independently, we get L P(C, Yr, A) = p(L) H P([E~;F~]) E~,F;],..., [EL; FL])is consistent with g and ~\" and where p(L) is the probability that a corpus contains L sentence beads.",
                "type_str": "figure",
                "uris": null,
                "num": null
            },
            "TABREF1": {
                "type_str": "table",
                "text": "Ep2; F~],...), where the E~ and F~ can be zero, one, or more sentences long.",
                "num": null,
                "html": null,
                "content": "<table><tr><td/><td>English (\u00a3)</td><td/><td>French (~)</td></tr><tr><td>El</td><td>That is what the consumers</td><td colspan=\"2\">/'i Voil~ ce qui int6resse le</td></tr><tr><td/><td>are interested in and that</td><td/><td>consommateur et roll&amp; ce</td></tr><tr><td/><td>is what the party is</td><td/><td>que int6resse notre parti.</td></tr><tr><td/><td>interested in.</td><td>F2</td><td>Les d6put6s d'en face se</td></tr><tr><td colspan=\"2\">E2 Hon. members opposite scoff</td><td/><td>moquent du gel que a</td></tr><tr><td/><td>at the freeze suggested by</td><td/><td>propos6 notre parti.</td></tr><tr><td/><td>this party; to them it is</td><td/></tr><tr><td/><td>laughable.</td><td/></tr><tr><td/><td/><td/><td>of the sentence bead [El; F1]</td></tr><tr><td/><td/><td/><td>followed by the sentence bead [E2; ];'2, F3]. We</td></tr><tr><td/><td/><td/><td>can represent an alignment `4 of a corpus as a se-</td></tr><tr><td/><td/><td/><td>quence of sentence beads ([Epl; Fpl], [Under this paradigm, instead of expressing the</td></tr><tr><td/><td/><td/><td>translation model as a conditional distribution</td></tr></table>"
            }
        }
    }
}