{
"paper_id": "D07-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:18:34.826125Z"
},
"title": "Getting the structure right for word alignment: LEAF",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ISI / University of Southern California",
"location": {
"addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey",
"postCode": "90292",
"region": "CA"
}
},
"email": "fraser@isi.edu"
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": "",
"affiliation": {},
"email": "marcu@isi.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word alignment is the problem of annotating parallel text with translational correspondence. Previous generative word alignment models have made structural assumptions such as the 1-to-1, 1-to-N, or phrase-based consecutive word assumptions, while previous discriminative models have either made such an assumption directly or used features derived from a generative model making one of these assumptions. We present a new generative alignment model which avoids these structural limitations, and show that it is effective when trained using both unsupervised and semi-supervised training methods.",
"pdf_parse": {
"paper_id": "D07-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "Word alignment is the problem of annotating parallel text with translational correspondence. Previous generative word alignment models have made structural assumptions such as the 1-to-1, 1-to-N, or phrase-based consecutive word assumptions, while previous discriminative models have either made such an assumption directly or used features derived from a generative model making one of these assumptions. We present a new generative alignment model which avoids these structural limitations, and show that it is effective when trained using both unsupervised and semi-supervised training methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Several generative models and a large number of discriminatively trained models have been proposed in the literature to solve the problem of automatic word alignment of bitexts. The generative proposals have required unrealistic assumptions about the structure of the word alignments. Two assumptions are particularly common. The first is the 1-to-N assumption, meaning that each source word generates zero or more target words, which requires heuristic techniques in order to obtain alignments suitable for training an SMT system. The second is the consecutive word-based \"phrasal SMT\" assumption. This does not allow gaps, which can be used to particular advantage by SMT models which model hierarchical structure. Previous discriminative models have either made such assumptions directly or used features from a generative model making such an assumption. Our objective is to automatically produce alignments which can be used to build high quality machine translation systems. These are presumably close to the alignments that trained bilingual speakers produce. Human annotated alignments often contain M-to-N alignments, where several source words are aligned to several target words and the resulting unit cannot be further decomposed. Source or target words in a single unit are sometimes non-consecutive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe a new generative model which directly models M-to-N non-consecutive word alignments. The rest of the paper is organized as follows. The generative story is presented, followed by the mathematical formulation. Details of the unsupervised training procedure are described. The generative model is then decomposed into feature functions used in a log-linear model which is trained using a semi-supervised algorithm. Experiments show improvements in word alignment accuracy, and use of the generated alignments in hierarchical and phrasal SMT systems results in increased BLEU scores. Previous work is discussed and this is followed by the conclusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce a new generative story which enables the capture of non-consecutive M-to-N alignment structure. We have attempted to use the same labels as the generative story for Model 4 (Brown et al., 1993) , which we are extending. Our generative story describes the stochastic generation of a target string f (sometimes referred to as the French string, or foreign string) from a source string e (sometimes referred to as the English string), consisting of l words. The variable m is the length of f . We generally use the index i to refer to source words (e i is the English word at position i), and j to refer to target words.",
"cite_spans": [
{
"start": 186,
"end": 206,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "Our generative story makes the distinction between different types of source words. There are head words, non-head words, and deleted words. Similarly, for target words, there are head words, non-head words, and spurious words. A head word is linked to zero or more non-head words; each non-head word is linked to exactly one head word. The purpose of head words is to try to provide a robust representation of the semantic features necessary to determine translational correspondence. This is similar to the use of syntactic head words in statistical parsers to provide a robust representation of the syntactic features of a parse sub-tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "A minimal translational correspondence consists of a linkage between a source head word and a target head word (and by implication, the non-head words linked to them). Deleted source words are not involved in a minimal translational correspondence, as they were \"deleted\" by the translation process. Spurious target words are also not involved in a minimal translational correspondence, as they spontaneously appeared during the generation of other target words. Figure 1 shows a simple example of the stochastic generation of a French sentence from an English sentence, annotated with the step number in the generative story.",
"cite_spans": [],
"ref_spans": [
{
"start": 463,
"end": 471,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "1. Choose the source word type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "for each i = 1, 2, ..., l choose a word type \u03c7 i = \u22121 (non-head word), \u03c7 i = 0 (deleted word) or \u03c7 i = 1 (head word) according to the distribution g(\u03c7 i |e i ); let \u03c7 0 = 1. 2. Choose the identity of the head word for each non-head word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "for each i = 1, 2, ..., l if \u03c7 i = \u22121 choose a \"linked from head word\" value \u00b5 i (the position of the head word which e i is linked to) according to the distribution w \u22121 (\u00b5 i \u2212 i|class e (e i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "for each i = 1, 2, ..., l if \u03c7 i = 1 let \u00b5 i = i; for each i = 1, 2, ..., l if \u03c7 i = 0 let \u00b5 i = 0; for each i = 1, 2, ..., l if \u03c7 \u00b5 i \u2260 1 return \"failure\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "3. Choose the identity of the generated target head word for each source head word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "for each i = 1, 2, ..., l if \u03c7 i = 1 choose \u03c4 i1 according to the distribution t 1 (\u03c4 i1 |e i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "4. Choose the number of words in a target cept conditioned on the identity of the source head word and the source cept size (\u03b3 i is 1 if the cept size is 1, and 2 if the cept size is greater).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "for each i = 1, 2, ..., l if \u03c7 i = 1 choose a Foreign cept size \u03c8 i according to the distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "s(\u03c8 i |e i , \u03b3 i ) for each i = 1, 2, ..., l if \u03c7 i < 1 let \u03c8 i = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "5. Choose the number of spurious words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "choose \u03c8 0 according to the distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "s 0 (\u03c8 0 | \u2211 i \u03c8 i ); let m = \u03c8 0 + \u2211 i=1..l \u03c8 i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "6. Choose the identity of the spurious words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "for each k = 1, 2, ..., \u03c8 0 choose \u03c4 0k according to the distribution t 0 (\u03c4 0k ) 7. Choose the identity of the target non-head words linked to each target head word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "for each i = 1, 2, ..., l and for each k = 2, 3, ..., \u03c8 i choose \u03c4 ik according to the distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "t >1 (\u03c4 ik |e i , class h (\u03c4 i1 ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "8. Choose the position of the target head and nonhead words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "for each i = 1, 2, ..., l and for each k = 1, 2, ..., \u03c8 i choose a position \u03c0 ik as follows: \u2022 if k = 1 choose \u03c0 i1 according to the distribution d 1 (\u03c0 i1 \u2212 c \u03c1 i |class e (e \u03c1 i ), class f (\u03c4 i1 )) \u2022 if k = 2 choose \u03c0 i2 according to the distribution d 2 (\u03c0 i2 \u2212 \u03c0 i1 |class f (\u03c4 i1 )) \u2022 if k > 2 choose \u03c0 ik according to the distribution d >2 (\u03c0 ik \u2212 \u03c0 ik\u22121 |class f (\u03c4 i1 ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "if any position was chosen twice, return \"failure\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "9. Choose the position of the spuriously generated words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "for each k = 1, 2, ..., \u03c8 0 choose a position \u03c0 0k from the \u03c8 0 \u2212 k + 1 remaining vacant positions in 1, 2, ..., m according to the uniform distribution; let f be the string with f \u03c0 ik = \u03c4 ik",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "We note that the steps which return \"failure\" are required because the model is deficient. Deficiency means that a portion of the probability mass in the model is allocated towards generative stories which would result in infeasible alignment structures. Our model has deficiency in the non-spurious target word placement, just as Model 4 does. It has additional deficiency in the source word linking decisions. (Och and Ney, 2003) presented results suggesting that the additional parameters required to ensure that a model is not deficient result in inferior performance, but we plan to study whether this is the case for our generative model in future work.",
"cite_spans": [
{
"start": 412,
"end": 431,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "Given e, f and a candidate alignment a, which represents both the links between source and target head-words and the head-word connections of the non-head words, we would like to calculate p(f, a|e). The formula for this is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "p(f, a|e) = [\\prod_{i=1}^{l} g(\\chi_i|e_i)] [\\prod_{i=1}^{l} w_{-1}(\\mu_i - i \\mid class_e(e_i))^{\\delta(\\chi_i, -1)}] [\\prod_{i=1}^{l} t_1(\\tau_{i1}|e_i)^{\\delta(\\chi_i, 1)}] [\\prod_{i=1}^{l} s(\\psi_i|e_i, \\gamma_i)^{\\delta(\\chi_i, 1)}] [s_0(\\psi_0 \\mid \\sum_{i=1}^{l} \\psi_i)] [\\prod_{k=1}^{\\psi_0} t_0(\\tau_{0k})] [\\prod_{i=1}^{l} \\prod_{k=2}^{\\psi_i} t_{>1}(\\tau_{ik}|e_i, class_h(\\tau_{i1}))] [\\prod_{i=1}^{l} \\prod_{k=1}^{\\psi_i} D_{ik}(\\pi_{ik})]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "where: \u03b4(i, i\u2032) is the Kronecker delta function, which is equal to 1 if i = i\u2032 and 0 otherwise. \u03c1 i is the position of the closest English head word to the left of the word at position i, or 0 if there is no such word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "class e (e i ) is the word class of the English word at position i, class f (f j ) is the word class of the French word at position j, class h (f j ) is the word class of the French head word at position j. p 0 and p 1 are parameters describing the probability of not generating and of generating a target spurious word from each non-spurious target word,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p_0 + p_1 = 1. \\quad m' = \\sum_{i=1}^{l} \\psi_i \\quad (1) \\quad s_0(\\psi_0 \\mid m') = \\binom{m'}{\\psi_0} p_0^{m' - \\psi_0} p_1^{\\psi_0} \\quad (2) \\quad D_{ik}(j) = \\begin{cases} d_1(j - c_{\\rho_i} \\mid class_e(e_{\\rho_i}), class_f(\\tau_{ik})) & k = 1 \\\\ d_2(j - \\pi_{i1} \\mid class_f(\\tau_{ik})) & k = 2 \\\\ d_{>2}(j - \\pi_{i,k-1} \\mid class_f(\\tau_{ik})) & k > 2 \\end{cases} \\quad (3) \\quad \\gamma_i = \\min(2, \\sum_{i'=1}^{l} \\delta(\\mu_{i'}, i)) \\quad (4)",
"eq_num": "(4)"
}
],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "c_i = \\lceil \\sum_{k=1}^{\\psi_i} \\pi_{ik} / \\psi_i \\rceil \\text{ if } \\psi_i \\neq 0, \\quad c_i = 0 \\text{ if } \\psi_i = 0 \\quad (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "The alignment structure used in many other models can be modeled using special cases of this framework. We can express the 1-to-N structure of models like Model 4 by disallowing \u03c7 i = \u22121, while for 1-to-1 structure we both disallow \u03c7 i = \u22121 and deterministically set \u03c8 i = \u03c7 i . We can also specialize our generative story to the consecutive word M-to-N alignments used in \"phrase-based\" models, though in this case the conditioning of the generation decisions would be quite different. This involves adding checks on source and target connection geometry to the generative story which, if violated, would return \"failure\"; naturally this is at the cost of additional deficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEAF: a generative word alignment model 2.1 Generative story",
"sec_num": "2"
},
{
"text": "We can perform maximum likelihood estimation of the parameters of this model in a similar fashion to that of Model 4 (Brown et al., 1993), described thoroughly in (Och and Ney, 2003) . We use Viterbi training (Brown et al., 1993) but neighborhood estimation (Al-Onaizan et al., 1999; Och and Ney, 2003) or \"pegging\" (Brown et al., 1993) could also be used.",
"cite_spans": [
{
"start": 163,
"end": 182,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF21"
},
{
"start": 209,
"end": 229,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF3"
},
{
"start": 258,
"end": 283,
"text": "(Al-Onaizan et al., 1999;",
"ref_id": null
},
{
"start": 284,
"end": 302,
"text": "Och and Ney, 2003)",
"ref_id": "BIBREF21"
},
{
"start": 316,
"end": 336,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Parameter Estimation",
"sec_num": "2.2"
},
{
"text": "To initialize the parameters of the generative model for the first iteration, we use bootstrapping from a 1-to-N and an M-to-1 alignment. We use the intersection of the 1-to-N and M-to-1 alignments to establish the head word relationship, the 1-to-N alignment to delineate the target word cepts, and the M-to-1 alignment to delineate the source word cepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Parameter Estimation",
"sec_num": "2.2"
},
{
"text": "In bootstrapping, a problem arises when we encounter infeasible alignment structure where, for instance, a source word generates target words but no link between any of the target words and the source word appears in the intersection, so it is not clear which target word is the target head word. To address this, we consider each of the N generated target words as the target head word in turn and assign this configuration 1/N of the counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Parameter Estimation",
"sec_num": "2.2"
},
{
"text": "For each iteration of training we search for the Viterbi solution for millions of sentences. Evidence that inference over the space of all possible alignments is intractable has been presented, for a similar problem, in (Knight, 1999) . Unlike phrase-based SMT, left-to-right hypothesis extension using a beam decoder is unlikely to be effective because, in word alignment, reordering is not limited to a small local window, so the necessary beam would be very large. We are not aware of admissible or inadmissible search heuristics which have been shown to be effective when used in conjunction with a search algorithm similar to A* search for a model predicting over a structure like ours. Therefore we use a simple local search algorithm which operates on complete hypotheses.",
"cite_spans": [
{
"start": 220,
"end": 234,
"text": "(Knight, 1999)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Parameter Estimation",
"sec_num": "2.2"
},
{
"text": "(Brown et al., 1993) defined two local search operations for their 1-to-N alignment Models 3, 4 and 5. All alignments which are reachable via these operations from the starting alignment are considered. One operation is to change the generation decision for a French word to a different English word (move), and the other is to swap the generation decisions for two French words (swap). All possible operations are tried and the best is chosen. This is repeated. The search is terminated when no operation results in an improvement. (Och and Ney, 2003) discussed efficient implementation.",
"cite_spans": [
{
"start": 533,
"end": 552,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Parameter Estimation",
"sec_num": "2.2"
},
{
"text": "In our model, because the alignment structure is richer, we define the following operations: move French non-head word to new head, move English non-head word to new head, swap heads of two French non-head words, swap heads of two English non-head words, swap English head word links of two French head words, link English word to French word making new head words, unlink English and French head words. We use multiple restarts to try to reduce search errors. (Germann et al., 2004; Marcu and Wong, 2002) have some similar operations without the head word distinction.",
"cite_spans": [
{
"start": 461,
"end": 483,
"text": "(Germann et al., 2004;",
"ref_id": "BIBREF9"
},
{
"start": 484,
"end": 505,
"text": "Marcu and Wong, 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Parameter Estimation",
"sec_num": "2.2"
},
{
"text": "Equation 6 defines a log-linear model. Each feature function h m has an associated weight \u03bb m . Given a vector of these weights \u03bb, the alignment search problem, i.e. the search to return the best alignment a of the sentences e and f according to the model, is specified by Equation 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised parameter estimation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p_\\lambda(f, a|e) = \\frac{\\exp(\\sum_m \\lambda_m h_m(a, e, f))}{\\sum_{a', f'} \\exp(\\sum_m \\lambda_m h_m(a', e, f'))} \\quad (6) \\qquad \\hat{a} = \\operatorname{argmax}_a \\sum_m \\lambda_m h_m(f, a, e)",
"eq_num": "(7)"
}
],
"section": "Semi-supervised parameter estimation",
"sec_num": "3"
},
{
"text": "We decompose the new generative model presented in Section 2 in both translation directions to provide the initial feature functions for our log-linear model, features 1 to 10 and 16 to 25 in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 200,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Semi-supervised parameter estimation",
"sec_num": "3"
},
{
"text": "We use backoffs for the translation decisions (features 11 and 26 and the HMM translation tables which are features 12 and 27) and the target cept size distributions (features 13, 14, 28 and 29 in Table 1) , as well as heuristics which directly control the number of unaligned words we generate (features 15 and 30 in Table 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 205,
"text": "Table 1)",
"ref_id": "TABREF2"
},
{
"start": 318,
"end": 326,
"text": "Table 1)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Semi-supervised parameter estimation",
"sec_num": "3"
},
{
"text": "We use the semi-supervised EMD algorithm (Fraser and Marcu, 2006b) to train the model. The initial M-step bootstraps parameters as described in Section 2.2 from an M-to-1 and a 1-to-N alignment. We then perform the D-step following (Fraser and Marcu, 2006b). Given the feature function parameters estimated in the M-step and the feature function weights \u03bb determined in the D-step, the E-step searches for the Viterbi alignment for the full training corpus.",
"cite_spans": [
{
"start": 41,
"end": 65,
"text": "(Fraser and Marcu, 2006b",
"ref_id": "BIBREF8"
},
{
"start": 232,
"end": 256,
"text": "(Fraser and Marcu, 2006b",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised parameter estimation",
"sec_num": "3"
},
{
"text": "We use 1 \u2212 F-Measure as our error criterion. (Fraser and Marcu, 2006a) established that it is important to tune \u03b1 (the trade-off between Precision and Recall) to maximize performance. In working with LEAF, we discovered a methodological problem with our baseline systems, which is that two alignments which have the same translational correspondence can have different F-Measures. An example is shown in Figure 2 .",
"cite_spans": [
{
"start": 45,
"end": 70,
"text": "(Fraser and Marcu, 2006a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 404,
"end": 412,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semi-supervised parameter estimation",
"sec_num": "3"
},
{
"text": "To overcome this problem we fully interlinked the transitive closure of the undirected bigraph formed by each alignment hypothesized by our baseline alignment systems 1 . This operation maps the alignment shown to the left in Figure 2 to the alignment shown to the right. This operation does not change the collection of phrases or rules extracted from a hypothesized alignment, see, for instance, (Koehn et al., 2003) . Working with this fully interlinked representation we found that the best settings of \u03b1 were \u03b1 = 0.1 for the Arabic/English task and \u03b1 = 0.4 for the French/English task.",
"cite_spans": [
{
"start": 398,
"end": 418,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 226,
"end": 234,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semi-supervised parameter estimation",
"sec_num": "3"
},
{
"text": "We perform experiments on two large alignment tasks, for Arabic/English and French/English data sets. Statistics for these sets are shown in Table 2 ",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "4.1"
},
{
"text": "To build all alignment systems, we start with 5 iterations of Model 1 followed by 4 iterations of HMM (Vogel et al., 1996) , as implemented in GIZA++ (Och and Ney, 2003) . For all non-LEAF systems, we take the best performing of the \"union\", \"refined\" and \"intersection\" symmetrization heuristics (Och and Ney, 2003) to combine the 1-to-N and M-to-1 directions resulting in an M-to-N alignment. Because these systems do not output fully linked alignments, we fully link the resulting alignments as described at the end of Section 3. The reader should recall that this does not change the set of rules or phrases that can be extracted using the alignment.",
"cite_spans": [
{
"start": 102,
"end": 122,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF25"
},
{
"start": 150,
"end": 169,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF21"
},
{
"start": 297,
"end": 316,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
{
"text": "We perform one main comparison, of semi-supervised systems, which are what we use to produce alignments for SMT. We compare semi-supervised LEAF with a previous state of the art semi-supervised system (Fraser and Marcu, 2006b). We performed translation experiments on the alignments generated using semi-supervised training to verify that the improvements in F-Measure result in increases in BLEU.",
"cite_spans": [
{
"start": 201,
"end": 226,
"text": "(Fraser and Marcu, 2006b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
{
"text": "We also compare the unsupervised LEAF system with GIZA++ Model 4 to give some idea of the performance of the unsupervised model. We made an effort to optimize the free parameters of GIZA++, while for unsupervised LEAF there are no free parameters to optimize. A single iteration of unsupervised LEAF (equivalent to using the log-linear model with \u03bb m = 1 for m = 1 to 10 and m = 16 to 25, and \u03bb m = 0 otherwise) is compared with heuristic symmetrization of GIZA++'s extension of Model 4 (which was run for four iterations). LEAF was bootstrapped as described in Section 2.2 from the HMM Viterbi alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
{
"text": "Results for the experiments on the French/English data set are shown in Table 3 . We ran GIZA++ for four iterations of Model 4 and used the \"refined\" heuristic (line 1). We ran the baseline semi-supervised system for two iterations (line 2), and in contrast with (Fraser and Marcu, 2006b) we found that the best symmetrization heuristic for this system was \"union\", which is most likely due to our use of fully linked alignments, as discussed at the end of Section 3. We observe that LEAF unsupervised (line 3) is competitive with GIZA++ (line 1), and is in fact competitive with the baseline semi-supervised result (line 2). We ran the LEAF semi-supervised system for two iterations (line 4). The best result is the LEAF semi-supervised system, with a gain of 1.8 F-Measure over the LEAF unsupervised system.",
"cite_spans": [
{
"start": 263,
"end": 288,
"text": "(Fraser and Marcu, 2006b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
{
"text": "For French/English translation we use a state of the art phrase-based MT system similar to (Och and Ney, 2004; Koehn et al., 2003) . The translation test data is described in Table 2 . We use two trigram language models, one built using the English portion of the training data and the other built using additional English news data. The BLEU scores reported in this work are calculated using lowercased and tokenized data. For semi-supervised LEAF the gain of 0.46 BLEU over the semi-supervised baseline is not statistically significant (a gain of 0.78 BLEU would be required), but LEAF semi-supervised compared with GIZA++ is significant, with a gain of 1.23 BLEU. We note that this shows a large gain in translation quality over that obtained using GIZA++ because BLEU is calculated using only a single reference for the French/English task.",
"cite_spans": [
{
"start": 91,
"end": 110,
"text": "(Och and Ney, 2004;",
"ref_id": "BIBREF22"
},
{
"start": 111,
"end": 130,
"text": "Koehn et al., 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
{
"text": "Results for the Arabic/English data set are also shown in Table 3 . We used a large gold standard word alignment set available from the LDC. We ran GIZA++ for four iterations of Model 4 and used the \"union\" heuristic. We compare GIZA++ (line 1) with one iteration of the unsupervised LEAF model (line 2). The unsupervised LEAF system is worse than four iterations of GIZA++ Model 4. We believe that the features in LEAF are too high dimensional to use for the Arabic/English task without the backoffs available in the semi-supervised models. The baseline semi-supervised system (line 3) was run for three iterations and the resulting alignments were combined with the \"union\" heuristic. We ran the LEAF semi-supervised system for two iterations. The best result is the LEAF semi-supervised system (line 4), with a gain of 5.4 F-Measure over the baseline semi-supervised system.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "ARABIC/ENGLISH FRENCH/ENGLISH",
"sec_num": null
},
{
"text": "For Arabic/English translation we train a state of the art hierarchical model similar to (Chiang, 2005) using our Viterbi alignments. The translation test data used is described in Table 2 . We use two trigram language models, one built using the English portion of the training data and the other built using additional English news data. The test set is from the NIST 2005 translation task. LEAF had the best performance scoring 1.43 BLEU better than the baseline semi-supervised system, which is statistically significant.",
"cite_spans": [
{
"start": 89,
"end": 103,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 181,
"end": 188,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "ARABIC/ENGLISH FRENCH/ENGLISH",
"sec_num": null
},
{
"text": "The LEAF model is inspired by the literature on generative modeling for statistical word alignment and particularly by Model 4 (Brown et al., 1993) . Much of the additional work on generative modeling of 1to-N word alignments is based on the HMM model (Vogel et al., 1996) . (Toutanova et al., 2002) and (Lopez and Resnik, 2005) presented a variety of refinements of the HMM model particularly effective for low data conditions. (Deng and Byrne, 2005) described work on extending the HMM model using a bigram formulation to generate 1-to-N alignment structure. The common thread connecting these works is their reliance on the 1-to-N approximation, while we have defined a generative model which does not require use of this approximation, at the cost of having to rely on local search.",
"cite_spans": [
{
"start": 119,
"end": 147,
"text": "Model 4 (Brown et al., 1993)",
"ref_id": null
},
{
"start": 252,
"end": 272,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF25"
},
{
"start": 275,
"end": 299,
"text": "(Toutanova et al., 2002)",
"ref_id": "BIBREF24"
},
{
"start": 304,
"end": 328,
"text": "(Lopez and Resnik, 2005)",
"ref_id": "BIBREF16"
},
{
"start": 429,
"end": 451,
"text": "(Deng and Byrne, 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "5"
},
{
"text": "There has also been work on generative models for other alignment structures. (Wang and Waibel, 1998) introduced a generative story based on extension of the generative story of Model 4. The alignment structure modeled was \"consecutive M to non-consecutive N\". (Marcu and Wong, 2002) defined the Joint model, which modeled consecutive word M-to-N alignments. presented a model capable of modeling 1-to-N and M-to-1 alignments (but not arbitrary M-to-N alignments) which was bootstrapped from Model 4. LEAF directly models non-consecutive M-to-N alignments.",
"cite_spans": [
{
"start": 88,
"end": 101,
"text": "Waibel, 1998)",
"ref_id": "BIBREF26"
},
{
"start": 261,
"end": 283,
"text": "(Marcu and Wong, 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "5"
},
{
"text": "One important aspect of LEAF is its symmetry. (Och and Ney, 2003) tion of the output of a 1-to-N model and a M-to-1 model resulting in a M-to-N alignment, this was extended in (Koehn et al., 2003) . We have used insights from these works to help determine the structure of our generative model. introduced a model featuring a symmetrized lexicon. (Liang et al., 2006) showed how to train two HMM models, a 1-to-N model and a M-to-1 model, to agree in predicting all of the links generated, resulting in a 1-to-1 alignment with occasional rare 1to-N or M-to-1 links. We improve on these works by choosing a new structure for our generative model, the head word link structure, which is both symmetric and a robust structure for modeling of nonconsecutive M-to-N alignments.",
"cite_spans": [
{
"start": 46,
"end": 65,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF21"
},
{
"start": 176,
"end": 196,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF12"
},
{
"start": 347,
"end": 367,
"text": "(Liang et al., 2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "5"
},
{
"text": "In designing LEAF, we were also inspired by dependency-based alignment models (Wu, 1997; Alshawi et al., 2000; Yamada and Knight, 2001; Cherry and Lin, 2003; Zhang and Gildea, 2004) . In contrast with their approaches, we have a very flat, one-level notion of dependency, which is bilingually motivated and learned automatically from the parallel corpus. This idea of dependency has some similarity with hierarchical SMT models such as (Chiang, 2005) .",
"cite_spans": [
{
"start": 78,
"end": 88,
"text": "(Wu, 1997;",
"ref_id": "BIBREF27"
},
{
"start": 89,
"end": 110,
"text": "Alshawi et al., 2000;",
"ref_id": "BIBREF1"
},
{
"start": 111,
"end": 135,
"text": "Yamada and Knight, 2001;",
"ref_id": "BIBREF28"
},
{
"start": 136,
"end": 157,
"text": "Cherry and Lin, 2003;",
"ref_id": "BIBREF4"
},
{
"start": 158,
"end": 181,
"text": "Zhang and Gildea, 2004)",
"ref_id": "BIBREF30"
},
{
"start": 436,
"end": 450,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "5"
},
{
"text": "The discriminative component of our work is based on a plethora of recent literature. This literature generally views the discriminative modeling problem as a supervised problem involving the combination of heuristically derived feature functions. These feature functions generally include the prediction of some type of generative model, such as the HMM model or Model 4. A discriminatively trained 1-to-N model with feature functions specifically designed for Arabic was presented in (Ittycheriah and Roukos, 2005) . (Lacoste-Julien et al., 2006 ) created a discriminative model able to model 1-to-1, 1-to-2 and 2-to-1 alignments for which the best results were obtained using features based on symmetric HMMs trained to agree, (Liang et al., 2006) , and intersected Model 4. (Ayan and Dorr, 2006) defined a discriminative model which learns how to combine the predictions of several alignment algorithms. The experiments performed included Model 4 and the HMM extensions of (Lopez and Resnik, 2005) . (Moore et al., 2006) introduced a discriminative model of 1-to-N and M-to-1 alignments, and similarly to (Lacoste-Julien et al., 2006) the best results were obtained using HMMs trained to agree and intersected Model 4. LEAF is not bound by the structural restrictions present either directly in these models, or in the features derived from the generative models used. We also iterate the generative/discriminative process, which allows the discriminative predictions to influence the generative model.",
"cite_spans": [
{
"start": 486,
"end": 516,
"text": "(Ittycheriah and Roukos, 2005)",
"ref_id": "BIBREF10"
},
{
"start": 519,
"end": 547,
"text": "(Lacoste-Julien et al., 2006",
"ref_id": "BIBREF13"
},
{
"start": 730,
"end": 750,
"text": "(Liang et al., 2006)",
"ref_id": "BIBREF14"
},
{
"start": 778,
"end": 799,
"text": "(Ayan and Dorr, 2006)",
"ref_id": "BIBREF2"
},
{
"start": 977,
"end": 1001,
"text": "(Lopez and Resnik, 2005)",
"ref_id": "BIBREF16"
},
{
"start": 1004,
"end": 1024,
"text": "(Moore et al., 2006)",
"ref_id": "BIBREF19"
},
{
"start": 1109,
"end": 1138,
"text": "(Lacoste-Julien et al., 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "5"
},
{
"text": "Our work is most similar to work using discriminative log-linear models for alignment, which is similar to discriminative log-linear models used for the SMT decoding (translation) problem (Och and Ney, 2002; Och, 2003) . (Liu et al., 2005) presented a log-linear model combining IBM Model 3 trained in both directions with heuristic features which resulted in a 1-to-1 alignment. (Fraser and Marcu, 2006b) described symmetrized training of a 1-to-N log-linear model and a M-to-1 log-linear model. These models took advantage of features derived from both training directions, similar to the symmetrized lexicons of , including features derived from the HMM model and Model 4. However, despite the symmetric lexicons, these models were only able to optimize the performance of the 1-to-N model and the M-to-1 model separately, and the predictions of the two models required combination with symmetrization heuristics. We have overcome the limitations of that work by defining new feature functions, based on the LEAF generative model, which score non-consecutive Mto-N alignments so that the final performance criterion can be optimized directly.",
"cite_spans": [
{
"start": 188,
"end": 207,
"text": "(Och and Ney, 2002;",
"ref_id": "BIBREF20"
},
{
"start": 208,
"end": 218,
"text": "Och, 2003)",
"ref_id": "BIBREF23"
},
{
"start": 221,
"end": 239,
"text": "(Liu et al., 2005)",
"ref_id": "BIBREF15"
},
{
"start": 380,
"end": 405,
"text": "(Fraser and Marcu, 2006b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "5"
},
{
"text": "We have found a new structure over which we can robustly predict which directly models translational correspondence commensurate with how it is used in hierarchical SMT systems. Our new generative model, LEAF, is able to model alignments which consist of M-to-N non-consecutive translational correspondences. Unsupervised LEAF is comparable with a strong baseline. When coupled with a discriminative training procedure, the model leads to increases between 3 and 9 F-score points in alignment accuracy and 1.2 and 2.8 BLEU points in translation accuracy over strong French/English and Arabic/English baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "All of the gold standard alignments were fully interlinked as distributed. We did not modify the gold standard alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially supported under the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-C-0022. We would like to thank the USC Center for High Performance Computing and Communications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Learning dependency translation models as collections of finite state head transducers. Computational Linguistics",
"authors": [
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Shona",
"middle": [],
"last": "Douglas",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "26",
"issue": "",
"pages": "45--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiyan Alshawi, Srinivas Bangalore, and Shona Douglas. 2000. Learning dependency translation models as col- lections of finite state head transducers. Computa- tional Linguistics, 26(1):45-60.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A maximum entropy approach to combining word alignments",
"authors": [
{
"first": "Necip Fazil",
"middle": [],
"last": "Ayan",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "96--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Necip Fazil Ayan and Bonnie J. Dorr. 2006. A maxi- mum entropy approach to combining word alignments. In Proceedings of HLT-NAACL, pages 96-103, New York.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A probability model to improve word alignment",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "88--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and Dekang Lin. 2003. A probability model to improve word alignment. In Proceedings of ACL, pages 88-95, Sapporo, Japan.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL, pages 263-270, Ann Arbor, MI.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hmm word and phrase alignment for statistical machine translation",
"authors": [
{
"first": "Yonggang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT-EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonggang Deng and William Byrne. 2005. Hmm word and phrase alignment for statistical machine trans- lation. In Proceedings of HLT-EMNLP, Vancouver, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Measuring word alignment quality for statistical machine translation",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Fraser and Daniel Marcu. 2006a. Measuring word alignment quality for statistical machine transla- tion. In Technical Report ISI-TR-616, ISI/University of Southern California.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semisupervised training for statistical word alignment",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "769--776",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Fraser and Daniel Marcu. 2006b. Semi- supervised training for statistical word alignment. In Proceedings of COLING-ACL, pages 769-776, Syd- ney, Australia.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Fast decoding and optimal decoding for machine translation",
"authors": [
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jahr",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
}
],
"year": 2004,
"venue": "Artificial Intelligence",
"volume": "154",
"issue": "1-2",
"pages": "127--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrich Germann, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2004. Fast decoding and optimal decoding for machine translation. Artificial Intelligence, 154(1-2):127-143.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A maximum entropy word aligner for Arabic-English machine translation",
"authors": [
{
"first": "Abraham",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT-EMNLP",
"volume": "",
"issue": "",
"pages": "89--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abraham Ittycheriah and Salim Roukos. 2005. A max- imum entropy word aligner for Arabic-English ma- chine translation. In Proceedings of HLT-EMNLP, pages 89-96, Vancouver, Canada.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Decoding complexity in wordreplacement translation models",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "4",
"pages": "607--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight. 1999. Decoding complexity in word- replacement translation models. Computational Lin- guistics, 25(4):607-615.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL, pages 127-133, Edmonton, Canada.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Word alignment via quadratic assignment",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Lacoste-Julien",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "112--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Lacoste-Julien, Dan Klein, Ben Taskar, and Michael Jordan. 2006. Word alignment via quadratic assignment. In Proceedings of HLT-NAACL, pages 112-119, New York, NY.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Alignment by agreement",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Ben Taskar, and Dan Klein. 2006. Align- ment by agreement. In Proceedings of HLT-NAACL, New York.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Log-linear models for word alignment",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "459--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2005. Log-linear models for word alignment. In Proceedings of ACL, pages 459-466, Ann Arbor, MI.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Improved hmm alignment models for languages with scarce resources",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Building and Using Parallel Texts",
"volume": "",
"issue": "",
"pages": "83--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Lopez and Philip Resnik. 2005. Improved hmm alignment models for languages with scarce resources. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, pages 83-86, Ann Arbor, MI.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A phrase-based, joint probability model for statistical machine translation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "133--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine trans- lation. In Proceedings of EMNLP, pages 133-139, Philadelphia, PA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Symmetric word alignments for statistical machine translation",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeny Matusov, Richard Zens, and Hermann Ney. 2004. Symmetric word alignments for statistical machine translation. In Proceedings of COLING, Geneva, Switzerland.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improved discriminative bilingual word alignment",
"authors": [
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Bode",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "513--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert C. Moore, Wen-Tau Yih, and Andreas Bode. 2006. Improved discriminative bilingual word align- ment. In Proceedings of COLING-ACL, pages 513- 520, Sydney, Australia.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of ACL, pages 295-302, Philadelphia, PA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The alignment template approach to statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "1",
"pages": "417--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(1):417-449.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och. 2003. Minimum error rate training in sta- tistical machine translation. In Proceedings of ACL, pages 160-167, Sapporo, Japan.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Extensions to hmm-based statistical word alignment models",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Tolga Ilhan",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, H. Tolga Ilhan, and Christopher D. Manning. 2002. Extensions to hmm-based statistical word alignment models. In Proceedings of EMNLP, Philadelphia, PA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "HMM-based word alignment in statistical translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "836--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical trans- lation. In Proceedings of COLING, pages 836-841, Copenhagen, Denmark.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Modeling with structures in statistical machine translation",
"authors": [
{
"first": "Ye-Yi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL",
"volume": "2",
"issue": "",
"pages": "1357--1363",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye-Yi Wang and Alex Waibel. 1998. Modeling with structures in statistical machine translation. In Pro- ceedings of COLING-ACL, volume 2, pages 1357- 1363, Montreal, Canada.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A syntax-based statistical translation model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of ACL, pages 523-530, Toulouse, France.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improved word alignment using a symmetric lexicon model",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Zens, Evgeny Matusov, and Hermann Ney. 2004. Improved word alignment using a symmetric lexicon model. In Proceedings of COLING, Geneva, Switzerland.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Syntax-based alignment: Supervised or unsupervised?",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang and Daniel Gildea. 2004. Syntax-based alignment: Supervised or unsupervised? In Proceed- ings of COLING, Geneva, Switzerland.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Generative story example, (number) indicates step number",
"uris": null
},
"TABREF1": {
"text": "(e i )) choosing a head word 10 d >2 ( j|class f (f j )) movement for subsequent target non-head words 3 t 1 (f j |e i ) head word translation 11 t(f j |e i ) translation without dependency on word-type 4 s(\u03c8 i |e i , \u03b3 i ) \u03c8 i is number of words in target cept 12 t(f j |e i ) translation table from final HMM iteration 5 s 0 (\u03c8 0 | P i \u03c8 i ) number of unaligned target words 13 s(\u03c8 i |\u03b3 i ) target cept size without dependency on source head word e 6 t 0 (f j ) identity of unaligned target words 14 s(\u03c8 i |e i ) target cept size without dependency on \u03b3 i 7 t >1 (f j |e i , class h (\u03c4 i1 )) non-head word translation 15 target spurious word penalty 8 d 1 ( j|class e (e \u03c1 ), class f (f j ))",
"num": null,
"content": "<table><tr><td>1 chi(\u03c7 i |e i ) source word type</td><td>9</td><td>d 2 ( j|class f (f j )) movement for left-most target</td></tr><tr><td/><td/><td>non-head word</td></tr><tr><td>2 \u00b5( i|class e movement for target</td><td colspan=\"2\">16-30 (same features, other direction)</td></tr><tr><td>head words</td><td/><td/></tr><tr><td/><td/><td>.</td></tr><tr><td/><td colspan=\"2\">All of the data used is available from the Linguis-</td></tr><tr><td/><td colspan=\"2\">tic Data Consortium except for the French/English</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "",
"num": null,
"content": "<table><tr><td>: Feature functions</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF4": {
"text": "Data sets",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF6": {
"text": "Experimental Results",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}