{
"paper_id": "P02-1039",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:30:47.941954Z"
},
"title": "A Decoder for Syntax-based Statistical MT",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "4676 Admiralty Way, Suite 1001, Marina del Rey",
"postCode": "90292",
"region": "CA"
}
},
"email": "kyamada@isi.edu"
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "4676 Admiralty Way, Suite 1001, Marina del Rey",
"postCode": "90292",
"region": "CA"
}
},
"email": "knight@isi.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a decoding algorithm for a syntax-based translation model (Yamada and Knight, 2001). The model has been extended to incorporate phrasal translations as presented here. In contrast to a conventional word-to-word statistical model, a decoder for the syntax-based model builds up an English parse tree given a sentence in a foreign language. As the model size becomes huge in a practical setting, and the decoder considers multiple syntactic structures for each word alignment, several pruning techniques are necessary. We tested our decoder in a Chinese-to-English translation system, and obtained better results than IBM Model 4. We also discuss issues concerning the relation between this decoder and a language model.",
"pdf_parse": {
"paper_id": "P02-1039",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a decoding algorithm for a syntax-based translation model (Yamada and Knight, 2001). The model has been extended to incorporate phrasal translations as presented here. In contrast to a conventional word-to-word statistical model, a decoder for the syntax-based model builds up an English parse tree given a sentence in a foreign language. As the model size becomes huge in a practical setting, and the decoder considers multiple syntactic structures for each word alignment, several pruning techniques are necessary. We tested our decoder in a Chinese-to-English translation system, and obtained better results than IBM Model 4. We also discuss issues concerning the relation between this decoder and a language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A statistical machine translation system based on the noisy channel model consists of three components: a language model (LM), a translation model (TM), and a decoder. For a system which translates from a foreign language f to English e, the LM gives a prior probability P(e) and the TM gives a channel translation probability P(f|e). These models are automatically trained using monolingual (for the LM) and bilingual (for the TM) corpora. A decoder then finds the best English sentence given a foreign sentence, i.e., the sentence that maximizes P(e)P(f|e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": ", which also maximizes P(e|f). Since the TM and the LM are not simple probability tables but parameterized models, a decoder must conduct a search over the space defined by the models. For the IBM models defined by a pioneering paper (Brown et al., 1993), a decoding algorithm based on a left-to-right search was described in (Berger et al., 1996). Recently, (Yamada and Knight, 2001) introduced a syntax-based TM which utilized syntactic structure in the channel input, and showed that it could outperform the IBM model in alignment quality. In contrast to the IBM models, which are word-to-word models, the syntax-based model works on a syntactic parse tree, so the decoder builds up an English parse tree given a sentence in a foreign language. This paper describes an algorithm for such a decoder, and reports experimental results.",
"cite_spans": [
{
"start": 209,
"end": 229,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF2"
},
{
"start": 302,
"end": 323,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF1"
},
{
"start": 335,
"end": 359,
"text": "(Yamada and Knight, 2001",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Other statistical machine translation systems such as (Wu, 1997) and (Alshawi et al., 2000) also produce a tree given a sentence. Their models are based on mechanisms that generate two languages at the same time, so an English tree is obtained as a subproduct of parsing. However, their use of the LM is not mathematically motivated, since their models do not decompose into P(f|e) and P(e). Section 5 describes how to prune the search space for practical decoding. Section 6 shows experimental results. Section 7 discusses LM issues, and is followed by conclusions.",
"cite_spans": [
{
"start": 54,
"end": 64,
"text": "(Wu, 1997)",
"ref_id": "BIBREF8"
},
{
"start": 69,
"end": 91,
"text": "(Alshawi et al., 2000)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The syntax-based TM defined by (Yamada and Knight, 2001) assumes an English parse tree as a channel input. The channel applies three kinds of stochastic operations to each node: reordering the child nodes (r), inserting an optional extra word to the left or right of the node (n), and translating leaf words (t). These operations are independent of each other and are conditioned on features (R, N, T) of the node. Figure 1 shows an example. The child node sequence of the top node VB is reordered from PRP-VB1-VB2 into PRP-VB2-VB1, as seen in the second tree (Reordered). An extra word ha is inserted at the leftmost node PRP, as seen in the third tree (Inserted). The English word He under the same node is translated into a foreign word kare, as seen in the fourth tree (Translated). After these operations, the channel emits a foreign word sentence by taking the leaves of the modified tree. Formally, the channel probability P(f|ε)",
"cite_spans": [
{
"start": 31,
"end": 56,
"text": "(Yamada and Knight, 2001)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 424,
"end": 432,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Syntax-based TM",
"sec_num": "2"
},
{
"text": "is: P(f|ε) = Σ_{θ: Str(θ(ε)) = f} Π_i P(θ_i|ε_i), where P(θ_i|ε_i) = n(ν_i|N(ε_i)) t(τ_i|T(ε_i)) if ε_i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntax-based TM",
"sec_num": "2"
},
{
"text": "is terminal",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntax-based TM",
"sec_num": "2"
},
{
"text": "and P(θ_i|ε_i) = r(ρ_i|R(ε_i)) n(ν_i|N(ε_i)) otherwise, where θ = θ_1, ..., θ_n and θ_i = (ν_i, ρ_i, τ_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntax-based TM",
"sec_num": "2"
},
{
"text": ", and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntax-based TM",
"sec_num": "2"
},
{
"text": "Str(θ(ε))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntax-based TM",
"sec_num": "2"
},
{
"text": "is the sequence of leaf words of the tree transformed by θ from ε.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntax-based TM",
"sec_num": "2"
},
{
"text": "r(ρ|R), n(ν|N), and t(τ|T)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The model tables",
"sec_num": null
},
{
"text": "are called the r-table, n-table, and t-table, respectively. These tables contain the probabilities of the channel operations (reorder, insert, and translate). In Figure 1, the r-table specifies the probability of having the second tree (Reordered) given the first tree. The n-table specifies the probability of having the third tree (Inserted) given the second tree. The t-table specifies the probability of having the fourth tree (Translated) given the third tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 150,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The model tables",
"sec_num": null
},
{
"text": "The probabilities in the model tables are automatically obtained by an EM-algorithm using pairs of English parse trees (channel input) and foreign sentences (channel output) as a training corpus. Usually a bilingual corpus comes as pairs of translated sentences, so we need to parse the corpus. As we need to parse sentences on the channel input side only, many X-to-English translation systems can be developed with an English parser alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The model tables",
"sec_num": null
},
{
"text": "The conditioning features (R, N, T) can be anything that is available on a tree; however, they should be carefully selected so as not to cause data-sparseness problems. Also, the choice of features may affect the decoding algorithm. In our experiment, the sequence of child node labels was used for R, a pair of the node label and the parent label was used for N, and the identity of the English word was used for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The model tables",
"sec_num": null
},
{
"text": "T. For example, r(ρ|R) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The model tables",
"sec_num": null
},
{
"text": "P(PRP-VB2-VB1 | PRP-VB1-VB2) for the top node in Figure 1. Similarly, for the node PRP,",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 56,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The model tables",
"sec_num": null
},
{
"text": "n(ν|N) = P(right, ha | VB-PRP) and t(τ|T) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The model tables",
"sec_num": null
},
{
"text": "P(kare | he). More detailed examples are found in (Yamada and Knight, 2001).",
"cite_spans": [
{
"start": 50,
"end": 74,
"text": "(Yamada and Knight, 2001",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The model tables",
"sec_num": null
},
{
"text": "In (Yamada and Knight, 2001), the translation operation τ is a 1-to-1 lexical translation from an English word e to a foreign word",
"cite_spans": [
{
"start": 3,
"end": 28,
"text": "(Yamada and Knight, 2001)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Translation",
"sec_num": "3"
},
{
"text": "f, i.e., t(τ|T) = t(f|e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Translation",
"sec_num": "3"
},
{
"text": ". To allow non-1-to-1 translation, such as for idiomatic phrases or compound nouns, we extend the model as follows. First we use a fertility φ, as used in the IBM models, to allow 1-to-N mapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t(f_1 ... f_n | e) = φ(n | e) Π_{j=1..n} t(f_j | e)",
"eq_num": "(1)"
}
],
"section": "Phrasal Translation",
"sec_num": "3"
},
{
"text": "t(f_1 ... f_n | e_1 ... e_m) = φ(n | e_1 ... e_m) Π_{j=1..n} t(f_j | e_1 ... e_m)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Translation",
"sec_num": "3"
},
{
"text": "and linearly mix this phrasal translation with the word-to-word translation when the node is non-terminal. In practice, the phrase lengths (m, n) are limited to reduce the model size. In our experiment (Section 5), we restricted them",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Translation",
"sec_num": "3"
},
{
"text": "P(θ_i|ε_i) = λ t_ph(τ_i|T(ε_i)) + (1 - λ) r(ρ_i|R(ε_i)) n(ν_i|N(ε_i)) if ε_i is non-terminal",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Translation",
"sec_num": "3"
},
{
"text": "[Figure 1: an English parse tree and the trees produced by the channel operations Reorder, Insert, and Translate]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Translation",
"sec_num": "3"
},
{
"text": "by lower and upper bounds on n relative to m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Translation",
"sec_num": "3"
},
{
"text": ", to avoid pairs of extremely different lengths. This restriction was obtained by randomly sampling the lengths of translation pairs. See (Yamada, 2002) for details.",
"cite_spans": [
{
"start": 133,
"end": 147,
"text": "(Yamada, 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Translation",
"sec_num": "3"
},
{
"text": "Our statistical MT system is based on the noisy-channel model, so the decoder works in the reverse direction of the channel. Given a supposed channel output (e.g., a French or Chinese sentence), it will find the most plausible channel input (an English parse tree) based on the model parameters and the prior probability of the input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4"
},
{
"text": "In the syntax-based model, the decoder's task is to find the most plausible English parse tree given an observed foreign sentence. Since the task is to build a tree structure from a string of words, we can use a mechanism similar to normal parsing, which builds an English parse tree from a string of English words. Here we need to build an English parse tree from a string of foreign (e.g., French or Chinese) words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4"
},
{
"text": "To parse in such an exotic way, we start from an English context-free grammar obtained from the training corpus (the training corpus for the syntax-based model consists of pairs of English parse trees and foreign sentences), and extend the grammar to incorporate the channel operations in the translation model. For each non-lexical rule in the original English grammar (such as \"VP → VB NP PP\"), we supplement it with reordered rules (e.g. \"VP → NP PP VB\", \"VP → NP VB PP\", etc.) and associate them with the original English order and the reordering probability from the r-table. Similarly, rules such as \"VP → VP X\" and \"X → word\" are added for extra word insertion, and they are associated with a probability from the n-table. For each lexical rule in the English grammar, we add rules such as \"englishWord → foreignWord\" with a probability from the t-table.",
"cite_spans": [
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4"
},
{
"text": "Now we can parse a string of foreign words and build up a tree, which we call a decoded tree. An example is shown in Figure 2. The decoded tree is built up in the foreign-language word order. To obtain a tree in the English order, we apply the reverse of the reorder operation (back-reordering) using the information associated with the rule expanded by the r-table. In Figure 2, the numbers in the dashed oval near the top node show the original English order. Then, we obtain an English parse tree by removing the leaf nodes (foreign words) from the back-reordered tree. Among the possible decoded trees, we pick the best tree, in which the product of the LM probability (the prior probability of the English tree) and the TM probability (the probabilities associated with the rules in the decoded tree) is the highest. The use of an LM needs consideration. Theoretically we need an LM which gives the prior probability of an English parse tree. However, we can approximate it with an n-gram LM, which is well-studied and widely implemented. We will discuss this point later in Section 7.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 125,
"text": "Figure 2",
"ref_id": null
},
{
"start": 369,
"end": 377,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[Figure 2: a decoded tree built in the foreign word order (kare ha ongaku wo kiku no ga suki da), with numbers indicating back-reordering into the English order]",
"eq_num": ""
}
],
"section": "Decoding",
"sec_num": "4"
},
{
"text": "If we use a trigram model for the LM, a convenient implementation is to first build a decoded-tree forest and then to pick out the best tree using a trigram-based forest-ranking algorithm as described in (Langkilde, 2000). The ranker uses the two leftmost and rightmost leaf words to efficiently calculate the trigram probability of a subtree, and finds the most plausible tree according to the trigram and the rule probabilities. This algorithm finds the optimal tree in terms of the model probability, but it is not practical when the vocabulary size and the rule size grow. The next section describes how to make it practical.",
"cite_spans": [
{
"start": 203,
"end": 220,
"text": "(Langkilde, 2000)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4"
},
{
"text": "We use our decoder for Chinese-English translation in a general news domain. The TM becomes very large for such a domain. In our experiment (see Section 6 for details), there are about 4M non-zero entries in the trained",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "t(f|e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "table. About 10K CFG rules are used in the parsed corpus of English, which results in about 120K non-lexical rules for the decoding grammar (after we expand the CFG rules as described in Section 4). We applied the simple algorithm from Section 4, but this experiment failed: no complete translations were produced. Even four-word sentences could not be decoded. This is not only because the model size is huge, but also because the decoder considers multiple syntactic structures for the same word alignment, i.e., there are several different decoded trees even when the translation of the sentence is the same. We then applied the following measures to achieve practical decoding. The basic idea is to use additional statistics from the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "beam search: We give up optimal decoding by using a standard dynamic-programming parser with beam search, which is similar to the parser used in (Collins, 1999). A standard dynamic-programming parser builds up ⟨nonterminal, input-substring⟩ tuples from the bottom up according to the grammar rules. When the parsing cost comes only from the features within a subtree (TM cost, in our case), the parser will find the optimal tree by keeping the single best subtree for each tuple. When the cost depends on features outside of a subtree, we need to keep all the subtrees for possible different outside features (boundary words, for the trigram LM cost) to obtain the optimal tree. Instead of keeping all the subtrees, we only retain subtrees within a beam width for each input-substring. Since the outside features are not considered for the beam pruning, the optimality of the parse is not guaranteed, but the required memory size is reduced. Phrase translation pairs (Section 2) are also pruned: a pair must appear more than once in the Viterbi alignments of the training corpus, and we then use the top-10 pairs, ranked similarly to the t-table pruning. By this pruning, we effectively remove junk phrase pairs, most of which come from misaligned sentences or untranslated phrases in the training corpus.",
"cite_spans": [
{
"start": 145,
"end": 160,
"text": "(Collins, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "r-table pruning: To reduce the number of rules for the decoding grammar, we use the top-N rules ranked by P(rule)P(reord), so that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "Σ_{top-N rules} P(rule) P(reord) ≥ 0.95",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": ", where P(rule) is a prior probability of the rule (in the original English order) found in the parsed English corpus, and P(reord) is the reordering probability in the TM. The product is a rough estimate of how likely a rule is to be used in decoding. Because only a limited number of reorderings are used in actual translation, a small number of rules are highly probable. In fact, among a total of 138,662 reorder-expanded rules, the most likely 875 rules contribute 95% of the probability mass, so discarding the rules which contribute the lower 5% of the probability mass efficiently eliminates more than 99% of the total rules. zero-fertility words: An English word may be translated into a null (zero-length) foreign word. This happens when the fertility",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "φ(0|e) > 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": ", and such an English word e (called a zero-fertility word) must be inserted during decoding. The decoding parser is modified to allow inserting zero-fertility words, but unlimited insertion easily blows up the memory space, so only limited insertion is allowed. Observing the Viterbi alignments of the training corpus, the top-20 most frequent zero-fertility words cover over 70% of the cases, so only those are allowed to be inserted. We also use syntactic context to limit the insertion. For example, the zero-fertility word in is inserted as IN when the rule \"PP → IN NP-A\" is applied. Again, observing the Viterbi alignments, the top-20 most frequent contexts cover over 60% of the cases, so we allow insertions only in these contexts. This kind of context-sensitive insertion is possible because the decoder builds a syntactic tree; such selective insertion by syntactic context is not easy for word-to-word models. The pruning above uses statistics from the parsed corpus, such as P(rule). These statistics may be considered as a part of the LM P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "(e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": ", and such syntactic probabilities are essential when we mainly use trigrams for the LM. In this respect, the pruning is useful not only for reducing the search space, but also for improving the quality of translation. We also use statistics from the Viterbi alignments, such as the phrase translation frequency and the zero-fertility context frequency. These are statistics which are not modeled in the TM. The frequency count is essentially a joint probability P(e, f). Utilizing statistics outside of a model is an important idea for statistical machine translation in general. For example, a decoder in (Och and Ney, 2000) uses alignment template statistics found in the Viterbi alignments.",
"cite_spans": [
{
"start": 599,
"end": 618,
"text": "(Och and Ney, 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "This section describes results from our experiment using the decoder as described in the previous section. We used a Chinese-English translation corpus for the experiment. After discarding long sentences (more than 20 words in English), the English side of the corpus consisted of about 3M words, and it was parsed with Collins' parser (Collins, 1999). Training the TM took about 8 hours on a 54-node unix cluster. We selected 347 short sentences (less than 14 words in the reference English translation) from the held-out portion of the corpus, and they were used for evaluation. Table 1 shows the decoding performance for the test sentences. The first system, ibm4, is a reference system based on IBM Model 4. The second and the third (syn and syn-nozf) are our decoders. Both used the same decoding algorithm and pruning as described in the previous sections, except that syn-nozf allowed no zero-fertility insertions. The average decoding speed was about 100 seconds per sentence (on a single-CPU 800MHz Pentium III unix system with 1GB memory) for both syn and syn-nozf.",
"cite_spans": [
{
"start": 336,
"end": 351,
"text": "(Collins, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 585,
"end": 592,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results: Chinese/English",
"sec_num": "6"
},
{
"text": "As an overall decoding performance measure, we used the BLEU metric (Papineni et al., 2002). This measure is a geometric average of n-gram accuracy, adjusted by a length penalty factor LP. The n-gram accuracy (in percentage) is shown in Table 1 as P1/P2/P3/P4 for unigram/bigram/trigram/4-gram. Overall, our decoder performed better than the IBM system, as indicated by the higher BLEU score. We obtained better n-gram accuracy, but the lower LP score penalized the overall score. Interestingly, the system with no explicit zero-fertility word insertion (syn-nozf) performed better than the one with zero-fertility insertion (syn). It seems that most zero-fertility words were already included in the phrasal translations, and the explicit zero-fertility word insertion produced more garbage than expected words. To verify that the pruning was effective, we relaxed the pruning thresholds and checked the decoding coverage for the first 92 sentences of the test data. Table 2 shows the result. On the left, the r-table pruning was relaxed from the 95% level to 98% or 100%. On the right, the t-table pruning was relaxed from the top-5 (e, f) pairs to the top-10 or top-20 pairs. The systems r95 and w5 are identical to syn-nozf in Table 1.",
"cite_spans": [
{
"start": 68,
"end": 91,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 240,
"end": 247,
"text": "Table 1",
"ref_id": null
},
{
"start": 968,
"end": 975,
"text": "Table 2",
"ref_id": "TABREF8"
},
{
"start": 1232,
"end": 1239,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results: Chinese/English",
"sec_num": "6"
},
{
"text": "When the r-table pruning was relaxed from 95% to 98%, only about half (47/92) of the test sentences were decoded; the others were aborted due to lack of memory. When it was further relaxed to 100% (i.e., no pruning was done), only 20 sentences were decoded. Similarly, when the t-table pruning threshold was relaxed, fewer sentences could be decoded due to the memory limitations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results: Chinese/English",
"sec_num": "6"
},
{
"text": "Although our decoder performed better than the IBM system in the BLEU score, the obtained gain was less than what we expected. We see three possible reasons. First, the syntax of Chinese is not extremely different from that of English, compared with other languages such as Japanese or Arabic. Therefore, the TM could not take full advantage of syntactic reordering operations. Second, our decoder looks for a decoded tree, not just for a decoded sentence. Thus, the search space is larger than for the IBM models, which might lead to more search errors caused by pruning. Third, the LM used for our system was exactly the same as the LM used by the IBM system. Decoding performance might be heavily influenced by LM performance. In addition, since the TM assumes an English parse tree as input, a trigram LM might not be appropriate. We will discuss this point in the next section. Phrasal translation worked pretty well. Figure 3 shows the top-20 most frequent phrase translations observed in the Viterbi alignments. The leftmost column shows how many times they appeared. Most of them are correct. It even detected frequent sentence-to-sentence translations, since we only imposed a relative length limit for phrasal translations (Section 3). However, some of them, such as the one with (in cantonese), are wrong. We expected that these junk phrases could be eliminated by phrase pruning (Section 5); however, junk phrases that appeared many times in the corpus were not effectively filtered out.",
"cite_spans": [],
"ref_spans": [
{
"start": 985,
"end": 993,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Experimental Results: Chinese/English",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "BLEU = LP · exp((1/4) Σ_{n=1..4} log p_n)",
"eq_num": ""
}
],
"section": "Experimental Results: Chinese/English",
"sec_num": "6"
},
{
"text": "The BLEU score measures the quality of the decoder output sentences. We were also interested in the syntactic structure of the decoded trees. The leftmost tree in Figure 4 is a decoded tree from the syn-nozf system. Surprisingly, even though the decoded sentence is passable English, the tree structure is totally unnatural. We assumed that a good parse tree gives high trigram probabilities, but it seems a bad parse tree may give good trigram probabilities too. We also noticed that too many unary rules (e.g. \"NPB → PRN\") were used. This is because the reordering probability for a unary rule is always 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 171,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decoded Trees",
"sec_num": "7"
},
{
"text": "To remedy this, we added CFG probabilities (PCFG) to the decoder search, i.e., it now looks for a tree which maximizes P(trigram)P(cfg)P(TM). The CFG probability was obtained by counting rule occurrences in the parsed corpus. The middle tree in Figure 4 is the output for the same sentence. The syntactic structure now looks better, but we found three problems. First, the BLEU score is worse (0.078). Second, the decoded trees seem to prefer noun phrases. In many trees, an entire sentence was decoded as a large noun phrase. Third, it uses more frequent node reordering than it should.",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 214,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decoded Trees",
"sec_num": "7"
},
{
"text": "The BLEU score may go down because we weighted the LM (trigram and PCFG) more heavily than the TM. As for the excess of noun phrases, we suspect a corpus problem. Our training corpus contained many dictionary entries, and the parliament transcripts also included lists of participants' names; this may cause the LM to prefer noun phrases too much. Our corpus also contains noise of two types: sentence alignment errors and English parse errors. The corpus was sentence-aligned by automatic software, so it has some bad alignments. When a sentence is misaligned, or its parse is wrong, the Viterbi alignment becomes an over-reordered tree, as it picks up plausible translation word pairs first and reorders the tree to fit them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoded Trees",
"sec_num": "7"
},
{
"text": "To see whether it was really a corpus problem, we selected a good portion of the corpus and re-trained the r-table. To find good sentence pairs in the corpus, we used the following criteria: 1) Both the English and the Chinese sentences end with a period. 2) The English sentence begins with a capitalized word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoded Trees",
"sec_num": "7"
},
{
"text": "3) The sentences do not contain symbol characters, such as colons or dashes, which tend to cause parse errors. 4) The Viterbi-ratio 8 is higher than the average over the pairs satisfying the first three conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoded Trees",
"sec_num": "7"
},
{
"text": "Using the selected sentence pairs, we re-trained only the r-table and the PCFG. The rightmost tree in Figure 4 is the decoded tree using the re-trained TM. The BLEU score improved (0.085), and the tree structure looks better, though problems remain. An obvious one is that the goodness of a syntactic structure depends on lexical choices: for example, the best syntactic structure differs depending on whether or not a verb requires a noun phrase as its object. The PCFG-based LM does not handle this.",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 109,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decoded Trees",
"sec_num": "7"
},
{
"text": "Figure 4: Effect of PCFG and re-training. No CFG probability (PCFG) was used (left); PCFG was used for the search (middle); the r-table was re-trained and PCFG was used (right). Each tree was back-reordered and is shown in the English order. At this point, we gave up using the PCFG as a component of the LM. Using only trigrams obtains the best result for the BLEU score. However, the BLEU metric may not be affected by the syntactic aspect of translation quality, and as we saw in Figure 4, we can improve the syntactic quality by introducing the PCFG together with some corpus selection techniques. Also, the pruning methods described in Section 5 use syntactic statistics from the training corpus. Therefore, we are now investigating more sophisticated LMs such as (Charniak, 2001) which",
"cite_spans": [
{
"start": 520,
"end": 536,
"text": "(Charniak, 2001)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 241,
"end": 249,
"text": "Figure 4",
"ref_id": null
},
{
"start": 543,
"end": 551,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decoded Trees",
"sec_num": "7"
},
{
"text": "incorporate syntactic features and lexical information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoded Trees",
"sec_num": "7"
},
{
"text": "We have presented a decoding algorithm for a syntax-based statistical translation model. The translation model was extended to incorporate phrasal translations. Because the input to the channel model is an English parse tree, the decoding algorithm is based on conventional syntactic parsing, with the grammar expanded by the channel operations of the TM. As the model size becomes huge in a practical setting, and the decoder considers multiple syntactic structures for each word alignment, efficient pruning is necessary. We applied several pruning techniques and obtained good decoding quality and coverage. The choice of the LM is an important issue in implementing a decoder for the syntax-based TM. At present, the best result is obtained using trigrams, but a more sophisticated LM seems promising.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "The channel operations are designed to model differences in word order (SVO for English vs. VSO for Arabic) and in case-marking schemes (word positions in English vs. case-marker particles in Japanese).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Viterbi alignment is the most probable word alignment according to the trained TM tables. 5 They are: the, to, of, a, in, is, be, that, on, and, are, for, will, with, have, it, 's, has, i, and by.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Viterbi-ratio is the ratio of the probability of the most plausible alignment to the sum of the probabilities of all alignments. A low Viterbi-ratio is a good indicator of misalignment or parse error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by DARPA-ITO grant N66001-00-1-9814.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning dependency translation models as collections of finite state head transducers",
"authors": [
{
"first": "H",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Douglas",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Alshawi, S. Bangalore, and S. Douglas. 2000. Learn- ing dependency translation models as collections of fi- nite state head transducers. Computational Linguis- tics, 26(1).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Language Translation Apparatus and Method Using Context-Based Translation Models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gillett",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Printz",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ures",
"suffix": ""
}
],
"year": 1996,
"venue": "U.S. Patent",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Berger, P. Brown, S. Della Pietra, V. Della Pietra, J. Gillett, J. Lafferty, R. Mercer, H. Printz, and L. Ures. 1996. Language Translation Apparatus and Method Using Context-Based Translation Models. U.S. Patent 5,510,981.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Brown, S. Della Pietra, V. Della Pietra, and R. Mercer. 1993. The mathematics of statistical machine trans- lation: Parameter estimation. Computational Linguis- tics, 19(2).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Immediate-head parsing for language models",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak. 2001. Immediate-head parsing for language models. In ACL-01.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Forest-based statistical sentence generation",
"authors": [
{
"first": "I",
"middle": [],
"last": "Langkilde",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Langkilde. 2000. Forest-based statistical sentence gen- eration. In NAACL-00.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "ACL-2000",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och and H. Ney. 2000. Improved statistical alignment models. In ACL-2000.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL-02.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Wu. 1997. Stochastic inversion transduction gram- mars and bilingual parsing of parallel corpora. Com- putational Linguistics, 23(3).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A syntax-based statistical translation model",
"authors": [
{
"first": "K",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Yamada and K. Knight. 2001. A syntax-based statis- tical translation model. In ACL-01.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Syntax-Based Statistical Translation Model",
"authors": [
{
"first": "K",
"middle": [],
"last": "Yamada",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Yamada. 2002. A Syntax-Based Statistical Transla- tion Model. Ph.D. thesis, University of Southern Cali- fornia.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"uris": null,
"text": "Channel Operations: Reorder, Insert, and Translate if",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "c is the system output length, and r is the reference length.",
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"uris": null,
"text": "Top-20 frequent phrase translations in the Viterbi alignment",
"type_str": "figure"
},
"TABREF8": {
"num": null,
"text": "",
"type_str": "table",
"content": "<table/>",
"html": null
}
}
}
}