| { |
| "paper_id": "P97-1047", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:15:23.480756Z" |
| }, |
| "title": "Decoding Algorithm in Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Ye-Yi", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Carnegie Mellon University", |
| "location": { |
| "addrLine": "5000 Forbes Avenue Pittsburgh", |
| "postCode": "15213", |
| "region": "PA", |
| "country": "USA" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Waibel", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Carnegie Mellon University", |
| "location": { |
| "addrLine": "5000 Forbes Avenue Pittsburgh", |
| "postCode": "15213", |
| "region": "PA", |
| "country": "USA" |
| } |
| }, |
| "email": "waibel@cs@cmu.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Decoding algorithm is a crucial part in statistical machine translation. We describe a stack decoding algorithm in this paper. We present the hypothesis scoring method and the heuristics used in our algorithm. We report several techniques deployed to improve the performance of the decoder. We also introduce a simplified model to moderate the sparse data problem and to speed up the decoding process. We evaluate and compare these techniques/models in our statistical machine translation system.", |
| "pdf_parse": { |
| "paper_id": "P97-1047", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Decoding algorithm is a crucial part in statistical machine translation. We describe a stack decoding algorithm in this paper. We present the hypothesis scoring method and the heuristics used in our algorithm. We report several techniques deployed to improve the performance of the decoder. We also introduce a simplified model to moderate the sparse data problem and to speed up the decoding process. We evaluate and compare these techniques/models in our statistical machine translation system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "1 Introduction", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Statistical machine translation is based on a channel model. Given a sentence T in one language (German) to be translated into another language (English), it considers T as the target of a communication channel, and its translation S as the source of the channel. Hence the machine translation task becomes to recover the source from the target. Basically every English sentence is a possible source for a German target sentence. If we assign a probability P(S I T) to each pair of sentences (S, T), then the problem of translation is to find the source S for a given target T, such that P(S [ T) is the maximum. According to Bayes rule, P(S IT) = P(S)P(T I S) P(T)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Since the denominator is independent of S, we have --arg maxP(S)P(T I S)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "S Therefore a statistical machine translation system must deal with the following three problems:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "\u2022 Modeling Problem: How to depict the process of generating a sentence in a source language, and the process used by a channel to generate a target sentence upon receiving a source sentence? The former is the problem of language modeling, and the later is the problem of translation modeling. They provide a framework for calculating P(S) and P(W I S) in (2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "\u2022 Learning Problem: Given a statistical language model P(S) and a statistical translation model P(T I S), how to estimate the parameters in these models from a bilingual corpus of sentences?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "\u2022 Decoding Problem: With a fully specified (framework and parameters) language and translation model, given a target sentence T, how to efficiently search for the source sentence that satisfies (2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "The modeling and learning issues have been discussed in (Brown et ah, 1993) , where ngram model was used for language modeling, and five different translation models were introduced for the translation process. We briefly introduce the model 2 here, for which we built our decoder. In model 2, upon receiving a source English sentence e = el,. \u2022 -, el, the channel generates a German sentence g = gl, \u2022 \u2022 \", g,n at the target end in the following way:", |
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 75, |
| "text": "(Brown et ah, 1993)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "1. With a distribution P(m I e), randomly choose the length m of the German translation g. In model 2, the distribution is independent of m and e: P(m [ e) = e where e is a small, fixed number.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "2. For each position i (0 < i < m) in g, find the corresponding position ai in e according to an alignment distribution P(ai I i, a~ -1, m, e). In model 2, the distribution only depends on i, ai and the length of the English and German sentences: a~-l,m,e) = a(ai l i, m,l) 3. Generate the word gl at the position i of the German sentence from the English word ea~ at the aligned position ai of gi, according to a translation distribution P(gi t ~t~'~, st~i-t, e) = t (gl I ea~) . The distribution here only depends on gi and eai.", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 273, |
| "text": "a~-l,m,e) = a(ai l i, m,l)", |
| "ref_id": null |
| }, |
| { |
| "start": 468, |
| "end": 478, |
| "text": "(gl I ea~)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "P(ai l i,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Therefore, P(g l e) is the sum of the probabilities of generating g from e over all possible alignments A, in which the position i in the target sentence g is aligned to the position ai in the source sentence e: , ... ~\" IT t(g# le=jla(a~ Ij, l,m) ", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 247, |
| "text": ", ... ~\" IT t(g# le=jla(a~ Ij, l,m)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "P(gle) = I l m e ~", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "= al=0 amm0j=l", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "m ! e 1\"I ~ t(g# l e,)a(ilj, t, m)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "j=l i=0 (Brown et al., 1993) also described how to use the EM algorithm to estimate the parameters a (i I j,l, m) and $(g I e) in the aforementioned model.", |
| "cite_spans": [ |
| { |
| "start": 8, |
| "end": 28, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 101, |
| "end": 113, |
| "text": "(i I j,l, m)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Translation (Brown et al., 1993) and (Vogel, Ney, and Tillman, 1996) have discussed the first two of the three problems in statistical machine translation. Although the authors of (Brown et al., 1993) stated that they would discuss the search problem in a follow-up arti-\u2022 cle, so far there have no publications devoted to the decoding issue for statistical machine translation. On the other side, decoding algorithm is a crucial part in statistical machine translation. Its performance directly affects the quality and efficiency of translation. Without a good and efficient decoding algorithm, a statistical machine translation system may miss the best translation of an input sentence even if it is perfectly predicted by the model.", |
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 32, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 37, |
| "end": 68, |
| "text": "(Vogel, Ney, and Tillman, 1996)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 180, |
| "end": 200, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding in Statistical Machine", |
| "sec_num": "1.2" |
| }, |
| { |
| "text": "Stack decoders are widely used in speech recognition systems. The basic algorithm can be described as following:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stack Decoding Algorithm", |
| "sec_num": "2" |
| }, |
| { |
| "text": "1. Initialize the stack with a null hypothesis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stack Decoding Algorithm", |
| "sec_num": "2" |
| }, |
| { |
| "text": "2. Pop the hypothesis with the highest score off the stack, name it as current-hypothesis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stack Decoding Algorithm", |
| "sec_num": "2" |
| }, |
| { |
| "text": "3. if current-hypothesis is a complete sentence, output it and terminate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stack Decoding Algorithm", |
| "sec_num": "2" |
| }, |
| { |
| "text": "4. extend current-hypothesis by appending a word in the lexicon to its end. Compute the score of the new hypothesis and insert it into the stack. Do this for all the words in the lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stack Decoding Algorithm", |
| "sec_num": "2" |
| }, |
| { |
| "text": "5. Go to 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stack Decoding Algorithm", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In stack search for statistical machine translation, a hypothesis H includes (a) the length l of the source sentence, and (b) the prefix words in the sentence. Thus a hypothesis can be written as H = l : ere2.. \"ek, which postulates a source sentence of length l and its first k words. The score of H, fit, consists of two parts: the prefix score gH for ele2\"\" ek and the heuristic score hH for the part ek+lek+2\"-et that is yet to be appended to H to complete the sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring the hypotheses", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "(3) can be used to assess a hypothesis. Although it was obtained from the alignment model, it would be easier for us to describe the scoring method if we interpret the last expression in the equation in the following way: each word el in the hypothesis contributes the amount e t(gj [ ei)a(i l J, l, m) to the probability of the target sentence word gj. For each hypothesis H = l : el,e2,-\",ek, we use SH(j) to denote the probability mass for the target word gl contributed by the words in the hypothesis: k", |
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 302, |
| "text": "[ ei)a(i l J, l, m)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prefix score gH", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "i=0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SH(j) = e~'~t(g~ lei)a(ilj, t,m)", |
| "sec_num": null |
| }, |
| { |
| "text": "Extending H with a new word will increase Sn(j),l < j < m.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SH(j) = e~'~t(g~ lei)a(ilj, t,m)", |
| "sec_num": null |
| }, |
| { |
| "text": "To make the score additive, the logarithm of the probability in (3) was used. So the prefix score contributed by the translation model is :~']~=0 log St/(j).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SH(j) = e~'~t(g~ lei)a(ilj, t,m)", |
| "sec_num": null |
| }, |
| { |
| "text": "Because our objective is to maximize P(e, g), we have to include as well the logarithm of the language model probability of the hypothesis in the score, therefore we have ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SH(j) = e~'~t(g~ lei)a(ilj, t,m)", |
| "sec_num": null |
| }, |
| { |
| "text": "gH = gp+logP(eklek-N+t'''ek-t) m + ~-'~ log[1 + et(gj l ek)a(k Ij, l, m) ~=0 se(j) ] SH(j) = Sp(j)+et(gjlek)a(klj, l,m) (5)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SH(j) = e~'~t(g~ lei)a(ilj, t,m)", |
| "sec_num": null |
| }, |
| { |
| "text": "A practical problem arises here. For a many early stage hypothesis P, Sp(j) is close to 0. This causes problems because it appears as a denominator in (5) and the argument of the log function when calculating gp. We dealt with this by either limiting the translation probability from the null word (Brown et al., 1993) at the hypothetical 0-position (Brown et al., 1993) over a threshold during the EM training, or setting SHo (j) to a small probability 7r instead of 0 for the initial null hypothesis H0. Our experiments show that lr = 10 -4 gives the best result.", |
| "cite_spans": [ |
| { |
| "start": 298, |
| "end": 318, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 350, |
| "end": 370, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SH(j) = e~'~t(g~ lei)a(ilj, t,m)", |
| "sec_num": null |
| }, |
| { |
| "text": "To guarantee an optimal search result, the heuristic function must be an upper-bound of the score for all possible extensions ek+le/c+2...et (Nilsson, 1971 ) of a hypothesis. In other words, the benefit of extending a hypothesis should never be underestimated. Otherwise the search algorithm will conclude prematurely with a non-optimal hypothesis. On the other hand, if the heuristic function overestimates the merit of extending a hypothesis too much, the search algorithm will waste a huge amount of time after it hits a correct result to safeguard the optimality.", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 155, |
| "text": "(Nilsson, 1971", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "To estimate the language model score h LM of the unrealized part of a hypothesis, we used the negative of the language model perplexity PPtrain on the training data as the logarithm of the average probability of predicting a new word in the extension from a history. So we have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "h LM = -(1 -k)PPtrai, + C. (6)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "Here is the motivation behind this. We assume that the perplexity on training data overestimates the likelihood of the forthcoming word string on average. However, when there are only a few words to be extended (k is close to 1), the language model probability for the words to be extended may be much higher than the average. This is why the constant term C was introduced in (6). When k << l, -(l-k)PPtrain is the dominating term in (6), so the heuristic language model score is close to the average. This can avoid overestimating the score too much. As k is getting closer to l, the constant term C plays a more important role in (6) to avoid underestimating the language model score. In our experiments, we used C = PPtrain +log(Pmax), where Pm== is the maximum ngram probability in the language model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "To estimate the translation model score, we introduce a variable va(j), the maximum contribution to the probability of the target sentence word gj from any possible source language words at any position between i and l: vit(j) = max t(g~ [e)a(klj, l,m ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "i<_/c<_l,eEL~ \" \"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "here LE is the English lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "Since vit (j) is independent of hypotheses, it only needs to be calculated once for a given target sentence. When k < 1, the heuristic function for the hypothesis H = 1 : ele2 -..e/c, is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "171 hH = ~max{0,1og(v(/c+Dl(j)) --logSH(j)} j=l -(t -k)PP,~=., + c (8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "where log(v(k+l)t(j))-logSg(j)) is the maximum increasement that a new word can bring to the likelihood of the j-th target word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "When k = l, since no words can be appended to the hypothesis, it is obvious that h~ = O.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "This heuristic function over-estimates the score of the upcoming words. Because of the constraints from language model and from the fact that a position in a source sentence cannot be occupied by two different words, normally the placement of words in those unfilled positions cannot maximize the likelihood of all the target words simultaneously.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "Due to physical space limitation, we cannot keep all hypotheses alive. We set a constant M, and whenever the number of hypotheses exceeds M, the algorithm will prune the hypotheses with the lowest scores. In our experiments, M was set to 20,000.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pruning and aborting search", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "There is time limitation too. It is of little practical interest to keep a seemingly endless search alive too long. So we set a constant T, whenever the decoder extends more than T hypotheses, it will abort the search and register a failure. In our experiments, T was set to 6000, which roughly corresponded to 2 and half hours of search effort.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pruning and aborting search", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The above decoder has one problem: since the heuristic function overestimates the merit of extending a hypothesis, the decoder always prefers hypotheses of a long sentence, which have a better chance to maximize the likelihood of the target words. The decoder will extend the hypothesis with large I first, and their children will soon occupy the stack and push the hypotheses of a shorter source sentence out of the stack. If the source sentence is a short one, the decoder will never be able to find it, for the hypotheses leading to it have been pruned permanently.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stack Search", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "This \"incomparable\" problem was solved with multi-stack search (Magerman, 1994) . A separate stack was used for each hypothesized source sentence length 1. We do compare hypotheses in different stacks in the following cases. First, we compare a complete sentence in a stack with the hypotheses in other stacks to safeguard the optimality of search result; Second, the top hypothesis in a stack is compared with that of another stack. If the difference is greater than a constant ~, then the less probable one will not be extended. This is called soft-pruning, since whenever the scores of the hypotheses in other stacks go down, this hypothesis may revive. In the IBM translation model 2, the alignment parameters depend on the source and target sentence length I and m. While this is an accurate model, it causes the following difficulties:", |
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 79, |
| "text": "(Magerman, 1994)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stack Search", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "1. there are too many parameters and therefore too few trainingdata per parameter. This may not be a problem when massive training data are available. However, in our application, this is a severe problem. Figure 1 plots the length distribution for the English and German sentences. When sentences get longer, there are fewer training data available.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 206, |
| "end": 214, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Multi-Stack Search", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "2. the search algorithm has to make multiple hypotheses of different source sentence length. For each source sentence length, it searches through almost the same prefix words and finally settles on a sentence length. This is a very time consuming process and makes the decoder very inefficient.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stack Search", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "To solve the first problem, we adjusted the count for the parameter a(i [ j, l, m) or i > l, or j > m. Although (9) can moderate the severity of the first data sparse problem, it does not ease the second inefficiency problem at all. We thus made a radical change to (9) by removing the precondition that (l, m) and (l', m') must be close enough. This results in a simplified translation model, in which the alignment parameters are independent of the sentence length 1 and m: P(ilj, m,e) = P (ilj, l,m) --a(i l J) here i,j < Lm, and L,n is the maximum sentence length allowed in the translation system. A slight change to the EM algorithm was made to estimate the parameters.", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 82, |
| "text": "[ j, l, m)", |
| "ref_id": null |
| }, |
| { |
| "start": 492, |
| "end": 502, |
| "text": "(ilj, l,m)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stack Search", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "There is a problem with this model: given a sentence pair g and e, when the length of e is smaller than Lm, then the alignment parameters do not sum to 1: lel a(ilj) < 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stack Search", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "i--0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stack Search", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We deal with this problem by padding e to length Lm with dummy words that never gives rise to any word in the target of the channel.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stack Search", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Since the parameters are independent of the source sentence length, we do not have to make an assumption about the length in a hypothesis. Whenever a hypothesis ends with the sentence end symbol </s> and its score is the highest, the decoder reports it as the search result. In this case, a hypothesis can be expressed as H = el,e2,...,ek, and IHI is used to denote the length of the sentence prefix of the hypothesis H, in this case, k.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stack Search", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Since we do not make assumption of the source sentence length, the heuristics described above can no longer be applied. Instead, we used the following heuristic function: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Heuristics", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h~./ = ~ max{0,1og( v(IHI+I)(IHI+n)(j))} S.(j) -n * PPt~ain + C", |
| "eq_num": "(" |
| } |
| ], |
| "section": "Heuristics", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Due to historical reasons, stack search got its current name. Unfortunately, the requirement for search states organization is far beyond what a stack and its push pop operations can handle. What we really need is a dynamic set which supports the following operations:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "1. INSERT: to insert a new hypothesis into the set. 2. DELETE: to delete a state in hard pruning. 3. MAXIMUM: to find the state with the best score to extend. 4. MINIMUM: to find the state to be pruned.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We used the Red-Black tree data structure (Cormen, Leiserson, and Rivest, 1990) to implement the dynamic set, which guarantees that the above operations take O(log n) time in the worst case, where n is the number of search states in the set.", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 79, |
| "text": "(Cormen, Leiserson, and Rivest, 1990)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We tested the performance of the decoders with the scheduling corpus (Suhm et al., 1995) . Around 30,000 parallel sentences (400,000 words altogether for both languages) were used to train the IBM model 2 and the simplified model with the EM algorithm. A larger English monolingual corpus with around 0.5 million words was used to train a bigram for language modelling. The lexicon contains 2,800 English and 4,800 German words in morphologically inflected form. We did not do any preprocessing/analysis of the data as reported in (Brown et al., 1992) . Table 1 shows the success rate of three models/decoders. As we mentioned before, the comparison between hypotheses of different sentence length made the single stack search for the IBM model 2 fail (return without a result) on a majority of the test sentences. While the multi-stack decoder improved this, the simplified model/decoder produced an output for all the 120 test sentences.", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 88, |
| "text": "(Suhm et al., 1995)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 531, |
| "end": 551, |
| "text": "(Brown et al., 1992)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 554, |
| "end": 561, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Performance", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Unlike the case in speech recognition, it is quite arguable what \"accurate translations\" means. In speech recognition an output can be compared with the sample transcript of the test data. In machine translation, a sentence may have several legitimate translations. It is difficult to compare an output from a decoder with a designated translation. Instead, we used human subjects to judge the machinemade translations. The translations are classified into three categories 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "1. Correct translations: translations that are grammatical and convey the same meaning as the inputs. 2. Okay translations: translations that convey the same meaning but with small grammatical mistakes or translations that convey most but not the entire meaning of the input.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "3. Incorrect translations: Translations that are ungrammatical or convey little meaningful information or the information is different from the input.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Examples of correct, okay, and incorrect translations are shown in Table 2 . Table 3 shows the statistics of the translation results. The accuracy was calculate by crediting a correct translation 1 point and an okay translation 1/2 point.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 67, |
| "end": 74, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 77, |
| "end": 84, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Translation Accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "There are two different kinds of errors in statistical machine translation. A modeling erivr occurs when the model assigns a higher score to an incorrect translation than a correct one. We cannot do anything about this with the decoder. A decoding Table 2 : Examples of Correct, Okay, and Incorrect Translations: for each translation, the first line is an input German sentence, the second line is the human made (target) translation for that input sentence, and the third line is the output from the decoder.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 248, |
| "end": 255, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Translation Accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "error or search error happens when the search algorithm misses a correct translation with a higher score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "When evaluating a decoding algorithm, it would be attractive if we could tell how many errors are caused by the decoder. Unfortunately, this is not attainable. Suppose that we are going to translate a German sentence g, and we know from the sample that e is one of its possible English translations. The decoder outputs an incorrect e' as the translation of g. If the score of e' is lower than that of e, we know that a search error has occurred. On the other hand, if the score of e' is higher, we cannot decide if it is a modeling error or not, since there may still be other legitimate translations with a score higher than e' -- we just do not know what they are.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Although we cannot distinguish a modeling error from a search error, the comparison between the decoder output's score and that of a sample translation can still reveal some information about the performance of the decoder. If we know that the decoder can find a sentence with a better score than a \"correct\" translation, we will be more confident that the decoder is less prone to cause errors. Table 4 shows the comparison between the score of the outputs from the decoder and the score of the sample translations when the outputs are incorrect. In most cases, the incorrect outputs have a higher score than the sample translations. Again, we consider an \"okay\" translation a half error here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "This result hints that model deficiencies may be a major source of errors. The models we used here are very simple. With a more sophisticated model, more training data, and possibly some preprocessing, the total error rate is expected to decrease.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Another important issue is the efficiency of the decoder. Figure 3 plots the average number of states being extended by the decoders. It is grouped according to the input sentence length, and evaluated on those sentences on which the decoder succeeded.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 58, |
| "end": 66, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Decoding Speed", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The average number of states being extended in the model 2 single stack search is not available for long sentences, since the decoder failed on most of the long sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding Speed", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The figure shows that the simplified model/decoder works much more efficiently than the other models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding Speed", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We have reported a stack decoding algorithm for the IBM statistical translation model 2 and a simplified model. Because the simplified model has fewer parameters and does not have to posit hypotheses with the same prefixes but different lengths, it outperformed IBM model 2 with regard to both accuracy and efficiency, especially in our application, which lacks a massive amount of training data. In most cases, the erroneous outputs from the decoder have a higher score than the human-made translations. Therefore it is less likely that the decoder is a major contributor of translation errors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "This is roughly the same as the classification in IBM statistical translation, except that we do not have \"legitimate translation that conveys different meaning from the input\" -- we did not observe this case in our outputs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank John Lafferty for enlightening discussions on this work. We would also like to thank the anonymous ACL reviewers for valuable comments. This research was partly supported by ATR and the Verbmobil Project. The views and conclusions in this document are those of the authors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The Mathematics of Statistical Machine Translation: Parameter Estimation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Della-Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brown, P. F., S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263-311.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Analysis, Statistical Transfer, and Synthesis in Machine Translation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "J" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the fourth International Conference on Theoretical and Methodological Issues in Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "83--100", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brown, P. F., S. A. Della Pietra, V. J. Della Pietra, J. D. Lafferty, and R. L. Mercer. 1992. Analysis, Statistical Transfer, and Synthesis in Machine Translation. In Proceedings of the fourth International Conference on Theoretical and Methodological Issues in Machine Translation, pages 83-100.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Introduction to Algorithms", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "H" |
| ], |
| "last": "Cormen", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "E" |
| ], |
| "last": "Leiserson", |
| "suffix": "" |
| }, |
| { |
| "first": "Ronald", |
| "middle": [ |
| "L" |
| ], |
| "last": "Rivest", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cormen, Thomas H., Charles E. Leiserson, and Ronald L. Rivest. 1990. Introduction to Algorithms. The MIT Press, Cambridge, Massachusetts.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Natural Language Parsing as Statistical Pattern Recognition", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Magerman", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Magerman, D. 1994. Natural Language Parsing as Statistical Pattern Recognition. Ph.D. thesis, Stanford University.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Problem-Solving Methods in Artificial Intelligence", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| } |
| ], |
| "year": 1971, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nilsson, N. 1971. Problem-Solving Methods in Artificial Intelligence. McGraw Hill, New York, New York.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "JANUS: Towards multilingual spoken language translation", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Suhm", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Geutner", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Kemp", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Mayfield", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mcnair", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Rogina", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Sloboda", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Woszczyna", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Waibel", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the ARPA Speech Spoken Language Technology Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Suhm, B., P. Geutner, T. Kemp, A. Lavie, L. Mayfield, A. McNair, I. Rogina, T. Schultz, T. Sloboda, W. Ward, M. Woszczyna, and A. Waibel. 1995. JANUS: Towards multilingual spoken language translation. In Proceedings of the ARPA Speech Spoken Language Technology Workshop, Austin, TX, 1995.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "HMM-Based Word Alignment in Statistical Translation", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Tillman", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the Seventeenth International Conference on Computational Linguistics: COLING-96", |
| "volume": "", |
| "issue": "", |
| "pages": "836--841", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vogel, S., H. Ney, and C. Tillman. 1996. HMM-Based Word Alignment in Statistical Translation. In Proceedings of the Seventeenth International Conference on Computational Linguistics: COLING-96, pages 836-841, Copenhagen, Denmark.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "sum_i log P(e_i | e_{i-N+1} ... e_{i-1}); here N is the order of the n-gram language model. The g-score g_H of a hypothesis H = l : e_1 e_2 ... e_k can be calculated from the g-score of its parent hypothesis P = l : e_1 e_2 ... e_{k-1}:", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Figure 1: Sentence Length Distribution", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "text": "in the EM parameter estimation by adding to it the counts for the parameters a(i | j, l', m'), assuming (l, m) and (l', m') are close enough. The closeness was measured in Euclidean distance (Figure 2): each x/y position represents a different source/target sentence length, and the dark dot at the intersection (l, m) corresponds to the set of counts for the alignment parameters a(. | ., l, m) in the EM estimation. The adjusted counts are the sum of the counts in the neighboring sets residing inside the circle centered at (l, m) with radius r; we took r = 3 in our experiment. So we have c'(i | j, l, m) = sum over (l-l')^2 + (m-m')^2 <= r^2 of c(i | j, l', m'; e, g) (9), where c'(i | j, l, m) is the adjusted count for the parameter a(i | j, l, m), c(i | j, l, m; e, g) is the expected count for a(i | j, l, m) from a paired sentence (e, g), and c(i | j, l, m; e, g) = 0 when |e| != l or |g| != m,", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "text": "Figure 3: Extended States versus Target Sentence Length", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF3": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td colspan=\"3\">Total Errors Scoree > Scoree, Scoree < Seoree,</td></tr><tr><td>Model 2, Multi-Stack</td><td>38</td><td>3.5 (7.9%)</td><td>34.5 (92.1%)</td></tr><tr><td>Simplified Model</td><td>48.5</td><td>4.5 (9.3%)</td><td>44 (90.7%)</td></tr></table>", |
| "text": "Translation Accuracy" |
| }, |
| "TABREF4": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "Sample Translations versus Machine-Made Translations" |
| } |
| } |
| } |
| } |