{
"paper_id": "P05-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:37:20.543858Z"
},
"title": "Scaling Phrase-Based Statistical Machine Translation to Larger Corpora and Longer Phrases",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Colin",
"middle": [],
"last": "Bannard",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Josh",
"middle": [],
"last": "Schroeder",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we describe a novel data structure for phrase-based statistical machine translation which allows for the retrieval of arbitrarily long phrases while simultaneously using less memory than is required by current decoder implementations. We detail the computational complexity and average retrieval times for looking up phrase translations in our suffix array-based data structure. We show how sampling can be used to reduce the retrieval time by orders of magnitude with no loss in translation quality.",
"pdf_parse": {
"paper_id": "P05-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we describe a novel data structure for phrase-based statistical machine translation which allows for the retrieval of arbitrarily long phrases while simultaneously using less memory than is required by current decoder implementations. We detail the computational complexity and average retrieval times for looking up phrase translations in our suffix array-based data structure. We show how sampling can be used to reduce the retrieval time by orders of magnitude with no loss in translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Statistical machine translation (SMT) has an advantage over many other statistical natural language processing applications in that training data is regularly produced by other human activity. For some language pairs very large sets of training data are now available. The publications of the European Union and United Nations provide gigbytes of data between various language pairs which can be easily mined using a web crawler. The Linguistics Data Consortium provides an excellent set of off the shelf Arabic-English and Chinese-English parallel corpora for the annual NIST machine translation evaluation exercises.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The size of the NIST training data presents a problem for phrase-based statistical machine translation. Decoders such as Pharaoh (Koehn, 2004) primarily use lookup tables for the storage of phrases and their translations. Since retrieving longer segments of hu-man translated text generally leads to better translation quality, participants in the evaluation exercise try to maximize the length of phrases that are stored in lookup tables. The combination of large corpora and long phrases means that the table size can quickly become unwieldy.",
"cite_spans": [
{
"start": 129,
"end": 142,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A number of groups in the 2004 evaluation exercise indicated problems dealing with the data. Coping strategies included limiting the length of phrases to something small, not using the entire training data set, computing phrases probabilities on disk, and filtering the phrase table down to a manageable size after the testing set was distributed. We present a data structure that is easily capable of handling the largest data sets currently available, and show that it can be scaled to much larger data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Motivate the problem with storing enumerated phrases in a table by examining the memory requirements of the method for the NIST data set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Detail the advantages of using long phrases in SMT, and examine their potential coverage",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Describe a suffix array-based data structure which allows for the retrieval of translations of arbitrarily long phrases, and show that it requires far less memory than a table",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Calculate the computational complexity and average time for retrieving phrases and show how this can be sped up by orders of magnitude with no loss in translation accuracy 2 Related Work Koehn et al. (2003) translation including the joint probability phrasebased model (Marcu and Wong, 2002) and a variant on the alignment template approach (Och and Ney, 2004) , and contrast them to the performance of the word-based IBM Model 4 (Brown et al., 1993) . Most relevant for the work presented in this paper, they compare the effect on translation quality of using various lengths of phrases, and the size of the resulting phrase probability tables. Tillmann (2003) further examines the relationship between maximum phrase length, size of the translation table, and accuracy of translation when inducing block-based phrases from word-level alignments. and present methods for achieving better translation quality by growing incrementally larger phrases by combining smaller phrases with overlapping segments. Table 1 gives statistics about the Arabic-English parallel corpus used in the NIST large data track. The corpus contains 3.75 million sentence pairs, and has 127 million words in English, and 106 million words in Arabic. The table shows the number of unique Arabic phrases, and gives the average number of translations into English and their average length. Table 2 gives estimates of the size of the lookup tables needed to store phrases of various lengths, based on the statistics in Table 3 : Lengths of phrases from the training data that occur in the NIST-2004 test set phrases times the average number of translations. The number of words in the table is calculated as the number of unique phrases times the phrase length plus the number of entries times the average translation length. 
The memory is calculated assuming that each word is represented with a 4 byte integer, that each entry stores its probability as an 8 byte double and that each word alignment is stored as a 2 byte short. Note that the size of the table will vary depending on the phrase extraction technique. Table 3 gives the percent of the 35,313 word long test set which can be covered using only phrases of the specified length or greater. The table shows the efficacy of using phrases of different lengths. The table shows that while the rate of falloff is rapid, there are still multiple matches of phrases of length 10. The longest matching phrase was one of length 18. There is little generalization in current SMT implementations, and consequently longer phrases generally lead to better translation quality.",
"cite_spans": [
{
"start": 189,
"end": 208,
"text": "Koehn et al. (2003)",
"ref_id": "BIBREF1"
},
{
"start": 271,
"end": 293,
"text": "(Marcu and Wong, 2002)",
"ref_id": "BIBREF5"
},
{
"start": 343,
"end": 362,
"text": "(Och and Ney, 2004)",
"ref_id": "BIBREF6"
},
{
"start": 432,
"end": 452,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF0"
},
{
"start": 648,
"end": 663,
"text": "Tillmann (2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1007,
"end": 1014,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1365,
"end": 1372,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1493,
"end": 1500,
"text": "Table 3",
"ref_id": null
},
{
"start": 2092,
"end": 2099,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Statistical machine translation made considerable advances in translation quality with the introduction of phrase-based translation. By increasing the size of the basic unit of translation, phrase-based machine translation does away with many of the problems associated with the original word-based formulation of statistical machine translation (Brown et al., 1993) , in particular:",
"cite_spans": [
{
"start": 346,
"end": 366,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why use phrases?",
"sec_num": "3.1"
},
{
"text": "\u2022 The Brown et al. (1993) formulation doesn't have a direct way of translating phrases; instead they specify a fertility parameter which is used to replicate words and translate them individually.",
"cite_spans": [
{
"start": 6,
"end": 25,
"text": "Brown et al. (1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why use phrases?",
"sec_num": "3.1"
},
{
"text": "\u2022 With units as small as words, a lot of reordering has to happen between languages with different word orders. But the distortion parameter is a poor explanation of word order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why use phrases?",
"sec_num": "3.1"
},
{
"text": "Phrase-based SMT overcomes the first of these problems by eliminating the fertility parameter and directly handling word-to-phrase and phrase-tophrase mappings. The second problem is alleviated through the use of multi-word units which reduce the dependency on the distortion parameter. Less word re-ordering need occur since local dependencies are frequently captured. For example, common adjective-noun alternations are memorized. However, since this linguistic information is not encoded in the model, unseen adjective noun pairs may still be handled incorrectly. By increasing the length of phrases beyond a few words, we might hope to capture additional non-local linguistic phenomena. For example, by memorizing longer phrases we may correctly learn case information for nouns commonly selected by frequently occurring verbs; we may properly handle discontinuous phrases (such as French negation, some German verb forms, and English verb particle constructions) that are neglected by current phrasebased models; and we may by chance capture some agreement information in coordinated structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why use phrases?",
"sec_num": "3.1"
},
{
"text": "Despite the potential gains from memorizing longer phrases, the fact remains that as phrases get longer Table 4 : Coverage using only repeated phrases of the specified length there is a decreasing likelihood that they will be repeated. Because of the amount of memory required to store a phrase table, in current implementations a choice is made as to the maximum length of phrase to store.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Deciding what length of phrase to store",
"sec_num": "3.2"
},
{
"text": "Based on their analysis of the relationship between translation quality and phrase length, Koehn et al. (2003) suggest limiting phrase length to three words or less. This is entirely a practical suggestion for keeping the phrase table to a reasonable size, since they measure minor but incremental improvement in translation quality up to their maximum tested phrase length of seven words. 1 Table 4 gives statistics about phrases which occur more than once in the English section of the Europarl corpus (Koehn, 2002) which was used in the Koehn et al. (2003) experiments. It shows that the percentage of words in the corpus that can be covered by repeated phrases falls off rapidly at length 6, but that even phrases up to length 10 are able to cover a non-trivial portion of the corpus. This draws into question the desirability of limiting phrase retrieval to length three.",
"cite_spans": [
{
"start": 91,
"end": 110,
"text": "Koehn et al. (2003)",
"ref_id": "BIBREF1"
},
{
"start": 504,
"end": 517,
"text": "(Koehn, 2002)",
"ref_id": "BIBREF2"
},
{
"start": 540,
"end": 559,
"text": "Koehn et al. (2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 392,
"end": 399,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Deciding what length of phrase to store",
"sec_num": "3.2"
},
{
"text": "The decision concerning what length of phrases to store in the phrase table seems to boil down to a practical consideration: one must weigh the likelihood of retrieval against the memory needed to store longer phrases. We present a data structure where this is not a consideration. Our suffix arraybased data structure allows the retrieval of arbitrarily long phrases, while simultaneously requiring far less memory than the standard ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deciding what length of phrase to store",
"sec_num": "3.2"
},
{
"text": "The suffix array data structure (Manber and Myers, 1990 ) was introduced as a space-economical way of creating an index for string searches. The suffix array data structure makes it convenient to compute the frequency and location of any substring or ngram in a large corpus. Abstractly, a suffix array is an alphabetically-sorted list of all suffixes in a corpus, where a suffix is a substring running from each position in the text to the end. However, rather than actually storing all suffixes, a suffix array can be constructed by creating a list of references to each of the suffixes in a corpus. Figure 1 shows how a suffix array is initialized for a corpus with one sentence. Each index of a word in the corpus has a corresponding place in the suffix array, which is identical in length to the corpus. Figure 2 shows the final state of the suffix array, which is as a list of the indices of words in the corpus that corresponds to an alphabetically sorted list of the suffixes.",
"cite_spans": [
{
"start": 32,
"end": 55,
"text": "(Manber and Myers, 1990",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 602,
"end": 610,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 809,
"end": 817,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Suffix Arrays",
"sec_num": "4"
},
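The construction and lookup just described can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `build_suffix_array` and `find_phrase` are our names, and the naive `sorted`-with-key construction stands in for an efficient suffix-sorting algorithm.

```python
# Minimal suffix-array sketch over a word-tokenized corpus.
# Naive construction; real systems use much faster suffix sorting.
from bisect import bisect_left, bisect_right

def build_suffix_array(corpus):
    # One entry per word position, ordered by the suffix starting there.
    return sorted(range(len(corpus)), key=lambda i: corpus[i:])

def find_phrase(corpus, sa, phrase):
    # Two binary searches: first and last suffix beginning with `phrase`.
    n = len(phrase)
    keys = [tuple(corpus[i:i + n]) for i in sa]  # sketch: precomputed prefixes
    lo = bisect_left(keys, tuple(phrase))
    hi = bisect_right(keys, tuple(phrase))
    return sorted(sa[k] for k in range(lo, hi))  # corpus positions of matches

corpus = "spain declined to confirm that spain declined to aid morocco".split()
sa = build_suffix_array(corpus)
print(find_phrase(corpus, sa, ["spain", "declined"]))  # [0, 5]
```

Because the suffixes are stored in sorted order, both boundary searches cost O(log n) comparisons, which is the property the complexity analysis in Section 5 relies on.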
{
"text": "The advantages of this representation are that it is compact and easily searchable. The total size of the suffix array is a constant amount of memory. Typically it is stored as an array of integers where the array is the same length as the corpus. Because it is organized alphabetically, any phrase can be quickly located within it using a binary search algorithm. Yamamoto and Church (2001) show how to use suffix arrays to calculate a number of statistics that are interesting in natural language processing applications. They demonstrate how to calculate term fre- Figure 2: A sorted suffix array and its corresponding suffixes quency / inverse document frequency (tf / idf) for all n-grams in very large corpora, as well as how to use these frequencies to calculate n-grams with high mutual information and residual inverse document frequency. Here we show how to apply suffix arrays to parallel corpora to calculate phrase translation probabilities.",
"cite_spans": [
{
"start": 365,
"end": 391,
"text": "Yamamoto and Church (2001)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Suffix Arrays",
"sec_num": "4"
},
{
"text": "In order to adapt suffix arrays to be useful for statistical machine translation we need a data structure with the following elements:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applied to parallel corpora",
"sec_num": "4.1"
},
{
"text": "\u2022 A suffix array created from the source language portion of the corpus, and another created from the target language portion of the corpus,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applied to parallel corpora",
"sec_num": "4.1"
},
{
"text": "\u2022 An index that tells us the correspondence between sentence numbers and positions in the source and target language corpora,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applied to parallel corpora",
"sec_num": "4.1"
},
{
"text": "\u2022 An alignment a for each sentence pair in the parallel corpus, where a is defined as a subset of the Cartesian product of the word positions in a sentence e of length I and a sentence f of length J:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applied to parallel corpora",
"sec_num": "4.1"
},
{
"text": "a \u2286 {(i, j) : i = 1...I; j = 1...J}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applied to parallel corpora",
"sec_num": "4.1"
},
{
"text": "\u2022 A method for extracting the translationally equivalent phrase for a subphrase given an aligned sentence pair containing that subphrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applied to parallel corpora",
"sec_num": "4.1"
},
{
"text": "The total memory usage of the data structure is thus the size of the source and target corpora, plus the size of the suffix arrays (identical in length to the corpora), plus the size of the two indexes that correlate sentence positions with word positions, plus the size of the alignments. Assuming we use ints to represent words and indices, and shorts to represent word alignments, we get the following memory usage: Or just over 2 Gigabytes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applied to parallel corpora",
"sec_num": "4.1"
},
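As a rough check of this arithmetic, using the corpus statistics quoted earlier (127 million English words, 106 million Arabic words, 3.75 million sentence pairs) and an assumed alignment-link count of roughly one link per English word (the paper does not state the link count):

```python
# Back-of-the-envelope memory estimate for the suffix array-based structure.
# alignment_links is an assumption (~1 link per English word).
INT, SHORT = 4, 2  # bytes
english_words, arabic_words = 127_000_000, 106_000_000
sentences = 3_750_000
alignment_links = 127_000_000  # assumed; not stated in the paper

corpora = (english_words + arabic_words) * INT        # source + target text
suffix_arrays = (english_words + arabic_words) * INT  # same length as corpora
indexes = 2 * sentences * INT                         # sentence <-> word position
alignments = alignment_links * 2 * SHORT              # (i, j) pairs as shorts

total_gb = (corpora + suffix_arrays + indexes + alignments) / 2**30
print(round(total_gb, 2))  # just over 2 GB
```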
{
"text": "In order to produce a set of phrase translation probabilities, we need to examine the ways in which they are calculated. We consider two common ways of calculating the translation probability: using the maximum likelihood estimator (MLE) and smoothing the MLE using lexical weighting. The maximum likelihood estimator for the probability of a phrase is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(f |\u0113) = count(f ,\u0113) f count(f ,\u0113)",
"eq_num": "(1)"
}
],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
{
"text": "Where count(f ,\u0113) gives the total number of times the phrasef was aligned with the phrase\u0113 in the parallel corpus. We define phrase alignments as follows. A substring\u0113 consisting of the words at positions l...m is aligned with the phrasef by way of the subalignment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
{
"text": "s = a \u2229 {(i, j) : i = l...m, j = 1...J}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
{
"text": "The aligned phrasef is the subphrase in f which spans from min(j) to max(j) for j|(i, j) \u2208 s. The procedure for generating the counts that are used to calculate the MLE probability using our suffix array-based data structures is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
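The min(j)/max(j) extraction rule can be sketched as follows. This is a simplified illustration with 0-based indices and invented example data; the function and variable names are ours.

```python
# Extract the target phrase aligned to the source span l..m (inclusive),
# following the min(j)..max(j) rule described above.
def extract_aligned_phrase(f_sentence, alignment, l, m):
    # Subalignment s: links whose source position falls inside l..m.
    s = [(i, j) for (i, j) in alignment if l <= i <= m]
    if not s:
        return None  # unaligned span: no phrase pair is extracted
    js = [j for (_, j) in s]
    return f_sentence[min(js):max(js) + 1]

f = "das kleine haus ist alt".split()
a = [(0, 0), (1, 2), (2, 1), (3, 3), (4, 4)]  # hypothetical (i, j) links
print(extract_aligned_phrase(f, a, 1, 2))  # ['kleine', 'haus']
```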
{
"text": "1. Locate all the suffixes in the English suffix array which begin with the phrase\u0113. Since the suffix array is sorted alphabetically we can easily find the first occurrence s 3. Use a to extract the target phrasef that aligns with the phrase\u0113 that we are searching for. Increment the count for <f ,\u0113 >.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
{
"text": "4. Calculate the probability for each unique matching phrasef using the formula in Equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
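The counting and normalization of steps 1-4 reduce to Equation 1 once the phrase pairs have been extracted; a toy sketch, where the pair list stands in for the suffix-array retrieval:

```python
# MLE phrase translation probabilities (Equation 1) from extracted pairs.
from collections import Counter

def mle_probabilities(extracted_pairs):
    # extracted_pairs: (f_phrase, e_phrase) tuples produced by step 3.
    counts = Counter(extracted_pairs)
    e_totals = Counter(e for (_, e) in extracted_pairs)
    return {(f, e): c / e_totals[e] for (f, e), c in counts.items()}

pairs = [("la maison", "the house")] * 3 + [("la demeure", "the house")]
probs = mle_probabilities(pairs)
print(probs[("la maison", "the house")])  # 0.75
```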
{
"text": "A common alternative formulation of the phrase translation probability is to lexically weight it as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
{
"text": "p lw (f |\u0113, s) = n i=1 1 |{i|(i, j) \u2208 s}| \u2200(i,j)\u2208s p(f j |e i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
{
"text": "(2) Where n is the length of\u0113.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
{
"text": "In order to use lexical weighting we would need to repeat steps 1-4 above for each word e i in\u0113. This would give us the values for p(f j |e i ). We would further need to retain the subphrase alignment s in order to know the correspondence between the words (i, j) \u2208 s in the aligned phrases, and the total number of foreign words that each e i is aligned with (|{i|(i, j) \u2208 s}|). Since a phrase alignment <f ,\u0113 > may have multiple possible word-level alignments, we retain a set of alignments S and take the maximum:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(f |\u0113, S) = p(f |\u0113) * arg max s\u2208S p lw (f |\u0113, s)",
"eq_num": "(3)"
}
],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
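Equations 2 and 3 can be sketched as follows, with toy word-translation probabilities. In this sketch, unaligned source words are simply skipped rather than aligned to a NULL token; variable names are ours.

```python
# Lexical weighting (Equation 2) and the max over alignments (Equation 3).
def lexical_weight(s, p_word, n):
    # s: word links (i, j) within the phrase pair; n: source phrase length.
    total = 1.0
    for i in range(n):
        links = [(x, j) for (x, j) in s if x == i]
        if links:  # average p(f_j | e_i) over the words linked to e_i
            total *= sum(p_word[(i, j)] for (_, j) in links) / len(links)
    return total

def weighted_phrase_prob(p_mle, alignments, p_word, n):
    # Equation 3: multiply the MLE estimate by the best alignment's weight.
    return p_mle * max(lexical_weight(s, p_word, n) for s in alignments)

p_word = {(0, 0): 0.5, (1, 1): 0.4, (1, 0): 0.1}  # toy p(f_j | e_i) values
S = [[(0, 0), (1, 1)], [(0, 0), (1, 0), (1, 1)]]
print(weighted_phrase_prob(0.75, S, p_word, n=2))
```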
{
"text": "Thus our suffix array-based data structure can be used straightforwardly to look up all aligned translations for a given phrase and calculate the probabilities on-the-fly. In the next section we turn to the computational complexity of constructing phrase translation probabilities in this way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating phrase translation probabilities",
"sec_num": "4.2"
},
{
"text": "Computational complexity is relevant because there is a speed-memory tradeoff when adopting our data structure. What we gained in memory efficiency may be rendered useless if the time it takes to calculate phrase translation probabilities is unreasonably long. The computational complexity of looking up items in a hash table, as is done in current tablebased data structures, is extremely fast. Looking up a single phrase can be done in unit time, O(1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "The computational complexity of our method has the following components:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "\u2022 The complexity of finding all occurrences of the phrase in the suffix array",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "\u2022 The complexity of retrieving the associated aligned sentence pairs given the positions of the phrase in the corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "\u2022 The complexity of extracting all aligned phrases using our phrase extraction algorithm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "\u2022 The complexity of calculating the probabilities given the aligned phrases",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "The methods we use to execute each of these, and their complexities are as follow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "\u2022 Since the array is sorted, finding all occurrences of the English phrase is extremely fast. We can do two binary searches: one to find the first occurrence of the phrase and a second to find the last. The computational complexity is therefore bounded by O(2 log(n)) where n is the length of the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "\u2022 We use a similar method to look up the sentences e i and f i and word-level alignment a i \u2022 The complexity of extracting the aligned phrase for a single occurrence of\u0113 i is O(2 log(|a i |) to get the subphrase alignment s i , since we store the alignments in a sorted array. The complexity of then gettingf i from s i is O(length(f i )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "\u2022 The complexity of summing over all aligned phrases and simultaneously calculating their probabilities is O(k).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "Thus we have a total complexity of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "O(2 log(n) + k * 2 log(m) (4) +\u0113 1 ...\u0113 k a i ,f i |\u0113 i (2 log(|a i |) + length(f i )) + k)",
"eq_num": "(5)"
}
],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "for the MLE estimation of the translation probabilities for a single phrase. The complexity is dominated by the k terms in the equation, when the number of occurrences of the phrase in the corpus is high. Phrases with high frequency may cause excessively long retrieval time. This problem is exacerbated when we shift to a lexically weighted calculation of the phrase translation probability. The complexity will be multiplied across each of the component words in the phrase, and the component words themselves will be more frequent than the phrase. Table 5 shows example times for calculating the translation probabilities for a number of phrases. For frequent phrases like of the these times get unacceptably long. While our data structure is perfect for overcoming the problems associated with storing the translations of long, infrequently occurring phrases, it in a way introduces the converse problem. It has a clear disadvantage in the amount of time it takes to retrieve commonly occurring phrases. In the next section we examine the use of sampling to speed up the calculation of translation probabilities for very frequent phrases.",
"cite_spans": [],
"ref_spans": [
{
"start": 551,
"end": 558,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computational Complexity",
"sec_num": "5"
},
{
"text": "Rather than compute the phrase translation probabilities by examining the hundreds of thousands of occurrences of common phrases, we instead sample from a small subset of the occurrences. It is unlikely that we need to extract the translations of all occurrences of a high frequency phrase in order to get a good approximation of their probabilities. We instead cap the number of occurrences that we consider, and thus give a maximum bound on k in Equation 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling",
"sec_num": "6"
},
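Capping the number of occurrences can be as simple as subsampling the occurrence list before phrase extraction. An evenly spaced subsample is shown here; the paper does not specify its sampling scheme, so this is one plausible sketch.

```python
# Bound the number of occurrences examined per phrase (caps k in Equation 5).
def sample_occurrences(positions, cap=100):
    if len(positions) <= cap:
        return positions
    # Take an evenly spaced subsample across the occurrence list.
    step = len(positions) / cap
    return [positions[int(k * step)] for k in range(cap)]

hits = list(range(100_000))  # e.g. suffix-array matches of a frequent phrase
print(len(sample_occurrences(hits)))  # 100
```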
{
"text": "In order to determine the effect of different levels of sampling, we compare the translation quality against cumulative retrieval time for calculating the phrase translation probabilities for all subphrases in an evaluation set. We translated a held out set of 430 German sentences with 50 words or less into English. The test sentences were drawn from the 01/17/00 proceedings of the Europarl corpus. The remainder of the corpus (1 million sentences) was used as training data to calculate the phrase translation probabilities. We calculated the translation quality using Bleu's modified n-gram precision metric (Papineni et al., 2002) for n-grams of up to length four. The framework that we used to calculate the translation probabilities was similar to that detailed in Koehn et al. (2003) . That is:",
"cite_spans": [
{
"start": 613,
"end": 636,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF7"
},
{
"start": 773,
"end": 792,
"text": "Koehn et al. (2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e = arg max e I 1 p(e I 1 |f I 1 )",
"eq_num": "(6)"
}
],
"section": "Sampling",
"sec_num": "6"
},
{
"text": "= arg max_{e_1^I} p_LM(e_1^I) * \u220f_{i=1}^{I} p(f\u0304_i | \u0113_i) d(a_i \u2212 b_{i-1}) p_lw(f\u0304_i | \u0113_i, a) (7)-(8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling",
"sec_num": "6"
},
{
"text": "Where p LM is a language model probability and d is a distortion probability which penalizes movement. Table 6 : A comparison of retrieval times and translation quality when the number of translations is capped at various sample sizes curacy fluctuates very slightly it essentially remains uniformly high for all levels of sampling. There are a number of possible reasons for the fact that the quality does not decrease:",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 110,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Sampling",
"sec_num": "6"
},
{
"text": "\u2022 The probability estimates under sampling are sufficiently good that the most probable translations remain unchanged,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling",
"sec_num": "6"
},
{
"text": "\u2022 The interaction with the language model probability rules out the few misestimated probabilities, or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling",
"sec_num": "6"
},
{
"text": "\u2022 The decoder tends to select longer or less frequent phrases which are not affected by the sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling",
"sec_num": "6"
},
{
"text": "While the translation quality remains essentially unchanged, the cumulative time that it takes to calculate the translation probabilities for all subphrases in the 430 sentence test set decreases radically. The total time drops by orders of magnitude from an hour and a half without sampling down to a mere 10 seconds with a cavalier amount of sampling. This suggests that the data structure is suitable for deployed SMT systems and that no additional caching need be done to compensate for the structure's computational complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling",
"sec_num": "6"
},
{
"text": "The paper has presented a super-efficient data structure for phrase-based statistical machine translation. We have shown that current table-based methods are unwieldily when used in conjunction with large data sets and long phrases. We have contrasted this with our suffix array-based data structure which provides a very compact way of storing large data sets while simultaneously allowing the retrieval of arbitrarily long phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "For the NIST-2004 Arabic-English data set, which is among the largest currently assembled for statistical machine translation, our representation uses a very manageable 2 gigabytes of memory. This is less than is needed to store a table containing phrases with a maximum of three words, and is ten times less than the memory required to store a table with phrases of length eight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
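The 2-gigabyte figure can be checked directly against the memory formula given with the data structure. This is a sketch under the assumption of 4-byte ints and 2-byte shorts (byte widths the paper's `sizeof` calls do not pin down):

```python
SIZEOF_INT, SIZEOF_SHORT = 4, 2   # assumed byte widths for int and short

# Corpus statistics for the NIST-2004 Arabic-English data (from the paper)
src_words  = 105_994_774    # words in the source corpus
trg_words  = 127_450_473    # words in the target corpus
sent_pairs = 3_758_904      # sentence pairs
alignments = 92_975_229     # word-alignment points

total_bytes = (2 * src_words  * SIZEOF_INT     # source corpus + suffix array
             + 2 * trg_words  * SIZEOF_INT     # target corpus + suffix array
             + 2 * sent_pairs * SIZEOF_INT     # sentence-boundary indices
             + alignments     * SIZEOF_SHORT)  # word-alignment points

print(total_bytes, round(total_bytes / 2**30, 2))   # 2083583666 1.94
```

About 1.94 GiB, consistent with the "very manageable 2 gigabytes" claimed above.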
{
"text": "We have further demonstrated that while computational complexity can make the retrieval of translation of frequent phrases slow, the use of sampling is an extremely effective countermeasure to this. We demonstrated that calculating phrase translation probabilities from sets of 100 occurrences or less results in nearly no decrease in translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "The implications of the data structure presented in this paper are significant. The compact representation will allow us to easily scale to parallel corpora consisting of billions of words of text, and the retrieval of arbitrarily long phrases will allow experiments with alternative decoding strategies. These facts in combination allow for an even greater exploitation of training data in statistical machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "While the improvements to translation quality reported inKoehn et al. (2003) are minor, their evaluation metric may not have been especially sensitive to adding longer phrases. They used the Bleu evaluation metric(Papineni et al., 2002), but capped the n-gram precision at 4-grams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The mathematics of machine translation: Parameter estimation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Brown, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer. 1993. The mathematics of ma- chine translation: Parameter estimation. Computa- tional Linguistics, 19(2):263-311, June.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceed- ings of HLT/NAACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Europarl: A multilingual corpus for evaluation of machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2002. Europarl: A multilingual corpus for evaluation of machine translation. Unpublished Draft.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Pharaoh: A beam search decoder for phrase-based statistical machine translation models",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Pharaoh: A beam search decoder for phrase-based statistical machine translation mod- els. In Proceedings of AMTA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Suffix arrays: A new method for on-line string searches",
"authors": [
{
"first": "Udi",
"middle": [],
"last": "Manber",
"suffix": ""
},
{
"first": "Gene",
"middle": [],
"last": "Myers",
"suffix": ""
}
],
"year": 1990,
"venue": "The First Annual ACM-SIAM Symposium on Dicrete Algorithms",
"volume": "",
"issue": "",
"pages": "319--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Udi Manber and Gene Myers. 1990. Suffix arrays: A new method for on-line string searches. In The First Annual ACM-SIAM Symposium on Dicrete Algo- rithms, pages 319-327.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A phrase-based, joint probability model for statistical machine translation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine transla- tion. In Proceedings of EMNLP.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The alignment template approach to statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "4",
"pages": "417--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2004. The align- ment template approach to statistical machine transla- tion. Computational Linguistics, 30(4):417-450, De- cember.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic evalu- ation of machine translation. In Proceedings of ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A projection extension algorithm for statistical machine translation",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Tillmann. 2003. A projection extension algo- rithm for statistical machine translation. In Proceed- ings of EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Effective phrase translation extraction from alignment models",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Venugopal, Stephan Vogel, and Alex Waibel. 2003. Effective phrase translation extraction from alignment models. In Proceedings of ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The CMU statistical machine translation system",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Tribble",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of MT Summit 9",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Vogel, Ying Zhang, Fei Huang, Alicia Trib- ble, Ashish Venugopal, Bing Zhao, and Alex Waibel. 2003. The CMU statistical machine translation sys- tem. In Proceedings of MT Summit 9.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Using suffix arrays to compute term frequency and document frequency for all substrings in a corpus",
"authors": [
{
"first": "Mikio",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 2001,
"venue": "Compuatational Linguistics",
"volume": "27",
"issue": "1",
"pages": "1--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikio Yamamoto and Kenneth Church. 2001. Using suf- fix arrays to compute term frequency and document frequency for all substrings in a corpus. Compuata- tional Linguistics, 27(1):1-30.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "An initialized, unsorted suffix array for a very small corpus",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "spain declined to aid morocco morocco spain declined to aid morocco declined to confirm that spain declined to aid morocco declined to aid morocco confirm that spain declined to aid morocco aid morocco that spain declined to aid morocco spain declined to confirm that spain declined to aid morocco",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "num words in source corpus * sizeof (int)+ 2 * num words in target corpus * sizeof (int)+ 2 * number sentence pairs * sizeof (int)+ number of word alignments * sizeof (short) The total amount of memory required to store the NIST Arabic-English data using this data structure is 2 * 105,994,774 * sizeof (int)+ 2 * 127,450,473 * sizeof (int)+ 2 * 3,758,904 * sizeof (int)+ 92,975,229 * sizeof (short)",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "[k] and the last occurrence s[l]. The length of the span in the suffix array l\u2212k+1 indicates the number of occurrences of\u0113 in the corpus. Thus the denominator f count(f ,\u0113) can be calculated as l \u2212 k + 1.2. For each of the matching phrases s[i] in the span s[k]...s[l], look up the value of s[i] which is the word index w of the suffix in the English corpus. Look up the sentence number that includes w, and retrieve the corresponding sentences e and f , and their alignment a.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Examples of O and calculation times for phrases of different frequencies that are associated with the position w i in the corpus of each phrase occurrence\u0113 i . The complexity is O(k * 2 log(m)) where k is the number of occurrences of\u0113 and m is the number of sentence pairs in the parallel corpus.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "The number of unique entries is calculated as the number unique",
"content": "<table><tr><td colspan=\"2\">length entries</td><td>words</td><td>memory</td><td>including</td></tr><tr><td/><td>(mil)</td><td>(mil)</td><td>(gigs)</td><td>alignments</td></tr><tr><td>1</td><td>7.3</td><td>10</td><td>.1</td><td>.11</td></tr><tr><td>2</td><td>36</td><td>111</td><td>.68</td><td>.82</td></tr><tr><td>3</td><td>86</td><td>412</td><td>2.18</td><td>2.64</td></tr><tr><td>4</td><td>149</td><td>933</td><td>4.59</td><td>5.59</td></tr><tr><td>5</td><td>216</td><td colspan=\"2\">1,645 7.74</td><td>9.46</td></tr><tr><td>6</td><td>284</td><td colspan=\"2\">2,513 11.48</td><td>14.07</td></tr><tr><td>7</td><td>351</td><td colspan=\"2\">3,513 15.70</td><td>19.30</td></tr><tr><td>8</td><td>416</td><td colspan=\"2\">4,628 20.34</td><td>25.05</td></tr><tr><td>9</td><td>479</td><td colspan=\"2\">5,841 25.33</td><td>31.26</td></tr><tr><td>10</td><td>539</td><td colspan=\"2\">7,140 30.62</td><td>37.85</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "",
"content": "<table><tr><td colspan=\"4\">: Estimated size of lookup tables for the</td></tr><tr><td colspan=\"3\">NIST-2004 Arabic-English data</td><td/></tr><tr><td colspan=\"4\">length coverage length coverage</td></tr><tr><td>1</td><td>93.5%</td><td>6</td><td>4.70%</td></tr><tr><td>2</td><td>73.3%</td><td>7</td><td>2.95%</td></tr><tr><td>3</td><td>37.1%</td><td>8</td><td>2.14%</td></tr><tr><td>4</td><td>15.5%</td><td>9</td><td>1.99%</td></tr><tr><td>5</td><td>8.05%</td><td>10</td><td>1.49%</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "",
"content": "<table><tr><td colspan=\"2\">Index of words:</td><td/><td/><td colspan=\"2\">Corpus</td><td/><td/><td/><td/></tr><tr><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td></tr><tr><td colspan=\"5\">spain declined to confirm that</td><td colspan=\"2\">spain declined</td><td>to</td><td>aid</td><td>morocco</td></tr><tr><td colspan=\"3\">Initialized, unsorted Suffix Array</td><td colspan=\"4\">Suffixes denoted by s[i]</td><td/><td/><td/></tr><tr><td>s[0]</td><td>0</td><td colspan=\"8\">spain declined to confirm that spain declined to aid morocco</td></tr><tr><td>s[1]</td><td>1</td><td colspan=\"8\">declined to confirm that spain declined to aid morocco</td></tr><tr><td>s[2]</td><td>2</td><td colspan=\"6\">to confirm that spain declined to aid morocco</td><td/><td/></tr><tr><td>s[3]</td><td>3</td><td colspan=\"6\">confirm that spain declined to aid morocco</td><td/><td/></tr><tr><td>s[4]</td><td>4</td><td colspan=\"5\">that spain declined to aid morocco</td><td/><td/><td/></tr><tr><td>s[5]</td><td>5</td><td colspan=\"4\">spain declined to aid morocco</td><td/><td/><td/><td/></tr><tr><td>s[6]</td><td>6</td><td colspan=\"4\">declined to aid morocco</td><td/><td/><td/><td/></tr><tr><td>s[7]</td><td>7</td><td colspan=\"3\">to aid morocco</td><td/><td/><td/><td/><td/></tr><tr><td>s[8]</td><td>8</td><td colspan=\"2\">aid morocco</td><td/><td/><td/><td/><td/><td/></tr><tr><td>s[9]</td><td>9</td><td colspan=\"2\">morocco</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>based represen-</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>tation.</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF6": {
"text": "",
"content": "<table><tr><td>gives a comparison of the translation qual-</td></tr><tr><td>ity under different levels of sampling. While the ac-</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}