{
"paper_id": "P12-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:28:35.245699Z"
},
"title": "Deciphering Foreign Language by Combining Language Models and Context Vectors",
"authors": [
{
"first": "Malte",
"middle": [],
"last": "Nuhn",
"suffix": "",
"affiliation": {
"laboratory": "Human Language Technology and Pattern Recognition Group",
"institution": "RWTH Aachen University",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Arne",
"middle": [],
"last": "Mauser",
"suffix": "",
"affiliation": {
"laboratory": "Human Language Technology and Pattern Recognition Group",
"institution": "RWTH Aachen University",
"location": {
"country": "Germany"
}
},
"email": "amauser@google.com"
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": "",
"affiliation": {
"laboratory": "Human Language Technology and Pattern Recognition Group",
"institution": "RWTH Aachen University",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we show how to train statistical machine translation systems on real-life tasks using only non-parallel monolingual data from two languages. We present a modification of the method shown in (Ravi and Knight, 2011) that is scalable to vocabulary sizes of several thousand words. On the task shown in (Ravi and Knight, 2011) we obtain better results with only 5% of the computational effort when running our method with an n-gram language model. The efficiency improvement of our method allows us to run experiments with vocabulary sizes of around 5,000 words, such as a non-parallel version of the VERBMOBIL corpus. We also report results using data from the monolingual French and English GIGAWORD corpora.",
"pdf_parse": {
"paper_id": "P12-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we show how to train statistical machine translation systems on real-life tasks using only non-parallel monolingual data from two languages. We present a modification of the method shown in (Ravi and Knight, 2011) that is scalable to vocabulary sizes of several thousand words. On the task shown in (Ravi and Knight, 2011) we obtain better results with only 5% of the computational effort when running our method with an n-gram language model. The efficiency improvement of our method allows us to run experiments with vocabulary sizes of around 5,000 words, such as a non-parallel version of the VERBMOBIL corpus. We also report results using data from the monolingual French and English GIGAWORD corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "It has long been a vision of science fiction writers and scientists to be able to universally communicate in all languages. In these visions, even previously unknown languages can be learned automatically from analyzing foreign language input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we attempt to learn statistical translation models from only monolingual data in the source and target language. The reasoning behind this idea is that the elements of languages share statistical similarities that can be automatically identified and matched with other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work is a big step towards large-scale and large-vocabulary unsupervised training of statistical translation models. Previous approaches have faced constraints in vocabulary or data size. We show how to scale unsupervised training to real-life translation tasks and how large-scale experiments can be done. Monolingual data is more readily available, if not abundant, compared to true parallel or even just translated data. Learning from only monolingual data in real-life translation tasks could especially benefit low-resource language pairs for which few or no parallel texts are available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition, this approach offers the opportunity to decipher new or unknown languages and derive translations based solely on the available monolingual data. While we do tackle the full unsupervised learning task for MT, we make some very basic assumptions about the languages we are dealing with:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We have large amounts of data available in source and target language. This is not a very strong assumption as books and text on the internet are readily available for almost all languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We can divide the given text in tokens and sentence-like units. This implies that we know enough about the language to tokenize and sentence-split a given text. Again, for the vast majority of languages, this is not a strong restriction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. The writing system is one-dimensional left-to-right. It has been shown (Lin and Knight, 2006) that the writing direction can be determined separately and therefore this assumption does not pose a real restriction.",
"cite_spans": [
{
"start": 82,
"end": 95,
"text": "Knight, 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous approaches to unsupervised training for SMT prove feasible only for vocabulary sizes up to around 500 words (Ravi and Knight, 2011) and data sets of roughly 15,000 sentences containing only about 4 tokens per sentence on average. Real data as it occurs in texts such as web pages or news texts does not meet any of these characteristics.",
"cite_spans": [
{
"start": 117,
"end": 140,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we will develop, describe, and evaluate methods for large vocabulary unsupervised learning of machine translation models suitable for real-world tasks. The remainder of this paper is structured as follows: In Section 2, we will review the related work and describe how our approach extends existing work. Section 3 describes the model and training criterion used in this work. The implementation and the training of this model are then described in Section 5 and experimentally evaluated in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unsupervised training of statistical translation systems without parallel data and related problems have been addressed before. In this section, we will review previous approaches and highlight similarities and differences to our work. Several steps have been made in this area, such as (Knight and Yamada, 1999) , (Ravi and Knight, 2008) , or (Snyder et al., 2010) , to name just a few. The main difference of our work is that it allows for much larger vocabulary sizes and more data to be used than previous work, while at the same time not being dependent on seed lexica and/or any other knowledge of the languages.",
"cite_spans": [
{
"start": 288,
"end": 313,
"text": "(Knight and Yamada, 1999)",
"ref_id": "BIBREF4"
},
{
"start": 316,
"end": 339,
"text": "(Ravi and Knight, 2008)",
"ref_id": "BIBREF11"
},
{
"start": 345,
"end": 366,
"text": "(Snyder et al., 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Close to the methods described in this work, Ravi and Knight (2011) treat training and translation without parallel data as a deciphering problem. Their best-performing approach uses an EM algorithm to train a generative word-based translation model. They perform experiments on a Spanish/English task with vocabulary sizes of about 500 words and achieve a performance of around 20 BLEU, compared to 70 BLEU obtained by a system that was trained on parallel data. Our work uses the same training criterion and is based on the same generative story. However, we use a new training procedure whose critical parts have constant time and memory complexity with respect to the vocabulary size, so that our methods can scale to much larger vocabulary sizes while also being faster.",
"cite_spans": [
{
"start": 45,
"end": 67,
"text": "Ravi and Knight (2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In a different approach, Koehn and Knight (2002) induce a bilingual lexicon from only non-parallel data. To achieve this they use a seed lexicon, which they systematically extend by using orthographic as well as distributional features such as context and frequency. They perform their experiments on non-parallel German-English news texts, and test their mappings against a bilingual lexicon. We use a greedy method similar to (Koehn and Knight, 2002) for extending a given lexicon, and we implicitly also use the frequency as a feature. However, we perform fully unsupervised training and do not start with a seed lexicon or use linguistic features. Similarly, Haghighi et al. (2008) induce a one-to-one translation lexicon only from non-parallel monolingual data. Also starting with a seed lexicon, they use a generative model based on canonical correlation analysis to systematically extend the lexicon using context as well as spelling features. They evaluate their method on a variety of tasks, ranging from inherently parallel data (EUROPARL) to unrelated corpora (100k sentences of the GIGAWORD corpus). They report F-measure scores of the induced entries between 30 and 70. As mentioned above, our work neither uses a seed lexicon nor orthographic features.",
"cite_spans": [
{
"start": 25,
"end": 48,
"text": "Koehn and Knight (2002)",
"ref_id": "BIBREF5"
},
{
"start": 427,
"end": 451,
"text": "(Koehn and Knight, 2002)",
"ref_id": "BIBREF5"
},
{
"start": 662,
"end": 684,
"text": "Haghighi et al. (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we describe the statistical training criterion and the translation model that is trained using monolingual data. In addition to the mathematical formulation of the model we describe approximations used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "3"
},
{
"text": "Throughout this work, we denote the source language words as f and the target language words as e. The source vocabulary is V_f and we write the size of this vocabulary as |V_f|. The same notation holds for the target vocabulary with V_e and |V_e|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "3"
},
{
"text": "As training criterion for the translation model's parameters \u03b8, Ravi and Knight (2011) ",
"cite_spans": [
{
"start": 64,
"end": 86,
"text": "Ravi and Knight (2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "suggest \\arg\\max_{\\theta} \\left\\{ \\prod_f \\sum_e P(e) \\cdot p_{\\theta}(f|e) \\right\\}",
"eq_num": "(1)"
}
],
"section": "Translation Model",
"sec_num": "3"
},
{
"text": "We would like to obtain \u03b8 from Equation 1 using the EM Algorithm (Dempster et al., 1977) . This becomes increasingly difficult with more complex translation models. Therefore, we use a simplified translation model that still contains all basic phenomena of a generic translation process. We formulate the translation process with the same generative story presented in (Ravi and Knight, 2011) :",
"cite_spans": [
{
"start": 65,
"end": 88,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF2"
},
{
"start": 369,
"end": 392,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "3"
},
{
"text": "1. Stochastically generate the target sentence according to an n-gram language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "3"
},
{
"text": "2. Insert NULL tokens between any two adjacent positions of the target string with uniform probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "3"
},
{
"text": "3. For each target token e i (including NULL) choose a foreign translation f i (including NULL) with probability P \u03b8 (f i |e i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "3"
},
{
"text": "4. Locally reorder any two adjacent foreign words f_{i\u22121}, f_i with probability P(SWAP) = 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "3"
},
{
"text": "In practice, however, it is not feasible to deal with the full parameter table P_\u03b8(f_i|e_i) which models the lexicon. Instead, we only allow translation models where for each source word f the number of words e with P(f|e) \u2260 0 is below some fixed value. We will refer to this value as the maximum number of candidates of the translation model and denote it by N_C. Note that for a given e this does not necessarily restrict the number of entries with P(f|e) \u2260 0. Also note that with a fixed value of N_C, the time and memory complexity of the EM step is O(1) with respect to |V_e| and |V_f|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Remove the remaining NULL tokens.",
"sec_num": "5."
},
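The five-step generative story above can be sketched in code. This is an illustrative reconstruction only: the helper name, the toy lexicon, and the NULL-insertion probability of 0.5 are my own assumptions, not taken from the paper.

```python
import random

def generate_foreign(target_sentence, lexicon, p_swap=0.1, rng=None):
    """Sketch of steps 2-5 of the generative story: insert NULL tokens,
    translate token-wise, locally swap adjacent words, drop NULLs."""
    rng = rng or random.Random(0)
    # Step 2: insert a NULL token between adjacent positions (prob. assumed 0.5).
    tokens = []
    for word in target_sentence:
        tokens.append(word)
        if rng.random() < 0.5:
            tokens.append("NULL")
    # Step 3: choose a foreign translation f_i for each token e_i
    # (here sampled uniformly from a toy lexicon instead of P_theta(f_i|e_i)).
    foreign = [rng.choice(lexicon.get(t, ["NULL"])) for t in tokens]
    # Step 4: locally reorder adjacent foreign words with P(SWAP) = 0.1.
    for i in range(1, len(foreign)):
        if rng.random() < p_swap:
            foreign[i - 1], foreign[i] = foreign[i], foreign[i - 1]
    # Step 5: remove the remaining NULL tokens.
    return [f for f in foreign if f != "NULL"]

lexicon = {"work": ["Arbeit"], "time": ["Zeit"]}  # hypothetical toy lexicon
out = generate_foreign(["work", "time"], lexicon)
```

Step 1 (sampling the target sentence from an n-gram language model) is omitted; the sketch starts from a given target sentence.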
{
"text": "In the following we divide the problem of maximizing Equation 1 into two parts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Remove the remaining NULL tokens.",
"sec_num": "5."
},
{
"text": "1. Determining a set of active lexicon entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Remove the remaining NULL tokens.",
"sec_num": "5."
},
{
"text": "given set of active lexicon entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing the translation probabilities for the",
"sec_num": "2."
},
{
"text": "The second task can be achieved by running the EM algorithm on the restricted translation model. We deal with the first task in the following section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing the translation probabilities for the",
"sec_num": "2."
},
{
"text": "As described in Section 3 we need some mechanism to iteratively choose an active set of translation candidates. Based on the assumption that some of the active candidates and their respective probabilities are already correct, we induce new active candidates. In the context of information retrieval, Salton et al. (1975) introduce a document space in which each document, identified by one or more index terms, is represented by a high dimensional vector of term weights. Given two vectors v_1 and v_2 of two documents, it is then possible to calculate a similarity coefficient between them (usually denoted s(v_1, v_2)). Analogously, we represent source and target words in a high dimensional vector space of target word weights, which we call context vectors, and use a similarity coefficient to find possible translation pairs. We first initialize these context vectors using the following procedure:",
"cite_spans": [
{
"start": 301,
"end": 321,
"text": "Salton et al. (1975)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "1. Using only the monolingual data for the target language, prepare the context vectors v_{e_i} with entries v_{e_i,e_j}:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "(a) Initialize all v_{e_i,e_j} = 0. (b) For each target sentence E: for each word e_i in E: for each word e_j \u2260 e_i in E: v_{e_i,e_j} = v_{e_i,e_j} + 1. (c) Normalize each vector v_{e_i} such that \u2211_{e_j} (v_{e_i,e_j})^2 = 1 holds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "Using the notation e_i = (e_j : v_{e_i,e_j}, . . . ), these vectors might for example look like work = (early : 0.2, late : 0.1, . . . ) and time = (early : 0.2, late : 0.2, . . . ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "2. Prepare the context vectors v_{f_i} with entries v_{f_i,e_j} for the source language, using only the monolingual data for the source language and the translation model's current parameter estimate \u03b8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "(a) Initialize all v_{f_i,e_j} = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "(b) Let \u1ebc_\u03b8(F) denote the most probable translation of the foreign sentence F obtained by using the current estimate \u03b8. (c) For each source sentence F:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "For each word f_i in F:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "For each word e_j \u2260 \u1ebc_\u03b8(f_i) in \u1ebc_\u03b8(F): v_{f_i,e_j} = v_{f_i,e_j} + 1. (d) Normalize each vector v_{f_i} such that \u2211_{e_j} (v_{f_i,e_j})^2 = 1 holds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "Adapting the notation described above, these vectors might for example look like Arbeit = (early : 0.25, late : 0.05, . . . ) and Zeit = (early : 0.15, late : 0.25, . . . ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
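The counting-and-normalization procedure above can be sketched as follows; the function name and the toy corpus are illustrative assumptions, not from the paper. For the source side, the same routine would be run on the current best translations instead of target sentences.

```python
from collections import defaultdict
from math import sqrt

def build_context_vectors(sentences):
    """Build normalized context vectors: count co-occurring words within
    each sentence (steps a-b), then L2-normalize each vector (step c)."""
    vectors = defaultdict(lambda: defaultdict(float))
    for sentence in sentences:
        for w in sentence:
            for c in sentence:
                if c != w:  # skip the word itself (e_j != e_i)
                    vectors[w][c] += 1.0
    # Normalize so that sum_j (v[w][c])^2 == 1 for every word w.
    for vec in vectors.values():
        norm = sqrt(sum(x * x for x in vec.values()))
        for c in vec:
            vec[c] /= norm
    return {w: dict(vec) for w, vec in vectors.items()}

v = build_context_vectors([["work", "early"], ["work", "late"], ["time", "early"]])
```

Each resulting vector has unit Euclidean length, so the absolute number of occurrences of a word no longer matters, as the text notes.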
{
"text": "Once we have set up the context vectors v_e and v_f, we can retrieve translation candidates for some source word f by finding those words e that maximize the similarity coefficient s(v_e, v_f), as well as candidates for a given target word e by finding those words f that maximize s(v_e, v_f). In our implementation we use the Euclidean distance",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d(v_e, v_f) = ||v_e \u2212 v_f||_2",
"eq_num": "(2)"
}
],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "as distance measure. 2 The normalization of context vectors described above is motivated by the fact that the context vectors should be invariant with respect to the absolute number of occurrences of words. 3 Instead of just finding the best candidates for a given word, we are interested in an assignment that involves all source and target words, minimizing the sum of distances between the assigned words. In the case of a one-to-one mapping, the problem of assigning translation candidates such that the sum of distances is minimal can be solved optimally in polynomial time using the Hungarian algorithm (Kuhn, 1955). In our case we are dealing with a many-to-many assignment that needs to satisfy the maximum-number-of-candidates constraint. For this, we solve the problem in a greedy fashion by simply choosing the best pairs (e, f) first. As soon as a target word e or source word f has reached the limit of maximum candidates, we skip all further candidates for that word e (or f, respectively). This step involves calculating and sorting all |V_e| \u2022 |V_f| distances, which can be done in time O(V^2 \u2022 log(V)), with V = max(|V_e|, |V_f|).",
"cite_spans": [
{
"start": 21,
"end": 22,
"text": "2",
"ref_id": null
},
{
"start": 213,
"end": 214,
"text": "3",
"ref_id": null
},
{
"start": 604,
"end": 616,
"text": "(Kuhn, 1955)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
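The greedy candidate selection described above can be sketched as follows. The function names are my own; the usage example reuses the paper's running example vectors (work/time vs. Arbeit/Zeit) with a candidate limit of 1 per word.

```python
from math import sqrt

def euclidean(ve, vf):
    """d(v_e, v_f) = ||v_e - v_f||_2 over the union of context keys."""
    keys = set(ve) | set(vf)
    return sqrt(sum((ve.get(k, 0.0) - vf.get(k, 0.0)) ** 2 for k in keys))

def greedy_assign(target_vecs, source_vecs, max_candidates):
    """Accept pairs (e, f) in order of increasing distance, skipping any
    word that already has max_candidates partners (many-to-many version)."""
    pairs = sorted(
        ((euclidean(ve, vf), e, f)
         for e, ve in target_vecs.items()
         for f, vf in source_vecs.items()),
        key=lambda t: t[0],
    )
    count_e, count_f, chosen = {}, {}, []
    for _, e, f in pairs:
        if count_e.get(e, 0) < max_candidates and count_f.get(f, 0) < max_candidates:
            chosen.append((e, f))
            count_e[e] = count_e.get(e, 0) + 1
            count_f[f] = count_f.get(f, 0) + 1
    return chosen

target = {"work": {"early": 0.2, "late": 0.1}, "time": {"early": 0.2, "late": 0.2}}
source = {"Arbeit": {"early": 0.25, "late": 0.05}, "Zeit": {"early": 0.15, "late": 0.25}}
pairs = greedy_assign(target, source, max_candidates=1)
```

As the text notes, this greedy scheme is not guaranteed to find the assignment with minimal total distance; it only approximates the optimal (Hungarian-algorithm) solution while supporting many-to-many limits.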
{
"text": "A simplified example of this procedure is depicted in Figure 1 . The example already shows that the assignment obtained by this algorithm is in general not optimal. 2 We then obtain pairs (e, f ) that minimize d. 3 This gives the same similarity ordering as using unnormalized vectors with the cosine similarity measure",
"cite_spans": [
{
"start": 165,
"end": 166,
"text": "2",
"ref_id": null
},
{
"start": 213,
"end": 214,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "(v_e \u2022 v_f) / (||v_e||_2 \u2022 ||v_f||_2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "which can be interpreted as measuring the cosine of the angle between the vectors; see (Manning et al., 2008) . Still, it is noteworthy that this procedure is not equivalent to the tf-idf context vectors described in (Salton et al., 1975) . ",
"cite_spans": [
{
"start": 89,
"end": 111,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF8"
},
{
"start": 218,
"end": 239,
"text": "(Salton et al., 1975)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Context Similarity",
"sec_num": "4"
},
{
"text": "Given the model presented in Section 3 and the methods illustrated in Section 4, we now describe how to train this model. As described in Section 4, the overall procedure is divided into two alternating steps: After initialization we first perform EM training of the translation model for 20-30 iterations using a 2-gram or 3-gram language model in the target language. With the obtained best translations we induce new translation candidates using context similarity. This procedure is depicted in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 499,
"end": 507,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Training Algorithm and Implementation",
"sec_num": "5"
},
{
"text": "Let N_C be the maximum number of candidates per source word we allow, V_e and V_f be the target and source vocabularies, and r(e) and r(f) the frequency ranks of a target and a source word, respectively. Each word f \u2208 V_f with frequency rank r(f) is assigned to all words e \u2208 V_e with frequency rank",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r(e) \u2208 [ start(f ) , end(f ) ]",
"eq_num": "(3)"
}
],
"section": "Initialization",
"sec_num": "5.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "start(f) = max( 0 , min( |V_e| \u2212 N_C , |V_e|/|V_f| \u2022 r(f) \u2212 N_C/2 ) )",
"eq_num": "(4)"
}
],
"section": "Initialization",
"sec_num": "5.1"
},
{
"text": "end(f) = min(start(f) + N_C, |V_e|).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "5.1"
},
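The diagonal-beam initialization of Equations 3-5 can be sketched as follows. The integer (floor) arithmetic is my assumption (ranks are integers in the paper, but the rounding is not specified), and the vocabulary sizes in the usage example are simply the VERBMOBIL figures quoted later in the paper.

```python
def start(rf, Ve, Vf, Nc):
    # Eq. 4: start(f) = max(0, min(|V_e| - N_C, |V_e|/|V_f| * r(f) - N_C/2)),
    # with floor division as an assumed rounding choice.
    return max(0, min(Ve - Nc, Ve * rf // Vf - Nc // 2))

def end(rf, Ve, Vf, Nc):
    # Eq. 5: end(f) = min(start(f) + N_C, |V_e|)
    return min(start(rf, Ve, Vf, Nc) + Nc, Ve)

# A source word of rank 3000 out of 6000 is paired with target ranks
# around the proportional position 1750 out of 3500, +/- N_C/2.
window = (start(3000, 3500, 6000, 50), end(3000, 3500, 6000, 50))
```

The max/min clamping keeps the beam inside the target vocabulary at both the most frequent and the rarest ranks.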
{
"text": "This defines a diagonal beam 4 when visualizing the lexicon entries in a matrix in which both source and target words are sorted by their frequency rank. (In such a plot, crosses mark word pairs (e, f) for which e is a translation candidate of f, while dots mark word pairs for which this is not the case; the most frequent source words appear at the very left, and the most frequent target words at the very bottom.) However, note that the result of sorting by frequency",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "5.1"
},
{
"text": "and thus the frequency ranks are not unique when several words share the same frequency. In this case, we initially fix an arbitrary ordering among them, which is then kept throughout the procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "5.1"
},
{
"text": "This initialization proves useful, as we show by taking an IBM1 lexicon P(f|e) extracted from the parallel VERBMOBIL corpus (Wahlster, 2000): for each word e we calculate the weighted rank difference",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "5.1"
},
{
"text": "\u2206r_avg(e) = \u2211_f P(f|e) \u2022 |r(e) \u2212 r(f)| (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "5.1"
},
{
"text": "and count how many of those weighted rank differences are smaller than a given value N_C/2. Here we see that for about 1% of the words the weighted rank difference lies within the beam for N_C = 50, and for about 3% for N_C = 150. This shows that the initialization provides a solid first guess of possible translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "5.1"
},
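Equation 6 is a one-line computation; the sketch below uses hypothetical IBM1 probabilities and frequency ranks purely for illustration.

```python
def weighted_rank_diff(e, lexicon, rank):
    """Eq. 6 sketch: sum over f of P(f|e) * |r(e) - r(f)|.
    `lexicon[e]` maps each source word f to P(f|e); `rank` maps
    words to their frequency ranks (toy values, not from the paper)."""
    return sum(p * abs(rank[e] - rank[f]) for f, p in lexicon[e].items())

lexicon = {"time": {"Zeit": 0.8, "Uhr": 0.2}}  # hypothetical IBM1 entries
rank = {"time": 10, "Zeit": 12, "Uhr": 40}     # hypothetical frequency ranks
delta = weighted_rank_diff("time", lexicon, rank)
```

A small value of delta means the word's likely translations sit near the same frequency rank, i.e., inside the diagonal beam of the initialization.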
{
"text": "The generative story described in Section 3 is implemented as a cascade of a permutation, insertion, lexicon, deletion and language model finite state transducers using OpenFST (Allauzen et al., 2007) . Our FST representation of the LM makes use of failure transitions as described in (Allauzen et al., 2003) . We use the forward-backward algorithm on the composed transducers to efficiently train the lexicon model using the EM algorithm.",
"cite_spans": [
{
"start": 177,
"end": 200,
"text": "(Allauzen et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 285,
"end": 308,
"text": "(Allauzen et al., 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EM Algorithm",
"sec_num": "5.2"
},
{
"text": "Given the trained parameters \u03b8 from the previous run of the EM algorithm, we set up the context vectors v_e and v_f as described in Section 4. We then calculate and sort all |V_e| \u2022 |V_f| distances, which proves feasible within a few CPU hours even for vocabulary sizes of more than 50,000 words. This is achieved with the GNU SORT tool, which uses external sorting to handle large amounts of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Vector Step",
"sec_num": "5.3"
},
{
"text": "To set up the new lexicon we keep the N_C/2 best translations for each source word with respect to P(e|f), which we obtained in the previous EM run. Experiments showed that it is helpful to also limit the number of candidates per target word. We therefore prune the resulting lexicon using P(f|e) to a maximum of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Vector Step",
"sec_num": "5.3"
},
{
"text": "N_C/2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Vector Step",
"sec_num": "5.3"
},
{
"text": "candidates per target word afterwards. Then we fill the lexicon with new candidates using the previously sorted list of candidate pairs such that the final lexicon has at most N_C candidates per source word and at most N'_C candidates per target word. We set N'_C to some value N'_C > N_C; all experiments in this work were run with N'_C = 300. Values of N'_C \u2248 N_C seem to produce poorer results. Not limiting the number of candidates per target word at all also typically results in weaker performance. After the lexicon is filled with candidates, we initialize the probabilities to be uniform. With this new lexicon the process is iterated, starting with the EM training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Vector Step",
"sec_num": "5.3"
},
{
"text": "We evaluate our method on three different corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "6"
},
{
"text": "At first we apply our method to non-parallel Spanish/English data that is based on the OPUS corpus (Tiedemann, 2009) and that was also used in (Ravi and Knight, 2011) . We show that our method performs better by 1.6 BLEU than the best performing method described in (Ravi and Knight, 2011) , while being approximately 15 to 20 times faster than their n-gram based approach.",
"cite_spans": [
{
"start": 99,
"end": 116,
"text": "(Tiedemann, 2009)",
"ref_id": "BIBREF16"
},
{
"start": 143,
"end": 166,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF12"
},
{
"start": 266,
"end": 289,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "6"
},
{
"text": "After that, we apply our method to a non-parallel version of the German/English VERBMOBIL corpus, which has a vocabulary size of 6,000 words on the German side and 3,500 words on the English side, and which is thereby approximately one order of magnitude larger than the previous OPUS experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "6"
},
{
"text": "We finally run our system on a subset of the non-parallel French/English GIGAWORD corpus, which has a vocabulary size of 60,000 words for both French and English, and present first results on a task of this scale.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "6"
},
{
"text": "In the case of the OPUS and VERBMOBIL corpora, we evaluate the results with BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) against reference translations. We report all scores in percent. For BLEU, higher values are better; for TER, lower values are better. We also compare the results on these corpora to a system trained on parallel data.",
"cite_spans": [
{
"start": 77,
"end": 100,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF10"
},
{
"start": 109,
"end": 130,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "6"
},
{
"text": "In the case of the GIGAWORD corpus, we show lexicon entries obtained during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "6"
},
{
"text": "We apply our method to the corpus described in Table 6 . This exact corpus was also used in (Ravi and Knight, 2011) . The best performing methods in (Ravi and Knight, 2011) use the full 411 \u00d7 579 lexicon model and apply standard EM training. Using a 2-gram LM they obtain 15.3 BLEU, and with a whole-segment LM they achieve 19.3 BLEU. In comparison to this baseline, we run our algorithm with N_C = 50 candidates per source word for both a 2-gram and a 3-gram LM. We use 30 EM iterations between each context vector step. For both cases we run 7 EM+Context cycles. Figure 3 and Figure 4 show the evolution of BLEU and TER scores when applying our method using a 2-gram and a 3-gram LM.",
"cite_spans": [
{
"start": 92,
"end": 115,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF12"
},
{
"start": 149,
"end": 172,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 6",
"ref_id": null
},
{
"start": 565,
"end": 573,
"text": "Figure 3",
"ref_id": null
},
{
"start": 578,
"end": 586,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1.1"
},
{
"text": "In the case of the 2-gram LM (Figure 3 ) the translation quality increases until it reaches a plateau after 5 EM+Context cycles. In the case of the 3-gram LM (Figure 4 ) this statement only holds with respect to TER. It is notable that during the first iterations TER improves only very little, until a large chunk of the language unravels after the third iteration. This behavior may be caused by the fact that the corpus only provides a relatively small amount of context information for each word, since sentence lengths are 3-4 words on average. Figure 3 : Results on the OPUS corpus with a 2-gram LM, N_C = 50, and 30 EM iterations between each context vector step. The dashed line shows the best result using a 2-gram LM in (Ravi and Knight, 2011) . Table 2 summarizes these results and compares them with (Ravi and Knight, 2011) . Our 3-gram based method performs 1.6 BLEU better than their best system, which is a statistically significant improvement at the 95% confidence level. Furthermore, Table 2 compares the CPU time needed for training. Our 3-gram based method is 15-20 times faster than running the EM based training procedure presented in (Ravi and Knight, 2011) with a 3-gram LM 5 . Figure 4 : Results on the OPUS corpus with a 3-gram LM, N_C = 50, and 30 EM iterations between each context vector step. The dashed line shows the best result using a whole-segment LM in (Ravi and Knight, 2011) . To summarize: our method is significantly faster than previous n-gram LM based approaches and obtains better results than any previously published method.",
"cite_spans": [
{
"start": 720,
"end": 743,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF12"
},
{
"start": 802,
"end": 825,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF12"
},
{
"start": 1145,
"end": 1168,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF12"
},
{
"start": 1377,
"end": 1400,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 25,
"end": 34,
"text": "(Figure 3",
"ref_id": null
},
{
"start": 150,
"end": 159,
"text": "(Figure 4",
"ref_id": null
},
{
"start": 540,
"end": 548,
"text": "Figure 3",
"ref_id": null
},
{
"start": 746,
"end": 753,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 990,
"end": 997,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1190,
"end": 1198,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1.2"
},
{
"text": "6 Estimated by running full EM with the 2-gram LM using our implementation for 90 iterations, yielding 15.2 BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1.2"
},
{
"text": "7 \u22484,000h when running full EM using a 3-gram LM with our implementation. Estimated by running only the first iteration and assuming that the final result is obtained after 90 iterations. However, (Ravi and Knight, 2011) report results using a whole-segment LM, which assigns P(e) > 0 only to sequences seen in training. This seems to work for the given task, but we believe that it cannot be a general replacement for higher-order n-gram LMs.",
"cite_spans": [
{
"start": 207,
"end": 230,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1.2"
},
{
"text": "8 Estimated by running our method for 5 \u00d7 30 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1.2"
},
{
"text": "6.2 VERBMOBIL Corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1.2"
},
{
"text": "The VERBMOBIL corpus is a German/English corpus dealing with short sentences for making appointments. We prepared a non-parallel subset of the original VERBMOBIL corpus (Wahlster, 2000) by splitting the corpus into two halves and selecting only the German side of the first half and the English side of the second half, such that the target side is not the translation of the source side. The source and target vocabularies of the resulting non-parallel corpus are both more than 9 times larger than the OPUS vocabularies. The total number of word tokens is also more than 5 times larger than in the OPUS corpus. Table 6 shows the statistics of this corpus. We run our method for 5 EM+Context cycles (30 EM iterations each) using a 2-gram LM. After that, we run another five EM+Context cycles using a 3-gram LM.",
"cite_spans": [
{
"start": 162,
"end": 178,
"text": "(Wahlster, 2000)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 625,
"end": 632,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.2.1"
},
{
"text": "Our results on the VERBMOBIL corpus are summarized in Table 3 . Even on this more complex task our method achieves encouraging results: the translation quality increases from iteration to iteration until the algorithm finally reaches 11.7 BLEU using only the 2-gram LM. Running five further cycles with a 3-gram LM achieves a final performance of 15.5 BLEU. Och (2002) reports results of 48.2 BLEU for a single-word based translation system and 56.1 BLEU using the alignment template approach, both trained on parallel data. However, it should be noted that our experiment uses only 50% of the original VERBMOBIL training data to simulate a truly non-parallel setup. This setup is based on a subset of the monolingual GIGAWORD corpus. We selected 100,000 French sentences from the news agency AFP and 100,000 sentences from the news agency Xinhua. To obtain a more reliable set of training instances, we selected only sentences with more than 7 tokens. Note that these corpora form truly non-parallel data which, apart from the length filtering, was not specifically preselected or preprocessed. More details on these non-parallel corpora are summarized in Table 6 . The vocabularies have a size of approximately 60,000 words, which is more than 100 times larger than the vocabularies of the OPUS corpus. The corpus also contains more than 25 times as many tokens as the OPUS corpus.",
"cite_spans": [
{
"start": 359,
"end": 369,
"text": "Och (2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 1154,
"end": 1161,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2.2"
},
{
"text": "After initialization, we run our method with N_C = 150 candidates per source word for 20 EM iterations using a 2-gram LM. After the first context vector step, we run another 4 \u00d7 20 iterations with N_C = 50 using a 2-gram LM. Table 4 shows example lexicon entries we obtained. Note that we obtained these results using purely non-parallel data, and that we used neither a seed lexicon nor orthographic features to match, e.g., numbers or proper names: all results are obtained using 2-gram statistics and the context of words only. We find these results encouraging and think that they show the potential of large-scale unsupervised techniques for MT in the future.",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 244,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2.2"
},
{
"text": "We presented a method for learning statistical machine translation models from non-parallel data. The key to our method lies in restricting the translation model to a limited set of translation candidates and then using the EM algorithm to learn the probabilities. Based on the translations obtained with this model, we obtain new translation candidates using a context vector approach. This method increased the training speed by a factor of 10-20 compared to methods known in the literature and also resulted in a 1.6 BLEU point improvement over previous approaches. Due to this efficiency improvement, we were able to tackle larger tasks, such as a non-parallel version of the VERBMOBIL corpus with a nearly 10 times larger vocabulary. We also presented first results of our method on an even larger task with a vocabulary of 60,000 words. We have shown that, using a limited set of translation candidates, we can significantly reduce the computational complexity of the learning task. This work serves as a significant step towards large-scale unsupervised training for statistical machine translation systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "denoting that e_j is not the translation of f_i in E_\u03b8(F)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The diagonal has some artifacts for the highest and lowest frequency ranks. See, for example, the left side of Figure 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "(Ravi and Knight, 2011) only report results using a 2-gram LM and a whole-segment LM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was realized as part of the Quaero Programme, funded by OSEO, French State agency for innovation. The authors would like to thank Sujith Ravi and Kevin Knight for providing us with the OPUS subtitle corpus and David Rybach for kindly sharing his knowledge about the OpenFST library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generalized algorithms for constructing statistical language models",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2003. Generalized algorithms for constructing statistical language models. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 40-47. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Openfst: A general and efficient weighted finite-state transducer library",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Schalkwyk",
"suffix": ""
}
],
"year": 2007,
"venue": "CIAA",
"volume": "4783",
"issue": "",
"pages": "11--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wojciech Skut, and Mehryar Mohri. 2007. Openfst: A general and efficient weighted finite-state transducer library. In Jan Holub and Jan Zd\u00e1rek, editors, CIAA, volume 4783 of Lecture Notes in Computer Science, pages 11-23. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "Arthur",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "Nan",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "Donald",
"middle": [
"B"
],
"last": "",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B, 39.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning Bilingual Lexicons from Monolingual Corpora",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2008,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "771--779",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi, Percy Liang, T Berg-Kirkpatrick, and Dan Klein. 2008. Learning Bilingual Lexicons from Monolingual Corpora. In Proceedings of ACL08 HLT, pages 771-779. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A computational approach to deciphering unknown scripts",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
}
],
"year": 1999,
"venue": "ACL Workshop on Unsupervised Learning in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "37--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight and Kenji Yamada. 1999. A computational approach to deciphering unknown scripts. In ACL Workshop on Unsupervised Learning in Natural Language Processing, number 1, pages 37-44. Citeseer.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning a translation lexicon from monolingual corpora",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL02 workshop on Unsupervised lexical acquisition, number July",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In Proceedings of the ACL02 workshop on Unsupervised lexical acquisition, number July, pages 9-16. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Hungarian method for the assignment problem",
"authors": [
{
"first": "Harold",
"middle": [
"W"
],
"last": "Kuhn",
"suffix": ""
}
],
"year": 1955,
"venue": "Naval Research Logistic Quarterly",
"volume": "2",
"issue": "",
"pages": "83--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harold W. Kuhn. 1955. The Hungarian method for the assignment problem. Naval Research Logistic Quarterly, 2:83-97.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Discovering the linear writing order of a two-dimensional ancient hieroglyphic script",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Shou-De Lin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2006,
"venue": "Artificial Intelligence",
"volume": "170",
"issue": "",
"pages": "409--421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shou-de Lin and Kevin Knight. 2006. Discovering the linear writing order of a two-dimensional ancient hieroglyphic script. Artificial Intelligence, 170:409-421, April.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Schuetze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schuetze. 2008. Introduction to Information Retrieval. Cambridge University Press, 1 edition, July.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Statistical Machine Translation: From Single-Word Models to Alignment Templates",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och. 2002. Statistical Machine Translation: From Single-Word Models to Alignment Templates. Ph.D. thesis, RWTH Aachen University, Aachen, Germany, October.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Attacking decipherment problems optimally with low-order n-gram models",
"authors": [
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08",
"volume": "",
"issue": "",
"pages": "812--819",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujith Ravi and Kevin Knight. 2008. Attacking decipherment problems optimally with low-order n-gram models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08, pages 812-819, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deciphering foreign language",
"authors": [
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "12--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 12-21, Portland, Oregon, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A vector space model for automatic indexing",
"authors": [
{
"first": "Gerard",
"middle": [
"M"
],
"last": "Salton",
"suffix": ""
},
{
"first": "K",
"middle": [
"C"
],
"last": "Andrew",
"suffix": ""
},
{
"first": "Chang",
"middle": [
"S"
],
"last": "Wong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 1975,
"venue": "Commun. ACM",
"volume": "18",
"issue": "11",
"pages": "613--620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard M. Salton, Andrew K. C. Wong, and Chang S. Yang. 1975. A vector space model for automatic indexing. Commun. ACM, 18(11):613-620, November.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Study of Translation Edit Rate with Targeted Human Annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas, pages 223-231, Cambridge, Massachusetts, USA, August.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A statistical model for lost language decipherment",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Snyder",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2010,
"venue": "48th Annual Meeting of the Association for Computational Linguistics, number July",
"volume": "",
"issue": "",
"pages": "1048--1057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipherment. In 48th Annual Meeting of the Association for Computational Linguistics, number July, pages 1048-1057.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "News from OPUS -A collection of multilingual parallel corpora with tools and interfaces",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2009,
"venue": "Recent Advances in Natural Language Processing",
"volume": "V",
"issue": "",
"pages": "237--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2009. News from OPUS -A collection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237-248. John Benjamins, Amsterdam/Philadelphia, Borovets, Bulgaria.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Verbmobil: Foundations of speech-to-speech translations",
"authors": [],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Wahlster, editor. 2000. Verbmobil: Foundations of speech-to-speech translations. Springer-Verlag, Berlin.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Hypothetical example for a greedy one-to-one assignment of translation candidates. The optimal assignment would contain (time,Zeit) and (work,Arbeit).",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Visualization of the training procedure. The big rectangles represent word lexica in different stages of the training procedure. The small rectangles represent word pairs",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"text": "while",
"content": "<table><tr><td>Name</td><td>Lang.</td><td>Sent.</td><td>Words</td><td>Voc.</td></tr><tr><td>OPUS</td><td>Spanish English</td><td>13,181 19,770</td><td>39,185 61,835</td><td>562 411</td></tr><tr><td>VERBMOBIL</td><td>German English</td><td>27,861 27,862</td><td>282,831 294,902</td><td>5,964 3,723</td></tr><tr><td>GIGAWORD</td><td colspan=\"4\">French 100,000 1,725,993 68,259 English 100,000 1,788,025 64,621</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF1": {
"text": "Statistics of the corpora used in this paper.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "Results obtained on the OPUS corpus.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF5": {
"text": "Results obtained on the VERBMOBIL corpus.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF7": {
"text": "Lexicon entries obtained by running our method on the non-parallel GIGAWORD corpus. The first column shows in which iteration the algorithm found the first correct translation of f (compared to a lexicon trained on parallel data)",
"content": "<table><tr><td>among the top 5 candidates</td></tr><tr><td>6.3 GIGAWORD</td></tr><tr><td>6.3.1 Experimental Setup</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}