{
"paper_id": "N09-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:42:03.783474Z"
},
"title": "Using a maximum entropy model to build segmentation lattices for MT",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": "",
"affiliation": {
"laboratory": "Laboratory for Computational Linguistics and Information Processing",
"institution": "University of Maryland College Park",
"location": {
"postCode": "20742",
"region": "MD",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent work has shown that translating segmentation lattices (lattices that encode alternative ways of breaking the input to an MT system into words), rather than text in any particular segmentation, improves translation quality of languages whose orthography does not mark morpheme boundaries. However, much of this work has relied on multiple segmenters that perform differently on the same input to generate sufficiently diverse source segmentation lattices. In this work, we describe a maximum entropy model of compound word splitting that relies on a few general features that can be used to generate segmentation lattices for most languages with productive compounding. Using a model optimized for German translation, we present results showing significant improvements in translation quality in German-English, Hungarian-English, and Turkish-English translation over state-of-the-art baselines.",
"pdf_parse": {
"paper_id": "N09-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent work has shown that translating segmentation lattices (lattices that encode alternative ways of breaking the input to an MT system into words), rather than text in any particular segmentation, improves translation quality of languages whose orthography does not mark morpheme boundaries. However, much of this work has relied on multiple segmenters that perform differently on the same input to generate sufficiently diverse source segmentation lattices. In this work, we describe a maximum entropy model of compound word splitting that relies on a few general features that can be used to generate segmentation lattices for most languages with productive compounding. Using a model optimized for German translation, we present results showing significant improvements in translation quality in German-English, Hungarian-English, and Turkish-English translation over state-of-the-art baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Compound words pose significant challenges to the lexicalized models that are currently common in statistical machine translation. This problem has been widely acknowledged, and the conventional solution, which has been shown to work well for many language pairs, is to segment compounds into their constituent morphemes using either morphological analyzers or empirical methods and then to translate from or to this segmented variant (Koehn et al., 2008; Dyer et al., 2008; Yang and Kirchhoff, 2006) .",
"cite_spans": [
{
"start": 435,
"end": 455,
"text": "(Koehn et al., 2008;",
"ref_id": "BIBREF13"
},
{
"start": 456,
"end": 474,
"text": "Dyer et al., 2008;",
"ref_id": "BIBREF6"
},
{
"start": 475,
"end": 500,
"text": "Yang and Kirchhoff, 2006)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "But into what units should a compound word be segmented? Taken as a stand-alone task, the goal of a compound splitter is to produce a segmentation for some input that matches the linguistic intuitions of a native speaker of the language. However, there are often advantages to using elements larger than single morphemes as the minimal lexical unit for MT, since they may correspond more closely to the units of translation. Unfortunately, determining the optimal segmentation is challenging, typically requiring extensive experimentation (Koehn and Knight, 2003; Habash and Sadat, 2006) . Recent work has shown that by combining a variety of segmentations of the input into a segmentation lattice and effectively marginalizing over many different segmentations, translations superior to those resulting from any single segmentation of the input can be obtained (Xu et al., 2005; Dyer et al., 2008; DeNeefe et al., 2008) . Unfortunately, this approach is difficult to utilize because it requires multiple segmenters that behave differently on the same input.",
"cite_spans": [
{
"start": 539,
"end": 563,
"text": "(Koehn and Knight, 2003;",
"ref_id": "BIBREF11"
},
{
"start": 564,
"end": 587,
"text": "Habash and Sadat, 2006;",
"ref_id": "BIBREF8"
},
{
"start": 869,
"end": 886,
"text": "(Xu et al., 2005;",
"ref_id": "BIBREF25"
},
{
"start": 887,
"end": 905,
"text": "Dyer et al., 2008;",
"ref_id": "BIBREF6"
},
{
"start": 906,
"end": 927,
"text": "DeNeefe et al., 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe a maximum entropy word segmentation model that is trained to assign high probability to possibly several segmentations of an input word. This model enables generation of diverse, accurate segmentation lattices from a single model that are appropriate for use in decoders that accept word lattices as input, such as Moses (Koehn et al., 2007) . Since our model relies on a small number of dense features, its parameters can be tuned using very small amounts of manually created reference lattices. Furthermore, since these parameters were chosen to have a valid interpretation across a variety of languages, we find that the weights estimated for one apply quite well to another. We show that these lattices significantly improve translation quality when translating into English from three languages exhibiting productive compounding: German, Turkish, and Hungarian.",
"cite_spans": [
{
"start": 348,
"end": 368,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is structured as follows. In the next section, we describe translation from segmentation lattices and give a motivating example; Section 3 describes our segmentation model, its tuning, and how it is used to generate segmentation lattices; Section 5 presents experimental results; Section 6 reviews relevant related work; and Section 7 concludes and discusses future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we give a brief overview of lattice translation and then describe the characteristics of segmentation lattices that are appropriate for translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation lattice translation",
"sec_num": "2"
},
{
"text": "Word lattices have been used to represent ambiguous input to machine translation systems for a variety of tasks, including translating automatic speech recognition transcriptions and translating from morphologically complex languages (Dyer et al., 2008) . The intuition behind using lattices in both approaches is to avoid the error propagation effects that are found when a one-best guess is used. By carrying a certain amount of uncertainty forward in the processing pipeline, information contained in the translation models can be leveraged to help resolve the upstream ambiguity. In our case, we want to propagate uncertainty about the proper segmentation of a compound forward to the decoder, which can use its full translation model to select the proper segmentation for translation. Mathematically, this can be understood as follows: whereas the goal in conventional machine translation is to find the sentence ê_1^I that maximizes Pr(e_1^I | f_1^J), the lattice adds a latent variable, the path f̄ from a designated start state to a designated goal state in the lattice G: Figure 1 : Segmentation lattice examples. The dotted structure indicates linguistically implausible segmentation that might be generated using dictionary-driven approaches.",
"cite_spans": [
{
"start": 234,
"end": 252,
"text": "Dyer et al., 2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 1074,
"end": 1082,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lattice translation",
"sec_num": "2.1"
},
{
"text": "ê_1^I =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lattice translation",
"sec_num": "2.1"
},
{
"text": "state transducer, the search represented by equation 3 can be carried out efficiently using dynamic programming (Dyer et al., 2008) . Figure 1 shows two lattices that encode the most linguistically plausible ways of segmenting two prototypical German compounds with compositional meanings. However, while these words are structurally quite similar, translating them into English would seem to require different amounts of segmentation. For example, the dictionary fragment shown in Table 1 illustrates that tonbandaufnahme can be rendered into English by following 3 different paths in the lattice, ton/audio band/tape aufnahme/recording, tonband/tape aufnahme/recording, and tonbandaufnahme/tape recording. In contrast, wiederaufnahme can only be translated correctly using the unsegmented form, even though in German the meaning of the full form is a composition of the meaning of the individual morphemes. 1 It should be noted that phrase-based models can translate multiple words as a unit, and therefore capture non-compositional meaning. Thus, by default if the training data is processed such that, for example, aufnahme, in its sense of recording, is segmented into two words, then more paths in the lattices become plausible translations. However, using a strategy of \"over segmentation\" and relying on phrase models to learn the non-compositional translations has been shown to degrade translation quality significantly on several tasks (Xu et al., 2004; Habash and Sadat, 2006) . We thus desire lattices containing as little oversegmentation as possible.",
"cite_spans": [
{
"start": 111,
"end": 130,
"text": "(Dyer et al., 2008)",
"ref_id": "BIBREF6"
},
{
"start": 1448,
"end": 1465,
"text": "(Xu et al., 2004;",
"ref_id": "BIBREF24"
},
{
"start": 1466,
"end": 1489,
"text": "Habash and Sadat, 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 133,
"end": 141,
"text": "Figure 1",
"ref_id": null
},
{
"start": 481,
"end": 488,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Lattice translation",
"sec_num": "2.1"
},
{
"text": "We now have a concept of a \"gold standard\" segmentation lattice for translation: it should contain all linguistically motivated segmentations that also correspond to plausible word-for-word translations into English. Figure 2 shows an example of the reference lattice for the two words we just discussed. For the experiments in this paper, we generated a development and test set by randomly choosing 19 German newspaper articles, identifying all words greater than 6 characters in length, and segmenting each word so that the resulting units could be translated compositionally into English. This resulted in 489 training sentences corresponding to 564 paths for the dev set (which was drawn from 15 articles), and 279 words (302 paths) for the test set (drawn from the remaining 4 articles).",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 230,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Segmentation lattices",
"sec_num": "2.2"
},
{
"text": "We now turn to the problem of modeling word segmentation in a way that facilitates lattice construction. As a starting point, we consider the work of Koehn and Knight (2003) who observe that in most languages that exhibit compounding, the morphemes used to construct compounds frequently also appear as individual tokens. Based on this observation, they propose a model of word segmentation that splits compound words into pieces found in the dictionary based on a variety of heuristic scoring criteria. While these models have been reasonably successful (Koehn et al., 2008) , they are problematic for two reasons. First, there is no principled way to incorporate additional features (such as phonotactics) which might be useful for determining whether a word break should occur. Second, the heuristic scoring offers little insight into which segmentations should be included in a lattice.",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "Koehn and Knight (2003)",
"ref_id": "BIBREF11"
},
{
"start": 554,
"end": 574,
"text": "(Koehn et al., 2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A maximum entropy segmentation model",
"sec_num": "3"
},
{
"text": "We would like our model to consider a wide variety of segmentations of any word (including perhaps hypothesized morphemes that are not in the dictionary), to make use of a rich set of features, and to have a probabilistic interpretation of each hypothesized split (to incorporate into the downstream decoder). We decided to use the class of maximum entropy models, which are probabilistically sound, can make use of possibly many overlapping features, and can be trained efficiently (Berger et al., 1996) . We thus define a model of the conditional probability distribution Pr(s_1^N | w), where w is a surface form and s_1^N is the segmented form consisting of N segments, as:",
"cite_spans": [
{
"start": 483,
"end": 504,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A maximum entropy segmentation model",
"sec_num": "3"
},
{
"text": "Pr(s_1^N | w) = exp Σ_i λ_i h_i(s_1^N, w) / Σ_{s'} exp Σ_i λ_i h_i(s', w) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A maximum entropy segmentation model",
"sec_num": "3"
},
{
"text": "To simplify inference and to make the lattice representation more natural, we only make use of local feature functions that depend on properties of each segment:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A maximum entropy segmentation model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(s_1^N | w) ∝ exp Σ_i λ_i Σ_{j=1}^N h_i(s_j, w)",
"eq_num": "(5)"
}
],
"section": "A maximum entropy segmentation model",
"sec_num": "3"
},
{
"text": "3.1 From model to segmentation lattice",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A maximum entropy segmentation model",
"sec_num": "3"
},
{
"text": "The segmentation model just introduced is equivalent to a lattice where each vertex corresponds to a particular coverage (in terms of letters consumed from left to right) of the input word. Since we only make use of local features, the number of vertices in a lattice for word w is |w| − m, where m is the minimum segment length permitted. In all experiments reported in this paper, we use m = 3. Each edge is labeled with a morpheme s (corresponding to the morpheme associated with characters delimited by the start and end nodes of the edge) as well as a weight, Σ_i λ_i h_i(s, w). The cost of any path from the start to the goal vertex will be equal to the numerator in equation 4. The value of the denominator can be computed using the forward algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A maximum entropy segmentation model",
"sec_num": "3"
},
{
"text": "In most of our experiments, s will be identical to the substring of w that the edge is designated to cover. However, this is not a requirement. For example, German compounds frequently have so-called Fugenelemente, one or two characters that \"glue together\" the primary morphemes in a compound. Since we permit these characters to be deleted, an edge where they are deleted will have fewer characters than the coverage indicated by the edge's starting and ending vertices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A maximum entropy segmentation model",
"sec_num": "3"
},
{
"text": "Except for the minimum segment length restriction, our model defines probabilities for all segmentations of an input word, making the resulting segmentation lattices quite large. Since large lattices are costly to deal with during translation (and may lead to worse translations because poor segmentations are passed to the decoder), we prune them using forward-backward pruning so as to contain just the highest probability paths (Sixtus and Ortmanns, 1999) . This works by computing the score of the best path passing through every edge in the lattice using the forward-backward algorithm. By finding the best score overall, we can then prune edges using a threshold criterion; i.e., edges whose score is some factor α away from the global best edge score.",
"cite_spans": [
{
"start": 435,
"end": 462,
"text": "(Sixtus and Ortmanns, 1999)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lattice pruning",
"sec_num": "3.2"
},
{
"text": "Our model defines a conditional probability distribution over virtually all segmentations of a word w. To train our model, we wish to maximize the likelihood of the segmentations contained in the reference lattices by moving probability mass away from the segmentations that are not in the reference lattice. Thus, we wish to minimize the following objective (which can be computed using the forward algorithm over the unpruned hypothesis lattices):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum likelihood training",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = − log Π_i Σ_{s∈R_i} p(s | w_i)",
"eq_num": "(6)"
}
],
"section": "Maximum likelihood training",
"sec_num": "3.3"
},
{
"text": "The gradient with respect to the feature weights for a log linear model is simply:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum likelihood training",
"sec_num": "3.3"
},
{
"text": "∂L/∂λ_k = Σ_i ( E_{p(s|w_i)}[h_k] − E_{p(s|w_i, R_i)}[h_k] ) (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum likelihood training",
"sec_num": "3.3"
},
{
"text": "The first expectation is computed using forward-backward inference over the full lattice. To compute the second expectation, the full lattice is intersected with the reference lattice R_i, and then forward-backward inference is redone. 2 We use the standard quasi-Newtonian method L-BFGS to optimize the model (Liu et al., 1989) . Training generally converged in only a few hundred iterations.",
"cite_spans": [
{
"start": 262,
"end": 263,
"text": "2",
"ref_id": null
},
{
"start": 336,
"end": 354,
"text": "(Liu et al., 1989)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum likelihood training",
"sec_num": "3.3"
},
{
"text": "In some cases, such as when performing word alignment for translation model construction, lattices cannot be used easily. In these cases, a 1-best segmentation (which can be determined from the lattice using the Viterbi algorithm) may be desired. To train the parameters of the model for this condition (which is arguably slightly different from the lattice generation case we just considered), we used the minimum error training (MERT) algorithm on the segmentation lattices to find the parameters that minimized the error on our dev set (Macherey et al., 2008) . The error function we used was WER (the minimum number of insertions, substitutions, and deletions along any path in the reference lattice, normalized by the length of this path). The WER on the held-out test set for a system tuned using MERT is 9.9%, compared to 11.1% for maximum likelihood training.",
"cite_spans": [
{
"start": 538,
"end": 561,
"text": "(Macherey et al., 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training to minimize 1-best error",
"sec_num": "3.3.1"
},
{
"text": "We remark that since we did not have the resources to generate training data in all the languages we wished to generate segmentation lattices for, we have confined ourselves to features that we expect to be reasonably informative for a broad class of languages. A secondary advantage of this is that we used denser features than are often used in maximum entropy modeling, meaning that we could train our model with less training data than might otherwise be required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.4"
},
{
"text": "The features we used in our compound segmentation model for the experiments reported below are shown in Table 2 . Building on the prior work that relied heavily on the frequency of the hypothesized constituent morphemes in a monolingual corpus, we included features that depend on this value, f(s_i). |s_i| refers to the number of letters in the ith hypothesized segment. Binary predicates evaluate to 1 when true and 0 otherwise. f(s_i) is the frequency of the token s_i as an independent word in a monolingual corpus. p(# | s_{i1} ⋯ s_{i4}) is the probability of a word start preceding the letters s_{i1} ⋯ s_{i4}. We found it beneficial to include a feature that was the probability of a certain string of characters beginning a word, for which we used a reverse 5-gram character model and predicted the word boundary given the first five letters of the hypothesized word split. 3 Since we did have expertise in German morphology, we did build a special German model. For this, we permitted the strings s, n, and es to be deleted between words. Each deletion fired a count feature (listed as fugen in the table). Analysis of errors indicated that the segmenter would periodically propose an incorrect segmentation where a single word could be divided into a word and a nonword consisting of common inflectional suffixes. To address this, an additional feature was added that fired when a proposed segment was one of a set N of 30 nonwords that we saw quite frequently. The weights shown in Table 2 are those learned by maximum likelihood training on models both with and without the special German features, which are indicated with †.",
"cite_spans": [
{
"start": 887,
"end": 888,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 1500,
"end": 1507,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "3.4"
},
{
"text": "Feature (de-only / neutral): †s_i ∈ N: −3.55 / −; f(s_i) > 0.005: −3.13 / −3.31; f(s_i) > 0: 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.4"
},
{
"text": "|s_i| ≤ 10, f(s_i) > 2^−10: −0.51 / −0.82; log f(s_i): −0.32 / −0.36; 2^−10 < f(s_i) < 0.005: −0.26 / −0.45",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.4"
},
{
"text": "To give some sense of the performance of the model in terms of its ability to generate lattices independently of a translation task, we present precision and recall of segmentations for pruning parameters (cf. Section 3.2) ranging from \u03b1 = 0 to \u03b1 = 5. Precision measures the number of paths in the hypothesized lattice that correspond to paths in the reference lattice; recall measures the number of paths in the reference lattices that are found in the hypothesis lattice. Figure 3 shows the effect of manipulating the density parameter on the precision and recall of the German lattices. Note that very high recall is possible; however, the German-only features have a significant impact, especially on recall, because the reference lattices include paths where Fugenelemente have been deleted.",
"cite_spans": [],
"ref_spans": [
{
"start": 474,
"end": 482,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model evaluation",
"sec_num": "4"
},
{
"text": "We now review experiments using segmentation lattices produced by the segmentation model we just introduced in German-English, Hungarian-English, and Turkish-English translation tasks and then show results elucidating the effect of the lattice density parameter. We begin with a description of our MT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation experiments",
"sec_num": "5"
},
{
"text": "For all experiments, we used a 5-gram English language model trained on the AFP and Xinhua portions of the Gigaword v3 corpus (Graff et al., 2007) with modified Kneser-Ney smoothing (Kneser and Ney, 1995) . The training, development, and test data for the German-English and Hungarian-English systems were distributed as part of the 2009 EACL Workshop on Machine Translation, 4 and the Turkish-English data corresponds to the training and test sets used in the work of Oflazer and Durgar El-Kahlout (2007) . Corpus statistics for all language pairs are summarized in Table 3 . We note that in all language pairs, the 1BEST segmentation variant of the training data results in a significant reduction in types.",
"cite_spans": [
{
"start": 125,
"end": 145,
"text": "(Graff et al., 2007)",
"ref_id": null
},
{
"start": 181,
"end": 203,
"text": "(Kneser and Ney, 1995)",
"ref_id": "BIBREF10"
},
{
"start": 469,
"end": 505,
"text": "Oflazer and Durgar El-Kahlout (2007)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 567,
"end": 574,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Data preparation and system description",
"sec_num": "5.1"
},
{
"text": "Word alignment was carried out by running the Giza++ implementation of IBM Model 4 initialized with 5 iterations of Model 1, 5 of the HMM aligner, and 3 iterations of Model 4 (Och and Ney, 2003) in both directions and then symmetrizing using the grow-diag-final-and heuristic (Koehn et al., 2003) . For each language pair, the corpus was aligned twice, once in its non-segmented variant and once using the single-best segmentation variant.",
"cite_spans": [
{
"start": 171,
"end": 190,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF18"
},
{
"start": 272,
"end": 292,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation and system description",
"sec_num": "5.1"
},
{
"text": "For translation, we used a bottom-up parsing decoder that uses cube pruning to intersect the language model with the target side of the synchronous grammar. The grammar rules were extracted from the word-aligned parallel corpus and scored as described in Chiang (2007) . The features used by the decoder were the English language model log probability, log f(ē | f̄), the 'lexical translation' log probabilities in both directions (Koehn et al., 2003) , and a word count feature. For the lattice systems, we also included the unnormalized log p(f̄ | G), as it is defined in Section 3, as well as an input word count feature. The feature weights were tuned on a held-out development set so as to maximize an equally weighted linear combination of BLEU and 1-TER (Papineni et al., 2002; Snover et al., 2006) using the minimum error training algorithm on a packed forest representation of the decoder's hypothesis space (Macherey et al., 2008) . The weights were independently optimized for each language pair and each experimental condition.",
"cite_spans": [
{
"start": 256,
"end": 269,
"text": "Chiang (2007)",
"ref_id": "BIBREF4"
},
{
"start": 431,
"end": 451,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF12"
},
{
"start": 758,
"end": 781,
"text": "(Papineni et al., 2002;",
"ref_id": "BIBREF20"
},
{
"start": 782,
"end": 802,
"text": "Snover et al., 2006)",
"ref_id": "BIBREF23"
},
{
"start": 914,
"end": 937,
"text": "(Macherey et al., 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation and system description",
"sec_num": "5.1"
},
{
"text": "In this section, we report the results of an experiment to see if the compound lattices constructed using our maximum entropy model yield better translations than either an unsegmented baseline or a baseline consisting of a single-best segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation lattice results",
"sec_num": "5.2"
},
{
"text": "For each language pair, we define three conditions: BASELINE, 1BEST, and LATTICE. In the BASELINE condition, a lowercased and tokenized (but not segmented) version of the test data is translated using a grammar derived from the non-segmented training data. In the 1BEST condition, the single best segmentation ŝ_1^N that maximizes Pr(s_1^N | w) is chosen for each word using the MERT-trained model (the German model for German, and the language-neutral model for Hungarian and Turkish). This variant is translated using a grammar induced from a parallel corpus that has also been segmented according to the same decision rule. In the LATTICE condition, we constructed segmentation lattices using the technique described in Section 3.1. For all language pairs, we used d = 2 as the pruning density parameter (which corresponds to the highest F-score on the held-out test set). Additionally, if the unsegmented form of the word was removed from the lattice during pruning, it was restored to the lattice with zero weight. [Corpus statistics: f-tokens / f-types / e-tokens / e-types — DE-BASELINE 38M 307k 40M 96k; DE-1BEST 40M 136k \" \"; HU-BASELINE 25M 646k 29M 158k; HU-1BEST 27M 334k \" \"; TR-BASELINE 1.0M 56k 1.3M 23k; TR-1BEST 1.1M 41k \" \"] Table 4 summarizes the results of the translation experiments comparing the three input variants. For all language pairs, we see significant improvements in both BLEU and TER when segmentation lattices are used. 5 Additionally, we also confirmed previous findings showing that when a large amount of training data is available, moving to a one-best segmentation does not yield substantial improvements (Yang and Kirchhoff, 2006) . Perhaps most surprisingly, the improvements observed when using lattices with the Hungarian and Turkish systems were larger than the corresponding improvement in the German system, but German was the only language for which we had segmentation training data. The smaller effect in German is probably due to there being more in-domain training data in the German system than in the (otherwise comparably sized) Hungarian system.",
"cite_spans": [
{
"start": 1636,
"end": 1662,
"text": "(Yang and Kirchhoff, 2006)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 1018,
"end": 1025,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 1105,
"end": 1239,
"text": "DE-BASELINE 38M 307k 40M 96k DE-1BEST 40M 136k \" \" HU-BASELINE 25M 646k 29M 158k HU-1BEST 27M 334k \" \" TR-BASELINE",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Segmentation lattice results",
"sec_num": "5.2"
},
{
"text": "Targeted analysis of the translation output shows that while both the 1BEST and LATTICE systems generally produce adequate translations of compound words that are out of vocabulary in the BASE-LINE system, the LATTICE system performs better since it recovers from infelicitous splits that the one-best segmenter makes. For example, one class of error we frequently observe is that the one-best segmenter splits an OOV proper name into two pieces when a portion of the name corresponds to a known word in the source language (e.g. tom tan-credo\u2192tom tan credo which is then translated as tom tan belief ). 6 Figure 4 shows the effect of manipulating the density parameter (cf. Section 3.2) on the performance and decoding time of the Turkish-English translation system. It further confirms the hypothesis that increased diversity of segmentations encoded in a segmentation lattice can improve translation performance; however, it also shows that once the density becomes too great, and too many implausible segmentations are included in the lattice, translation quality will be harmed.",
"cite_spans": [],
"ref_spans": [
{
"start": 606,
"end": 614,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Segmentation lattice results",
"sec_num": "5.2"
},
{
"text": "Aside from improving the vocabulary coverage of machine translation systems (Koehn et al., 2008; Yang and Kirchhoff, 2006; Habash and Sadat, 2006) , compound word segmentation (also referred to as decompounding) has been shown to be helpful in a variety of NLP tasks including mono-and crosslingual IR (Airio, 2006) and speech recognition (Hessen and Jong, 2003) . A number of researchers have demonstrated the value of using lattices to encode segmentation alternatives as input to a machine translation system (Dyer et al., 2008; DeNeefe et al., 2008; Xu et al., 2004) , but this is the first work to do so using a single segmentation model. Another strand of inquiry that is closely related is the work on adjusting the source language segmentation to match the granularity of the target language as a way of improving translation. The approaches suggested thus far have been mostly of a heuristic nature tailored to Chinese-English translation (Bai et al., 2008; Ma et al., 2007) .",
"cite_spans": [
{
"start": 76,
"end": 96,
"text": "(Koehn et al., 2008;",
"ref_id": "BIBREF13"
},
{
"start": 97,
"end": 122,
"text": "Yang and Kirchhoff, 2006;",
"ref_id": "BIBREF26"
},
{
"start": 123,
"end": 146,
"text": "Habash and Sadat, 2006)",
"ref_id": "BIBREF8"
},
{
"start": 302,
"end": 315,
"text": "(Airio, 2006)",
"ref_id": "BIBREF0"
},
{
"start": 339,
"end": 362,
"text": "(Hessen and Jong, 2003)",
"ref_id": "BIBREF9"
},
{
"start": 512,
"end": 531,
"text": "(Dyer et al., 2008;",
"ref_id": "BIBREF6"
},
{
"start": 532,
"end": 553,
"text": "DeNeefe et al., 2008;",
"ref_id": "BIBREF5"
},
{
"start": 554,
"end": 570,
"text": "Xu et al., 2004)",
"ref_id": "BIBREF24"
},
{
"start": 948,
"end": 966,
"text": "(Bai et al., 2008;",
"ref_id": "BIBREF1"
},
{
"start": 967,
"end": 983,
"text": "Ma et al., 2007)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "In this paper, we have presented a maximum entropy model for compound word segmentation and used it to generate segmentation lattices for input into a statistical machine translation system. These segmentation lattices improve translation quality (over an already strong baseline) in three typologically distinct languages (German, Hungarian, Turkish) when translating into English. Previous approaches to generating segmentation lattices have been quite laborious, relying either on the existence of multiple segmenters (Dyer et al., 2008; Xu et al., 2005) or hand-crafted rules (DeNeefe et al., 2008) . Although the segmentation model we propose is discriminative, we have shown that it can be trained using a minimal amount of annotated training data. Furthermore, when even this minimal data cannot be acquired for a particular language (as was the situa-tion we faced with Hungarian and Turkish), we have demonstrated that the parameters obtained in one language work surprisingly well for others. Thus, with virtually no cost, this model can be used with a variety of diverse languages. While these results are already quite satisfying, there are a number of compelling extensions to this work that we intend to explore in the future. First, unsupervised segmentation approaches offer a very compelling alternative to the manually crafted segmentation lattices that we created. Recent work suggests that unsupervised segmentation of inflectional affixal morphology works quite well (Poon et al., 2009) , and extending this work to compounding morphology should be feasible, obviating the need for expensive hand-crafted reference lattices. Second, incorporating target language information into a segmentation model holds considerable promise for inducing more effective translation models that perform especially well for segmentation lattice inputs.",
"cite_spans": [
{
"start": 521,
"end": 540,
"text": "(Dyer et al., 2008;",
"ref_id": "BIBREF6"
},
{
"start": 541,
"end": 557,
"text": "Xu et al., 2005)",
"ref_id": "BIBREF25"
},
{
"start": 580,
"end": 602,
"text": "(DeNeefe et al., 2008)",
"ref_id": "BIBREF5"
},
{
"start": 1488,
"end": 1507,
"text": "(Poon et al., 2009)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
},
{
"text": "The English word resumption is likewise composed of two morphemes, the prefix re-and a kind of bound morpheme that never appears in other contexts (sometimes called a 'cranberry' morpheme), but the meaning of the whole is idiosyncratic enough that it cannot be called compositional.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The second expectation corresponds to the empirical feature observations in a standard maximum entropy model. Because this is an expectation and not an invariant observation, the log likelihood function is not guaranteed to be concave and the objective surface may have local minima. However, experimentation revealed the optimization performance was largely invariant with respect to its starting point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In general, this helped avoid situations where a word may be segemented into a frequent word and then a non-word string of characters since the non-word typically violated the phonotactics of the language in some way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.statmt.org/wmt09",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Using bootstrap resampling(Koehn, 2004), the improvements in BLEU, TER, as well as the linear combination used in tuning are statistically significant at at least p < .05.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We note that our maximum entropy segmentation model could easily address this problem by incorporating information about whether a word is likely to be a named entity as a feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Special thanks to Kemal Oflazar and Reyyan Yeniterzi of Sabanc\u0131 University for providing the Turkish-English corpus and to Philip Resnik, Adam Lopez, Trevor Cohn, and especially Phil Blunsom for their helpful suggestions. This research was supported by the Army Research Laboratory. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the view of the sponsors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word normalization and decompounding in mono-and bilingual IR",
"authors": [
{
"first": "Eija",
"middle": [],
"last": "Airio",
"suffix": ""
}
],
"year": 2006,
"venue": "Information Retrieval",
"volume": "9",
"issue": "",
"pages": "249--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eija Airio. 2006. Word normalization and decompound- ing in mono-and bilingual IR. Information Retrieval, 9:249-271.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving word alignment by adjusting Chinese word segmentation",
"authors": [
{
"first": "Ming-Hong",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming-Hong Bai, Keh-Jiann Chen, and Jason S. Chang. 2008. Improving word alignment by adjusting Chi- nese word segmentation. In Proceedings of the Third International Joint Conference on Natural Language Processing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.L. Berger, V.J. Della Pietra, and S.A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Speech translation by confusion network decoding. In Proceeding of ICASSP",
"authors": [
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico ; Pi-Chuan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Third Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Bertoldi, R. Zens, and M. Federico. 2007. Speech translation by confusion network decoding. In Pro- ceeding of ICASSP 2007, Honolulu, Hawaii, April. Pi-Chuan Chang, Dan Jurafsky, and Christopher D. Man- ning. 2008. Optimizing Chinese word segmentation for machine translation performance. In Proceedings of the Third Workshop on Statistical Machine Transla- tion, Prague, Czech Republic, June.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Overcoming vocabulary sparsity in mt using lattices",
"authors": [
{
"first": "S",
"middle": [],
"last": "Deneefe",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. DeNeefe, U. Hermjakob, and K. Knight. 2008. Over- coming vocabulary sparsity in mt using lattices. In Proceedings of AMTA, Honolulu, HI.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Generalizing word lattice translation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Muresan",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of HLT-ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dyer, S. Muresan, and P. Resnik. 2008. Generalizing word lattice translation. In Proceedings of HLT-ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Arabic preprocessing schemes for statistical machine translation",
"authors": [
{
"first": "N",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sadat",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Habash and F. Sadat. 2006. Arabic preprocessing schemes for statistical machine translation. In Proc. of NAACL, New York.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Compound decomposition in dutch large vocabulary speech recognition",
"authors": [
{
"first": "Arjan",
"middle": [],
"last": "Van Hessen",
"suffix": ""
},
{
"first": "Franciska De Jong",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Eurospeech",
"volume": "",
"issue": "",
"pages": "225--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arjan Van Hessen and Franciska De Jong. 2003. Com- pound decomposition in dutch large vocabulary speech recognition. In Proceedings of Eurospeech 2003, Gen- eve, pages 225-228.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improved backing-off for m-gram language modeling",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kneser",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of IEEE Internation Conference on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "181--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Kneser and H. Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of IEEE Internation Conference on Acoustics, Speech, and Sig- nal Processing, pages 181-184.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Empirical methods for compound splitting",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn and K. Knight. 2003. Empirical methods for compound splitting. In Proc. of the EACL 2003.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, F.J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL 2003, pages 48-54, Morristown, NJ, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Towards better machine translation quality for the German-English language pairs",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch Mayne",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Annual Meeting of the Association for Computation Linguistics (ACL), Demonstration Session",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, H. Hoang, A. Birch Mayne, C. Callison- Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Annual Meeting of the Association for Computation Linguistics (ACL), Demonstration Session, pages 177-180, June. Philipp Koehn, Abhishek Arun, and Hieu Hoang. 2008. Towards better machine translation quality for the German-English language pairs. In ACL Workshop on Statistical Machine Translation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Statistical signficiance tests for machine translation evluation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn. 2004. Statistical signficiance tests for machine translation evluation. In Proceedings of the 2004 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 388-395.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On the limited memory BFGS method for large scale optimization",
"authors": [
{
"first": "C",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dong",
"middle": [
"C"
],
"last": "Nocedal",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nocedal",
"suffix": ""
}
],
"year": 1989,
"venue": "Mathematical Programming B",
"volume": "45",
"issue": "3",
"pages": "503--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong C. Liu, Jorge Nocedal, Dong C. Liu, and Jorge No- cedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Program- ming B, 45(3):503-528.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bootstrapping word alignment via word packing",
"authors": [
{
"first": "Yanjun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Stroppa",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "304--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanjun Ma, Nicolas Stroppa, and Andy Way. 2007. Bootstrapping word alignment via word packing. In Proceedings of the 45th Annual Meeting of the Asso- ciation of Computational Linguistics, pages 304-311, Prague, Czech Republic, June. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Lattice-based minimum error rate training for statistical machine translation",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Macherey, Franz Josef Och, Ignacio Thayer, and Jakob Uszkoreit. 2008. Lattice-based minimum error rate training for statistical machine translation. In Proceedings of EMNLP, Honolulu, HI.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Exploring different representational units in English-to-Turkish statistical machine translation",
"authors": [
{
"first": "Kemal",
"middle": [],
"last": "Oflazer",
"suffix": ""
},
{
"first": "Ilknur Durgar El-Kahlout",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kemal Oflazer and Ilknur Durgar El-Kahlout. 2007. Ex- ploring different representational units in English-to- Turkish statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Trans- lation, pages 25-32, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W.-J",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meet- ing of the ACL, pages 311-318.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised morphological segmentation with log-linear models",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. In Proc. of NAACL 2009.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "High quality word graphs using forward-backward pruning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sixtus",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ortmanns",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Sixtus and S. Ortmanns. 1999. High quality word graphs using forward-backward pruning. In Proceed- ings of ICASSP, Phoenix, AZ.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie J. Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Do we need Chinese word segmentation for statistical machine translation?",
"authors": [
{
"first": "J",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Third SIGHAN Workshop on Chinese Language Learning",
"volume": "",
"issue": "",
"pages": "122--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Xu, R. Zens, and H. Ney. 2004. Do we need Chi- nese word segmentation for statistical machine trans- lation? In Proceedings of the Third SIGHAN Work- shop on Chinese Language Learning, pages 122-128, Barcelona, Spain.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Integrated Chinese word segmentation in statistical machine translation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of IWSLT 2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Xu, E. Matusov, R. Zens, and H. Ney. 2005. Inte- grated Chinese word segmentation in statistical ma- chine translation. In Proc. of IWSLT 2005, Pittsburgh.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Phrase-based backoff models for machine translation of highly inflected languages",
"authors": [
{
"first": "M",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kirchhoff",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the EACL 2006",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Yang and K. Kirchhoff. 2006. Phrase-based back- off models for machine translation of highly inflected languages. In Proceedings of the EACL 2006, pages 41-48.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Manually created reference lattices for the two words fromFigure 1. Although only a subset of all linguistically plausible segmentations, each path corresponds to a plausible segmentation for word-for-word German-English translation."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The effect of the lattice density parameter on precision and recall."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The effect of the lattice density parameter on translation quality and decoding time."
},
"TABREF2": {
"num": null,
"text": "",
"content": "<table><tr><td>: German-English dictionary fragment for words present in Figure 1.</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF4": {
"num": null,
"text": "Features and weights learned by maximum likelihood training, sorted by weight magnitude.",
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF6": {
"num": null,
"text": "Training corpus statistics.",
"content": "<table><tr><td/><td>BLEU TER</td></tr><tr><td colspan=\"2\">DE-BASELINE 21.0 60.6</td></tr><tr><td>DE-1BEST</td><td>20.7 60.1</td></tr><tr><td>DE-LATTICE</td><td>21.6 59.8</td></tr><tr><td colspan=\"2\">HU-BASELINE 11.0 71.1</td></tr><tr><td>HU-1BEST</td><td>10.7 70.4</td></tr><tr><td>HU-LATTICE</td><td>12.3 69.1</td></tr><tr><td>TR-BASELINE</td><td>26.9 61.0</td></tr><tr><td>TR-1BEST</td><td>27.8 61.2</td></tr><tr><td>TR-LATTICE</td><td>28.7 59.6</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF7": {
"num": null,
"text": "",
"content": "<table><tr><td>: Translation results for German (DE)-English, Hungarian (HU)-English, and Turkish (TR)-English. Scores were computed using a single reference and are case insensitive.</td></tr></table>",
"html": null,
"type_str": "table"
}
}
}
}