{
"paper_id": "P01-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:30:07.460313Z"
},
"title": "Refined Lexicon Models for Statistical Machine Translation using a Maximum Entropy Approach",
"authors": [
{
"first": "Ismael",
"middle": [],
"last": "Garc\u00eda Varea",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Univ. de Castilla-La Mancha Campus Universitario s/n",
"location": {
"postCode": "02071",
"settlement": "Albacete",
"country": "Spain"
}
},
"email": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": "",
"affiliation": {
"laboratory": "Dpto. de Sist. Inf. y Comp. Inst. Tecn. de Inf. (UPV) Avda. de Los Naranjos, s/n",
"institution": "",
"location": {
"postCode": "46071",
"settlement": "Valencia",
"country": "Spain"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Typically, the lexicon models used in statistical machine translation systems do not include any kind of linguistic or contextual information, which often leads to problems in performing a correct word sense disambiguation. One way to deal with this problem within the statistical framework is to use maximum entropy methods. In this paper, we present how to use this type of information within a statistical machine translation system. We show that it is possible to significantly decrease training and test corpus perplexity of the translation models. In addition, we perform a rescoring of \u00a2-Best lists using our maximum entropy model and thereby yield an improvement in translation quality. Experimental results are presented on the so-called \"Verbmobil Task\".",
"pdf_parse": {
"paper_id": "P01-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "Typically, the lexicon models used in statistical machine translation systems do not include any kind of linguistic or contextual information, which often leads to problems in performing a correct word sense disambiguation. One way to deal with this problem within the statistical framework is to use maximum entropy methods. In this paper, we present how to use this type of information within a statistical machine translation system. We show that it is possible to significantly decrease training and test corpus perplexity of the translation models. In addition, we perform a rescoring of \u00a2-Best lists using our maximum entropy model and thereby yield an improvement in translation quality. Experimental results are presented on the so-called \"Verbmobil Task\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Typically, the lexicon models used in statistical machine translation systems are only single-word based, that is one word in the source language corresponds to only one word in the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Those lexicon models lack from context information that can be extracted from the same parallel corpus. This additional information could be: from WordNet), current/previous speech or dialog act.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To include this additional information within the statistical framework we use the maximum entropy approach. This approach has been applied in natural language processing to a variety of tasks. (Berger et al., 1996) applies this approach to the so-called IBM Candide system to build context dependent models, compute automatic sentence splitting and to improve word reordering in translation. Similar techniques are used in (Papineni et al., 1996; Papineni et al., 1998) for socalled direct translation models instead of those proposed in (Brown et al., 1993) . (Foster, 2000) describes two methods for incorporating information about the relative position of bilingual word pairs into a maximum entropy translation model. Other authors have applied this approach to language modeling (Rosenfeld, 1996; Martin et al., 1999; Peters and Klakow, 1999) . A short review of the maximum entropy approach is outlined in Section 3.",
"cite_spans": [
{
"start": 194,
"end": 215,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF2"
},
{
"start": 424,
"end": 447,
"text": "(Papineni et al., 1996;",
"ref_id": "BIBREF11"
},
{
"start": 448,
"end": 470,
"text": "Papineni et al., 1998)",
"ref_id": "BIBREF12"
},
{
"start": 539,
"end": 559,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF3"
},
{
"start": 562,
"end": 576,
"text": "(Foster, 2000)",
"ref_id": "BIBREF4"
},
{
"start": 785,
"end": 802,
"text": "(Rosenfeld, 1996;",
"ref_id": "BIBREF16"
},
{
"start": 803,
"end": 823,
"text": "Martin et al., 1999;",
"ref_id": "BIBREF5"
},
{
"start": 824,
"end": 848,
"text": "Peters and Klakow, 1999)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of the translation process in statistical machine translation can be formulated as follows: A source language string",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "\u00a4 \u00a6 \u00a5 \u00a7 \u00a9 \u00a4 \u00a7 \u00a6 \u00a4 \u00a5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "is to be translated into a target language string \u00a7 \u00a7 \u00a6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": ". In the experiments reported in this paper, the source language is German and the target language is English. Every target string is considered as a possible translation for the input. If we assign a probability \u00a7 ! \u00a4 \u00a6 \u00a5 \u00a7 # \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "to each pair of strings \u00a7 % $ \u00a4 \u00a5 \u00a7 \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": ", then according to Bayes' decision rule, we have to choose the target string that maximizes the product of the target language model \u00a7 \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "and the string translation model & \u00a4 \u00a6 \u00a5 \u00a7 \u00a7 \" . Many existing systems for statistical machine translation (Berger et al., 1994; Wang and Waibel, 1997; Tillmann et al., 1997; Nie\u00dfen et al., 1998) make use of a special way of structuring the string translation model like proposed by (Brown et al., 1993) : The correspondence between the words in the source and the target string is described by alignments that assign one target word position to each source word position. The lexicon probability",
"cite_spans": [
{
"start": 107,
"end": 128,
"text": "(Berger et al., 1994;",
"ref_id": "BIBREF1"
},
{
"start": 129,
"end": 151,
"text": "Wang and Waibel, 1997;",
"ref_id": "BIBREF19"
},
{
"start": 152,
"end": 174,
"text": "Tillmann et al., 1997;",
"ref_id": "BIBREF18"
},
{
"start": 175,
"end": 195,
"text": "Nie\u00dfen et al., 1998)",
"ref_id": "BIBREF6"
},
{
"start": 283,
"end": 303,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "p(f|e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "of a certain target word to occur in the target string is assumed to depend basically only on the source word \u00a4 aligned to it. These alignment models are similar to the concept of Hidden Markov models (HMM) in speech recognition. The alignment mapping is with the 'empty' word A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "to account for source words that are not aligned to any target word. In",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "(statistical) alignment models & \u00a4 \u00a5 \u00a7 ( $ 3 \u00a5 \u00a7 % \u00a7 \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": ", the alignment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "is introduced as a hidden variable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a5 \u00a7",
"sec_num": "3"
},
{
"text": "Typically, the search is performed using the socalled maximum approximation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a5 \u00a7",
"sec_num": "3"
},
{
"text": "B \u00a7 C 7 D F E H G I C ! P Q S R T U \u00a7 \" W V ! X \u1ef2 b a T & \u00a4 \u00a5 \u00a7 $ 3 \u00a5 \u00a7 \u00a7 \" cC 7 D F E H G I C ! P QR T U \u00a7 \" W V G d C ! P Y a T & \u00a4 \u00a5 \u00a7 $ 3 \u00a5 \u00a7 \u00a7 \" c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a5 \u00a7",
"sec_num": "3"
},
{
"text": "The search space consists of the set of all possible target language strings % \u00a7 and all possible alignments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a5 \u00a7",
"sec_num": "3"
},
{
"text": ". The overall architecture of the statistical translation approach is depicted in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 90,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u00a5 \u00a7",
"sec_num": "3"
},
{
"text": "The translation probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "& \u00a4 \u00a5 \u00a7 $ 3 \u00a5 \u00a7 % \u00a7 \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "can be rewritten as follows: Figure 1 : Architecture of the translation approach based on Bayes' decision rule.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "& \u00a4 \u00a5 \u00a7 $ 3 \u00a5 \u00a7 e \u00a7 f \" \u00a5 g 5 i h \u00a7 p q & \u00a4 5 $ 3 ! 5 \u00a4 5 r \u00a7 \u00a7 $ 3 5 % r \u00a7 \u00a7 $ \u00a7 f \"\u00a5 g 5 i h \u00a7 H s 3 ! 5 \u00a4 5 r \u00a7 \u00a7 $ 3 5 % r \u00a7 \u00a7 $ \u00a7 \" t V & \u00a4 5 \u00a4 5 % r \u00a7 \u00a7 $ 3 5 \u00a7 $ \u00a7 \" S u",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "Typically, the probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "p q & \u00a4 5 \u00a4 5 r \u00a7 \u00a7 $ 3 5 \u00a7 $ \u00a7 \" is approximated by a lexicon model ' ( & \u00a4 5 Y w v \" by dropping the dependencies on \u00a4 5 r \u00a7 \u00a7 , 3 5 % r \u00a7 \u00a7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": ", and \u00a7 . Obviously, this simplification is not true for a lot of natural language phenomena. The straightforward approach to include more dependencies in the lexicon model would be to add additional dependencies(e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "p(f_j | e_{a_j}, e_{a_j - 1})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "). This approach would yield a significant data sparseness problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "Here, the role of maximum entropy (ME) is to build a stochastic model that efficiently takes a larger context into account. In the following, we will use",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "p_e(f | x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "to denote the probability that the ME model assigns to \u00a4 in the context y in order to distinguish this model from the basic lexicon model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "p(f|e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": ". In the maximum entropy approach we describe all properties that we feel are useful by so-called feature functions ( y $ \u00a4 \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": ". For example, if we want to model the existence or absence of a specific word in the context of an English word which has the translation \u00a4 we can express this dependency using the following feature function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q f Q w y $ \u00a4 \" U if \u00a4 \u00a4 and y 9 otherwise",
"eq_num": "(1)"
}
],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "The ME principle suggests that the optimal parametric form of a model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "' ( & \u00a4 y \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "taking into account only the feature functions $ $ $ F is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "p_\u03bb(f | x) = Z_\u03bb(x)^{-1} exp( \u03a3_{k=1}^{K} \u03bb_k \u03c6_k(x, f) ) . Here Z_\u03bb(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "is a normalization factor. The resulting model has an exponential form with free parameters",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "f $ $ $ F .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "The parameter values which maximize the likelihood for a given training corpus can be computed with the socalled GIS algorithm (general iterative scaling) or its improved version IIS (Pietra et al., 1997; Berger et al., 1996) .",
"cite_spans": [
{
"start": 179,
"end": 204,
"text": "IIS (Pietra et al., 1997;",
"ref_id": null
},
{
"start": 205,
"end": 225,
"text": "Berger et al., 1996)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "It is important to notice that we will have to obtain one ME model for each target word observed in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum entropy modeling",
"sec_num": "3"
},
{
"text": "In order to train the ME model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual information and training events",
"sec_num": "4"
},
{
"text": "p_e(f | x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual information and training events",
"sec_num": "4"
},
{
"text": "associated to a target word , we need to construct a corresponding training sample from the whole bilingual corpus depending on the contextual information that we want to use. To construct this sample, we need to know the word-to-word alignment between each sentence pair within the corpus. That is obtained using the Viterbi alignment provided by a translation model as described in (Brown et al., 1993) . Specifically, we use the Viterbi alignment that was produced by Model 5. We use the program GIZA++ (Och and Ney, 2000b; Och and Ney, 2000a) , which is an extension of the training program available in EGYPT (Al-Onaizan et al., 1999) . Berger et al. (1996) use the words that surround a specific word pair $ \u00a4 \"",
"cite_spans": [
{
"start": 384,
"end": 404,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF3"
},
{
"start": 506,
"end": 526,
"text": "(Och and Ney, 2000b;",
"ref_id": "BIBREF8"
},
{
"start": 527,
"end": 546,
"text": "Och and Ney, 2000a)",
"ref_id": "BIBREF7"
},
{
"start": 614,
"end": 639,
"text": "(Al-Onaizan et al., 1999)",
"ref_id": "BIBREF0"
},
{
"start": 642,
"end": 662,
"text": "Berger et al. (1996)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual information and training events",
"sec_num": "4"
},
{
"text": "as contextual information. The authors propose as context the 3 words to the left and the 3 words to the right of the target word. In this work we use the following contextual information: \u00a3 Target context: As in (Berger et al., 1996) we consider a window of 3 words to the left and to the right of the target word considered.",
"cite_spans": [
{
"start": 213,
"end": 234,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual information and training events",
"sec_num": "4"
},
{
"text": "Source context: In addition, we consider a window of 3 words to the left of the source word \u00a4 which is connected to according to the Viterbi alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3",
"sec_num": null
},
{
"text": "Word classes: Instead of using a dependency on the word identity we include also a dependency on word classes. By doing this, we improve the generalization of the models and include some semantic and syntactic information with. The word classes are computed automatically using another statistical training procedure (Och, 1999) which often produces word classes including words with the same semantic meaning in the same class.",
"cite_spans": [
{
"start": 317,
"end": 328,
"text": "(Och, 1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3",
"sec_num": null
},
{
"text": "A training event, for a specific target word , is composed by three items:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3",
"sec_num": null
},
{
"text": "\u00a3 The source word \u00a4 aligned to . \u00a3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3",
"sec_num": null
},
{
"text": "The context in which the aligned pair $ \u00a4 \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3",
"sec_num": null
},
{
"text": "appears.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3",
"sec_num": null
},
{
"text": "The number of occurrences of the event in the training corpus. Table 1 shows some examples of training events for the target word \"which\".",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u00a3",
"sec_num": null
},
{
"text": "Once we have a set of training events for each target word we need to describe our feature functions. We do this by first specifying a large pool of possible features and then by selecting a subset of \"good\" features from this pool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
{
"text": "All the features we consider form a triple ('",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "e h g i $ label-1 $ label-2) where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "\u00a3 pos: is the position that label-2 has in a specific context. or the word class to which these words belong (j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "k & \u00a4 \" $ m l \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": ").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "Using this notation and given a context Table 1 : Some training events for the English word \"which\". The symbol \" \" is the placeholder of the English word \"which\" in the English context. In the German part the placeholder (\" \") corresponds to the word aligned to \"which\", in the first example the German word \"die\", the word \"das\" in the second and the word \"was\" in the third. The considered English and German contexts are separated by the double bar \" p \".The last number in the rightmost position is the number of occurrences of the event in the whole corpus. ",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "y : \u00ff n r o p p % n p p % n r q o \u00a4 5 % r o p p \u00a4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "p_e(f_j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "determined by the empirical data. This is exactly the standard lexicon probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "p(f|e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "employed in the translation model described in (Brown et al., 1993) and in Section 2.",
"cite_spans": [
{
"start": 47,
"end": 67,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "Category 2 describes features which depend in addition on the word one position to the left or to the right of % n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": ". The same explanation is valid for category 3 but in this case could appears in any position of the context y . Categories 4 and 5 are the analogous categories to 2 and 3 using word classes instead of words. In the categories 6, 7, 8 and 9 the source context is used instead of the target context. Table 2 gives an overview of the different feature categories.",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 306,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "Examples of specific features and their respective category are shown in Table 3 . (-3,was,@@) 1.12052 6 (-1,was,@@) 1.11511 9 (-3,F26,F18) 1.11242",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Features definition",
"sec_num": "5.1"
},
{
"text": "The number of possible features that can be used according to the German and English vocabularies and word classes is huge. In order to reduce the number of features we perform a threshold based feature selection, that is every feature which occurs less than times is not used. The aim of the feature selection is two-fold. Firstly, we obtain smaller models by using less features, and secondly, we hope to avoid overfitting on the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "5.2"
},
{
"text": "In order to obtain the threshold we compare the test corpus perplexity for various thresholds. The different threshold used in the experiments range from 0 to 512. The threshold is used as a cut-off for the number of occurrences that a specific feature must appear. So a cut-off of 0 means that all features observed in the training data are used. A cut-off of 32 means those features that appear 32 times or more are considered to train the maximum entropy models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "5.2"
},
{
"text": "We select the English words that appear at least 150 times in the training sample which are in total 348 of the 4673 words contained in the English vocabulary. Table 4 shows the different number of features considered for the 348 English words selected using different thresholds.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 167,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "5.2"
},
{
"text": "In choosing a reasonable threshold we have to balance the number of features and observed perplexity. 6 Experimental results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "5.2"
},
{
"text": "The \"Verbmobil Task\" is a speech translation task in the domain of appointment scheduling, travel planning, and hotel reservation. The task is difficult because it consists of spontaneous speech and the syntactic structures of the sentences are less restricted and highly variable. For the rescoring experiments we use the corpus described in Table 5 . To train the maximum entropy models we used the \"Ristad ME Toolkit\" described in (Ristad, 1997) . We performed 100 iteration of the Improved Iterative Scaling algorithm (Pietra et al., 1997) using the corpus described in Table 6 , which is a subset of the corpus shown in Table 5 .",
"cite_spans": [
{
"start": 434,
"end": 448,
"text": "(Ristad, 1997)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 343,
"end": 350,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 574,
"end": 581,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 625,
"end": 632,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Training and test corpus",
"sec_num": "6.1"
},
{
"text": "In order to compute the training and test perplexities, we split the whole aligned training corpus in two parts as shown in Table 6 . The training and test perplexities are shown in Table 7 . As expected, the perplexity reduction in the test corpus is lower than in the training corpus, but in both cases better perplexities are obtained using the ME models. The best value is obtained when a threshold of 4 is used. We expected to observe strong overfitting effects when a too small cut-off for features gets used. Yet, for most words the best test corpus perplexity is observed when we use all features including those that occur only once. ",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 182,
"end": 189,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Training and test perplexities",
"sec_num": "6.2"
},
{
"text": "In order to make use of the ME models in a statistical translation system we implemented a rescoring algorithm. This algorithm take as input the standard lexicon model (not using maximum entropy) and the 348 models obtained with the ME training. For an hypothesis sentence \u00a7 and a corresponding alignment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation results",
"sec_num": "6.3"
},
{
"text": "the algorithm modifies the score & \u00a4 \u00a5 \u00a7 ( $ 3 \u00a5 \u00a7 % \u00a7 \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a5 \u00a7",
"sec_num": "3"
},
{
"text": "according to the refined maximum entropy lexicon model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a5 \u00a7",
"sec_num": "3"
},
{
"text": "We carried out some preliminary experiments with the \u00a2 -best lists of hypotheses provided by the translation system in order to make a rescoring of each i-th hypothesis and reorder the list according to the new score computed with the refined lexicon model. Unfortunately, our \u00a2 -best extraction algorithm is sub-optimal, i.e. not the true best \u00a2 translations are extracted. In addition, so far we had to use a limit of only 9 translations per sentence. Therefore, the results of the translation experiments are only preliminary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a5 \u00a7",
"sec_num": "3"
},
{
"text": "For the evaluation of the translation quality we use the automatically computable Word Error Rate (WER). The WER corresponds to the edit distance between the produced translation and one predefined reference translation. A shortcoming of the WER is the fact that it requires a perfect word order. This is particularly a problem for the Verbmobil task, where the word order of the German-English sentence pair can be quite different. As a result, the word order of the automatically generated target sentence can be different from that of the target sentence, but nevertheless acceptable so that the WER measure alone can be misleading. In order to overcome this problem, we introduce as additional measure the position-independent word error rate (PER). This measure compares the words in the two sentences without taking the word order into account. Depending on whether the translated sentence is longer or shorter than the target translation, the remaining words result in either insertion or deletion errors in addition to substitution errors. The PER is guaranteed to be less than or equal to the WER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a5 \u00a7",
"sec_num": "3"
},
{
"text": "We use the top-10 list of hypothesis provided by the translation system described in (Tillmann and Ney, 2000) for rescoring the hypothesis using the ME models and sort them according to the new maximum entropy score. The translation results in terms of error rates are shown in Table 8 . We use Model 4 in order to perform the translation experiments because Model 4 typically gives better translation results than Model 5.",
"cite_spans": [
{
"start": 85,
"end": 109,
"text": "(Tillmann and Ney, 2000)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 278,
"end": 285,
"text": "Table 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "\u00a5 \u00a7",
"sec_num": "3"
},
{
"text": "We see that the translation quality improves slightly with respect to the WER and PER. The translation quality improvements so far are quite small compared to the perplexity measure improvements. We attribute this to the fact that the algorithm for computing the \u00a2 -best lists is suboptimal. Table 9 shows some examples where the translation obtained with the rescoring procedure is better than the best hypothesis provided by the translation system.",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 299,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u00a5 \u00a7",
"sec_num": "3"
},
{
"text": "We have developed refined lexicon models for statistical machine translation by using maximum entropy models. We have been able to obtain a significant better test corpus perplexity and also a slight improvement in translation quality. We believe that by performing a rescoring on translation word graphs we will obtain a more significant improvement in translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "For the future we plan to investigate more refined feature selection methods in order to make the maximum entropy models smaller and better generalizing. In addition, we want to investigate more syntactic and semantic features and to include features that go beyond sentence boundaries. Table 9: Four examples showing the translation obtained with Model 4 and the ME model for a given German source sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 287,
"end": 294,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Statistical machine translation, final report",
"authors": [
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Curin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jahr",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Purdy",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaser Al-Onaizan, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, Dan Melamed, David Purdy, Franz J. Och, Noah A. Smith, and David Yarowsky. 1999. Statistical ma- chine translation, final report, JHU workshop. http://www.clsp.jhu.edu/ws99/pro- jects/mt/final report/mt-final- report.ps.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The candide system for machine translation",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. ARPA Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "157--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. L. Berger, P. F. Brown, S. A. Della Pietra, et al. 1994. The candide system for machine translation. In Proc. , ARPA Workshop on Human Language Technology, pages 157-162.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "Adam",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J Della"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam L. Berger, Stephen A. Della Pietra, and Vin- cent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Compu- tational Linguistics, 22(1):39-72, March.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Pa- rameter estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Incorporating position information into a maximum entropy/minimum divergence translation model",
"authors": [
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of CoNNL-2000 and LLL-2000",
"volume": "",
"issue": "",
"pages": "37--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Foster. 2000. Incorporating position informa- tion into a maximum entropy/minimum divergence translation model. In Proc. of CoNNL-2000 and LLL-2000, pages 37-52, Lisbon, Portugal.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Assessment of smoothing methods and complex stochastic language modeling",
"authors": [
{
"first": "Sven",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Hamacher",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Liermann",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Wessel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1999,
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "I",
"issue": "",
"pages": "1939--1942",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sven Martin, Christoph Hamacher, J\u00f6rg Liermann, Frank Wessel, and Hermann Ney. 1999. Assess- ment of smoothing methods and complex stochas- tic language modeling. In IEEE International Con- ference on Acoustics, Speech and Signal Process- ing, volume I, pages 1939-1942, Budapest, Hun- gary, September.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A DP-based search algorithm for statistical machine translation",
"authors": [
{
"first": "Sonja",
"middle": [],
"last": "Nie\u00dfen",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1998,
"venue": "COLING-ACL '98: 36th Annual Meeting of the Association for Computational Linguistics and 17th Int. Conf. on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "960--967",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonja Nie\u00dfen, Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1998. A DP-based search algorithm for statistical machine translation. In COLING-ACL '98: 36th Annual Meeting of the As- sociation for Computational Linguistics and 17th Int. Conf. on Computational Linguistics, pages 960-967, Montreal, Canada, August.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Giza++: Training of statistical translation models",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och and Hermann Ney. 2000a. Giza++: Training of statistical translation models. http://www-i6.Informatik.RWTH- Aachen.DE/\u02dcoch/software/GIZA++.html.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och and Hermann Ney. 2000b. Improved sta- tistical alignment models. In Proc. of the 38th An- nual Meeting of the Association for Computational Linguistics, pages 440-447, Hongkong, China, Oc- tober.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SRC: Danach wollten wir eigentlich noch Abendessen gehen. M4: We actually concluding dinner together. ME: Afterwards we wanted to go to dinner. SRC: Bei mir oder bei Ihnen? M4: For me or for you? ME: At your or my place? SRC: Das w\u00e4re genau das richtige. M4: That is exactly it spirit. ME: That is the right thing. SRC: Ja, das sieht bei mir eigentlich im Januar ziemlich gut aus. M4: Yes, that does not suit me in January looks pretty good. ME: Yes",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "SRC: Danach wollten wir eigentlich noch Abendessen gehen. M4: We actually concluding dinner together. ME: Afterwards we wanted to go to dinner. SRC: Bei mir oder bei Ihnen? M4: For me or for you? ME: At your or my place? SRC: Das w\u00e4re genau das richtige. M4: That is exactly it spirit. ME: That is the right thing. SRC: Ja, das sieht bei mir eigentlich im Januar ziemlich gut aus. M4: Yes, that does not suit me in January looks pretty good. ME: Yes, that looks pretty good for me actually in January.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An efficient method for determining bilingual word classes",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 1999,
"venue": "EACL '99: Ninth Conf. of the Europ. Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "71--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och. 1999. An efficient method for deter- mining bilingual word classes. In EACL '99: Ninth Conf. of the Europ. Chapter of the Association for Computational Linguistics, pages 71-76, Bergen, Norway, June.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Feature-based language understanding",
"authors": [
{
"first": "K",
"middle": [
"A"
],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "R",
"middle": [
"T"
],
"last": "Ward",
"suffix": ""
}
],
"year": 1996,
"venue": "ESCA, Eurospeech",
"volume": "",
"issue": "",
"pages": "1435--1438",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K.A. Papineni, S. Roukos, and R.T. Ward. 1996. Feature-based language understanding. In ESCA, Eurospeech, pages 1435-1438, Rhodes, Greece.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Maximum likelihood and discriminative training of direct translation models",
"authors": [
{
"first": "K",
"middle": [
"A"
],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "R",
"middle": [
"T"
],
"last": "Ward",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. Int. Conf. on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "189--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K.A. Papineni, S. Roukos, and R.T. Ward. 1998. Maximum likelihood and discriminative training of direct translation models. In Proc. Int. Conf. on Acoustics, Speech, and Signal Processing, pages 189-192.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Compact maximum entropy language models",
"authors": [
{
"first": "Jochen",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jochen Peters and Dietrich Klakow. 1999. Compact maximum entropy language models. In Proceed- ings of the IEEE Workshop on Automatic Speech Recognition and Understanding, Keystone, CO, December.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Inducing features in random fields",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Trans. on Pattern Analysis and Machine Inteligence",
"volume": "19",
"issue": "4",
"pages": "380--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. 1997. Inducing features in random fields. IEEE Trans. on Pattern Analysis and Machine In- teligence, 19(4):380-393, July.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Maximum entropy modelling toolkit",
"authors": [
{
"first": "Eric",
"middle": [
"S"
],
"last": "Ristad",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric S. Ristad. 1997. Maximum entropy modelling toolkit. Technical report, Princeton University.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A maximum entropy approach to adaptive statistical language modeling",
"authors": [
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1996,
"venue": "Computer, Speech and Language",
"volume": "10",
"issue": "",
"pages": "187--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Rosenfeld. 1996. A maximum entropy approach to adaptive statistical language modeling. Computer, Speech and Language, 10:187-228.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Word re-ordering and dp-based search in statistical machine translation",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "8th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "850--856",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Tillmann and Hermann Ney. 2000. Word re-ordering and dp-based search in statistical ma- chine translation. In 8th International Confer- ence on Computational Linguistics (CoLing 2000), pages 850-856, Saarbr\u00fccken, Germany, July.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A DP-based search using monotone alignments in statistical translation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zubiaga",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. 35th Annual Conf. of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "289--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Tillmann, S. Vogel, H. Ney, and A. Zubiaga. 1997. A DP-based search using monotone alignments in statistical translation. In Proc. 35th Annual Conf. of the Association for Computational Linguistics, pages 289-296, Madrid, Spain, July.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Decoding algorithm in statistical translation",
"authors": [
{
"first": "Ye-Yi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. 35th Annual Conf. of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "366--372",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye-Yi Wang and Alex Waibel. 1997. Decoding algo- rithm in statistical translation. In Proc. 35th Annual Conf. of the Association for Computational Linguis- tics, pages 366-372, Madrid, Spain, July.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Simple context information: information on the words surrounding the word pair; Syntactic information: part-of-speech information, syntactic constituent, sentence mood; Semantic information: disambiguation information (e.g.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "label-2: is one word of the aligned word pair $ \u00a4 \"",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>f 1 J</td><td/><td/></tr><tr><td/><td/><td>Pr(f 1 J | e 1 I )</td><td/></tr><tr><td>maximize</td><td>Pr( e 1 I )</td><td>Pr(f 1 J | e 1 I )</td><td>Alignment Model</td></tr><tr><td/><td>e 1 I</td><td>Pr( e 1 I )</td><td/></tr><tr><td colspan=\"3\">Transformation</td><td/></tr></table>",
"text": ""
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>s represents a specific target word and</td><td>t repre-</td></tr></table>",
"text": "Meaning of different feature categories where"
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Category</td><td>Feature</td><td/></tr><tr><td>1</td><td>(0,was,)</td><td>f 1.20787</td></tr><tr><td>1</td><td>(0,das,)</td><td>1.19333</td></tr><tr><td>5</td><td colspan=\"2\">(3,F35,E15) 1.17612</td></tr><tr><td>4</td><td colspan=\"2\">(1,F35,E15) 1.15916</td></tr><tr><td>3</td><td>(3,das,is)</td><td>1.12869</td></tr><tr><td>2</td><td>(1,das,is)</td><td>1.12596</td></tr><tr><td>1</td><td>(0,die,)</td><td>1.12596</td></tr><tr><td>5</td><td/><td/></tr></table>",
"text": "The 10 most important features and their respective category and f values for the English word \"which\"."
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\"># features used</td></tr><tr><td/><td colspan=\"2\">English English+German</td></tr><tr><td colspan=\"2\">0 846121</td><td>1581529</td></tr><tr><td colspan=\"2\">2 240053</td><td>500285</td></tr><tr><td colspan=\"2\">4 153225</td><td>330077</td></tr><tr><td>8</td><td>96983</td><td>210795</td></tr><tr><td>16</td><td>61329</td><td>131323</td></tr><tr><td>32</td><td>40441</td><td>80769</td></tr><tr><td>64</td><td>28147</td><td>49509</td></tr><tr><td>128</td><td>21469</td><td>31805</td></tr><tr><td>256</td><td>18511</td><td>22947</td></tr><tr><td>512</td><td>17193</td><td>19027</td></tr></table>",
"text": "Number of features used according to different cut-off thresholds. The second column shows the number of features used when only the English context is considered. The third column corresponds to the English, German, and word-class contexts.",
},
"TABREF5": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"2\">German English</td></tr><tr><td colspan=\"2\">Train Sentences</td><td>58 332</td></tr><tr><td/><td>Words</td><td colspan=\"2\">519 523 549 921</td></tr><tr><td/><td>Vocabulary</td><td>7 940</td><td>4 673</td></tr><tr><td>Test</td><td>Sentences</td><td>147</td></tr><tr><td/><td>Words</td><td>1 968</td><td>2 173</td></tr><tr><td/><td>PP (trigr. LM)</td><td>(40.3)</td><td>28.8</td></tr></table>",
"text": "Corpus characteristics for the translation task."
},
"TABREF6": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"2\">German English</td></tr><tr><td colspan=\"2\">Train Sentences</td><td colspan=\"2\">50 000</td></tr><tr><td/><td>Words</td><td colspan=\"2\">454 619 482 344</td></tr><tr><td/><td>Vocabulary</td><td>7 456</td><td>4 420</td></tr><tr><td>Test</td><td>Sentences</td><td>8073</td></tr><tr><td/><td>Words</td><td>64 875</td><td>65 547</td></tr><tr><td/><td>Vocabulary</td><td>2 579</td><td>1 666</td></tr></table>",
"text": "Corpus characteristics for the perplexity experiments."
},
"TABREF7": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"5\">: Training and Test perplexities us-</td></tr><tr><td colspan=\"6\">ing different contextual information and different</td></tr><tr><td colspan=\"2\">thresholds</td><td colspan=\"4\">. The reference perplexities obtained</td></tr><tr><td colspan=\"6\">with the basic translation model 5 are TrainPP =</td></tr><tr><td colspan=\"4\">10.38 and TestPP = 13.22.</td><td/></tr><tr><td/><td/><td colspan=\"2\">English</td><td colspan=\"2\">English+German</td></tr><tr><td/><td colspan=\"5\">TrainPP TestPP TrainPP TestPP</td></tr><tr><td>0</td><td/><td>5.03</td><td>11.39</td><td>4.60</td><td>9.28</td></tr><tr><td>2</td><td/><td>6.59</td><td>10.37</td><td>5.70</td><td>8.94</td></tr><tr><td>4</td><td/><td>7.09</td><td>10.28</td><td>6.17</td><td>8.92</td></tr><tr><td>8</td><td/><td>7.50</td><td>10.39</td><td>6.63</td><td>9.03</td></tr><tr><td>16</td><td/><td>7.95</td><td>10.64</td><td>7.07</td><td>9.30</td></tr><tr><td>32</td><td/><td>8.38</td><td>11.04</td><td>7.55</td><td>9.73</td></tr><tr><td>64</td><td/><td>9.68</td><td>11.56</td><td>8.05</td><td>10.26</td></tr><tr><td>128</td><td/><td>9.31</td><td>12.09</td><td>8.61</td><td>10.94</td></tr><tr><td>256</td><td/><td>9.70</td><td>12.62</td><td>9.20</td><td>11.80</td></tr><tr><td>512</td><td colspan=\"2\">10.07</td><td>13.12</td><td>9.69</td><td>12.45</td></tr></table>",
"text": "Training and test perplexities using different contextual information and different thresholds. The reference perplexities obtained with the basic translation Model 5 are TrainPP = 10.38 and TestPP = 13.22."
},
"TABREF8": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>English</td><td colspan=\"2\">English+German</td></tr><tr><td colspan=\"2\">WER PER WER</td><td>PER</td></tr><tr><td colspan=\"2\">0 54.57 42.98 54.02</td><td>42.48</td></tr><tr><td colspan=\"2\">2 54.16 42.43 54.07</td><td>42.71</td></tr><tr><td colspan=\"2\">4 54.53 42.71 54.11</td><td>42.75</td></tr><tr><td colspan=\"2\">8 54.76 43.21 54.39</td><td>43.07</td></tr><tr><td colspan=\"2\">16 54.76 43.53 54.02</td><td>42.75</td></tr><tr><td colspan=\"2\">32 54.80 43.12 54.53</td><td>42.94</td></tr><tr><td colspan=\"2\">64 54.21 42.89 54.53</td><td>42.89</td></tr><tr><td colspan=\"2\">128 54.57 42.98 54.67</td><td>43.12</td></tr><tr><td colspan=\"2\">256 54.99 43.12 54.57</td><td>42.89</td></tr><tr><td colspan=\"2\">512 55.08 43.30 54.85</td><td>43.21</td></tr></table>",
"text": "Preliminary translation results on the Verbmobil Test-147 set for different contextual information and different thresholds, using the top-10 translations. The baseline translation results for Model 4 are WER=54.80 and PER=43.07."
}
}
}
}