{
"paper_id": "P07-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:51:04.009266Z"
},
"title": "Statistical Machine Translation through Global Lexical Selection and Sentence Reconstruction",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Labs - Research",
"location": {
"addrLine": "180 Park Ave, Florham Park",
"postCode": "07932",
"region": "NJ"
}
},
"email": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Haffner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Labs - Research",
"location": {
"addrLine": "180 Park Ave, Florham Park",
"postCode": "07932",
"region": "NJ"
}
},
"email": "haffner@research.att.com"
},
{
"first": "Stephan",
"middle": [],
"last": "Kanthak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Labs - Research",
"location": {
"addrLine": "180 Park Ave, Florham Park",
"postCode": "07932",
"region": "NJ"
}
},
"email": "skanthak@research.att.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Machine translation of a source language sentence involves selecting appropriate target language words and ordering the selected words to form a well-formed target language sentence. Most of the previous work on statistical machine translation relies on (local) associations of target words/phrases with source words/phrases for lexical selection. In contrast, in this paper, we present a novel approach to lexical selection where the target words are associated with the entire source sentence (global) without the need to compute local associations. Further, we present a technique for reconstructing the target language sentence from the selected words. We compare the results of this approach against those obtained from a finite-state based statistical machine translation system which relies on local lexical associations.",
"pdf_parse": {
"paper_id": "P07-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "Machine translation of a source language sentence involves selecting appropriate target language words and ordering the selected words to form a well-formed target language sentence. Most of the previous work on statistical machine translation relies on (local) associations of target words/phrases with source words/phrases for lexical selection. In contrast, in this paper, we present a novel approach to lexical selection where the target words are associated with the entire source sentence (global) without the need to compute local associations. Further, we present a technique for reconstructing the target language sentence from the selected words. We compare the results of this approach against those obtained from a finite-state based statistical machine translation system which relies on local lexical associations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Machine translation can be viewed as consisting of two subproblems: (a) lexical selection, where appropriate target language lexical items are chosen for each source language lexical item and (b) lexical reordering, where the chosen target language lexical items are rearranged to produce a meaningful target language string. Most of the previous work on statistical machine translation, as exemplified in (Brown et al., 1993), employs a word-alignment algorithm (such as GIZA++ (Och and Ney, 2003)) that provides local associations between source and target words. The source-to-target word alignments are sometimes augmented with target-to-source word alignments in order to improve precision. Further, the word-level alignments are extended to phrase-level alignments in order to increase the extent of local associations. The phrasal associations compile some amount of (local) lexical reordering of the target words, namely those permitted by the size of the phrase. Most of the state-of-the-art machine translation systems use phrase-level associations in conjunction with a target language model to produce sentences. There is relatively little emphasis on (global) lexical reordering other than the local reorderings permitted within the phrasal alignments. A few exceptions are the hierarchical (possibly syntax-based) transduction models (Wu, 1997; Alshawi et al., 1998; Yamada and Knight, 2001; Chiang, 2005) and the string transduction models (Kanthak et al., 2005).",
"cite_spans": [
{
"start": 406,
"end": 426,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF5"
},
{
"start": 478,
"end": 497,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF17"
},
{
"start": 1340,
"end": 1350,
"text": "(Wu, 1997;",
"ref_id": "BIBREF22"
},
{
"start": 1351,
"end": 1372,
"text": "Alshawi et al., 1998;",
"ref_id": "BIBREF0"
},
{
"start": 1373,
"end": 1397,
"text": "Yamada and Knight, 2001;",
"ref_id": "BIBREF23"
},
{
"start": 1398,
"end": 1411,
"text": "Chiang, 2005)",
"ref_id": "BIBREF6"
},
{
"start": 1447,
"end": 1469,
"text": "(Kanthak et al., 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present an alternate approach to lexical selection and lexical reordering. For lexical selection, in contrast to the local approaches of associating target to source words, we associate target words with the entire source sentence. The intuition is that there may be lexico-syntactic features of the source sentence (not necessarily a single source word) that might trigger the presence of a target word in the target sentence. Furthermore, it might be difficult to exactly associate a target word with a source word in many situations: (a) when the translations are not exact but paraphrases, and (b) when the target language does not have one lexical item to express the same concept that is expressed by a source word. Extending word to phrase alignments attempts to address some of these situations while alleviating the noise in word-level alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a consequence of this global lexical selection approach, we no longer have a tight association between source and target language words. The result of lexical selection is simply a bag of words in the target language and the sentence has to be reconstructed using this bag of words. The words in the bag, however, might be enhanced with rich syntactic information that could aid in reconstructing the target sentence. (Figure 2: Decoding phases for our system.) This approach to lexical selection and sentence reconstruction has the potential to circumvent limitations of word-alignment based methods for translation between languages with significantly different word order (e.g. English-Japanese).",
"cite_spans": [],
"ref_spans": [
{
"start": 460,
"end": 468,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present the details of training a global lexical selection model using classification techniques and sentence reconstruction models using permutation automata. We also present a stochastic finite-state transducer (SFST) as an example of an approach that relies on local associations and use it to compare and contrast our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we describe each of the components of our SFST system shown in Figure 1 . The SFST approach described here is similar to the one described in (Bangalore and Riccardi, 2000) which has subsequently been adopted by (Banchs et al., 2005 ).",
"cite_spans": [
{
"start": 159,
"end": 189,
"text": "(Bangalore and Riccardi, 2000)",
"ref_id": "BIBREF3"
},
{
"start": 229,
"end": 249,
"text": "(Banchs et al., 2005",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 80,
"end": 88,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "SFST Training and Decoding",
"sec_num": "2"
},
{
"text": "The first stage in the process of training a lexical selection model is obtaining an alignment function (f) that, given a pair of source (s_1 s_2 ... s_n) and target (t_1 t_2 ... t_m) language sentences, maps source language word subsequences into target language word subsequences, as shown below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2200i\u2203j(f(s_i) = t_j \u2228 f(s_i) = \u03b5)",
"eq_num": "(1)"
}
],
"section": "Word Alignment",
"sec_num": "2.1"
},
{
"text": "For the work reported in this paper, we have used the GIZA++ tool (Och and Ney, 2003) which implements a string-alignment algorithm. The GIZA++ alignment, however, is asymmetric in that the word mappings differ depending on the direction of alignment: source-to-target or target-to-source. Hence, in addition to the function f shown in Equation 1, we train another alignment function g:",
"cite_spans": [
{
"start": 66,
"end": 85,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2200j\u2203i(g(t_j) = s_i \u2228 g(t_j) = \u03b5)",
"eq_num": "(2)"
}
],
"section": "Word Alignment",
"sec_num": "2.1"
},
{
"text": "English: I need to make a collect call Japanese: \u00cfH \u00c3 \u00c2k $*d \u00bb^%cW2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment",
"sec_num": "2.1"
},
{
"text": "Alignment: 1 5 0 3 0 2 4 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment",
"sec_num": "2.1"
},
{
"text": "From the alignment information (see Figure 3 ), we construct a bilanguage representation of each sentence in the bilingual corpus. The bilanguage string consists of source-target symbol pair sequences as shown in Equation 3. Note that the tokens of a bilanguage could be either ordered according to the word order of the source language or ordered according to the word order of the target language. Figure 4 shows an example alignment and the source-word-ordered bilanguage strings corresponding to the alignment shown in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Figure 3",
"ref_id": "FIGREF0"
},
{
"start": 400,
"end": 408,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 523,
"end": 531,
"text": "Figure 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Bilanguage Representation",
"sec_num": "2.2"
},
{
"text": "B_f = b^f_1 b^f_2 ... b^f_m (3); b^f_i = (s_{i-1};s_i, f(s_i)) if f(s_{i-1}) = \u03b5; (s_i, f(s_{i-1});f(s_i)) if s_{i-1} = \u03b5; (s_i, f(s_i)) otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilanguage Representation",
"sec_num": "2.2"
},
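To make the bilanguage construction concrete, here is a minimal Python sketch. It assumes the alignment format of Figure 3 (one target index per source word, 0 meaning unaligned), follows the Equation 3 convention of merging \u03b5-aligned source words into the next token, and the romanized target words are hypothetical stand-ins since the Japanese example is garbled in this extraction.

```python
# Sketch of source-word-ordered bilanguage construction (Equation 3).
# Assumes an alignment given as one target index per source word,
# with 0 meaning the source word is unaligned (maps to epsilon).
EPS = "eps"

def bilanguage(source, target, align):
    """Pair source words with aligned target words; source words aligned
    to epsilon are merged into the following token, as in Equation 3."""
    tokens, pending = [], []
    for word, j in zip(source, align):
        if j == 0:                      # f(s_i) = epsilon: defer the word
            pending.append(word)
        else:
            src_side = "_".join(pending + [word])
            tokens.append((src_side, target[j - 1]))
            pending = []
    if pending:                         # trailing unaligned source words
        tokens.append(("_".join(pending), EPS))
    return tokens

# Hypothetical romanized stand-ins for the target words of Figure 3.
src = "I need to make a collect call".split()
tgt = ["watashi", "wa", "korekuto", "kooru", "o"]
pairs = bilanguage(src, tgt, [1, 5, 0, 3, 0, 2, 4])
```

Here "to" and "a" are unaligned, so they fuse with the following source word, yielding tokens such as ("to_make", "korekuto").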
{
"text": "We also construct a bilanguage using the alignment function g similar to the bilanguage using the alignment function f as shown in Equation 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilanguage Representation",
"sec_num": "2.2"
},
{
"text": "Thus, the bilanguage corpus obtained by combining the two alignment functions is B = B f \u222a B g .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilanguage Representation",
"sec_num": "2.2"
},
{
"text": "While word-to-word translation only approximates the lexical selection process, phrase-to-phrase mapping can greatly improve the translation of collocations, recurrent strings, etc. Using phrases also allows words within the phrase to be reordered into the correct target language order, thus partially solving the reordering problem. Additionally, SFSTs can take advantage of phrasal correlations to improve the computation of the probability P(W_S, W_T).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Phrases and Local Reordering",
"sec_num": "2.3"
},
{
"text": "The bilanguage representation could result in some source language phrases being mapped to \u03b5 (the empty target phrase). In addition to these phrases, we compute subsequences of a given length k on the bilanguage string and, for each subsequence, we reorder the target words of the subsequence to be in the same order as they are in the target language sentence corresponding to that bilanguage string. This results in a retokenization of the bilanguage into tokens of source-target phrase pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Phrases and Local Reordering",
"sec_num": "2.3"
},
{
"text": "From the bilanguage corpus B, we train an n-gram language model using standard tools (Goffin et al., 2005). The resulting language model is represented as a weighted finite-state automaton (S \u00d7 T \u2192 [0, 1]). The symbols on the arcs of this automaton (s_i t_i) are interpreted as having the source and target symbols (s_i:t_i), making it into a weighted finite-state transducer (S \u2192 T \u00d7 [0, 1]) that provides a weighted string-to-string transduction from S into T:",
"cite_spans": [
{
"start": 85,
"end": 106,
"text": "(Goffin et al., 2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SFST Model",
"sec_num": "2.4"
},
{
"text": "T^* = argmax_T P(s_i, t_i | s_{i-1}, t_{i-1} ... s_{i-n-1}, t_{i-n-1})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SFST Model",
"sec_num": "2.4"
},
{
"text": "Since we represent the translation model as a weighted finite-state transducer (TransFST), the decoding process of translating a new source input (sentence or weighted lattice (I_s)) amounts to a transducer composition (\u2022) and selection of the best probability path (BestPath) resulting from the composition and projecting the target sequence (\u03c0_1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "2.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T^* = \u03c0_1(BestPath(I_s \u2022 TransFST))",
"eq_num": "(4)"
}
],
"section": "Decoding",
"sec_num": "2.5"
},
{
"text": "However, we have noticed that on the development corpus, the decoded target sentence is typically shorter than the intended target sentence. This mismatch may be due to the incorrect estimation of the back-off events and their probabilities in the training phase of the transducer. In order to alleviate this mismatch, we introduce a negative word insertion penalty model as a mechanism to produce more words in the target sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "2.5"
},
{
"text": "The word insertion model is also encoded as a weighted finite-state automaton and is included in the decoding sequence as shown in Equation 5. The word insertion FST has one state and |T| arcs, each weighted with a \u03bb weight representing the word insertion cost. On composition as shown in Equation 5, the word insertion model penalizes or rewards paths which have more words depending on whether \u03bb is a positive or a negative value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Insertion Model",
"sec_num": "2.6"
},
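The effect of the word insertion penalty in Equation 5 can be sketched with the composition replaced by simple rescoring of candidate paths in the cost (negative log-probability) semiring; the candidate lists and costs below are hypothetical, not taken from the paper.

```python
# Sketch of the word insertion penalty (WIP) of Equation 5 in the cost
# (negative log-probability) semiring: each target word on a path adds a
# constant lambda, so a negative lambda rewards longer translations.
def best_with_wip(candidates, lam):
    """candidates: list of (target_words, cost). Returns the lowest-cost
    candidate after adding lam per word, mimicking composition with WIP."""
    return min(candidates, key=lambda c: c[1] + lam * len(c[0]))

cands = [
    (["collect", "call"], 4.0),
    (["i", "need", "a", "collect", "call"], 5.0),
]
words, _ = best_with_wip(cands, lam=-0.5)  # negative lambda: favor length
```

With lam=0.0 the shorter, cheaper path wins; with lam=-0.5 the longer candidate is rewarded enough to overtake it, which is exactly the length-adjustment role the text describes.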
{
"text": "T^* = \u03c0_1(BestPath(I_s \u2022 TransFST \u2022 WIP)) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Insertion Model",
"sec_num": "2.6"
},
{
"text": "Local reordering as described in Section 2.3 is restricted by the window size k and accounts only for different word order within phrases. As permuting non-linear automata is too complex, we apply global reordering by permuting the words of the best translation and weighting the result by an n-gram language model (see also Figure 2 ):",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 333,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Global Reordering",
"sec_num": "2.7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T^* = BestPath(perm(T) \u2022 LM_t)",
"eq_num": "(6)"
}
],
"section": "Global Reordering",
"sec_num": "2.7"
},
{
"text": "Even the size of the minimal permutation automaton of a linear automaton grows exponentially with the length of the input sequence. While decoding by composition simply resembles the principle of memoization (i.e. here: all state hypotheses of a whole sentence are kept in memory), it is necessary to either use heuristic forward pruning or constrain permutations to be within a local window of adjustable size (also see (Kanthak et al., 2005) ). We have chosen to constrain permutations here. Figure 5 shows the resulting minimal permutation automaton for an input sequence of 4 words and a window size of 2.",
"cite_spans": [
{
"start": 421,
"end": 443,
"text": "(Kanthak et al., 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 494,
"end": 502,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Global Reordering",
"sec_num": "2.7"
},
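The window-constrained permutation scheme can be sketched as follows; this is an illustrative enumeration of the strings such a permutation automaton accepts, not the automaton construction itself, and the function and constant names are ours.

```python
# Sketch of window-constrained permutation (Section 2.7): at each output
# position one may emit any not-yet-emitted word whose index lies within
# `window` positions of the leftmost unemitted word.
def window_permutations(words, window):
    n, results = len(words), []

    def emit(done, out):
        if len(out) == n:
            results.append(out)
            return
        lo = min(i for i in range(n) if i not in done)
        for i in range(lo, min(lo + window, n)):
            if i not in done:
                emit(done | {i}, out + [words[i]])

    emit(frozenset(), [])
    return results

perms = window_permutations(["a", "b", "c", "d"], window=2)
```

For 4 words and a window of 2 this yields 5 reorderings (identity plus swaps of adjacent disjoint pairs), while a word can never jump outside its window, which is why the automaton stays small compared to full permutation.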
{
"text": "To decode ASR output in combination with global reordering, we use n-best lists, extracting them from lattices first if necessary. Each entry of the n-best list is decoded separately and the best target sentence is picked from the union of the n intermediate results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Reordering",
"sec_num": "2.7"
},
{
"text": "The approach from the previous section is a generative model for statistical machine translation relying on local associations between source and target sentences. Now, we present our approach for a global lexical selection model based on discriminatively trained classification techniques. Discriminant modeling techniques have become the dominant method for resolving ambiguity in speech and other NLP tasks, outperforming generative models. Discriminative training has been used mainly for translation model combination (Och and Ney, 2002) and, with the exception of (Wellington et al., 2006; Tillmann and Zhang, 2006) , has not been used to directly train parameters of a translation model. We expect discriminatively trained global lexical selection models to outperform generatively trained local lexical selection models as well as provide a framework for incorporating rich morpho-syntactic information. Statistical machine translation can be formulated as a search for the best target sequence that maximizes P(T|S), where S is the source sentence and T is the target sentence. Ideally, P(T|S) should be estimated directly to maximize the conditional likelihood on the training data (discriminant model). However, T corresponds to a sequence with an exponentially large combination of possible labels, and traditional classification approaches cannot be used directly. Although Conditional Random Fields (CRF) (Lafferty et al., 2001 ) train an exponential model at the sequence level, in translation tasks such as ours, the computational requirements of training such models are prohibitively expensive.",
"cite_spans": [
{
"start": 523,
"end": 542,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF16"
},
{
"start": 569,
"end": 594,
"text": "(Wellington et al., 2006;",
"ref_id": "BIBREF21"
},
{
"start": 595,
"end": 620,
"text": "Tillmann and Zhang, 2006)",
"ref_id": "BIBREF19"
},
{
"start": 1421,
"end": 1443,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminant Models for Lexical Selection",
"sec_num": "3"
},
{
"text": "We investigate two approaches to approximating the string level global classification problem, using different independence assumptions. A comparison of the two approaches is summarized in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 196,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Discriminant Models for Lexical Selection",
"sec_num": "3"
},
{
"text": "In the first approach, we formulate a sequential local classification problem as shown in Equation 7. This approach is similar to the SFST approach in that it relies on local associations between the source and target words (phrases). We can use a conditional model (instead of a joint model as before) and the parameters are determined using discriminant training which allows for richer conditioning context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Lexical Choice Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(T|S) = \u220f_{i=1}^{N} P(t_i|\u03a6(S, i))",
"eq_num": "(7)"
}
],
"section": "Sequential Lexical Choice Model",
"sec_num": "3.1"
},
{
"text": "where \u03a6(S, i) is a set of features extracted from the source string S (shortened as \u03a6 in the rest of the section).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Lexical Choice Model",
"sec_num": "3.1"
},
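Under the independence assumption of Equation 7, scoring a target sequence reduces to a product of local classifier outputs. A toy sketch, where `local_model` is a hypothetical stand-in for a trained classifier over the features \u03a6(S, i):

```python
import math

# Sketch of Equation 7: P(T|S) as a product of independent local
# classifier outputs P(t_i | Phi(S, i)), accumulated in log space.
def sequence_log_prob(target, source, local_model):
    return sum(math.log(local_model(source, i)[t])
               for i, t in enumerate(target))

# Toy stand-in classifier: uniform over two labels at every position.
def local_model(source, i):
    return {"x": 0.5, "y": 0.5}

lp = sequence_log_prob(["x", "y"], ["s1", "s2"], local_model)
```

Working in log space avoids underflow for long sentences; the real model replaces the toy classifier with a Maxent model conditioned on source n-gram features.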
{
"text": "The sequential lexical choice model described in the previous section treats the selection of a lexical choice for a source word in the local lexical context as a classification task. The data for training such models is derived from word alignments obtained by, e.g., GIZA++. The decoded target lexical items have to be further reordered, but for closely related languages the reordering could be incorporated into correctly ordered target phrases as discussed previously. For pairs of languages with radically different word order (e.g. English-Japanese), there needs to be a global reordering of words similar to the case in the SFST-based translation system. Also, for such differing language pairs, alignment algorithms such as GIZA++ perform poorly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Lexical Choice Model",
"sec_num": "3.2"
},
{
"text": "These observations prompted us to formulate the lexical choice problem without the need for word alignment information. We require a sentence aligned corpus as before, but we treat the target sentence as a bag-of-words or BOW assigned to the source sentence. The goal is, given a source sentence, to estimate the probability that we find a given word in the target sentence. This is why, instead of producing a target sentence, what we initially obtain is a target bag of words. Each word in the target vocabulary is detected independently, so we have here a very simple use of binary static classifiers. Training sentence pairs are considered as positive examples when the word appears in the target, and negative otherwise. Thus, the number of training examples equals the number of sentence pairs, in contrast to the sequential lexical choice model which has one training example for each token in the bilingual training corpus. The classifier is trained with n-gram features (BOgrams(S)) from the source sentence. During decoding, the words with conditional probability greater than a threshold \u03b8 are considered as the result of lexical choice decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Lexical Choice Model",
"sec_num": "3.2"
},
{
"text": "BOW^*_T = {t | P(t|BOgrams(S)) > \u03b8} (8). For reconstructing the proper order of words in the target sentence we consider all permutations of words in BOW^*_T and weight them by a target language model. This step is similar to the one described in Section 2.7. The BOW approach can also be modified to allow for length adjustments of target sentences, if we add optional deletions in the final step of permutation decoding. The parameter \u03b8 and an additional word deletion penalty can then be used to adjust the length of translated outputs. In Section 6, we discuss several issues regarding this model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Lexical Choice Model",
"sec_num": "3.2"
},
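The threshold decoding of Equation 8 can be sketched directly; the per-word probabilities below are hypothetical stand-ins for trained Maxent outputs P(t | BOgrams(S)).

```python
# Sketch of Equation 8: global lexical selection as independent binary
# detections, keeping every target word whose probability exceeds theta.
def select_bag_of_words(word_probs, theta):
    return {t for t, p in word_probs.items() if p > theta}

probs = {"i": 0.92, "need": 0.81, "a": 0.55,
         "collect": 0.74, "call": 0.88, "the": 0.21}
bag = select_bag_of_words(probs, theta=0.5)
```

Raising theta shrinks the bag and hence the translated sentence, which is the length-adjustment knob the text describes.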
{
"text": "This section addresses the choice of the classification technique, and argues that one technique that yields excellent performance while scaling well is binary maximum entropy (Maxent) with L1-regularization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing the classifier",
"sec_num": "4"
},
{
"text": "The Sequential and BOW models represent two different classification problems. In the sequential model, we have a multiclass problem where each class t_i is exclusive; therefore, all the classifier outputs P(t_i|\u03a6) must be jointly optimized such that \u2211_i P(t_i|\u03a6) = 1. This can be problematic: with one classifier per word in the vocabulary, even allocating the memory during training may exceed the memory capacity of current computers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiclass vs. Binary Classification",
"sec_num": "4.1"
},
{
"text": "In the BOW model, each class can be detected independently, and two different classes can be detected at the same time. This is known as the 1-vs-other scheme. The key advantage over the multiclass scheme is that not all classifiers have to reside in memory at the same time during training, which allows for parallelization. Fortunately for the sequential model, we can decompose a multiclass classification problem into separate 1-vs-other problems. In theory, one has to make an additional independence assumption and the problem statement becomes different. Each output label t is projected into a bit string with components b_j(t), where the probability of each component is estimated independently:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiclass vs. Binary Classification",
"sec_num": "4.1"
},
{
"text": "P(b_j(t)|\u03a6) = 1 \u2212 P(b\u0304_j(t)|\u03a6) = 1 / (1 + e^{\u2212(\u03bb_j \u2212 \u03bb\u0304_j) \u00b7 \u03a6})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiclass vs. Binary Classification",
"sec_num": "4.1"
},
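The 1-vs-other component probability above is an ordinary logistic model. A minimal sketch, with hypothetical weight-vector names (`lam_pos`, `lam_neg` standing in for \u03bb_j and its complement-class counterpart):

```python
import math

# Sketch of the 1-vs-other binary posterior: a logistic function of the
# difference of two linear scores over the feature vector phi.
def binary_posterior(lam_pos, lam_neg, phi):
    score = sum((p - n) * f for p, n, f in zip(lam_pos, lam_neg, phi))
    return 1.0 / (1.0 + math.exp(-score))

p = binary_posterior([0.3, 0.0], [0.1, 0.0], [1.0, 1.0])
```

When the two weight vectors are equal the posterior is exactly 0.5; a larger positive-class score pushes it above 0.5, so each bit can be thresholded independently.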
{
"text": "In practice, despite the approximation, the 1-vs-other scheme has been shown to perform as well as the multiclass scheme (Rifkin and Klautau, 2004).",
"cite_spans": [
{
"start": 120,
"end": 146,
"text": "(Rifkin and Klautau, 2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multiclass vs. Binary Classification",
"sec_num": "4.1"
},
{
"text": "As a consequence, we use the same type of binary classifier for the sequential and the BOW models. The excellent results recently obtained with the SEARN algorithm (Daume et al., 2007) also suggest that binary classifiers, when properly trained and combined, seem to be capable of matching more complex structured output approaches.",
"cite_spans": [
{
"start": 164,
"end": 184,
"text": "(Daume et al., 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multiclass vs. Binary Classification",
"sec_num": "4.1"
},
{
"text": "We separate the most popular classification techniques into two broad categories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Geometric vs. Probabilistic Interpretation",
"sec_num": "4.2"
},
{
"text": "\u2022 Geometric approaches maximize the width of a separation margin between the classes. The most popular method is the Support Vector Machine (SVM) (Vapnik, 1998) .",
"cite_spans": [
{
"start": 146,
"end": 160,
"text": "(Vapnik, 1998)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Geometric vs. Probabilistic Interpretation",
"sec_num": "4.2"
},
{
"text": "\u2022 Probabilistic approaches maximize the conditional likelihood of the output class given the input features. This logistic regression is also called Maxent as it finds the distribution with maximum entropy that properly estimates the average of each feature over the training data (Berger et al., 1996) .",
"cite_spans": [
{
"start": 281,
"end": 302,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Geometric vs. Probabilistic Interpretation",
"sec_num": "4.2"
},
{
"text": "In previous studies, we found that the best accuracy is achieved with non-linear (or kernel) SVMs, at the expense of a high test time complexity, which is unacceptable for machine translation. Linear SVMs and regularized Maxent yield similar performance. In theory, Maxent training, which scales linearly with the number of examples, is faster than SVM training, which scales quadratically with the number of examples. In our first experiments with lexical choice models, we observed that Maxent slightly outperformed SVMs. Using a single threshold with SVMs, some classes of words were over-detected. This suggests that, as theory predicts, SVMs do not properly approximate the posterior probability. We therefore chose to use Maxent as the best probability approximator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Geometric vs. Probabilistic Interpretation",
"sec_num": "4.2"
},
{
"text": "Traditionally, Maxent is regularized by imposing a Gaussian prior on each weight: this L2 regularization finds the solution with the smallest possible weights. However, on tasks like machine translation with a very large number of input features, a Laplacian L1 regularization that also attempts to maximize the number of zero weights is highly desirable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "L1 vs. L2 regularization",
"sec_num": "4.3"
},
{
"text": "A new L1-regularized Maxent algorithm was proposed for density estimation (Dudik et al., 2004) and we adapted it to classification. We found this algorithm to converge faster than the current state-of-the-art in Maxent training, which is L2-regularized L-BFGS (Malouf, 2002) 1 . Moreover, the number of trained parameters is considerably smaller.",
"cite_spans": [
{
"start": 75,
"end": 95,
"text": "(Dudik et al., 2004)",
"ref_id": "BIBREF9"
},
{
"start": 260,
"end": 274,
"text": "(Malouf, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "L1 vs. L2 regularization",
"sec_num": "4.3"
},
{
"text": "We have performed experiments on the IWSLT06 Chinese-English training and development sets from 2005 and 2006. The data are traveler task expressions such as seeking directions, expressions in restaurants and travel reservations. Table 2 presents some statistics on the data sets. It must be noted that while the 2005 development set matches the training data closely, the 2006 development set has been collected separately and shows slightly different statistics for average sentence length, vocabulary size and out-of-vocabulary words. Also the 2006 development set contains no punctuation marks in Chinese, but the corresponding English translations have punctuation marks. We also evaluated our models on the Chinese speech recognition output and we report results using 1-best with a word error rate of 25.2%. (Footnote 1: We used the implementation available at http://homepages.inf.ed.ac.uk/s0450736/maxent_toolkit.html)",
"cite_spans": [
{
"start": 96,
"end": 97,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 331,
"end": 338,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data and Experiments",
"sec_num": "5"
},
{
"text": "For the experiments, we tokenized the Chinese sentences into character strings and trained the models discussed in the previous sections. Also, we trained a punctuation prediction model using the Maxent framework on the Chinese character strings in order to insert punctuation marks into the 2006 development data set. The resulting character string with punctuation marks is used as input to the translation decoder. For the 2005 development set, punctuation insertion was not needed since the Chinese sentences already had the true punctuation marks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experiments",
"sec_num": "5"
},
{
"text": "In Table 3 we present the results of the three different translation models: FST, Sequential Maxent, and BOW Maxent. A few interesting observations can be made from these results. First, on the 2005 development set, the sequential Maxent model outperforms the FST model, even though the two models were trained starting from the same GIZA++ alignment. The difference is due to the fact that Maxent models can cope with increased lexical context 2 and that the parameters of the model are discriminatively trained. The more surprising result is that the BOW Maxent model significantly outperforms the sequential Maxent model. The reason is that the sequential Maxent model relies on the word alignment, which, if erroneous, leads to incorrect predictions by the sequential Maxent model. The BOW model does not rely on the word-level alignment and can be interpreted as a discriminatively trained model of dictionary lookup for a target word in the context of a source sentence. As indicated in the data release document, the 2006 development set was collected differently from the 2005 set. Due to this mismatch, the performance of the Maxent models is not very different from that of the FST model, indicating a lack of generalization across genres. However, we believe that the Maxent framework allows for the incorporation of linguistic features that could help generalization across genres. For translation of ASR 1-best, we see a systematic degradation of about 3% in mBLEU score compared to translating the transcription.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data and Experiments",
"sec_num": "5"
},
{
"text": "In order to compensate for the mismatch between the 2005 and 2006 data sets, we computed a 10-fold average mBLEU score: in each fold, 90% of the 2006 development set was added to the training set and the remaining 10% was used for testing. The average mBLEU score across these 10 runs increased to 22.8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Experiments",
"sec_num": "5"
},
{
"text": "In Figure 6 we show the improvement of mBLEU scores as the permutation window size increases. We had to limit the permutation window size to 10 due to memory limitations, even though the curve has not plateaued. We anticipate that pruning techniques will allow us to increase the window size further. Figure 6: Improvement in mBLEU score with the increase in size of the permutation window",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 6",
"ref_id": null
},
{
"start": 301,
"end": 309,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data and Experiments",
"sec_num": "5"
},
{
"text": "In order to test the scalability of the global lexical selection approach, we also performed lexical selection experiments on the United Nations (Arabic-English) corpus and the Hansard (French-English) corpus using the SFST model and the BOW Maxent model. We used 1,000,000 training sentence pairs and tested on 994 test sentences for the UN corpus. For the Hansard corpus we used the same training and test split as in (Zens and Ney, 2004) : 1.4 million training sentence pairs and 5432 test sentences. The vocabulary sizes for the two corpora are mentioned in Table 4 . Also in Table 4 are the results in terms of F-measure between the words in the reference sentence and the decoded sentences. We can see that the BOW model outperforms the SFST model on both corpora significantly. This is due to a systematic 10% relative improvement for open class words, as they benefit from a much wider context. BOW performance on closed class words is higher for the UN corpus but lower for the Hansard corpus. ",
"cite_spans": [
{
"start": 420,
"end": 440,
"text": "(Zens and Ney, 2004)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 562,
"end": 569,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 580,
"end": 587,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "United Nations and Hansard Corpora",
"sec_num": "5.1"
},
{
"text": "The BOW approach is promising as it performs reasonably well despite considerable losses in the transfer of information between source and target language. The first and most obvious loss is about word position. The only information we currently use to restore the target word position is the target language model. Information about the grammatical role of a word in the source sentence is completely lost. The language model might fortuitously recover this information if the sentence with the correct grammatical role for the word happens to be the maximum likelihood sentence in the permutation automaton.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We are currently working toward incorporating syntactic information on the target words so as to be able to recover some of the grammatical role information lost in the classification process. In preliminary experiments, we have associated the target lexical items with supertag information (Bangalore and Joshi, 1999) . Supertags are labels that provide linear ordering constraints as well as grammatical relation information. Although associating supertags to target words increases the class set for the classifier, we have noticed that the degradation in the F-score is on the order of 3% across different corpora. The supertag information can then be exploited in the sentence construction process. The use of supertags in a phrase-based SMT system has been shown to improve results (Hassan et al., 2006) .",
"cite_spans": [
{
"start": 291,
"end": 318,
"text": "(Bangalore and Joshi, 1999)",
"ref_id": "BIBREF2"
},
{
"start": 788,
"end": 809,
"text": "(Hassan et al., 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "A less obvious loss is the number of times a word or concept appears in the target sentence. Function words like \"the\" and \"of\" can appear many times in an English sentence. In the model discussed in this paper, we index each occurrence of the function word with a counter. In order to improve this method, we are currently exploring a technique where the function words serve as attributes (e.g. definiteness, tense, case) on the contentful lexical items, thus enriching the lexical item with morphosyntactic information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "A third issue concerning the BOW model is the problem of synonyms: target words that translate the same source word. Suppose that in the training data, target words t1 and t2 are, with equal probability, translations of the same source word. Then, in the presence of this source word, the probability of detecting the corresponding target word, which we assume is 0.8, will be split equally between t1 and t2 because of discriminative learning, that is, 0.4 each. Because of this synonym problem, the BOW threshold \u03b8 has to be set lower than 0.5, which we observe experimentally. However, if we set the threshold to 0.3, both t1 and t2 will be detected in the target sentence, and we found this to be a major source of undesirable insertions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The BOW approach is different from the parsing based approaches (Melamed, 2004; Zhang and Gildea, 2005; Cowan et al., 2006) where the translation model tightly couples the syntactic and lexical items of the two languages. The decoupling of the two steps in our model has the potential for generating paraphrased sentences not necessarily isomorphic to the structure of the source sentence.",
"cite_spans": [
{
"start": 64,
"end": 79,
"text": "(Melamed, 2004;",
"ref_id": "BIBREF15"
},
{
"start": 80,
"end": 103,
"text": "Zhang and Gildea, 2005;",
"ref_id": "BIBREF25"
},
{
"start": 104,
"end": 123,
"text": "Cowan et al., 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We view machine translation as consisting of lexical selection and lexical reordering steps. These two steps need not necessarily be sequential and could be tightly integrated. We have presented the weighted finite-state transducer model of machine translation where lexical choice and a limited amount of lexical reordering are tightly integrated into a single transduction. We have also presented a novel approach to translation where these two steps are loosely coupled and the parameters of the lexical choice model are discriminatively trained using a maximum entropy model. The lexical reordering model in this approach is achieved using a permutation automaton. We have evaluated these two approaches on the 2005 and 2006 IWSLT development sets and shown that the techniques scale well to the Hansard and UN corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "We use 6 words to the left and right of a source word for sequential Maxent, but only 2 preceding source and target words for the FST approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic acquisition of hierarchical transduction models for machine translation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Douglas",
"suffix": ""
}
],
"year": 1998,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Alshawi, S. Bangalore, and S. Douglas. 1998. Automatic acquisition of hierarchical transduction models for machine translation. In ACL, Montreal, Canada.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Statistical machine translation of euparl data by using bilingual n-grams",
"authors": [
{
"first": "R",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Crego",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gispert",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Lambert",
"suffix": ""
},
{
"first": "J",
"middle": [
"B"
],
"last": "Marino",
"suffix": ""
}
],
"year": 2005,
"venue": "Workshop on Building and Using Parallel Texts. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.E. Banchs, J.M. Crego, A. Gispert, P. Lambert, and J.B. Marino. 2005. Statistical machine translation of euparl data by using bilingual n-grams. In Workshop on Building and Using Parallel Texts. ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Supertagging: An approach to almost parsing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bangalore and A. K. Joshi. 1999. Supertagging: An ap- proach to almost parsing. Computational Linguistics, 25(2).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Stochastic finite-state models for spoken language machine translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Riccardi",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Workshop on Embedded Machine Translation Systems",
"volume": "",
"issue": "",
"pages": "52--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bangalore and G. Riccardi. 2000. Stochastic finite-state models for spoken language machine translation. In Pro- ceedings of the Workshop on Embedded Machine Transla- tion Systems, pages 52-59.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Maximum Entropy Approach to Natural Language Processing",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. L. Berger, S. A. Della Pietra, and V. J. Della Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, 22(1):39-71.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Chiang. 2005. A hierarchical phrase-based model for statis- tical machine translation. In Proceedings of the ACL Con- ference, Ann Arbor, MI.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A discriminative model for tree-to-tree translation",
"authors": [
{
"first": "B",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Kucerova",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Cowan, I. Kucerova, and M. Collins. 2006. A discrimi- native model for tree-to-tree translation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Search-based structure prediction",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daume",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Daume, J. Langford, and D. Marcu. 2007. Search-based structure prediction. submitted to Machine Learning Jour- nal.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Performance Guarantees for Regularized Maximum Entropy Density Estimation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dudik",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLT'04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Dudik, S. Phillips, and R.E. Schapire. 2004. Perfor- mance Guarantees for Regularized Maximum Entropy Den- sity Estimation. In Proceedings of COLT'04, Banff, Canada. Springer Verlag.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The AT&T WATSON Speech Recognizer",
"authors": [
{
"first": "V",
"middle": [],
"last": "Goffin",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bocchieri",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ljolje",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Parthasarathy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rahim",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Riccardi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Saraclar",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Goffin, C. Allauzen, E. Bocchieri, D. Hakkani-Tur, A. Ljolje, S. Parthasarathy, M. Rahim, G. Riccardi, and M. Saraclar. 2005. The AT&T WATSON Speech Recognizer. In Pro- ceedings of ICASSP, Philadelphia, PA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Syntactic phrase-based statistical machine translation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hearne",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sima'an",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of IEEE/ACL first International Workshop on Spoken Language Technology (SLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Hassan, M. Hearne, K. Sima'an, and A. Way. 2006. Syntac- tic phrase-based statistical machine translation. In Proceed- ings of IEEE/ACL first International Workshop on Spoken Language Technology (SLT), Aruba, December.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Novel reordering approaches in phrase-based statistical machine translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kanthak",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Building and Using Parallel Texts",
"volume": "",
"issue": "",
"pages": "167--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kanthak, D. Vilar, E. Matusov, R. Zens, and H. Ney. 2005. Novel reordering approaches in phrase-based statistical ma- chine translation. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, pages 167-174, Ann Ar- bor, Michigan.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and la- beling sequence data. In Proceedings of ICML, San Fran- cisco, CA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A comparison of algorithms for maximum entropy parameter estimation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Malouf",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of CoNLL-2002",
"volume": "",
"issue": "",
"pages": "49--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of CoNLL- 2002, pages 49-55. Taipei, Taiwan.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Statistical machine translation by parsing",
"authors": [
{
"first": "I",
"middle": [
"D"
],
"last": "Melamed",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. D. Melamed. 2004. Statistical machine translation by pars- ing. In Proceedings of ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och and H. Ney. 2002. Discriminative training and max- imum entropy models for statistical machine translation. In Proceedings of ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F.J. Och and H. Ney. 2003. A systematic comparison of vari- ous statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "In defense of onevs-all classification",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Rifkin",
"suffix": ""
},
{
"first": "Aldebaro",
"middle": [],
"last": "Klautau",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "101--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Rifkin and Aldebaro Klautau. 2004. In defense of one- vs-all classification. Journal of Machine Learning Research, pages 101-141.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A discriminative global training algorithm for statistical mt",
"authors": [
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2006,
"venue": "COLING-ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Tillmann and T. Zhang. 2006. A discriminative global train- ing algorithm for statistical mt. In COLING-ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Statistical Learning Theory",
"authors": [
{
"first": "V",
"middle": [
"N"
],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V.N. Vapnik. 1998. Statistical Learning Theory. John Wiley & Sons.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Scalable purely-discriminative training for word and tree transducers",
"authors": [
{
"first": "B",
"middle": [],
"last": "Wellington",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Pike",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 2006,
"venue": "AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Wellington, J. Turian, C. Pike, and D. Melamed. 2006. Scal- able purely-discriminative training for word and tree trans- ducers. In AMTA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3):377-404.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A syntax-based statistical translation model",
"authors": [
{
"first": "K",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of 39th ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Yamada and K. Knight. 2001. A syntax-based statistical translation model. In Proceedings of 39th ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improvements in phrase-based statistical machine translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "257--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Zens and H. Ney. 2004. Improvements in phrase-based sta- tistical machine translation. In Proceedings of HLT-NAACL, pages 257-264, Boston, MA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Stochastic lexicalized inversion transduction grammar for alignment",
"authors": [
{
"first": "H",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Zhang and D. Gildea. 2005. Stochastic lexicalized inver- sion transduction grammar for alignment. In Proceedings of ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Example bilingual texts with alignment information"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Bilanguage strings resulting from alignments shown inFigure 3."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Locally constraint permutation automaton for a sentence with 4 words and window size of 2."
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Output target</td><td>Sequential Lexical Model Target word for each source position i</td><td>Bag-of-Words Lexical Model Target word given a source sentence</td></tr><tr><td colspan=\"3\">Input features BOgram(S, 0, Probabilities BOgram(S, i \u2212 d, i + d) : bag of n-grams P (ti|BOgram(S, i \u2212 d, i + d)) P (BOW (T )|BOgram(S, 0, |S|))</td></tr><tr><td/><td>Independence assumption between the labels</td><td/></tr><tr><td>Number of classes</td><td colspan=\"2\">One per target word or phrase</td></tr><tr><td>Training samples</td><td>One per source token</td><td>One per sentence</td></tr><tr><td>Preprocessing</td><td>Source/Target word alignment</td><td>Source/Target sentence alignment</td></tr></table>",
"text": "A comparison of the sequential and bag-of-words lexical choice models"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Training (2005)</td><td colspan=\"3\">Dev 2005</td><td colspan=\"2\">Dev 2006</td></tr><tr><td/><td colspan=\"7\">Chinese English Chinese English Chinese English</td></tr><tr><td colspan=\"3\">Sentences Running Words 351,060 376,615 46,311 Vocabulary 11,178 11,232 Singletons 4,348 4,866</td><td>3,826 931 600</td><td>506</td><td>3,897 898 538</td><td>5,214 1,136 619</td><td>489</td><td>6,362 * 1,134 * 574 *</td></tr><tr><td>OOVs [%]</td><td>-</td><td>-</td><td>0.6</td><td/><td>0.3</td><td>0.9</td><td>1.0</td></tr><tr><td>ASR WER [%]</td><td>-</td><td>-</td><td>-</td><td/><td>-</td><td>25.2</td><td>-</td></tr><tr><td>Perplexity</td><td>-</td><td>-</td><td>33</td><td/><td>-</td><td>86</td><td>-</td></tr><tr><td># References</td><td>-</td><td>-</td><td/><td>16</td><td/><td/><td>7</td></tr></table>",
"text": "Statistics of training and development data from 2005/2006 ( * = first of multiple translations only)."
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>Dev 2005</td><td/><td>Dev 2006</td></tr><tr><td/><td>Text</td><td colspan=\"2\">Text ASR 1-best</td></tr><tr><td>FST</td><td>51.8</td><td>19.5</td><td>16.5</td></tr><tr><td>Seq. Maxent</td><td>53.5</td><td>19.4</td><td>16.3</td></tr><tr><td>BOW Maxent</td><td>59.9</td><td>19.3</td><td>16.6</td></tr></table>",
"text": "Results (mBLEU) scores for the three different models on the transcriptions for development set 2005 and 2006 and ASR 1-best for development set 2006."
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Corpus</td><td colspan=\"2\">Vocabulary</td><td>SFST</td><td>BOW</td></tr><tr><td/><td>Source</td><td>Target</td><td/></tr><tr><td>UN</td><td colspan=\"2\">252,571 53,005</td><td>64.6</td><td>69.5</td></tr><tr><td/><td/><td/><td colspan=\"2\">(60.5/69.1) (66.2/72.6)</td></tr><tr><td colspan=\"3\">Hansard 100,270 78,333</td><td>57.4</td><td>60.8</td></tr><tr><td/><td/><td/><td colspan=\"2\">(50.6/67.7) (56.5/63.4)</td></tr></table>",
"text": "Lexical Selection results (F-measure) on the Arabic-English UN Corpus and the French-English Hansard Corpus. In parenthesis are Fmeasures for open and closed class lexical items."
}
}
}
}