{
"paper_id": "P16-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:00:13.539529Z"
},
"title": "Models and Inference for Prefix-Constrained Machine Translation",
"authors": [
{
"first": "Joern",
"middle": [],
"last": "Wuebker",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sa\u0161a",
"middle": [],
"last": "Hasan",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We apply phrase-based and neural models to a core task in interactive machine translation: suggesting how to complete a partial translation. For the phrase-based system, we demonstrate improvements in suggestion quality using novel objective functions, learning techniques, and inference algorithms tailored to this task. Our contributions include new tunable metrics, an improved beam search strategy, an n-best extraction method that increases suggestion diversity, and a tuning procedure for a hierarchical joint model of alignment and translation. The combination of these techniques improves next-word suggestion accuracy dramatically from 28.5% to 41.2% in a large-scale English-German experiment. Our recurrent neural translation system increases accuracy yet further to 53.0%, but inference is two orders of magnitude slower. Manual error analysis shows the strengths and weaknesses of both approaches.",
"pdf_parse": {
"paper_id": "P16-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "We apply phrase-based and neural models to a core task in interactive machine translation: suggesting how to complete a partial translation. For the phrase-based system, we demonstrate improvements in suggestion quality using novel objective functions, learning techniques, and inference algorithms tailored to this task. Our contributions include new tunable metrics, an improved beam search strategy, an n-best extraction method that increases suggestion diversity, and a tuning procedure for a hierarchical joint model of alignment and translation. The combination of these techniques improves next-word suggestion accuracy dramatically from 28.5% to 41.2% in a large-scale English-German experiment. Our recurrent neural translation system increases accuracy yet further to 53.0%, but inference is two orders of magnitude slower. Manual error analysis shows the strengths and weaknesses of both approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A core prediction task in interactive machine translation (MT) is to complete a partial translation (Ortiz-Mart\u00ednez et al., 2009; Koehn et al., 2014) . Sentence completion enables interfaces that are richer than basic post-editing of MT output. For example, the translator can receive updated suggestions after each word typed (Langlais et al., 2000) . However, we show that completing partial translations by na\u00efve constrained decoding-the standard in prior work-yields poor suggestion quality. We describe new phrase-based objective functions, learning techniques, and inference algorithms for the sentence completion task. 1 We then compare this improved phrase-based system to a state-of-theart recurrent neural translation system in large-scale English-German experiments.",
"cite_spans": [
{
"start": 100,
"end": 129,
"text": "(Ortiz-Mart\u00ednez et al., 2009;",
"ref_id": "BIBREF27"
},
{
"start": 130,
"end": 149,
"text": "Koehn et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 327,
"end": 350,
"text": "(Langlais et al., 2000)",
"ref_id": "BIBREF19"
},
{
"start": 626,
"end": 627,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A system for completing partial translations takes as input a source sentence and a prefix of the target sentence. It predicts a suffix: a sequence of tokens that extends the prefix to form a full sentence. In an interactive setting, the first words of the suffix are critical; these words are the focus of the user's attention and can typically be appended to the translation with a single keystroke. We introduce a tuning metric that scores correctness of the whole suffix, but is particularly sensitive to these first words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Phrase-based inference for this task involves aligning the prefix to the source, then generating the suffix by translating the unaligned words. We describe a beam search strategy and a hierarchical joint model of alignment and translation that together improve suggestions dramatically. For English-German news, next-word accuracy increases from 28.5% to 41.2%. An interactive MT system could also display multiple suggestions to the user. We describe an algorithm for efficiently finding the n-best next words directly following a prefix and their corresponding best suffixes. Our experiments show that this approach to n-best list extraction, combined with our other improvements, increased next-word suggestion accuracy of 10-best lists from 33.4% to 55.5%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also train a recurrent neural translation system to maximize the conditional likelihood of the next word following a translation prefix, which is both a standard training objective in neural translation and an ideal fit for our task. This neural system provides even more accurate predictions than our improved phrase-based system. However, inference is two orders of magnitude slower, which is prob-lematic for an interactive setting. We conclude with a manual error analysis that reveals the strengths and weaknesses of both the phrase-based and neural approaches to suffix prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Let F and E denote the set of all source and target language strings, respectively. Given a source sentence f \u2208 F and target prefix e p \u2208 E, a predicted suffix e s \u2208 E can be evaluated by comparing the full sentence e = e p e s to a reference e * . Let e * s denote the suffix of the reference that follows e p .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Suffix Prediction",
"sec_num": "2"
},
{
"text": "We define three metrics below that score translations by the characteristics that are most relevant in an interactive setting: the accuracy of the first words of the suffix and the overall quality of the suffix. Each metric takes example triples (f, e p , e * ) produced during an interactive MT session in which e p was generated in the process of constructing e * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Suffix Prediction",
"sec_num": "2"
},
{
"text": "A simulated corpus of examples can be produced from a parallel corpus of (f, e * ) pairs by selecting prefixes of each e * . An exhaustive simulation selects all possible prefixes, while a sampled simulation selects only k prefixes uniformly at random for each e * . Computing metrics for exhaustive simulations is expensive because it requires performing suffix prediction inference for every prefix: |e * | times for each reference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Suffix Prediction",
"sec_num": "2"
},
{
"text": "Word Prediction Accuracy (WPA) or nextword accuracy (Koehn et al., 2014) is 1 if the first word of the predicted suffix e s is also the first word of reference suffix e * s , and 0 otherwise. Averaging over examples gives the frequency that the word following the prefix was predicted correctly. In a sampled simulation, all reference words that follow the first word of a sampled suffix are ignored by the metric, so most reference information is unused.",
"cite_spans": [
{
"start": 52,
"end": 72,
"text": "(Koehn et al., 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Suffix Prediction",
"sec_num": "2"
},
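The WPA computation described above can be sketched in a few lines of Python. This is an illustrative implementation, not the authors' code; suffixes are represented as plain token lists:

```python
def wpa(predicted_suffixes, reference_suffixes):
    """Word Prediction Accuracy: fraction of examples whose predicted
    suffix starts with the same token as the reference suffix."""
    assert len(predicted_suffixes) == len(reference_suffixes)
    hits = 0
    for pred, ref in zip(predicted_suffixes, reference_suffixes):
        # Both suffixes are token lists; an empty prediction never matches.
        if pred and ref and pred[0] == ref[0]:
            hits += 1
    return hits / len(predicted_suffixes)
```

For example, `wpa([["der", "Mann"], ["eine"]], [["der", "Hund"], ["die", "Frau"]])` scores one correct next word out of two examples.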
{
"text": "is the maximum number of contiguous words at the start of the predicted suffix that match the reference. Like WPA, this metric is 0 if the first word of e s is not also the first word of e * s . In a sampled simulation, all reference words that follow the first mis-predicted word in the sampled suffix are ignored. While it is possible that the metric will require the full reference suffix, most reference information is unused in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Predicted Words (#prd)",
"sec_num": null
},
{
"text": "(pxB ): B (Papineni et al., 2002) is computed from the geometric mean of clipped n-gram precisions prec n (\u2022, \u2022) and a brevity penalty BP (\u2022, \u2022). Given a sequence of references E * = e * 1 , . . . , e * t and corresponding predictions E = e 1 , . . . , e t ,",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-B",
"sec_num": null
},
{
"text": "BLEU(E, E*) = BP(E, E*) \u00b7 ( \u220f_{n=1}^{4} prec_n(E, E*) )^{1/4}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-BLEU (pxB)",
"sec_num": null
},
{
"text": "Ortiz-Mart\u00ednez et al. (2010) use BLEU directly for training an interactive system, but we propose a variant that only scores the predicted suffix and not the input prefix. The pxB metric computes BLEU(\u00ca, \u00ca*) for the following constructed sequences \u00ca and \u00ca*:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prefix-BLEU (pxB)",
"sec_num": null
},
{
"text": "\u2022 For each (f, e p , e * ) and suffix prediction e s , E includes the full sentence e = e p e s . \u2022 For each (f, e p , e * ),\u00ca * is a masked copy of e * in which all prefix words that do not match any word in e are replaced by null tokens. This construction maintains the original computation of the brevity penalty, but does not include the prefix in the precision calculations. Unlike the two previous metrics, the pxB metric uses all available reference information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B",
"sec_num": null
},
{
"text": "In order to account for boundary conditions, the reference e * is masked by the prefix e p as follows: we replace each of the first |e p \u2212 3| words with a null token e null , unless the word also appears in the suffix e * s . Masking retains the last three words of the prefix so that the first words after the prefix can contribute to the precision of all n-grams that overlap with the prefix, up to n = 4. Words that also appear in the suffix are retained so that their correct prediction in the suffix can contribute to those precisions, which would otherwise be clipped.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B",
"sec_num": null
},
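The masking construction above can be sketched in Python. This is a hypothetical helper, not the paper's implementation; tokenization and n-gram clipping details are simplified:

```python
def mask_reference(reference, prefix_len, null_token="<null>"):
    """Mask a reference for pxB: replace the first |e_p| - 3 words with a
    null token unless the word also appears in the reference suffix."""
    suffix = set(reference[prefix_len:])          # words of the reference suffix
    masked = list(reference)
    for i in range(max(0, prefix_len - 3)):       # keep the last 3 prefix words
        if masked[i] not in suffix:
            masked[i] = null_token
    return masked
```

With a prefix of length 5 over a 7-word reference, only the first two positions are candidates for masking, and any prefix word that reappears in the suffix is retained.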
{
"text": "All of these metrics can be used as the tuning objective of a phrase-based machine translation system. Tuning toward a sampled simulation that includes one or two prefixes per reference is much faster than using an exhaustive set of prefixes. A linear combination of these metrics can be used to trade off the relative importance of the full suffix and the words immediately following the prefix. With a combined metric, learning can focus on these words while using all available information in the references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss Functions for Learning",
"sec_num": "2.1"
},
{
"text": "In addition to these metrics, suffix prediction can be evaluated by the widely used keystroke ratio (KSR) metric . This ratio assumes that any number of characters from the beginning of the suggested suffix can be appended to the user prefix using a single keystroke. It computes the ratio of key strokes required to enter the reference interactively to the character count of the reference. Our MT architecture does not permit tuning to KSR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keystroke Ratio (KSR)",
"sec_num": "2.2"
},
{
"text": "Other methods of quantifying effort in an interactive MT system are more appropriate for user studies than for direct evaluation of MT predictions. For example, measuring pupil dilation, pause duration and frequency (Schilperoord, 1996) , mouse-action ratio (Sanchis-Trilles et al., 2008) , or source difficulty (Bernth and McCord, 2000) would certainly be relevant for evaluating a full interactive system, but are beyond the scope of this work.",
"cite_spans": [
{
"start": 216,
"end": 236,
"text": "(Schilperoord, 1996)",
"ref_id": "BIBREF31"
},
{
"start": 258,
"end": 288,
"text": "(Sanchis-Trilles et al., 2008)",
"ref_id": "BIBREF30"
},
{
"start": 312,
"end": 337,
"text": "(Bernth and McCord, 2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Keystroke Ratio (KSR)",
"sec_num": "2.2"
},
{
"text": "In the log-linear approach to phrase-based translation (Och and Ney, 2004) , the distribution of translations e \u2208 E given a source sentence f \u2208 F is:",
"cite_spans": [
{
"start": 55,
"end": 74,
"text": "(Och and Ney, 2004)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Inference",
"sec_num": "3"
},
{
"text": "p(e|f ; w) = r: src(r)=f tgt(r)=e 1 Z(f ) exp w \u03c6(r) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Inference",
"sec_num": "3"
},
{
"text": "Here, r is a phrasal derivation with source and target projections src(r) and tgt(r), w \u2208 R d is the vector of model parameters, \u03c6(\u2022) \u2208 R d is a feature map, and Z(f ) is an appropriate normalizing constant. For the same model, the distribution over suffixes e s \u2208 E must also condition on a prefix e p \u2208 E:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Inference",
"sec_num": "3"
},
{
"text": "p(e s |e p , f ; w) = r: src(r)=f tgt(r)=epes 1 Z(f ) exp w \u03c6(r) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Inference",
"sec_num": "3"
},
{
"text": "In phrase-based decoding, the best scoring derivation r given a source sentence f and weights w is found efficiently by beam search, with one beam for every count of source words covered by a partial derivation (known as the source coverage cardinality). To predict a suffix conditioned on a prefix by constrained decoding, Barrachina et al. (2008) and Ortiz-Mart\u00ednez et al. (2009) modify the beam search by discarding hypotheses (partial derivations) that do not match the prefix e p .",
"cite_spans": [
{
"start": 324,
"end": 348,
"text": "Barrachina et al. (2008)",
"ref_id": "BIBREF0"
},
{
"start": 353,
"end": 381,
"text": "Ortiz-Mart\u00ednez et al. (2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Inference",
"sec_num": "3"
},
{
"text": "We propose target beam search, a two-step inference procedure. The first step is to produce a phrase-based alignment between the target prefix and a subset of the source words. The target is aligned left-to-right by appending aligned phrase pairs. However, each beam is associated with a target word count, rather than a source word count. Therefore, each beam contains hypotheses for a fixed prefix of target words. Phrasal translation candidates are bundled and sorted with respect to each target phrase rather than each source phrase. Crucially, the source distortion limit is not enforced during alignment, so that long-range reorderings can be analyzed correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Inference",
"sec_num": "3"
},
{
"text": "The second step generates the suffix using standard beam search. 2 Once the target prefix is completely aligned, each hypothesis from the final target beam is copied to an appropriate source beam. Search starts with the lowest-count source beam that contains at least one hypothesis. Here, we re-instate the distortion limit with the following modification to avoid search failures: The decoder can always translate any source position before the last source position that was covered in the alignment phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-Based Inference",
"sec_num": "3"
},
{
"text": "The phrase pairs available during decoding may not be sufficient to align the target prefix to the source. Pre-compiled phrase tables (Koehn et al., 2003) are typically pruned, and dynamic phrase tables (Levenberg et al., 2010) require sampling for efficient lookup.",
"cite_spans": [
{
"start": 134,
"end": 154,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF17"
},
{
"start": 203,
"end": 227,
"text": "(Levenberg et al., 2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Phrase Pairs",
"sec_num": "3.1"
},
{
"text": "To improve alignment coverage, we include additional synthetic phrases extracted from word-level alignments between the source sentence and target prefix inferred using unpruned lexical statistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Phrase Pairs",
"sec_num": "3.1"
},
{
"text": "We first find the intersection of two directional word alignments. The directional alignments are obtained similar to IBM Model 2 (Brown et al., 1993) by aligning the most likely source word to each target word. Given a source sequence f = f 1 . . . f |f | and a target sequence e = e 1 . . . e |e| , we define the alignment a = a 1 . . . a |e| , where a i = j means that e i is aligned to f j . The likelihood is modeled by a single-word lexicon probability that is provided by our translation model and an alignment probability modeled as a Poisson distribution P oisson(k, \u03bb) in the distance to the diagonal.",
"cite_spans": [
{
"start": 130,
"end": 150,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Phrase Pairs",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a i = arg max j\u2208{1,...,|f |} p(a i = j|f, e) (3) p(a i = j|f, e) = p(e i |f j ) \u2022 p(a i |j) (4) p(e i |f j ) = cnt(e i , f j ) cnt(f j ) (5) p(a i |j) = Poisson(|a i \u2212 j|, 1.0)",
"eq_num": "(6)"
}
],
"section": "Synthetic Phrase Pairs",
"sec_num": "3.1"
},
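Equations (3)-(6) amount to a greedy alignment step, sketched below. This is an illustration only: `lex_prob` is a stand-in for the translation model's single-word lexicon, and the distance to the diagonal is simplified to |i \u2212 j|:

```python
import math

def poisson_pmf(k, lam=1.0):
    # Poisson(k, lambda) as in Eq. (6)
    return math.exp(-lam) * lam ** k / math.factorial(k)

def align(source, target, lex_prob):
    """For each target word e_i, pick a_i = argmax_j p(e_i|f_j) * p(a_i|j)."""
    alignment = []
    for i, e in enumerate(target):
        scores = [lex_prob(e, f) * poisson_pmf(abs(i - j))
                  for j, f in enumerate(source)]
        alignment.append(max(range(len(source)), key=lambda j: scores[j]))
    return alignment
```

A toy lexicon with strong diagonal translations, e.g. `("das", "the")` and `("haus", "house")`, aligns `["das", "haus"]` to source positions 0 and 1.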
{
"text": "Here, cnt(e i , f j ) is the count of all word alignments between e i and f j in the training bitext, and cnt(f j ) the monolingual occurrence count of f j . We perform standard phrase extraction (Och et al., 1999; Koehn et al., 2003) to obtain our synthetic phrases, whose translation probabilities are again estimated based on the single-word probabilities p(e i |f j ) from our translation model. Given a synthetic phrase pair (e, f ), the phrase translation probability is computed as",
"cite_spans": [
{
"start": 196,
"end": 214,
"text": "(Och et al., 1999;",
"ref_id": "BIBREF25"
},
{
"start": 215,
"end": 234,
"text": "Koehn et al., 2003)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Phrase Pairs",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(e|f ) = 1\u2264i\u2264|e| max 1\u2264j\u2264|f | p(e i |f j )",
"eq_num": "(7)"
}
],
"section": "Synthetic Phrase Pairs",
"sec_num": "3.1"
},
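Equation (7) reduces to a product of per-word maxima over the source phrase; a minimal sketch, with `lex_prob` again standing in for the model's single-word lexicon:

```python
def phrase_prob(target_phrase, source_phrase, lex_prob):
    """Eq. (7): p(e|f) = prod_i max_j p(e_i|f_j) for a synthetic phrase pair."""
    p = 1.0
    for e in target_phrase:
        # each target word contributes its best single-word translation prob
        p *= max(lex_prob(e, f) for f in source_phrase)
    return p
```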
{
"text": "Additionally, we introduce three indicator features that count the number of synthetic phrase pairs, source words and target words, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Phrase Pairs",
"sec_num": "3.1"
},
{
"text": "In order to tune the model for suffix prediction, we optimize the weights w in Equation 2 to maximize the metrics introduced in Section 2. Model tuning is performed with AdaGrad (Duchi et al., 2011) , an online subgradient method. It features an adaptive learning rate and comes with good theoretical guarantees. See Green et al. (2013) for the details of applying AdaGrad to phrase-based translation. The same model scores both alignment of the prefix and translation of the suffix. However, different feature weights may be appropriate for scoring each step of the inference process. In order to learn different weights for alignment and translation within a unified joint model, we apply the hierarchical adaptation method of Wuebker et al. (2015) , which is based on frustratingly easy domain adaptation (FEDA) (Daum\u00e9 III, 2007) . We define three sub-segment domains:",
"cite_spans": [
{
"start": 178,
"end": 198,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF8"
},
{
"start": 317,
"end": 336,
"text": "Green et al. (2013)",
"ref_id": "BIBREF13"
},
{
"start": 729,
"end": 750,
"text": "Wuebker et al. (2015)",
"ref_id": "BIBREF34"
},
{
"start": 815,
"end": 832,
"text": "(Daum\u00e9 III, 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning",
"sec_num": "4"
},
{
"text": ", and . The domain contains all phrases that are used for aligning the prefix with the source sentence. Phrases that span both prefix and suffix additionally belong to the domain. Finally, once the prefix has been completely covered, the domain applies to all phrases that are used to translate the remainder of the sentence. The domain spans the entire phrasal derivation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning",
"sec_num": "4"
},
{
"text": "Formally, given a set of domains D = { , , , }, each feature is replicated for each domain d \u2208 D. These replicas can be interpreted as domain-specific \"offsets\" to the baseline weights. For an original feature vector \u03c6 with a set of domains D \u2286 D, the replicated feature vector contains |D| copies f d of each feature f \u2208 \u03c6, one for each d \u2208 D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f d = f, d \u2208 D 0, otherwise.",
"eq_num": "(8)"
}
],
"section": "Tuning",
"sec_num": "4"
},
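The feature replication of Eq. (8) can be sketched as follows. The domain names are illustrative placeholders for the symbols used in the paper:

```python
def replicate_features(features, active_domains,
                       all_domains=("prefix", "cross", "suffix", "full")):
    """Frustratingly-easy domain adaptation (Eq. 8): each feature gets one
    copy per domain; a copy is non-zero only when its domain is active."""
    replicated = {}
    for name, value in features.items():
        for d in all_domains:
            replicated[f"{name}__{d}"] = value if d in active_domains else 0.0
    return replicated
```

For a phrase used to align the prefix, only its prefix-domain (and full-domain) copies carry the feature value; the rest are zero, so AdaGrad can learn separate offsets per domain.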
{
"text": "The weights of the replicated feature space are initialized with 0 except for the domain, where we copy the baseline weights w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w d = w, d is 0, otherwise.",
"eq_num": "(9)"
}
],
"section": "Tuning",
"sec_num": "4"
},
{
"text": "All our phrase-based systems are first tuned without prefixes or domains to maximize BLEU. When tuning for suffix prediction, we keep these baseline weights w fixed to maintain baseline translation quality and only update the weights corresponding to the prefix, cross, and suffix domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning",
"sec_num": "4"
},
{
"text": "Consider the interactive MT application setting in which the user is presented with an autocomplete list of alternative translations (Langlais et al., 2000) . The user query may be satisfied if the machine predicts the correct completion in its top-n output. However, it is well-known that n-best lists are poor approximations of MT structured output spaces (Macherey et al., 2008; Gimpel et al., 2013) . Even very large values of n can fail to produce alternatives that differ in the first words of the suffix, which limits n-best KSR and WPA improvements at test time. For tuning, WPA is often zero for every item on the n-best list, which prevents learning. Fortunately, the prefix can help efficiently enumerate diverse next-word alternatives. If we can find all edges in the decoding lattice that span the prefix e p and suffix e s , then we can generate diverse alternatives in precisely the right location in the target. Let G = (V, E) be the search lattice created by decoding, where V are nodes and E are the edges produced by rule applications. For any w \u2208",
"cite_spans": [
{
"start": 133,
"end": 156,
"text": "(Langlais et al., 2000)",
"ref_id": "BIBREF19"
},
{
"start": 358,
"end": 381,
"text": "(Macherey et al., 2008;",
"ref_id": "BIBREF22"
},
{
"start": 382,
"end": 402,
"text": "Gimpel et al., 2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse n-best Extraction",
"sec_num": "5"
},
{
"text": "V , let parent(w) return v s.t. v, w \u2208 E, target(w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse n-best Extraction",
"sec_num": "5"
},
{
"text": "return the target sequence e defined by following the next pointers from w, and length(w) be the length of the target sequence up to w. During decoding, we set parent pointers and also assign monotonically increasing integer ids to each w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse n-best Extraction",
"sec_num": "5"
},
{
"text": "To extract a full sentence completion given an edge \u27e8v, w\u27e9 \u2208 E that spans the prefix/suffix boundary, we must find the best path to a goal node efficiently. To do this, we sort V in reverse topological order and set forward pointers from each node v to the child node on the best goal path. During this traversal, we also mark all child nodes of edges that span the prefix/suffix boundary. Finally, we use the parent and child pointers to extract an n-best list of translations. Algorithm 1 shows the full procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse n-best Extraction",
"sec_num": "5"
},
{
"text": "Algorithm 1 Diverse n-best list extraction. Require: Lattice G = (V, E), prefix length P. 1: M = [] (marked nodes) 2: for w \u2208 V in reverse topological order do 3: v = parent(w), \u27e8v, w\u27e9 \u2208 E 4: if length(v) \u2264 P and length(w) > P then 5: Add w to M (mark node) 6: end if 7: v.child = v.child \u2295 w (child pointer update) 8: end for 9: N = [] (n-best target strings) 10: for m \u2208 M do 11: Add target(m) to N 12: end for 13: return N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse n-best Extraction",
"sec_num": "5"
},
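The marking step of Algorithm 1 can be sketched in Python. This is an illustration under simplifying assumptions that go beyond what the paper specifies: the lattice is represented by parent pointers, reverse topological order is approximated by descending node id, and the \u2295 child-pointer update is taken to keep the higher-scoring child:

```python
class Node:
    def __init__(self, nid, length, score, parent=None):
        self.id, self.length, self.score = nid, length, score
        self.parent = parent
        self.child = None  # best child on a goal path

def mark_boundary_nodes(nodes, prefix_len):
    """Collect nodes whose incoming edge crosses the prefix/suffix
    boundary, while setting best-child forward pointers."""
    marked = []
    for w in sorted(nodes, key=lambda n: -n.id):  # reverse topological order
        v = w.parent
        if v is None:
            continue
        # edge (v, w) spans the boundary: v ends inside the prefix, w beyond it
        if v.length <= prefix_len and w.length > prefix_len:
            marked.append(w)
        if v.child is None or w.score > v.child.score:
            v.child = w  # keep the better-scoring child on the goal path
    return marked
```

Each marked node yields one diverse next word; following child pointers from it gives the corresponding best suffix.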
{
"text": "Neural machine translation (NMT) models the conditional probability p(e|f ) of translating a source sentence f to a target sentence e. In the encoderdecoder NMT framework (Sutskever et al., 2014; Cho et al., 2014) , an encoder computes a representation s for each source sentence. From that source representation, the decoder generates a translation one word at a time by maximizing:",
"cite_spans": [
{
"start": 171,
"end": 195,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF33"
},
{
"start": 196,
"end": 213,
"text": "Cho et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural machine translation",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log p(e|f ) = |e| i=1 log p (e i |e <i , f, s)",
"eq_num": "(10)"
}
],
"section": "Neural machine translation",
"sec_num": "6"
},
{
"text": "The individual probabilities in Equation 10 are often parameterized by a recurrent neural network which repeatedly predicts the next word e i given all previous target words e <i . Since this model generates translations by repeatedly predicting next words, it is a natural choice for the sentence completion task. Even in unconstrained decoding, it predicts one word at a time conditioned on the most likely prefix. We modified the state-of-the-art English-German NMT system described in (Luong et al., 2015) to conduct a beam search that constrains the translation to match a fixed prefix. 3 As we decode from left to right, the decoder transitions from a constrained prefix decoding mode to unconstrained beam search. In the constrained mode-the next word to predict e i is known-we set the beam size to 1, aggregate the score of predicting e i immediately without having to sort the softmax distribution over all words, and feed e i directly to the next time step. Once the prefix has been consumed, the decoder switches to standard beam search with a larger beam size (12 in our experiments). In this mode, the most probable word e i is passed to the next time step.",
"cite_spans": [
{
"start": 489,
"end": 509,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 592,
"end": 593,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural machine translation",
"sec_num": "6"
},
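The two decoding modes can be sketched as follows. This is a toy stand-in, not the authors' system: `step_fn` plays the role of the recurrent decoder's next-word distribution, and greedy search stands in for the beam of size 12 used after the prefix:

```python
def complete(prefix, step_fn, max_len=10, eos="</s>"):
    """Prefix-constrained decoding sketch: prefix tokens are forced
    (beam of 1), then continuation is chosen greedily from step_fn."""
    tokens = list(prefix)              # constrained mode: prefix is forced
    while len(tokens) < max_len:       # unconstrained mode
        dist = step_fn(tokens)         # next-word probabilities given history
        nxt = max(dist, key=dist.get)
        if nxt == eos:
            break
        tokens.append(nxt)
    return tokens
```

With a toy bigram table as `step_fn`, e.g. mapping "ich" mostly to "gehe", the decoder extends the forced prefix one most-probable word at a time until end-of-sentence.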
{
"text": "We evaluate our models and methods for English-French and English-German on two domains: software and news.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "The phrase-based systems are built with Phrasal (Green et al., 2014) , an open source toolkit. We use a dynamic phrase table (Levenberg et al., 2010) and tune parameters with AdaGrad. All systems have 42 dense baseline features. We align the bitexts with mgiza (Gao and Vogel, 2008) and estimate 5-gram language models (LMs) with KenLM (Heafield et al., 2013) .",
"cite_spans": [
{
"start": 48,
"end": 68,
"text": "(Green et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 125,
"end": 149,
"text": "(Levenberg et al., 2010)",
"ref_id": "BIBREF20"
},
{
"start": 261,
"end": 282,
"text": "(Gao and Vogel, 2008)",
"ref_id": "BIBREF11"
},
{
"start": 336,
"end": 359,
"text": "(Heafield et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "The English-French bilingual training data consists of 4.9M sentence pairs from the Common Crawl and Europarl corpora from WMT 2015 (Bojar et al., 2015). The LM was estimated from the target side of the bitext.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "For English-German we run large-scale experiments. The bitext contains 19.9M parallel segments collected from WMT 2015 and the OPUS collection (Skadi\u0146\u0161 et al., 2014) . The LM was estimated from the target side of the bitext and the monolingual Common Crawl corpus (Buck et al., 2014) , altogether 37.2B running words.",
"cite_spans": [
{
"start": 143,
"end": 165,
"text": "(Skadi\u0146\u0161 et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 264,
"end": 283,
"text": "(Buck et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "The software test set includes 10k sentence pairs from the Autodesk post editing corpus 4 . For the news domain we chose the English-French new-stest2014 and English-German newstest2015 sets provided for the WMT 2016 5 shared task. The translation systems were tuned towards the specific domain, using another 10k segments from the Autodesk data or the newstest2013 data set, respectively. On the English-French tune set we randomly select one target prefix from each sentence pair for rapid experimentation. On all other test and tune sets we select two target prefixes at random. 6 The selected prefixes remain fixed throughout all experiments.",
"cite_spans": [
{
"start": 582,
"end": 583,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "For NMT, we report results both using a single network and an ensemble of eight models using various attention mechanisms (Luong et al., 2015) .",
"cite_spans": [
{
"start": 122,
"end": 142,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7"
},
{
"text": "Tables 1 and 2 show the main phrase-based results. The baseline system corresponds to constrained beam search, which performed best in (Ortiz-Mart\u00ednez et al., 2009) and (Barrachina et al., 2008) , where it was referred to as phrase-based (PB) and phrase-based model (PBM), respectively. Our target beam search strategy improves all metrics on both test sets.",
"cite_spans": [
{
"start": 135,
"end": 164,
"text": "(Ortiz-Mart\u00ednez et al., 2009)",
"ref_id": "BIBREF27"
},
{
"start": 169,
"end": 194,
"text": "(Barrachina et al., 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Results",
"sec_num": "7.1"
},
{
"text": "For English-French, we observe absolute improvements of up to 3.2% pxB , 11.4% WPA and 10.6% KSR. We experimented with four different prefix-constrained tuning criteria: pxB , WPA, #prd, and the linear combination (pxB +WPA)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Results",
"sec_num": "7.1"
},
{
"text": ". We see that tuning towards prefix decoding increases all metrics. Across our two test sets, the combined metric yielded the most stable results. Here, we obtain gains of up to 3.0% pxB , 3.1% WPA and 2.1% KSR. We continue using the linear combination criterion for all subsequent experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "For English-German-the large-scale settingwe observe similar total gains of up to 3.9% pxB , 11.2% WPA and 8.2% KSR. The target beam search procedure contributes the most gain among our various improvements. Table 3 illustrates the differences in the translation output on three example sentences taken from the newstest2015 test set. It is clearly visible that both target beam search and prefix tuning improve the prefix alignment, which results in better translation suffixes.",
"cite_spans": [],
"ref_spans": [
{
"start": 208,
"end": 215,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "To improve recall in interactive MT, the user can be presented with multiple alternative sentence completions (Langlais et al., 2000) , which correspond to an n-best list of translation hypotheses generated by the prefix-constrained inference procedure. The diverse extraction scheme introduced in section 5 is particularly designed for next-word prediction recall. Table 4 shows results for 10-best lists.",
"cite_spans": [
{
"start": 110,
"end": 133,
"text": "(Langlais et al., 2000)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 366,
"end": 373,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Diverse n-best Results",
"sec_num": "7.2"
},
{
"text": "We see that WPA is increased by up to 15.3% by including the 10-best candidates, 11.3% being contributed by our novel diverse n-best extraction. Jointly, target beam search, prefix tuning and diverse n-best extraction lead to an absolute improvement of up to 23.5% over the baseline 10-best or-acle. We believe that n = 10 suggestions are the maximum number of candidates that should be presented to a user, but we also ran experiments with n = 3 and n = 5, which would result in an interface with reduced cognitive load. These settings yield 5.5% and 10.0% WPA gains respectively on English-German news.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse n-best Results",
"sec_num": "7.2"
},
{
"text": "We compare this phrase-based system to the NMT system described in Section 6 for English-German. Table 5 shows the results. We observe a clear advantage of NMT over our best phrase-based system when comparing WPA. For pxB , the phrasebased model outperforms the single neural network system on the Autodesk set, but underperforms the ensemble. This stands in contrast to unconstrained full-sentence translation quality, where the phrasebased system is slightly better than the ensemble. The neural system substantially outperforms the phrase-based system for all metrics in the news domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with NMT",
"sec_num": "7.3"
},
{
"text": "In an interactive setting, the system must make predictions in near real-time, so we report average decoding times. We observe a clear time vs. accuracy trade-off; the phrase-based is 10.6 to 31.3 times faster than the single network NMT system and more than 100 times faster than the ensemble. Crucially, the phrase-based system runs on a CPU, while NMT requires a GPU for these speeds. Further, the 10-best oracle WPA of the phrase-based system is higher than the NMT ensemble in both genres.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with NMT",
"sec_num": "7.3"
},
{
"text": "Following the example of Neubig et al. (2015) , we performed a manual analysis of the first 100 segments on the newstest2015 data set in order to qualitatively compare the constrained translations produced by the phrase-based and single network NMT systems. We observe four main error categories in which the translations differ, for which we have given examples in Table 6 . NMT is generally better with long-range verb reorderings, which often lead to the verb being dropped by the phrasebased system. E.g. the word erscheinen in Ex. 1 and ver\u00f6ffentlicht in Ex. 2 are missing in the phrasebased translation. Also, the NMT engine often produces better German grammar and morphological agreement, e.g. kein vs. keine in Ex. 3 or the verb conjugations in Ex. 4. Especially interesting is that the NMT system generated the negation nicht in the second half of Ex. Table 2 : Phrase-based results on English-German, tuned to the linear combination of pxB and WPA.",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "Neubig et al. (2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 366,
"end": 373,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 862,
"end": 869,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with NMT",
"sec_num": "7.3"
},
{
"text": "a direct correspondence in the English source, but makes the sentence feel more natural in German.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with NMT",
"sec_num": "7.3"
},
{
"text": "On the other hand, NMT sometimes drops content words, as in Ex. 5, where middle-class jobs, Minnesota and Progressive Caucus co-chair remain entirely untranslated by NMT. Finally, incorrect prefix alignment sometimes leads to incorrect portions of the source sentence being translated after the prefix or even superfluous output by the phrase-based engine, like , die in Ex. 6. Table 7 summarizes how many times each of the systems produced a better output than the other, broken down by category.",
"cite_spans": [],
"ref_spans": [
{
"start": 378,
"end": 385,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with NMT",
"sec_num": "7.3"
},
{
"text": "Target-mediated interactive MT was first proposed by Foster et al. (1997) and then further developed within the TransType (Langlais et al., 2000) and TransType2 (Esteban et al., 2004; Barrachina et al., 2008) projects. In TransType2, several different approaches were evaluated. Barrachina et al. (2008) reports experimental results that show the superiority of phrase-based models over stochastic finite state transducers and alignment templates, which were extended for the interactive translation paradigm by . Ortiz-Mart\u00ednez et al. (2009) confirm this observation, and find that their own suggested method using partial statistical phrase-based alignments performs on a similar level on most tasks. The approach using phrase-based models is used as the baseline in this paper.",
"cite_spans": [
{
"start": 53,
"end": 73,
"text": "Foster et al. (1997)",
"ref_id": "BIBREF10"
},
{
"start": 122,
"end": 145,
"text": "(Langlais et al., 2000)",
"ref_id": "BIBREF19"
},
{
"start": 161,
"end": 183,
"text": "(Esteban et al., 2004;",
"ref_id": "BIBREF9"
},
{
"start": 184,
"end": 208,
"text": "Barrachina et al., 2008)",
"ref_id": "BIBREF0"
},
{
"start": 279,
"end": 303,
"text": "Barrachina et al. (2008)",
"ref_id": "BIBREF0"
},
{
"start": 514,
"end": 542,
"text": "Ortiz-Mart\u00ednez et al. (2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "In order to make the interaction sufficiently responsive, Barrachina et al. (2008) resort to search within a word graph, which is generated by the translation decoder without constraints at the beginning of the workflow. A given prefix is then matched to the paths within the word graph. This approach was recently refined with more permissive matching criteria by Koehn et al. (2014) , who report strong improvements in prediction accuracy.",
"cite_spans": [
{
"start": 58,
"end": 82,
"text": "Barrachina et al. (2008)",
"ref_id": "BIBREF0"
},
{
"start": 365,
"end": 384,
"text": "Koehn et al. (2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Instead of using a word graph, it is also possible to perform a new search for every interaction (Bender et al., 2005; Ortiz-Mart\u00ednez et al., 2009) , which is the approach we have adopted. Ortiz-Mart\u00ednez et al. (2009) perform the most similar study to our work in the literature. The authors also define prefix decoding as a two-stage process, but focus on investigating different smoothing techniques, while our work includes new metrics, models, and inference.",
"cite_spans": [
{
"start": 97,
"end": 118,
"text": "(Bender et al., 2005;",
"ref_id": "BIBREF1"
},
{
"start": 119,
"end": 147,
"text": "Ortiz-Mart\u00ednez et al., 2009)",
"ref_id": "BIBREF27"
},
{
"start": 189,
"end": 217,
"text": "Ortiz-Mart\u00ednez et al. (2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "We have shown that both phrase-based and neural translation approaches can be used to complete partial translations. The recurrent neural system provides higher word prediction accuracy, but requires lengthy inference on a GPU. The phrase-based system is fast, produces diverse n-best lists, and provides reasonable prefix-B performance. The complementary strengths of both systems suggest future work in combining these techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "We have also shown decisively that simply performing constrained decoding for a phrase-based model is not an effective approach to the task of completing translations. Instead, the learning objective, model, and inference procedure should all 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "Suddenly I'm at the National Theatre and I just couldn't quite believe it. reference \"Pl\u00f6tzlich war ich im Nationaltheater und ich konnte es kaum glauben. baseline \"Pl\u00f6tzlich war ich im Nationaltheater bin und ich konnte es einfach nicht glauben. target beam search \"Pl\u00f6tzlich war ich im National Theatre und das konnte ich nicht ganz glauben. + prefix tuning \"Pl\u00f6tzlich war ich im National Theatre, und ich konnte es einfach nicht glauben.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "source",
"sec_num": null
},
{
"text": "source \"A little voice inside me said, 'You're going to have to do 10 minutes while they fix the computer.\" \" reference \"Eine kleine Stimme sagte mir \"Du musst jetzt 10 Minuten \u00fcberbr\u00fccken, w\u00e4hrend sie den Computer reparieren.\" \" baseline \"Eine kleine Stimme sagte mir \"Du musst jetzt 10 Minuten \u00fcberbr\u00fccken, sie legen die m\u00fcssen, w\u00e4hrend der Computer.\" target beam search \"Eine kleine Stimme sagte mir \"Du musst jetzt 10 Minuten \u00fcberbr\u00fccken zu tun, w\u00e4hrend sie den Computer reparieren\". + prefix tuning \"Eine kleine Stimme sagte mir \"Du musst jetzt 10 Minuten \u00fcberbr\u00fccken, w\u00e4hrend sie den Computer reparieren.\" \" Table 4 : Oracle results on the English-French and English-German tasks. We compare the single best result with oracle scores on 10-best lists with standard and diverse n-best extraction on both target beam search with prefix tuning and the phrase-based baseline system. Table 5 : English-German results for the phrase-based system with target beam search and tuned to a combined metric, compared with the recurrent neural translation system. The 10-best diverse line contains oracle scores from a 10-best list; all other scores are computed for a single suffix prediction per example. We also report unconstrained full-sentence B scores. The phrase-based timing results include prefix alignment and synthetic phrase extraction. be tailored to the task. The combination of these changes can adapt a phrase-based translation system to perform prefix alignment and suffix prediction jointly with fewer search errors and greater accuracy for the critical first words of the suffix. In light of the dramatic improvements in prediction quality that result from the techniques we have described, we look forward to investigating the effect on user experience for interactive translation systems that employ these methods. : Example sentences from the English-German newstest2015 test set. 
We compare the prefix decoding output of phrase-based target beam search against the single network neural machine translation (NMT) engine, printing the prefix in italics. The examples illustrate the four error categories missing verb (Ex. 1 and 2), grammar / morphology (Ex. 3 and 4), missing content words (Ex. 5) and alignment (Ex. 6).",
"cite_spans": [],
"ref_spans": [
{
"start": 614,
"end": 621,
"text": "Table 4",
"ref_id": null
},
{
"start": 885,
"end": 892,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "#better phrase-based NMT missing verb 1 19 grammar / morphology 0 15 missing content words 17 3 alignment 0 6 Table 7 : Result of the manual analysis on the first 100 segments of the English-German newstest2015 test set. For each of the four error categories we count how many times one of the systems produced a better output.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Code available at: https://github.com/stanfordnlp/phrasal",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We choose cube pruning(Huang and Chiang, 2007) as the beam-filling strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used the trained models provided by the authors of(Luong et al., 2015) using the codebase at https://github.com/lmthang/nmt.matlab.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://autodesk.app.box.com/Autodesk-PostEditing 5 http://www.statmt.org/wmt16 6 We briefly experimented with larger sets of prefixes and also exhaustive simulation in tuning, but did not observe significant improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Minh-Thang Luong was partially supported by NSF Award IIS-1514268 and partially supported by a gift from Bloomberg L.P.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Statistical approaches to computerassisted translation",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Barrachina",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Civera",
"suffix": ""
},
{
"first": "Elsa",
"middle": [],
"last": "Cubel",
"suffix": ""
},
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "1",
"pages": "3--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergio Barrachina, Oliver Bender, Francisco Casacu- berta, Jorge Civera, Elsa Cubel, Shahram Khadivi, et al. 2008. Statistical approaches to computer- assisted translation. Computational Linguistics, 35(1):3-28.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Comparison of generation strategies for interactive machine translation",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "Sa\u0161a",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2005,
"venue": "EAMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Bender, Sa\u0161a Hasan, David Vilar, Richard Zens, and Hermann Ney. 2005. Comparison of genera- tion strategies for interactive machine translation. In EAMT.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The effect of source analysis on translation confidence",
"authors": [
{
"first": "Arendse",
"middle": [],
"last": "Bernth",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"C"
],
"last": "Mccord",
"suffix": ""
}
],
"year": 2000,
"venue": "AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arendse Bernth and Michael C. McCord. 2000. The effect of source analysis on translation confidence. In AMTA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Findings of the 2015 Workshop on Statistical Machine Translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Hokamp",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, et al. 2015. Findings of the 2015 Workshop on Statistical Machine Translation. In WMT.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Stephan",
"middle": [
"A Della"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "19",
"issue": "",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephan A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Pa- rameter Estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "N-gram counts and language models from the common crawl",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Bas",
"middle": [],
"last": "Van Ooyen",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014. N-gram counts and language models from the common crawl. In LREC.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Frustratingly easy domain adaptation",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III. 2007. Frustratingly easy domain adap- tation. In ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, July.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "TransType2 -an innovative computer-assisted translation system",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Esteban",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [],
"last": "Lorenzo",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"S"
],
"last": "Valderr\u00e1banos",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Lapalme",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Esteban, Jos\u00e9 Lorenzo, Antonio S. Valderr\u00e1banos, and Guy Lapalme. 2004. TransType2 -an innovative computer-assisted translation system. In ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Target-Text Mediated Interactive Machine Translation",
"authors": [
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Isabelle",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Plamondon",
"suffix": ""
}
],
"year": 1997,
"venue": "Machine Translation",
"volume": "12",
"issue": "1-2",
"pages": "175--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Foster, Pierre Isabelle, and Pierre Plamondon. 1997. Target-Text Mediated Interactive Machine Translation. Machine Translation, 12(1-2):175- 194.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Parallel implementations of word alignment tool",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2008,
"venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Gao and Stephan Vogel. 2008. Parallel implemen- tations of word alignment tool. In Software Engineer- ing, Testing, and Quality Assurance for Natural Lan- guage Processing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A systematic exploration of diversity in machine translation",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Shakhnarovich",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Dhruv Batra, Chris Dyer, and Gregory Shakhnarovich. 2013. A systematic exploration of diversity in machine translation. In EMNLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Fast and adaptive online training of feature-rich translation models",
"authors": [
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Sida",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spence Green, Sida Wang, Daniel Cer, and Christo- pher D. Manning. 2013. Fast and adaptive online training of feature-rich translation models. In ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Phrasal: A toolkit for new directions in statistical machine translation",
"authors": [
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spence Green, Daniel Cer, and Christopher D. Man- ning. 2014. Phrasal: A toolkit for new directions in statistical machine translation. In WMT.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Scalable modified Kneser-Ney language model estimation",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Pouzyrevsky",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Forest rescoring: Faster decoding with integrated language models",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2007. Forest rescor- ing: Faster decoding with integrated language mod- els. In ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Refinements to interactive translation prediction based on search graphs",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Chara",
"middle": [],
"last": "Tsoukala",
"suffix": ""
},
{
"first": "Herve",
"middle": [],
"last": "Saint-Amand",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Chara Tsoukala, and Herve Saint- Amand. 2014. Refinements to interactive translation prediction based on search graphs. In ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "TransType: a Computer-Aided Translation Typing System",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Lapalme",
"suffix": ""
}
],
"year": 2000,
"venue": "NAACL Workshop on Embedded Machine Translation Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe Langlais, George Foster, and Guy Lapalme. 2000. TransType: a Computer-Aided Translation Typing System. In NAACL Workshop on Embedded Machine Translation Systems.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Stream-based translation models for statistical machine translation",
"authors": [
{
"first": "Abby",
"middle": [],
"last": "Levenberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2010,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abby Levenberg, Chris Callison-Burch, and Miles Os- borne. 2010. Stream-based translation models for statistical machine translation. In NAACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention- based neural machine translation. In EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Lattice-based minimum error rate training for statistical machine translation",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
},
{
"first": "Jakop",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2008,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Macherey, Franz Josef Och, Ignacio Thayer, and Jakop Uszkoreit. 2008. Lattice-based minimum error rate training for statistical machine translation. In EMNLP.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Neural reranking improves subjective quality of machine translation: NAIST at WAT2015",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Morishita",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2015,
"venue": "2nd Workshop on Asian Translation (WAT2015)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Makoto Morishita, and Satoshi Nakamura. 2015. Neural reranking improves subjective quality of machine translation: NAIST at WAT2015. In 2nd Workshop on Asian Translation (WAT2015).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The alignment template approach to statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "4",
"pages": "417--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-450.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improved alignment models for statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1999,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och, Christoph Tillmann, and Hermann Ney. 1999. Improved alignment models for statistical machine translation. In EMNLP.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Efficient search for interactive statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och, Richard Zens, and Hermann Ney. 2003. Efficient search for interactive statistical machine translation. In EACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Interactive machine translation based on partial statistical phrase-based alignments",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Ortiz-Mart\u00ednez",
"suffix": ""
},
{
"first": "Ismael",
"middle": [],
"last": "Garc\u00eda-Varea",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2009,
"venue": "RANLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Ortiz-Mart\u00ednez, Ismael Garc\u00eda-Varea, and Francisco Casacuberta. 2009. Interactive machine translation based on partial statistical phrase-based alignments. In RANLP.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Online learning for interactive statistical machine translation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Ortiz-Mart\u00ednez",
"suffix": ""
},
{
"first": "Ismael",
"middle": [],
"last": "Garc\u00eda-Varea",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2010,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Ortiz-Mart\u00ednez, Ismael Garc\u00eda-Varea, and Francisco Casacuberta. 2010. Online learning for interactive statistical machine translation. In NAACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Improving interactive machine translation via mouse actions",
"authors": [
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Sanchis-Trilles",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ortiz-Mart\u00ednez",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Civera",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Vidal",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
}
],
"year": 2008,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Germ\u00e1n Sanchis-Trilles, Daniel Ortiz-Mart\u00ednez, Jorge Civera, Francisco Casacuberta, Enrique Vidal, and Hieu Hoang. 2008. Improving interactive machine translation via mouse actions. In EMNLP.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "It's about Time: Temporal Aspects of Cognitive Processes in Text Production",
"authors": [
{
"first": "Joost",
"middle": [],
"last": "Schilperoord",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joost Schilperoord. 1996. It's about Time: Temporal Aspects of Cognitive Processes in Text Production. Rodopi.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Billions of parallel words for free: Building and using the EU bookshop corpus",
"authors": [
{
"first": "Raivis",
"middle": [],
"last": "Skadi\u0146\u0161",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Roberts",
"middle": [],
"last": "Rozis",
"suffix": ""
},
{
"first": "Daiga",
"middle": [],
"last": "Deksne",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raivis Skadi\u0146\u0161, J\u00f6rg Tiedemann, Roberts Rozis, and Daiga Deksne. 2014. Billions of parallel words for free: Building and using the EU bookshop corpus. In LREC.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Hierarchical incremental adaptation for statistical machine translation",
"authors": [
{
"first": "Joern",
"middle": [],
"last": "Wuebker",
"suffix": ""
},
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joern Wuebker, Spence Green, and John DeNero. 2015. Hierarchical incremental adaptation for statistical machine translation. In EMNLP.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "3. This word does not have",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">autodesk</td><td/><td colspan=\"2\">newstest2014</td></tr><tr><td/><td colspan=\"2\">tuning criterion pxB</td><td colspan=\"3\">WPA #prd KSR pxB</td><td colspan=\"2\">WPA #prd KSR</td></tr><tr><td>baseline</td><td>B</td><td>57.9</td><td>41.1</td><td>1.49 57.8</td><td>40.9</td><td>38.0</td><td>0.96 61.7</td></tr><tr><td>target beam search</td><td>B</td><td>61.0</td><td>47.2</td><td>1.74 50.3</td><td>44.1</td><td>49.4</td><td>1.35 51.1</td></tr><tr><td>+ prefix tuning</td><td>(pxB pxB +WPA) 2</td><td>64.0 64.0</td><td>50.3 50.1</td><td>1.95 48.2 1.95 48.2</td><td>44.7 44.9</td><td>50.9 50.3</td><td>1.40 50.5 1.38 50.8</td></tr><tr><td/><td>WPA</td><td>62.4</td><td>50.2</td><td>1.88 48.1</td><td>43.3</td><td>50.5</td><td>1.34 51.7</td></tr><tr><td/><td>#prd</td><td>63.8</td><td>49.7</td><td>1.95 48.4</td><td>44.1</td><td>50.3</td><td>1.37 50.7</td></tr></table>"
},
"TABREF1": {
"text": "Phrase-based results on the English-French task. We compare the baseline with the target beam search proposed in this work. Prefix tuning is evaluated with four different tuning criteria.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"2\">autodesk</td><td/><td colspan=\"2\">newstest2015</td></tr><tr><td/><td>pxB</td><td colspan=\"3\">WPA #prd KSR pxB</td><td colspan=\"2\">WPA #prd KSR</td></tr><tr><td>baseline</td><td>58.5</td><td>37.8</td><td>1.54 64.7</td><td>32.1</td><td>28.5</td><td>0.61 72.7</td></tr><tr><td>target beam search</td><td>61.2</td><td>44.6</td><td>1.78 58.0</td><td>36.0</td><td>39.7</td><td>0.84 64.5</td></tr><tr><td>+ prefix tuning</td><td>62.2</td><td>46.0</td><td>1.85 57.2</td><td>36.0</td><td>41.2</td><td>0.88 63.7</td></tr></table>"
},
"TABREF3": {
"text": "Translation examples from the English-German newstest2015 test set. We compare the prefix decoding output of the baseline against target beam search both with and without prefix tuning. The prefix is printed in italics.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">English-French</td><td/><td/><td colspan=\"2\">English-German</td><td/></tr><tr><td/><td/><td colspan=\"2\">autodesk</td><td colspan=\"2\">newstest2014</td><td colspan=\"2\">autodesk</td><td colspan=\"2\">newstest2015</td></tr><tr><td/><td/><td colspan=\"8\">WPA KSR WPA KSR WPA KSR WPA KSR</td></tr><tr><td>baseline</td><td>1-best</td><td>41.1</td><td>57.8</td><td>38.0</td><td>61.7</td><td>37.8</td><td>64.7</td><td>28.5</td><td>72.7</td></tr><tr><td/><td>10-best</td><td>48.6</td><td>53.3</td><td>42.7</td><td>58.5</td><td>43.9</td><td>60.2</td><td>33.4</td><td>69.5</td></tr><tr><td colspan=\"2\">target beam search 1-best</td><td>50.3</td><td>48.2</td><td>50.9</td><td>50.5</td><td>46.0</td><td>57.2</td><td>41.2</td><td>63.7</td></tr><tr><td/><td>10-best</td><td>56.8</td><td>43.7</td><td>54.9</td><td>47.3</td><td>51.1</td><td>53.2</td><td>46.6</td><td>60.3</td></tr><tr><td/><td>10-best diverse</td><td>64.5</td><td>39.1</td><td>66.2</td><td>41.4</td><td>57.3</td><td>48.4</td><td>55.5</td><td>54.5</td></tr></table>"
},
"TABREF5": {
"text": "He is due to appear in Karratha Magistrates Court on September 23. reference Er soll am 23. September vor dem Amtsgericht in Karratha erscheinen. phrase-based Er ist aufgrund der in Karratha Magistrates Court am 23. September. NMT Er wird am 23. September in Karratah Magistrates Court erscheinen. 2. source The research, funded by the [...], will be published today in the Medical Journal of Australia. reference Die von [...] finanzierte Studie wird heute im Medical Journal of Australia ver\u00f6ffentlicht. phrase-based Die von [...] finanzierte Studie wird heute im Medical Journal of Australia. NMT Die von [...] finanzierte Studie wird heute im Medical Journal of Australia ver\u00f6ffentlicht. 3. source But it is certainly not a radical initiative -at least by American standards. reference Aber es ist mit Sicherheit keine radikale Initiative -jedenfalls nicht nach amerikanischen Standards. phrase-based Aber es ist sicherlich kein radikale Initiative -zumindest von den amerikanischen Standards. NMT Aber es ist gewiss keine radikale Initiative -zumindest nicht nach amerikanischem Ma\u00dfstab. 4. source Now everyone knows that the labor movement did not diminish the strength of the nation but enlarged it. reference Jetzt wissen alle, dass die Arbeiterbewegung die St\u00e4rke der Nation nicht einschr\u00e4nkte, sondern sie vergr\u00f6\u00dferte. phrase-based Jetzt wissen alle, dass die Arbeiterbewegung die St\u00e4rke der Nation nicht schm\u00e4lern, aber vergr\u00f6\u00dfert . NMT Jetzt wissen alle, dass die Arbeiterbewegung die St\u00e4rke der Nation nicht verringert, sondern erweitert hat. 5. source \"As go unions, so go middle-class jobs,\" says Ellison, the Minnesota Democrat who serves as a Congressional Progressive Caucus co-chair. reference \"So wie Gewerkschaften sterben, sterben auch die Mittelklassejobs,\" sagte Ellison, ein Demokrat aus Minnesota und stellvertretender Vorsitzender des Progressive Caucus im Kongress. 
phrase-based \"So wie Gewerkschaften sterben, so Mittelklasse-Jobs\", sagt Ellison, der Minnesota Demokrat, dient als Congressional Progressive Caucus Mitveranstalter. NMT \"So wie Gewerkschaften sterben, so gehen die gehen,\" sagt Ellison, der Liberalen, der als Kongresses des eine dient. 6. source The opposition politician, Imran Khan, accuses Prime Minister Sharif of rigging the parliamentary elections, which took place in May last year. reference Der Oppositionspolitiker Imran Khan wirft Premier Sharif vor, bei der Parlamentswahl im Mai vergangenen Jahres betrogen zu haben. phrase-based Der Oppositionspolitiker Imran Khan wirft Premier Sharif vor, bei der Parlamentswahl im Mai vergangenen Jahres betrogen zu haben. , die NMT Der Oppositionspolitiker Imran Khan wirft Premier Sharif vor, bei der Parlamentswahl im Mai vergangenen Jahres betrogen zu haben.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>1. source</td></tr></table>"
},
"TABREF6": {
"text": "",
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}