ACL-OCL / Base_JSON /prefixA /json /aacl /2020.aacl-main.1.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:12:34.782596Z"
},
"title": "Touch Editing: A Flexible One-Time Interaction Approach for Translation",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CASIA",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "qian.wang@nlpr.ia.ac.cn"
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CASIA",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "jjzhang@nlpr.ia.ac.cn"
},
{
"first": "Lemao",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CASIA",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "cqzong@nlpr.ia.ac.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a touch-based editing method for translation, which is more flexible than traditional keyboard-mouse-based translation postediting. This approach relies on touch actions that users perform to indicate translation errors. We present a dual-encoder model to handle the actions and generate refined translations. To mimic the user feedback, we adopt the TER algorithm comparing between draft translations and references to automatically extract the simulated actions for training data construction. Experiments on translation datasets with simulated editing actions show that our method significantly improves original translation of Transformer (up to 25.31 BLEU) and outperforms existing interactive translation methods (up to 16.64 BLEU). We also conduct experiments on post-editing dataset to further prove the robustness and effectiveness of our method.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a touch-based editing method for translation, which is more flexible than traditional keyboard-mouse-based translation postediting. This approach relies on touch actions that users perform to indicate translation errors. We present a dual-encoder model to handle the actions and generate refined translations. To mimic the user feedback, we adopt the TER algorithm comparing between draft translations and references to automatically extract the simulated actions for training data construction. Experiments on translation datasets with simulated editing actions show that our method significantly improves original translation of Transformer (up to 25.31 BLEU) and outperforms existing interactive translation methods (up to 16.64 BLEU). We also conduct experiments on post-editing dataset to further prove the robustness and effectiveness of our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT) has made great success during the past few years (Sutskever et al., 2014; Bahdanau et al., 2014; Wu et al., 2016; Vaswani et al., 2017) , but automatic machine translation is still far from perfect and cannot meet the strict requirements of users in real applications (Petrushkov et al., 2018) . Many notable humanmachine interaction approaches have been proposed for allowing professional translators to improve machine translation results (Wuebker et al., 2016; Knowles and Koehn, 2016; Hokamp and Liu, 2017) . As an instance of such approaches, post-editing directly requires translators to modify outputs from machine translation (Simard et al., 2007) . However, traditional post-editing requires intensive keyboard interaction, which is inconvenient on mobile devices. Grangier and Auli (2018) suggest a one-time interaction approach with lightweight editing ef-forts, QuickEdit, in which users are asked to simply mark incorrect words in a translation hypothesis for one time in the hope that the system will change them. QuickEdit delivers appealing improvements on draft hypotheses while maintaining the flexibility of human-machine interaction. Unfortunately, only marking incorrect words is far from adequate: for example, it does not indicate the missing information beyond the original hypothesis, which is a typical issue called under-translation in machine translation (Tu et al., 2016) .",
"cite_spans": [
{
"start": 82,
"end": 106,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF36"
},
{
"start": 107,
"end": 129,
"text": "Bahdanau et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 130,
"end": 146,
"text": "Wu et al., 2016;",
"ref_id": "BIBREF43"
},
{
"start": 147,
"end": 168,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 301,
"end": 326,
"text": "(Petrushkov et al., 2018)",
"ref_id": "BIBREF30"
},
{
"start": 474,
"end": 496,
"text": "(Wuebker et al., 2016;",
"ref_id": "BIBREF44"
},
{
"start": 497,
"end": 521,
"text": "Knowles and Koehn, 2016;",
"ref_id": "BIBREF22"
},
{
"start": 522,
"end": 543,
"text": "Hokamp and Liu, 2017)",
"ref_id": "BIBREF18"
},
{
"start": 667,
"end": 688,
"text": "(Simard et al., 2007)",
"ref_id": null
},
{
"start": 807,
"end": 831,
"text": "Grangier and Auli (2018)",
"ref_id": "BIBREF14"
},
{
"start": 1416,
"end": 1433,
"text": "(Tu et al., 2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel one-time interaction method called Touch Editing, which is flexible for users and more adequate for a system to generate better translations. Inspired by human editing process, the proposed method relies on a series of touch-based actions including SUBSTITU-TION, DELETION, INSERTION and REORDERING. These actions do not include lexical information and thus can be flexibly provided by users through various of gestures on touch screen devices. By using these actions, our method is able to capture the editing intention from users to generate better translations: for instance, INSERTION indicates a word is missing at a particular position, and our method is expected to insert the correct word. To this end, we present a neural network model by augmenting Transformer (Vaswani et al., 2017) with an extra encoder for a hypothesis and its actions. Since it is impractical to manually annotate large-scale action dataset to train the model, we thereby adopt the algorithm of TER (Snover et al., 2006) to automatically extract actions from a draft hypothesis and its reference.",
"cite_spans": [
{
"start": 805,
"end": 827,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 1014,
"end": 1035,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To evaluate our method, we conduct simulated experiments on the same translation datasets as in other works (Denkowski et al., 2014; Grangier and Auli, 2018). The results demonstrate that our method can address the well-known challenging issues in machine translation, including over-translation, under-translation and mis-ordering, and thus it outperforms Transformer and QuickEdit by margins of up to 25.31 and 16.64 BLEU points respectively. In addition, experiments on a post-editing dataset further prove the effectiveness and robustness of our method. Finally, we implement a real application on mobile phones to discuss the usability in real scenarios.",
"cite_spans": [
{
"start": 108,
"end": 132,
"text": "(Denkowski et al., 2014;",
"ref_id": "BIBREF9"
},
{
"start": 133,
"end": 157,
"text": "Grangier and Auli, 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Example of interaction methods (example rows: Source x 'weite wege m\u00fcsse proctor f\u00fcr die nahrungsmittelbeschaffung nicht gehen .'; Reference y 'proctor does not have to travel far to buy food .'; Hypothesis y' 'travel far does not necessary to proctor for food supply .'; QuickEdit Result 'travel far does not require to proctor food supplies .'; Modified m(y') 'proctor does not necessary to travel far for <INS> food supply .'). QuickEdit allows users to mark incorrect words. Our method introduces more flexible actions. m(y') is modified from y' by applying reordering actions and inserting a special token <INS> to keep alignment with the action sequence a, which contains actions like SUBSTITUTION, INSERTION and DELETION. \"-\" denotes that the word in that position is unmarked. Our method then generates a refined translation based on the modified hypothesis m(y') and the action sequence a.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Touch Editing",
"sec_num": null
},
{
"text": "2 Touch Editing Approach",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Touch Editing",
"sec_num": null
},
{
"text": "QuickEdit allows translators to mark incorrect words which they expect the system to change (Grangier and Auli, 2018) . However, as shown in Figure 1 , the information is inadequate for a system to correct a translation hypothesis, especially when it comes to under-translation, in which the system is hardly to predict missing words into hypotheses. To achieve better adequacy, we take human editing habits into consideration. As shown in Figure 1 , a human translator may insert, delete, substitute or reorder some words to correct errors of undertranslation, over-translation, mis-translation and mis-ordering in an original translation hypothesis. Based on human editing process, we define a set of actions to represent human editing intentions:",
"cite_spans": [
{
"start": 92,
"end": 117,
"text": "(Grangier and Auli, 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 141,
"end": 149,
"text": "Figure 1",
"ref_id": null
},
{
"start": 440,
"end": 449,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Actions",
"sec_num": "2.1"
},
{
"text": "\u2022 INSERTION: a new word should be inserted into a given position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Actions",
"sec_num": "2.1"
},
{
"text": "\u2022 DELETION: a word at a specific position should be deleted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Actions",
"sec_num": "2.1"
},
{
"text": "\u2022 SUBSTITUTION: a word should be substituted by another word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Actions",
"sec_num": "2.1"
},
{
"text": "\u2022 REORDERING: a segment of words should be moved to another position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Actions",
"sec_num": "2.1"
},
{
"text": "In Touch Editing, these actions can be performed by human translators on a given machine hypothesis to indicate translation errors. To keep the flexibility of interactions, for SUBSTITUTION and INSERTION actions, our method allows users to only indicate which word should be substitute or in which position a word should be inserted. The light-weight interaction in Touch Editing is nonlexical, i.e., it does not require any keyboard inputs, and thus can be adopted to mobile devices with touch screens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Actions",
"sec_num": "2.1"
},
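The four non-lexical actions above can be sketched as a small enum (a hypothetical illustration mirroring the paper's action types, not the authors' code):

```python
from enum import Enum

class Action(Enum):
    """The four non-lexical touch actions defined in Section 2.1."""
    INSERTION = "I"      # a new word should be inserted at a given position
    DELETION = "D"       # the word at a specific position should be deleted
    SUBSTITUTION = "S"   # the word at a position should be replaced by another
    REORDERING = "R"     # a segment of words should be moved to another position

# An action sequence aligned with a hypothesis uses "-" for unmarked positions.
marks = ["-", Action.SUBSTITUTION.value, "-", Action.INSERTION.value]
```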
{
"text": "Our model seeks to correct translation errors of an original hypothesis y based on actions A provided by human translator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "To make full use of the actions, we firstly modify the original hypothesis by applying A on y to obtain A(y ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A(y ) = m(y ), a .",
"eq_num": "(1)"
}
],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "Specifically, as shown in Figure 1 , m(y ) is modified from y by reordering the segment in gray color and inserting a token INS , and thus the REORDERING actions is implicitly included in Figure 2 : Model architecture. We add a hypothesis encoder (the right part) into Transformer which differs from source encoder (the left part) in positional embedding. We use learned action positional embedding instead of the sinusoids. We then use a neural network model to generate a translation y for the source sentence x, the hypothesis y and the actions A:",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Figure 1",
"ref_id": null
},
{
"start": 188,
"end": 196,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "P (y | x, y , A; \u03b8) = N n=1 P (yn|y<n, x, m(y ), a; \u03b8). (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "As shown in Figure 2 , the neural network model we developed is a dual encoder model based on Transformer similar to Tebbifakhr et al. (2018) . Specifically, besides encoding the source sentence x with source encoder (the left part of Figure 2 ), our model additionally encodes A(y ) with an extra hypothesis encoder (the right part of Figure 2 ) and integrates the encoded representations into decoding network using dual multi-head attention.",
"cite_spans": [
{
"start": 117,
"end": 141,
"text": "Tebbifakhr et al. (2018)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 2",
"ref_id": null
},
{
"start": 235,
"end": 243,
"text": "Figure 2",
"ref_id": null
},
{
"start": 336,
"end": 345,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "Encoding A(y ) As shown in the right part of Figure 2 , the hypothesis encoder firstly embeds m(y ) with length l in distributed space using the same word embedding as in decoder, which is denoted as",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 53,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "w = {w 1 , \u2022 \u2022 \u2022 , w l }. Then it encodes a = {a 1 , \u2022 \u2022 \u2022 , a l }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "with learned positional embedding according to the specific actions. As shown in Figure 3 , the action positional embedding includes four embedding matrixes corresponding to three action types and a none action for positions without any action. For the ith position of a, the encoder chooses an embedding matrix based on the action type of a i and selects the ith row of the matrix as the positional embedding vector, which is denoted as p i :",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 89,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p i = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 P E INSERTION (i) if a i = I P E DELETION (i) if a i = D P E SUBSTITUTION (i) if a i = S P E None (i) if a i = -",
"eq_num": "(3)"
}
],
"section": "Model",
"sec_num": "2.2"
},
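The action positional embedding lookup of Eq. (3) can be sketched in NumPy as follows (a minimal illustration: `D_MODEL`, `MAX_LEN` and the random initialization are assumed values, not the paper's trained parameters):

```python
import numpy as np

D_MODEL = 8    # embedding dimension (assumed for illustration)
MAX_LEN = 16   # maximum hypothesis length (assumed)
rng = np.random.default_rng(0)

# One learned embedding matrix per action type: I, D, S, and "-" (no action).
PE = {t: rng.standard_normal((MAX_LEN, D_MODEL)) for t in ("I", "D", "S", "-")}

def action_positional_embedding(actions):
    """For each position i, pick row i of the matrix matching action a_i (Eq. 3)."""
    return np.stack([PE[a][i] for i, a in enumerate(actions)])

p = action_positional_embedding(["-", "S", "-", "I", "D"])
```

Each `p_i` is then added to the word embedding `w_i` to form the encoder input, replacing the fixed sinusoidal encoding.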
{
"text": "Where P E * denote the action positional embedding matrixes in Figure 3 . The learned action positional embedding is used in hypothesis encoder to replace the fixed sinusoids positional encoding in Transformer encoder. Next, the encoder adds the word embedding w and the action positional embedding p to obtain input embedding",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 71,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "e = {w 1 + p 1 , \u2022 \u2022 \u2022 , w l + p l }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "The following part of hypothesis encoder lies the same as Transformer encoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "Decoding The output of hypothesis encoder, together with the output of source encoder, are fed into the decoder. To combine both of the encoders' outputs, we apply dual multi-head attention in each layer of decoder: the attention sub-layer attends to both encoders' outputs by performing multi-head attention respectively:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A src = MultiHead(Q tgt , K src , V src ) A hyp = MultiHead(Q tgt , K hyp , V hyp )",
"eq_num": "(4)"
}
],
"section": "Model",
"sec_num": "2.2"
},
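The dual attention of Eq. (4) can be sketched with single-head scaled dot-product attention in NumPy (shapes and variable names are illustrative assumptions, not the multi-head implementation):

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention (one head), softmax written out explicitly.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

def dual_attention(Q_tgt, K_src, V_src, K_hyp, V_hyp):
    # Eq. (4): attend to source and hypothesis encoder outputs separately,
    # then average the two attention vectors to replace the usual
    # encoder-decoder attention.
    A_src = attention(Q_tgt, K_src, V_src)
    A_hyp = attention(Q_tgt, K_hyp, V_hyp)
    return (A_src + A_hyp) / 2

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))           # 4 target positions, d = 8
K_s = V_s = rng.standard_normal((6, 8))   # 6 source positions
K_h = V_h = rng.standard_normal((5, 8))   # 5 hypothesis positions
out = dual_attention(Q, K_s, V_s, K_h, V_h)
```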
{
"text": "Where Q tgt is coming from previous layer of the decoder, K src and V src matrixes are final representations of the source encoder while K hyp and V hyp matrixes are final representations of the hypothesis encoder. The two attention vectors A src and A hyp are then averaged to replace encoder-decoder attention in Transformer, resulting in the input of next layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "Training The overall model, which includes a source encoder, a hypothesis encoder with action positional embedding, and a decoder, is jointly trained. We maximize the log-likelihood of the reference sentence y given the source sentence x, the initial hypothesis y , and the corresponding actions A. By applying A on y , the training objective becomes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 = arg max \u03b8 D log P (y | x, m(y ), a; \u03b8) .",
"eq_num": "(5)"
}
],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "where D is the training dataset consists of quadruplets like (source x, modified hypothesis m(y ), action sequence a, target y). We use Adam optimizer (Kingma and Ba, 2014), an extension of stochastic gradient descent (Bottou, 1991) , to train the model. After training, the model with parameter\u03b8 is then used in inference phase to generate refined translations for test data, which consists of triplets like (source x, modified hypothesis m(y ), action sequence a).",
"cite_spans": [
{
"start": 218,
"end": 232,
"text": "(Bottou, 1991)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.2"
},
{
"text": "The actions we defined in Section 2.1 can be provided by human translators in real applications. However, it is impractical to manually collect a large scale annotated dataset for training our model. Thus we resort to propose an approach to automatically extract editing actions from a machine translation hypothesis and its corresponding reference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Data Annotation",
"sec_num": "3"
},
{
"text": "To make our method powerful, the number of editing actions which convert a hypothesis to its Algorithm 1 Extracting actions with TER Input: hypothesis y , reference y m(y ) \u2190 y a \u2190 Empty action sequence repeat Find reordering r that most reduces min-editdistance(m(y ), y) if r reduces edit distance then m(y ) \u2190 applying r to m(y ) end if until no beneficial reordering remains a \u2190 min-edit(m(y ), y) m(y ) \u2190 insert INS into m(y ) based on a Output: m(y ), a reference is minimal as presented in Section 2.1. Snover et al. (2006) study this problem and point out that its optimal solution is NP-hard (Lopresti and Tomkins, 1997; Shapira and Storer, 2002) . To optimize the number of editing actions, they instead propose an approximate algorithm based on minimal edit distance. The basic idea of their algorithm can be explained as follows. It repeatedly modifies the intermediate string by applying reordering actions which is greedily found to mostly reduce the edit distance between the intermediate string and the reference, until no more beneficial reordering remains.",
"cite_spans": [
{
"start": 510,
"end": 530,
"text": "Snover et al. (2006)",
"ref_id": "BIBREF35"
},
{
"start": 601,
"end": 629,
"text": "(Lopresti and Tomkins, 1997;",
"ref_id": "BIBREF24"
},
{
"start": 630,
"end": 655,
"text": "Shapira and Storer, 2002)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Data Annotation",
"sec_num": "3"
},
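The edit-distance backtrace step of Algorithm 1 can be sketched as runnable Python (an illustrative implementation: the greedy reordering search is omitted for brevity, and the token/label names are assumptions mirroring the paper's notation):

```python
def edit_actions(hyp, ref):
    """Word-level edit distance between token lists, backtraced into an
    action sequence aligned with the <INS>-augmented hypothesis:
    '-' keep, 'S' substitute, 'D' delete, 'I' insert (marked by <INS>).
    The greedy reordering loop of Algorithm 1 is omitted here."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    # Backtrace: emit the modified hypothesis and the aligned action sequence.
    i, j, mod, acts = m, n, [], []
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (hyp[i - 1] != ref[j - 1]):
            mod.append(hyp[i - 1])
            acts.append('-' if hyp[i - 1] == ref[j - 1] else 'S')
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            mod.append(hyp[i - 1])
            acts.append('D')
            i -= 1
        else:
            mod.append('<INS>')
            acts.append('I')
            j -= 1
    return mod[::-1], acts[::-1]

mod, acts = edit_actions("a b c".split(), "a x c d".split())
```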
{
"text": "In this paper, we adopt the basic idea of Snover et al. (2006) to automatically extract actions. As shown in Algorithm 1, given a reference and a hypothesis, the algorithm repeatedly reorders words to reduce the word-level minimal edit distance between reference y and modified hypothesis m(y ) until no beneficial reordering remains. With the modified hypothesis m(y ), the algorithm then calculates the editing action sequence a that minimize the word-level edit distance between m(y ) and y (see Action Sequence a in Figure 1 ). It finally inserts special token INS to keep alignment between the modified hypothesis and the action sequence (see Modified m(y ) in Figure 1 ). The output of the algorithm, which is a tuple of modified hypothesis and action sequence, together with the source sentence and its reference, are used to train our model as described in Section 2.2.",
"cite_spans": [
{
"start": 42,
"end": 62,
"text": "Snover et al. (2006)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 520,
"end": 528,
"text": "Figure 1",
"ref_id": null
},
{
"start": 666,
"end": 674,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automatic Data Annotation",
"sec_num": "3"
},
{
"text": "We conduct simulated experiment on translation datasets. Specifically, we translate the source sentences in translation datasets with a pre-trained Transformer model and build the training data with simulated human feedback using algorithm described in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "The experiment is conducted on three translation datasets: the IWSLT'14 English-German dataset (Cettolo et al., 2014) , the WMT'14 English-German dataset (Bojar et al., 2014) and the WMT'17 Chinese-English dataset (Ondrej et al., 2017) . The IWSLT'14 English-German dataset consists of 170k sentence pairs from TED talk subtitles. We use dev2010 as validation set which contains 887 sentent pairs, and a concatenation of tst2010, tst2011 and tst2012 as test set which con- tains 4698 sentence pairs. For WMT'14 English-German dataset, we use the same data and preprocessing as (Luong et al., 2015) . The dataset consists of 4.5M sentence pairs for training 1 . We take newstest2013 for validation and newstest2014 for testing. For Chinese to English dataset, we use CWMT portion which is a subset of WMT'17 training data containing 9M sentence pairs. We validate on newsdev2017 and test on newstest2017.",
"cite_spans": [
{
"start": 95,
"end": 117,
"text": "(Cettolo et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 154,
"end": 174,
"text": "(Bojar et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 214,
"end": 235,
"text": "(Ondrej et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 577,
"end": 597,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Settings",
"sec_num": "4.1"
},
{
"text": "As for vocabulary, the English and German datasets are encoded using byte-pair encoding (Sennrich et al., 2015) with a shared vocabulary of 8k tokens for IWSLT'14 and 32k tokens for WMT'14. For Chinese to English dataset, the English vocabulary is set to 30k subwords, while the Chinese data is tokenized into character level and the vocabulary is set to 10k characters. Note that even with subword units or character units, the actions are marked in word level, i.e. all units from a given word share the same actions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Settings",
"sec_num": "4.1"
},
{
"text": "We train the models with two settings. For the larger WMT English-German and English-Chinese dataset, we borrow the Transformer base parameter set of Vaswani et al. (2017) , which contains 6 layers for encoders and decoder respectively. The multi-head attention of each layer contains 8 heads. The word embedding size is set to 512 and the feedforward layer dimension is 2048. For the smaller IWSLT dataset, we use 3 layers for each component and multi-head attention with 4 heads in each layer. The word embedding size is 256 and the feedforward layers' hidden size is 1024. We also apply label smoothing \u03c3 ls = 0.1 and dropout p dropout = 0.1 during training. All models are trained from scratch with corresponding training data, e.g., parallel data for Transformer baseline model and annotated data for Touch Editing.",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Settings",
"sec_num": "4.1"
},
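The two training configurations above can be collected into a plain dict (the key names, e.g. `transformer_base`, are assumed for illustration; the values are the hyperparameters stated in the text):

```python
# Hyperparameters from Section 4.1; dict keys are illustrative names.
CONFIGS = {
    "transformer_base": {   # larger WMT datasets (Vaswani et al., 2017)
        "layers": 6, "heads": 8, "d_model": 512, "d_ff": 2048,
    },
    "small_iwslt": {        # smaller IWSLT'14 dataset
        "layers": 3, "heads": 4, "d_model": 256, "d_ff": 1024,
    },
}
# Applied in both settings during training:
REGULARIZATION = {"label_smoothing": 0.1, "dropout": 0.1}
```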
{
"text": "We report the results of different systems including Transformer and QuickEdit. The Transformer model is tested on bitext data, i.e., the model directly generates translations based on source sentences. As for the QuickEdit, we followed the settings of Grangier and Auli (2018) , in which they mark all words in initial translation results that do not appear in the references as incorrect, and use the QuickEdit model to generate refined translations. In Touch Baseline setting, we use the algorithm described in Section 3 to obtain the actions respect to initial translations and references, and then apply reordering and deletion actions to obtain refined translations. The Touch Edit setting accesses the same information as Touch Baseline but uses the neural model described in Section 2.2 to handle the actions. Note that the original QuickEdit model is based on ConvS2S, and thus we reimplement it based on Transformer to keep the fairness of comparison 2 .",
"cite_spans": [
{
"start": 253,
"end": 277,
"text": "Grangier and Auli (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.2"
},
{
"text": "As shown in Table 1 , our model strongly outperforms other systems. As for BLEU score, our model achieves up to +25.31 than Transformer and +16.64 than QuickEdit. Our model also significantly reduces TER by -0.28 and -0.18 comparing to Transformer and QuickEdit.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.2"
},
{
"text": "We also notice that the improvement on the smaller IWSLT'14 dataset (up to 17.22) is not as Table 2 : Word reordering quality, measured in number of word reorderings required to align to references, and RIBES score. significant as that on the larger WMT'14 dataset (up to 24.74) and WMT'17 dataset (up to 25.31) . This observation is in consistent with QuickEdit, which also gains lower improvement on the smaller dataset. The reason, as described in Grangier and Auli (2018) , is that the underlying machine translation model is overfitted on the smaller 170k dataset. Thus the translation output requires less edits on which we build simulated editing action dataset.",
"cite_spans": [
{
"start": 283,
"end": 311,
"text": "WMT'17 dataset (up to 25.31)",
"ref_id": null
},
{
"start": 451,
"end": 475,
"text": "Grangier and Auli (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 92,
"end": 99,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.2"
},
{
"text": "The limited supervised data further impacts the model quality and final results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.2"
},
{
"text": "To further investigate the model capacity, we conduct four experiments on WMT'14 English to German dataset. We analyze the factors that bring the remarkable improvement by modeling coverage, reordering quality and accuracy of each action type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
{
"text": "We also test our model with limited number of actions to evaluate the model usability with partial feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
{
"text": "Reordering We evaluate the word reordering quality of our model, compared with Transformer and QuickEdit. We adopt two automatic evaluation metrics. One metric is based on monolingual alignment. We firstly align model hypotheses and references with TER, and then count the number of words that should be reordered. As shown by Reordering in Table 2 , the output of our model requires less word reorderings to align with reference. The other metric is RIBES (Isozaki et al., 2010) , which is based on rank correlation. As shown in Table 2 , our method outperforms the other two systems with 90.50 versus 79.97 for Transformer and 84.33 for QuickEdit.",
"cite_spans": [
{
"start": 457,
"end": 479,
"text": "(Isozaki et al., 2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 341,
"end": 348,
"text": "Table 2",
"ref_id": null
},
{
"start": 530,
"end": 537,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
{
"text": "Accuracy As described in Section 2.1, the actions of our method represent human editing intentions, i.e., they indicate errors in original hypothesis and our model is expected to correct these errors based on editing actions. To evaluate the accuracy Table 3 , our model achieves the accuracy of 99.15% for deletion 3 , 36.32% for insertion and 31.86% for substitution. The high deletion accuracy shows that our model indeed learns to delete over-translated words. For insertion and substitution, the actions only indicate where to insert or substitute, and do not provide any ground truth. Since the self-attention mechanism in Transformer is good at word sense disambiguation (Tang et al., 2018a,b) , our model is able to select correct words to insert or substitute.",
"cite_spans": [
{
"start": 678,
"end": 700,
"text": "(Tang et al., 2018a,b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 251,
"end": 258,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
{
"text": "Partial Feedback The model we train and test is based on all actions, i.e., all translation errors of the initial hypotheses are marked out. However, a human translator may not provide all marks. In fact, the feedback of human translators is hard to predict, and vary with different translators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
{
"text": "In this case, we test our model with simulated partial feedback. We train our model with all actions and randomly select 0%, 5%, . . . 100% of actions in test set to simulate human behavior. To further investigate the effect of partial feedback 7 on different actions, we train three extra models with specific kinds of actions: INSERT, DELETE and SUBSTITUTE. We then randomly select part of each kind of actions to test the model. Note that the REORDERING actions are always enabled since they are operated on a segment of words and cannot be partially disabled. To investigate the effect of REORDERING actions, we also train a model without reordering and partially select three kinds of actions to test the model. As shown in Figure 4 , for the model trained with all actions, the BLEU scores increases from 29.43 (with reordering only) to 50.49 (with all actions) as more actions are provided. For the models trained with specific kinds of actions and the model trained without reordering, the observation is similar.",
"cite_spans": [],
"ref_spans": [
{
"start": 729,
"end": 737,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
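The partial-feedback simulation described above can be sketched in a few lines. This is a hypothetical illustration (the action representation and function name are our own, not from the paper); REORDERING actions are always retained, matching the experimental setup, while the other action kinds are kept independently with the given probability.

```python
import random

def sample_partial_feedback(actions, keep_ratio, seed=0):
    """Keep each non-REORDER action with probability keep_ratio.

    actions: list of (type, position) pairs; REORDER actions operate on
    word segments and are always kept, as in the paper's experiments.
    """
    rng = random.Random(seed)
    kept = []
    for act in actions:
        if act[0] == "REORDER" or rng.random() < keep_ratio:
            kept.append(act)
    return kept

# Example: simulate a translator providing only half of the marks.
actions = [("INSERT", 2), ("DELETE", 5), ("REORDER", (3, 6)), ("SUBSTITUTE", 7)]
partial = sample_partial_feedback(actions, keep_ratio=0.5)
```

Sweeping `keep_ratio` over 0.0, 0.05, ..., 1.0 reproduces the x-axis of the partial-feedback experiment.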
{
"text": "In previous sections, our model is tested and analyzed on machine translation datasets with simulated actions. However, in post-editing scenarios, our model faces three major challenges: action inconsistency, data inconsistency and model inconsistency. For action inconsistency, the editing actions used to train our model are extracted from machine predictions and references. The references in our training data are written by humans from scratch, while in post-editing the references (human post-edited results) are revisions of machine translations, so the editing actions might differ. For data inconsistency, our model is trained on data from the news domain (WMT) or TED talks (IWSLT), whereas real-world data may come from other domains. For model inconsistency, we use Transformer to build our training data, while the translation model used in real applications may be different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Post-Editing Data",
"sec_num": "4.4"
},
{
"text": "To investigate the performance under these three challenges, we test our model on the WMT English-German Automatic Post-Editing (APE) dataset in the IT domain, using data from WMT'16 (Bojar et al., 2016) and WMT'17 (Ondrej et al., 2017) . The test data consists of (source, machine translation, human post-edit) triplets, in which the machine translation is generated by a PBSMT system. We use the algorithm of Section 3 to extract actions from the machine translations and human post-edited sentences. With these actions and the original machine translations, we use the model trained on the WMT'14 English-German dataset in Section 4 to generate refined translations. For comparison, we also evaluate QuickEdit with the same setting. Table 4 summarizes the results on the post-editing dataset. Even with the three kinds of inconsistency, our model still gains significant improvements of up to 20.05 BLEU over the raw machine translation system (PBSMT). As for QuickEdit, the improvement on the post-editing dataset (about 4-7 BLEU) is smaller than that on the translation dataset (about 11 BLEU). We conjecture that the stable improvement of our method is due to its more flexible action types: with detailed editing actions, the model can correct a variety of errors in draft machine translations, which accounts for the robustness and effectiveness of our method.",
"cite_spans": [
{
"start": 173,
"end": 193,
"text": "(Bojar et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 198,
"end": 226,
"text": "WMT'17 (Ondrej et al., 2017)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 725,
"end": 732,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Experiments on Post-Editing Data",
"sec_num": "4.4"
},
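The action extraction used here aligns a draft machine translation with its post-edited version. The paper extracts actions with the TER algorithm, which additionally recovers REORDER actions from word shifts; as a hedged, simplified illustration, the sketch below uses a plain Levenshtein alignment and extracts only INSERT, DELETE and SUBSTITUTE actions (the function name and action format are our own).

```python
def extract_actions(hyp, ref):
    """Extract simulated editing actions on the hypothesis side.

    hyp: draft machine translation (list of tokens)
    ref: post-edited reference (list of tokens)
    Returns (action, position) pairs; INSERT positions refer to the
    space before the given hypothesis token index.
    """
    m, n = len(hyp), len(ref)
    # Standard edit-distance DP table.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    # Backtrace to recover the actions.
    actions, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] and hyp[i - 1] == ref[j - 1]:
            i, j = i - 1, j - 1  # tokens match, no action needed
        elif i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + 1:
            actions.append(("SUBSTITUTE", i - 1))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            actions.append(("DELETE", i - 1))
            i -= 1
        else:
            actions.append(("INSERT", i))  # insert before hyp position i
            j -= 1
    return list(reversed(actions))
```

Applied to a (machine translation, human post-edit) pair from the APE triplets, this yields the simulated feedback fed to the dual-encoder model.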
{
"text": "So far, the experiments we conducted are based on simulated human feedback, in which the actions are extracted from initial machine translation results and their corresponding references to simulate human editing actions. Thus, in our simulated setting, the references are used in the inference phase to simulate human behavior, as in other interaction methods (Denkowski et al., 2014; Marie and Max, 2015; Grangier and Auli, 2018) . These experiments show that our method can significantly improve the initial translation with simulated actions. However, whether the actions are convenient to perform is a key point in real applications.",
"cite_spans": [
{
"start": 357,
"end": 381,
"text": "(Denkowski et al., 2014;",
"ref_id": "BIBREF9"
},
{
"start": 382,
"end": 402,
"text": "Marie and Max, 2015;",
"ref_id": "BIBREF26"
},
{
"start": 403,
"end": 427,
"text": "Grangier and Auli, 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion on Real Scenarios",
"sec_num": "4.5"
},
{
"text": "To investigate the usability and applicable scenarios of our method, we implement a real mobile application on iPhone, in which the actions can be performed on a multi-touch screen. For a given source sentence, the application provides an initial machine translation. The text area of the translation responds to several gestures 4 : Tap indicates that a missing word should be inserted into the nearest space between two words; Swipe on a word indicates that the word should be deleted; Long-Press on a word means the word should be substituted with another word; Pan drags a word to another position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion on Real Scenarios",
"sec_num": "4.5"
},
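The gesture-to-action mapping above is straightforward to encode. The sketch below is a hypothetical illustration: the recognizer class names are real iOS UIKit classes, but the mapping table and helper function are our own, not part of the paper's application.

```python
# Map each iOS gesture recognizer to the editing action it triggers.
GESTURE_TO_ACTION = {
    "UITapGestureRecognizer": "INSERT",            # tap a space between words
    "UISwipeGestureRecognizer": "DELETE",          # swipe across a word
    "UILongPressGestureRecognizer": "SUBSTITUTE",  # long-press a word
    "UIPanGestureRecognizer": "REORDER",           # drag a word elsewhere
}

def to_action(gesture, position):
    """Translate a recognized gesture at a token position into an editing action."""
    return (GESTURE_TO_ACTION[gesture], position)
```

For example, a swipe over the fourth token of the draft translation becomes a DELETE action at that position, which is then encoded alongside the draft by the dual-encoder model.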
{
"text": "We conduct a free-use study with four participants, who are asked to translate 20 sentences randomly selected from the LDC Chinese-English test set with (1) Touch Editing or (2) keyboard input, after 5 minutes to get familiar with the application. We observe that users with Touch Editing tend to correct an error multiple times when the system cannot predict the word they want, while users with keyboard input tend to modify more of the initial translation and spend more time choosing words. We then conduct an unstructured interview on the usability of our method. The interview shows that Touch Editing is convenient and intuitive but lacks the ability to generate the final accurate translation. It can be treated as a light-weight proofreading method, suitable for Pre-Post-Editing (Marie and Max, 2015) .",
"cite_spans": [
{
"start": 839,
"end": 860,
"text": "(Marie and Max, 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion on Real Scenarios",
"sec_num": "4.5"
},
{
"text": "Post-editing is a pragmatic method that allows human translators to directly correct errors in draft machine translations (Simard et al., 2007) . Compared to purely manual translation, it achieves higher productivity while maintaining human translation quality (Plitt and Masselot, 2010; Federico et al., 2012) .",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "(Simard et al., 2007)",
"ref_id": null
},
{
"start": 266,
"end": 292,
"text": "(Plitt and Masselot, 2010;",
"ref_id": "BIBREF31"
},
{
"start": 293,
"end": 315,
"text": "Federico et al., 2012)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Many notable works introduce different levels of human-machine interaction in post-editing. Barrachina et al. (2009) propose a prefix-based interactive method which enables users to correct the first translation error from left to right in each iteration. Green et al. (2014) implement a prefix-based interactive translation system, and Huang et al. (2015) incorporate prefix-constrained translation candidates into a novel input method for translators. Peris et al. (2017) further extend this idea to neural machine translation.",
"cite_spans": [
{
"start": 93,
"end": 117,
"text": "Barrachina et al. (2009)",
"ref_id": "BIBREF1"
},
{
"start": 256,
"end": 275,
"text": "Green et al. (2014)",
"ref_id": "BIBREF15"
},
{
"start": 336,
"end": 355,
"text": "Huang et al. (2015)",
"ref_id": "BIBREF19"
},
{
"start": 451,
"end": 470,
"text": "Peris et al. (2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The prefix-based protocol is inflexible since users have to follow the left-to-right order. To overcome this weakness, Gonz\u00e1lez-Rubio et al. (2016) and Cheng et al. (2016) introduce interaction methods that allow users to correct errors at arbitrary positions in a machine hypothesis, while Weng et al. (2019) additionally prevent repeated mistakes by memorizing revision actions. Hokamp and Liu (2017) propose grid beam search to incorporate lexical constraints such as words and phrases provided by human translators and force the constraints to appear in the hypothesis.",
"cite_spans": [
{
"start": 171,
"end": 190,
"text": "Cheng et al. (2016)",
"ref_id": "BIBREF8"
},
{
"start": 309,
"end": 327,
"text": "Weng et al. (2019)",
"ref_id": "BIBREF42"
},
{
"start": 392,
"end": 413,
"text": "Hokamp and Liu (2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Recently, some researchers have resorted to more flexible interactions, which only require mouse clicks or touch actions. For example, Marie and Max (2015) and Domingo et al. (2016) propose interactive translation methods which ask users to select correct or incorrect segments of a translation using only the mouse. Similar to our work, Grangier and Auli (2018) propose a mouse-based interactive method which allows users to simply mark the incorrect words in draft machine hypotheses and expects the system to generate refined translations. Herbig et al. (2019, 2020) propose a multi-modal interface for post-editors which takes pen, touch, and speech modalities into consideration.",
"cite_spans": [
{
"start": 127,
"end": 147,
"text": "Marie and Max (2015)",
"ref_id": "BIBREF26"
},
{
"start": 150,
"end": 171,
"text": "Domingo et al. (2016)",
"ref_id": "BIBREF10"
},
{
"start": 322,
"end": 346,
"text": "Grangier and Auli (2018)",
"ref_id": "BIBREF14"
},
{
"start": 526,
"end": 545,
"text": "Herbig et al. (2019",
"ref_id": "BIBREF17"
},
{
"start": 546,
"end": 568,
"text": "Herbig et al. ( , 2020",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The protocol of generating a refined translation given an initial translation is also used in the polishing mechanism of machine translation (Xia et al., 2017; Geng et al., 2018) and in the automatic post-editing (APE) task (Pal et al., 2016) . The idea of a multi-source encoder is also widely used in APE research (Chatterjee et al., 2018, 2019) . In human-machine interaction scenarios, the human feedback is used as extra information in the polishing process.",
"cite_spans": [
{
"start": 141,
"end": 159,
"text": "(Xia et al., 2017;",
"ref_id": "BIBREF45"
},
{
"start": 160,
"end": 178,
"text": "Geng et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 217,
"end": 234,
"text": "Pal et al., 2016)",
"ref_id": "BIBREF28"
},
{
"start": 319,
"end": 343,
"text": "(Chatterjee et al., 2018",
"ref_id": "BIBREF7"
},
{
"start": 344,
"end": 370,
"text": "(Chatterjee et al., , 2019",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we propose Touch Editing, a flexible and effective interaction approach which allows human translators to revise machine translation results via touch actions. The actions we introduce can be provided with gestures such as tapping, panning, swiping or long-pressing on touch screens to represent human editing intentions. We present a simulated action extraction method for constructing training data and a dual-encoder model that handles the actions and generates refined translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We demonstrate the effectiveness of the proposed interaction approach and discuss the applicable scenarios with a free-use study. For future work, we plan to conduct large-scale real-world experiments to evaluate the productivity of different interactive machine translation methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We use the pre-processed data from https://nlp. stanford.edu/projects/nmt/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In fact, the comparison is still unfair because QuickEdit and our method access more supervised information than Transformer from simulated human feedback, which is the nature of interaction settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We do not explicitly remove words that are marked as DELETION; the neural model is responsible for the final decision on whether these words should be deleted. This might slightly hurt BLEU and accuracy but potentially generates more fluent translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These gestures are explicit and directly supported by Apple iOS devices: https://developer.apple.com/ documentation/uikit/uigesturerecognizer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Statistical approaches to computer-assisted translation",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Barrachina",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Civera",
"suffix": ""
},
{
"first": "Elsa",
"middle": [],
"last": "Cubel",
"suffix": ""
},
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Lagarda",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Jes\u00fas",
"middle": [],
"last": "Tom\u00e1s",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Vidal",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "1",
"pages": "3--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergio Barrachina, Oliver Bender, Francisco Casacu- berta, Jorge Civera, Elsa Cubel, Shahram Khadivi, Antonio Lagarda, Hermann Ney, Jes\u00fas Tom\u00e1s, En- rique Vidal, et al. 2009. Statistical approaches to computer-assisted translation. Computational Lin- guistics, 35(1):3-28.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Findings of the 2014 workshop on statistical machine translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Leveling",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Herve",
"middle": [],
"last": "Saint-Amand",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the ninth workshop on statistical machine translation",
"volume": "",
"issue": "",
"pages": "12--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. 2014. Findings of the 2014 workshop on statistical machine translation. In Pro- ceedings of the ninth workshop on statistical ma- chine translation, pages 12-58.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Findings of the 2016 conference on machine translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Jimeno"
],
"last": "Yepes",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "131--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, An- tonio Jimeno Yepes, Philipp Koehn, Varvara Lo- gacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation. In Pro- ceedings of the First Conference on Machine Trans- lation: Volume 2, Shared Task Papers, pages 131- 198.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Stochastic gradient learning in neural networks",
"authors": [
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of Neuro-N\u0131mes",
"volume": "91",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L\u00e9on Bottou. 1991. Stochastic gradient learning in neural networks. Proceedings of Neuro-N\u0131mes, 91(8):12.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Report on the 11th iwslt evaluation campaign, iwslt",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Jan Niehues, Sebastian St\u00fcker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th iwslt evaluation campaign, iwslt 2014. In Proceedings of the International Workshop on Spoken Language Translation, Hanoi, Vietnam, vol- ume 57.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Findings of the wmt 2019 shared task on automatic post-editing",
"authors": [
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "11--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajen Chatterjee, Christian Federmann, Matteo Negri, and Marco Turchi. 2019. Findings of the wmt 2019 shared task on automatic post-editing. In Proceed- ings of the Fourth Conference on Machine Transla- tion (Volume 3: Shared Task Papers, Day 2), pages 11-28.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"authors": [
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajen Chatterjee, Matteo Negri, Raphael Rubino, and Marco Turchi. 2018. Findings of the WMT 2018 shared task on automatic post-editing. In Proceed- ings of the Third Conference on Machine Transla- tion: Shared Task Papers.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "PRIMT: A pick-revise framework for interactive machine translation",
"authors": [
{
"first": "Shanbo",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Huadong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2016,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1240--1249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shanbo Cheng, Shujian Huang, Huadong Chen, Xinyu Dai, and Jiajun Chen. 2016. PRIMT: A pick-revise framework for interactive machine translation. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, San Diego California, USA, June 12-17, 2016, pages 1240-1249.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning from post-editing: Online model adaptation for statistical machine translation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "395--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski, Chris Dyer, and Alon Lavie. 2014. Learning from post-editing: Online model adapta- tion for statistical machine translation. In Proceed- ings of the 14th Conference of the European Chap- ter of the Association for Computational Linguistics, pages 395-404.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Interactive-predictive translation based on multiple word-segments",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Domingo",
"suffix": ""
},
{
"first": "Alvaro",
"middle": [],
"last": "Peris",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 19th Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "282--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Domingo, Alvaro Peris, and Francisco Casacu- berta. 2016. Interactive-predictive translation based on multiple word-segments. In Proceedings of the 19th Annual Conference of the European Associa- tion for Machine Translation, pages 282-291.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Measuring user productivity in machine translation enhanced computer assisted translation",
"authors": [
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Cattelan",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Trombetti",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Tenth Conference of the Association for Machine Translation in the Americas (AMTA)",
"volume": "",
"issue": "",
"pages": "44--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcello Federico, Alessandro Cattelan, and Marco Trombetti. 2012. Measuring user productivity in ma- chine translation enhanced computer assisted trans- lation. In Proceedings of the Tenth Conference of the Association for Machine Translation in the Americas (AMTA), pages 44-56. AMTA Madison, WI.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adaptive multi-pass decoder for neural machine translation",
"authors": [
{
"first": "Xinwei",
"middle": [],
"last": "Geng",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "523--532",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinwei Geng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2018. Adaptive multi-pass decoder for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 523-532.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Beyond prefix-based interactive translation prediction",
"authors": [
{
"first": "Jes\u00fas",
"middle": [],
"last": "Gonz\u00e1lez-Rubio",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"Ortiz"
],
"last": "Martinez",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "Jose Miguel Benedi",
"middle": [],
"last": "Ruiz",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "198--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jes\u00fas Gonz\u00e1lez-Rubio, Daniel Ortiz Martinez, Fran- cisco Casacuberta, and Jose Miguel Benedi Ruiz. 2016. Beyond prefix-based interactive translation prediction. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 198-207.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Quickedit: Editing text & translations by crossing words out",
"authors": [
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "272--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Grangier and Michael Auli. 2018. Quickedit: Editing text & translations by crossing words out. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 272-282.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Human effort and machine learnability in computer aided translation",
"authors": [
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sida",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Heer",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1225--1236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spence Green, Sida I Wang, Jason Chuang, Jeffrey Heer, Sebastian Schuster, and Christopher D Man- ning. 2014. Human effort and machine learnability in computer aided translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1225-1236.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Mmpe: A multimodal interface for post-editing machine translation",
"authors": [
{
"first": "Nico",
"middle": [],
"last": "Herbig",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "D\u00fcwel",
"suffix": ""
},
{
"first": "Santanu",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Kalliopi",
"middle": [],
"last": "Meladaki",
"suffix": ""
},
{
"first": "Mahsa",
"middle": [],
"last": "Monshizadeh",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Kr\u00fcger",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1691--1702",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nico Herbig, Tim D\u00fcwel, Santanu Pal, Kalliopi Meladaki, Mahsa Monshizadeh, Antonio Kr\u00fcger, and Josef van Genabith. 2020. Mmpe: A multi- modal interface for post-editing machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1691-1702.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multi-modal approaches for post-editing machine translation",
"authors": [
{
"first": "Nico",
"middle": [],
"last": "Herbig",
"suffix": ""
},
{
"first": "Santanu",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Kr\u00fcger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nico Herbig, Santanu Pal, Josef van Genabith, and An- tonio Kr\u00fcger. 2019. Multi-modal approaches for post-editing machine translation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1-11.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Lexically constrained decoding for sequence generation using grid beam search",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Hokamp",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1535--1546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Hokamp and Qun Liu. 2017. Lexically con- strained decoding for sequence generation using grid beam search. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), volume 1, pages 1535-1546.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A new input method for human translators: integrating machine translation effectively and imperceptibly",
"authors": [
{
"first": "Guoping",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2015,
"venue": "Twenty-Fourth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoping Huang, Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2015. A new input method for hu- man translators: integrating machine translation ef- fectively and imperceptibly. In Twenty-Fourth Inter- national Joint Conference on Artificial Intelligence.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Automatic evaluation of translation quality for distant language pairs",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "944--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic eval- uation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 944-952. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Neural interactive translation prediction",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "107--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Knowles and Philipp Koehn. 2016. Neural interactive translation prediction. In Proceedings of the Association for Machine Translation in the Amer- icas, pages 107-120.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Statistical post-editing of a rule-based machine translation system",
"authors": [
{
"first": "A-L",
"middle": [],
"last": "Lagarda",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Alabau",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "D\u00edaz-De Lia\u00f1o",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers",
"volume": "",
"issue": "",
"pages": "217--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A-L Lagarda, Vicente Alabau, Francisco Casacuberta, Roberto Silva, and Enrique D\u00edaz-de Lia\u00f1o. 2009. Statistical post-editing of a rule-based machine trans- lation system. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Com- putational Linguistics, Companion Volume: Short Papers, pages 217-220. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Block edit models for approximate string matching. Theoretical computer science",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Lopresti",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Tomkins",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "181",
"issue": "",
"pages": "159--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Lopresti and Andrew Tomkins. 1997. Block edit models for approximate string matching. The- oretical computer science, 181(1):159-179.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.04025"
]
},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Touchbased pre-post-editing of machine translation output",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Marie",
"suffix": ""
},
{
"first": "Aur\u00e9lien",
"middle": [],
"last": "Max",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1040--1045",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Marie and Aur\u00e9lien Max. 2015. Touch- based pre-post-editing of machine translation output. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 1040-1045.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Findings of the 2017 conference on machine translation (wmt17)",
"authors": [
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2017,
"venue": "Second Conference onMachine Translation",
"volume": "",
"issue": "",
"pages": "169--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bojar Ondrej, Rajen Chatterjee, Federmann Christian, Graham Yvette, Haddow Barry, Huck Matthias, Koehn Philipp, Liu Qun, Logacheva Varvara, Monz Christof, et al. 2017. Findings of the 2017 confer- ence on machine translation (wmt17). In Second Conference onMachine Translation, pages 169-214. The Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A neural network based approach to automatic post-editing",
"authors": [
{
"first": "Santanu",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Sudip",
"middle": [
"Kumar"
],
"last": "Naskar",
"suffix": ""
},
{
"first": "Mihaela",
"middle": [],
"last": "Vela",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "van Genabith",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "281--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Santanu Pal, Sudip Kumar Naskar, Mihaela Vela, and Josef van Genabith. 2016. A neural network based approach to automatic post-editing. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), volume 2, pages 281-286.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Interactive neural machine translation",
"authors": [
{
"first": "Alvaro",
"middle": [],
"last": "Peris",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Domingo",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2017,
"venue": "Computer Speech & Language",
"volume": "45",
"issue": "",
"pages": "201--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alvaro Peris, Miguel Domingo, and Francisco Casacu- berta. 2017. Interactive neural machine translation. Computer Speech & Language, 45:201-220.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Learning from chunk-based feedback in neural machine translation",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Petrushkov",
"suffix": ""
},
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "326--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavel Petrushkov, Shahram Khadivi, and Evgeny Ma- tusov. 2018. Learning from chunk-based feedback in neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), vol- ume 2, pages 326-331.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A productivity test of statistical machine translation post-editing in a typical localisation context. The Prague bulletin of mathematical linguistics",
"authors": [
{
"first": "Mirko",
"middle": [],
"last": "Plitt",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Masselot",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "93",
"issue": "",
"pages": "7--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirko Plitt and Fran\u00e7ois Masselot. 2010. A productiv- ity test of statistical machine translation post-editing in a typical localisation context. The Prague bulletin of mathematical linguistics, 93:7-16.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.07909"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Edit distance with move operations",
"authors": [
{
"first": "Dana",
"middle": [],
"last": "Shapira",
"suffix": ""
},
{
"first": "James",
"middle": [
"A"
],
"last": "Storer",
"suffix": ""
}
],
"year": 2002,
"venue": "Annual Symposium on Combinatorial Pattern Matching",
"volume": "",
"issue": "",
"pages": "85--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dana Shapira and James A Storer. 2002. Edit dis- tance with move operations. In Annual Symposium on Combinatorial Pattern Matching, pages 85-98. Springer.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of association for machine translation in the Americas",
"volume": "200",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine transla- tion in the Americas, volume 200.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104-3112.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Why self-attention? a targeted evaluation of neural machine translation architectures",
"authors": [
{
"first": "Gongbo",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Annette",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4263--4272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gongbo Tang, Mathias M\u00fcller, Annette Rios, and Rico Sennrich. 2018a. Why self-attention? a targeted evaluation of neural machine translation architec- tures. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4263-4272.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "An analysis of attention mechanism: The case of word sense disambiguation in neural machine translation",
"authors": [
{
"first": "Gongbo",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2018,
"venue": "Third Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "26--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gongbo Tang, Rico Sennrich, and Joakim Nivre. 2018b. An analysis of attention mechanism: The case of word sense disambiguation in neural machine trans- lation. In Third Conference on Machine Translation, pages 26-35.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Multi-source transformer with combined losses for automatic post editing",
"authors": [
{
"first": "Amirhossein",
"middle": [],
"last": "Tebbifakhr",
"suffix": ""
},
{
"first": "Ruchit",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "846--852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amirhossein Tebbifakhr, Ruchit Agrawal, Matteo Ne- gri, and Marco Turchi. 2018. Multi-source trans- former with combined losses for automatic post edit- ing. In Proceedings of the Third Conference on Ma- chine Translation: Shared Task Papers, pages 846- 852.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Modeling coverage for neural machine translation",
"authors": [
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "76--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 76-85.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Correct-andmemorize: Learning to translate from interactive revisions",
"authors": [
{
"first": "Rongxiang",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence",
"volume": "2019",
"issue": "",
"pages": "5255--5263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rongxiang Weng, Hao Zhou, Shujian Huang, Lei Li, Yifan Xia, and Jiajun Chen. 2019. Correct-and- memorize: Learning to translate from interactive re- visions. In Proceedings of the Twenty-Eighth Inter- national Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5255-5263.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Google's neural machine translation system",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "Bridging the gap between human and machine translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between hu- man and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Models and inference for prefix-constrained machine translation",
"authors": [
{
"first": "Joern",
"middle": [],
"last": "Wuebker",
"suffix": ""
},
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Sasa",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "66--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joern Wuebker, Spence Green, John DeNero, Sasa Hasan, and Minh-Thang Luong. 2016. Models and inference for prefix-constrained machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 66-75.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Deliberation networks: Sequence generation beyond one-pass decoding",
"authors": [
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Lijun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jianxin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1784--1794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass de- coding. In Advances in Neural Information Process- ing Systems, pages 1784-1794.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Action positional embedding. The model firstly chooses an embedding matrix according to the action at position i, then lookups the ith row of the matrix as the positional embedding of position i. L is the maximum length of sentences. m(y ). The action sequence a below m(y ) contains SUBSTITUTION, INSERTION and DELETION at the corresponding position.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Results of partial feedback measured in BLEU score. We train five models to investigate the effects of partial feedback on different actions.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"text": "",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF1": {
"text": "BLEU TER BLEU TER BLEU TER BLEU TER BLEU TER BLEU TER",
"content": "<table><tr><td/><td/><td colspan=\"2\">IWSLT'14</td><td/><td/><td colspan=\"2\">WMT'14</td><td/><td/><td colspan=\"2\">WMT'17</td><td/></tr><tr><td>Model</td><td colspan=\"2\">EN-DE</td><td colspan=\"2\">DE-EN</td><td colspan=\"2\">EN-DE</td><td colspan=\"2\">DE-EN</td><td colspan=\"2\">EN-ZH</td><td colspan=\"2\">ZH-EN</td></tr><tr><td>ConvS2S \u2020</td><td>24.20</td><td>-</td><td>27.40</td><td>-</td><td>25.20</td><td>-</td><td>29.70</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>QuickEdit \u2020</td><td>30.80</td><td>-</td><td>34.60</td><td>-</td><td>36.60</td><td>-</td><td>41.30</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Transformer</td><td>27.40</td><td>0.52</td><td>33.17</td><td>0.45</td><td>26.69</td><td>0.56</td><td>31.73</td><td>0.48</td><td>32.53</td><td>0.55</td><td>21.89</td><td>0.61</td></tr><tr><td>QuickEdit \u2021</td><td>34.33</td><td>0.43</td><td>40.13</td><td>0.39</td><td>37.00</td><td>0.43</td><td>41.48</td><td>0.39</td><td>41.20</td><td>0.43</td><td>29.78</td><td>0.51</td></tr><tr><td>Touch Baseline</td><td>34.48</td><td>0.42</td><td>40.09</td><td>0.35</td><td>33.92</td><td>0.43</td><td>39.47</td><td>0.37</td><td>38.96</td><td>0.42</td><td>29.17</td><td>0.51</td></tr><tr><td>Touch Editing</td><td>44.25</td><td>0.32</td><td>50.39</td><td>0.29</td><td>50.49</td><td>0.28</td><td>56.47</td><td>0.24</td><td>57.84</td><td>0.28</td><td>45.67</td><td>0.33</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "Results of different systems measured in BLEU and TER. \u2020 denotes the results from Quick Edit. QuickEdit \u2021 is our reimplementation based on Transformer. Touch baseline is the result modified from initial hypothesis by deleting and reordering words. Touch Editing is our model trained with all actions described in Section 2.1.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "",
"content": "<table><tr><td>: Accuracy of actions. Total means number</td></tr><tr><td>of actions to transform the draft machine translations</td></tr><tr><td>into references. Correct means how many words are</td></tr><tr><td>corrected (or deleted) by the model.</td></tr><tr><td>of INSERTION, DELETION and SUBSTITUTION, we</td></tr><tr><td>first use TER to align machine translation hypothe-</td></tr><tr><td>ses and references, as well as our model's outputs</td></tr><tr><td>and references. With the references as intermedi-</td></tr><tr><td>ates, we then align our model's outputs and original</td></tr><tr><td>machine translations. With the alignment result, we</td></tr><tr><td>directly check whether the words with actions are</td></tr><tr><td>corrected or not to calculate the accuracy of the</td></tr><tr><td>three actions. To make a complete comparison, we</td></tr><tr><td>also analyze the results of QuickEdit and calculate</td></tr><tr><td>the accuracy.</td></tr><tr><td>As shown in</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF7": {
"text": "Results on post-editing dataset in terms of BLEU and TER.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}