{
"paper_id": "P19-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:32:43.861027Z"
},
"title": "Self-Regulated Interactive Sequence-to-Sequence Learning",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Kreutzer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computational Linguistics Heidelberg University",
"location": {}
},
"email": "kreutzer@cl.uni-heidelberg.de"
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg University",
"location": {
"country": "Germany"
}
},
"email": "riezler@cl.uni-heidelberg.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Not all types of supervision signals are created equal: Different types of feedback have different costs and effects on learning. We show how self-regulation strategies that decide when to ask for which kind of feedback from a teacher (or from oneself) can be cast as a learning-to-learn problem leading to improved cost-aware sequence-to-sequence learning. In experiments on interactive neural machine translation, we find that the selfregulator discovers an-greedy strategy for the optimal cost-quality trade-off by mixing different feedback types including corrections, error markups, and self-supervision. Furthermore, we demonstrate its robustness under domain shift and identify it as a promising alternative to active learning.",
"pdf_parse": {
"paper_id": "P19-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "Not all types of supervision signals are created equal: Different types of feedback have different costs and effects on learning. We show how self-regulation strategies that decide when to ask for which kind of feedback from a teacher (or from oneself) can be cast as a learning-to-learn problem leading to improved cost-aware sequence-to-sequence learning. In experiments on interactive neural machine translation, we find that the selfregulator discovers an-greedy strategy for the optimal cost-quality trade-off by mixing different feedback types including corrections, error markups, and self-supervision. Furthermore, we demonstrate its robustness under domain shift and identify it as a promising alternative to active learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The concept of self-regulation has been studied in educational research (Hattie and Timperley, 2007; Hattie and Donoghue, 2016) , psychology (Zimmerman and Schunk, 1989; Panadero, 2017) , and psychiatry (Nigg, 2017) , and was identified as central to successful learning. \"Self-regulated students\" can be characterized as \"becoming like teachers\", in that they have a repertoire of strategies to self-assess and self-manage their learning process, and they know when to seek help and which kind of help to seek. While there is a vast literature on machine learning approaches to metalearning (Schmidhuber et al., 1996) , learning-tolearn (Thrun and Pratt, 1998) , or never-ending learning (Mitchell et al., 2015) , the aspect of learning when to ask for which kind of feedback has so far been neglected in this field.",
"cite_spans": [
{
"start": 72,
"end": 100,
"text": "(Hattie and Timperley, 2007;",
"ref_id": "BIBREF12"
},
{
"start": 101,
"end": 127,
"text": "Hattie and Donoghue, 2016)",
"ref_id": "BIBREF11"
},
{
"start": 141,
"end": 169,
"text": "(Zimmerman and Schunk, 1989;",
"ref_id": "BIBREF48"
},
{
"start": 170,
"end": 185,
"text": "Panadero, 2017)",
"ref_id": null
},
{
"start": 203,
"end": 215,
"text": "(Nigg, 2017)",
"ref_id": "BIBREF27"
},
{
"start": 592,
"end": 618,
"text": "(Schmidhuber et al., 1996)",
"ref_id": "BIBREF34"
},
{
"start": 638,
"end": 661,
"text": "(Thrun and Pratt, 1998)",
"ref_id": null
},
{
"start": 689,
"end": 712,
"text": "(Mitchell et al., 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a machine learning algorithm that uses self-regulation in order to balance the cost and effect of learning from different types of feedback. This is particularly relevant for human-in-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Human-in-the-loop self-regulated learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq",
"sec_num": null
},
{
"text": "the-loop machine learning, where human supervision is costly. The self-regulation module automatically learns which kind of feedback to apply when in training-full supervision by teacher demonstration or correction, weak supervision in the form of positive or negative rewards for student predictions, or a self-supervision signal generated by the student. Figure 1 illustrates this learning scenario. The learner, in our case a sequence-tosequence (Seq2Seq) learner, aims to solve a certain task with the help of a human teacher. For every input it receives for training, it can ask the teacher for feedback to its own output, or supervise itself by training on its own output, or skip learning on the input example altogether. The self-regulator's policy for choosing feedback types is guided by their cost and by the performance gain achieved by learning from a particular type of feedback.",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 365,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Seq2Seq",
"sec_num": null
},
{
"text": "We apply the self-regulation algorithm to interactive machine translation where a neural machine translation (NMT) system functions as a student which receives feedback simulated from a human reference translation or supervises itself. The intended real-world application is a machine translation personalization scenario where the goal of the human translator is to teach the NMT system to adapt to in-domain data with the best trade-off between feedback cost and performance gain. It can be transferred to other sequence-to-sequence learning tasks such as personalization of conversational AI systems for question-answering or geographical navigation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq",
"sec_num": null
},
{
"text": "Our analysis of different configurations of selfregulation yields the following insights: Perhaps unsurprisingly, the self-regulator learns to balance all types of feedback instead of relying only on the strongest or cheapest option. This is an advantage over active learning strategies that only consider the choice between no supervision and full supervision. Interestingly, though, we find that the selfregulator learns to trade off exploration and exploitation similar to a context-free -greedy strategy that optimizes for fastest learning progress. Lastly, we show that the learned regulator is robust in a cold-start transfer to new domains, and even shows improvements over fully supervised learning on domains such as literary books where reference translations provide less effective learning signals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq",
"sec_num": null
},
{
"text": "The incorporation of a query's cost into reinforcement learning has been addressed, for example, in the framework of active reinforcement learning (Krueger et al., 2016) . The central question in active reinforcement learning is to quantify the long-term value of reward information, however, assuming a fixed cost for each action and every round. Our framework is considerably more complicated by the changing costs for each feedback type on each round.",
"cite_spans": [
{
"start": 147,
"end": 169,
"text": "(Krueger et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A similar motivation for the need of changing feedback in reinforcement learning with human feedback is given in MacGlashan et al. (2017) . The goal of that work is to operationalize feedback schemes such as diminishing returns, differential feedback, or policy shaping. Human reinforcement learning with corrective feedback that can decrease or increase the action magnitude has been introduced in Celemin et al. (2019) . However, none of these works are concerned with the costs that are incurred when eliciting rewards from humans, nor do they consider multiple feedback modes.",
"cite_spans": [
{
"start": 113,
"end": 137,
"text": "MacGlashan et al. (2017)",
"ref_id": null
},
{
"start": 399,
"end": 420,
"text": "Celemin et al. (2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work is connected to active learning, for example, to approaches that use reinforcement learning to learn a policy for a dynamic active learning strategy (Fang et al., 2017) , or to learn a curriculum to order noisy examples (Kumar et al., 2019) , or to the approach of who use imitation learning to select batches of data to be labeled. However, the action space these approaches consider is restricted to the decision whether or not to select particular data and is designed for a fixed budget, neither do they incorporate feedback cost in their frameworks. As we will show, our self-regulation strategy outperforms active learning based on uncertainty sampling Peris and Casacuberta, 2018) and our reinforcement learner is rewarded in such a way that it will produce the best system as early as possible.",
"cite_spans": [
{
"start": 158,
"end": 177,
"text": "(Fang et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 229,
"end": 249,
"text": "(Kumar et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 668,
"end": 696,
"text": "Peris and Casacuberta, 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Research that addresses the choice and the combination of different types of feedback is situated in the area between reinforcement and imitation learning (Ranzato et al., 2016; Cheng et al., 2018) . Instead of learning how to mix different supervision signals, these approaches assume fixed schedules.",
"cite_spans": [
{
"start": 155,
"end": 177,
"text": "(Ranzato et al., 2016;",
"ref_id": "BIBREF33"
},
{
"start": 178,
"end": 197,
"text": "Cheng et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Further connections between our work on learning with multiple feedback types can be drawn to various extensions of reinforcement learning by multiple tasks (Jaderberg et al., 2017) , multiple loss functions (Wun et al., 2018) , or multiple policies (Smith et al., 2018) .",
"cite_spans": [
{
"start": 157,
"end": 181,
"text": "(Jaderberg et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 208,
"end": 226,
"text": "(Wun et al., 2018)",
"ref_id": "BIBREF47"
},
{
"start": 250,
"end": 270,
"text": "(Smith et al., 2018)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Feedback in the form of corrections (Turchi et al., 2017) , error markings (Domingo et al., 2017) , or translation quality judgments (Lam et al., 2018) has been successfully integrated in simulation experiments into interactive-predictive machine translation. Again, these works do not consider automatic learning of a policy for the optimal choice of feedback.",
"cite_spans": [
{
"start": 36,
"end": 57,
"text": "(Turchi et al., 2017)",
"ref_id": "BIBREF43"
},
{
"start": 75,
"end": 97,
"text": "(Domingo et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 133,
"end": 151,
"text": "(Lam et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this work, we focus on the aspect of selfregulated learning that concerns the ability to decide which type of feedback to query from a teacher (or oneself) for most efficient learning depending on the context. In our human-in-the-loop machine learning formulation, we focus on two contextual aspects that can be measured precisely: quality and cost. The self-regulation task is to optimally balance human effort and output quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Regulated Interactive Learning",
"sec_num": "3"
},
{
"text": "We model self-regulation as an active reinforcement learning problem with dynamic costs, where in each state, i.e. upon receiving an input, the regulator has to choose an action, here a feedback type, and pay a cost. The learner receives feedback of that type from the human to improve its prediction. Based on the effect of this learning update, the regulator's actions are reinforced or penalized, so that it improves its choice for future inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Regulated Interactive Learning",
"sec_num": "3"
},
{
"text": "In the following, we first compare training objectives for a Seq2Seq learner from various types of feedback ( \u00a73.1), then introduce the selfregulator module ( \u00a73.2), and finally combine both in the self-regulation algorithm ( \u00a73.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Regulated Interactive Learning",
"sec_num": "3"
},
{
"text": "Let x = x 1 . . . x S be a sequence of indices over a source vocabulary V SRC , and y = y 1 . . . y T a sequence of indices over a target vocabulary V TRG . The goal of sequence-to-sequence learning is to learn a function for mapping an input sequence x into an output sequences y. Specifically, for the example of machine translation, where y is a translation of x, the model, parametrized by a set of weights \u03b8, learns to maximize p \u03b8 (y | x). This quantity is further factorized into conditional probabilities over single tokens:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "p \u03b8 (y | x) = T t=1 p \u03b8 (y t | x; y <t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "The distribution p \u03b8 (y t | x; y <t ) is defined by the neural model's softmax-normalized output vector:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "p \u03b8 (y t | x; y <t ) = softmax(NN \u03b8 (x; y <t )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
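The chain-rule factorization above can be sketched in a few lines of plain Python. This is a minimal toy stand-in (the per-step scores and vocabulary below are invented for illustration), not the paper's LSTM encoder-decoder:

```python
import math

def softmax(scores):
    # Normalize raw output scores into a probability distribution.
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def sequence_log_prob(step_scores):
    """Chain rule: log p(y | x) = sum_t log p(y_t | x; y_<t).

    `step_scores` is a list of (scores_over_vocab, target_index) pairs,
    standing in for NN_theta(x; y_<t) at each decoding step.
    """
    total = 0.0
    for scores, y_t in step_scores:
        p = softmax(scores)
        total += math.log(p[y_t])
    return total

# Two decoding steps over a toy 3-word vocabulary.
steps = [([2.0, 0.5, 0.1], 0), ([0.2, 1.5, 0.3], 1)]
lp = sequence_log_prob(steps)
assert lp < 0.0  # a log-probability is never positive
```

Summing per-token log-probabilities, rather than multiplying probabilities, is also what real implementations do for numerical stability.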
{
"text": "There are various options for building the architecture of the neural model NN \u03b8 , such as recurrent (Sutskever et al., 2014) , convolutional (Gehring et al., 2017) or attentional (Vaswani et al., 2017) encoder-decoder architectures (or a mix thereof (Chen et al., 2018)). Regardless of their architecture, there are multiple ways of interactive learning that can be applied to neural Seq2Seq learners.",
"cite_spans": [
{
"start": 101,
"end": 125,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF40"
},
{
"start": 142,
"end": 164,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 180,
"end": 202,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "Learning from Corrections (FULL). Under full supervision, i.e., when the learner receives a fully corrected output y * for an input x, crossentropy minimization (equivalent to maximizing the likelihood of the data D under the current model) considers the following objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "J FULL (\u03b8) = 1 |D| (x,y * )\u2208D \u2212 log p \u03b8 (y * | x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "The stochastic gradient of this objective is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "g FULL \u03b8 (x, y * ) = \u2212\u2207 \u03b8 log p \u03b8 (y * | x),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "constituting an unbiased estimate of the gradient",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "\u2207 \u03b8 J FULL =E (x,y * )\u223cD g FULL \u03b8 (x, y * ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "A local minimum can be found by performing stochastic gradient descent on g FULL \u03b8 (x, y * ). This training objective is the standard in supervised learning when training with human-generated references or for online adaptation to post-edits (Turchi et al., 2017) .",
"cite_spans": [
{
"start": 242,
"end": 263,
"text": "(Turchi et al., 2017)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
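For intuition, the FULL gradient at a single decoder step has the familiar closed form softmax(scores) − onehot(target) with respect to the output scores. The sketch below is a hypothetical single-step stand-in (`full_feedback_grad` is our name, not the paper's); a real system backpropagates this residual through the whole network:

```python
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def full_feedback_grad(scores, target):
    """Gradient of -log p_theta(y* | x) w.r.t. one step's output scores:
    softmax(scores) - onehot(target). Toy stand-in for g_FULL."""
    p = softmax(scores)
    return [p_k - (1.0 if k == target else 0.0) for k, p_k in enumerate(p)]

g = full_feedback_grad([1.0, 2.0, 0.5], target=1)
assert abs(sum(g)) < 1e-9  # components of the residual sum to zero
assert g[1] < 0.0          # mass is pulled toward the corrected token
```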
{
"text": "Learning from Error Markings (WEAK). Petrushkov et al. (2018) presented chunk-based binary feedback as a low-cost alternative to full corrections. In this scenario the human teacher marks the correct parts of the machine-generated output y. As a consequence every token in the output receives a reward \u03b4 t , either \u03b4 t = 1 if marked as correct, or \u03b4 t = 0 otherwise. The objective of the learner is to maximize the likelihood of the correct parts of the output, or equivalently, to minimize",
"cite_spans": [
{
"start": 37,
"end": 61,
"text": "Petrushkov et al. (2018)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "J WEAK (\u03b8) = 1 |D| (x,\u0177)\u2208D T t=1 \u2212\u03b4 t log p \u03b8 (\u0177 t | x;\u0177 <t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "where the stochastic gradient is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "g WEAK \u03b8 (x,\u0177) = \u2212 T t=1 \u03b4 t \u2022 \u2207 \u03b8 log p \u03b8 (\u0177 t | x; y <t ) \u2207 \u03b8 J WEAK = E (x,\u0177)\u223cD g WEAK \u03b8 (x,\u0177) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "The tokens\u0177 t that receive \u03b4 t = 1 are part of the correct output y * , so the model receives a hint how a corrected output should look like. Although the likelihood of the incorrect parts of the sequence does not weigh into the sum, they are contained in the context of the correct parts (in y <t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
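The WEAK objective is just a masked sum of token log-probabilities. A minimal sketch (`weak_feedback_loss` is a hypothetical name; token log-probs are assumed precomputed by the Seq2Seq model):

```python
def weak_feedback_loss(token_log_probs, markings):
    """J_WEAK for one output: sum only the log-probs of tokens the teacher
    marked correct (delta_t = 1). Unmarked tokens contribute nothing here,
    though they still shaped the context y_<t behind each log-prob."""
    assert len(token_log_probs) == len(markings)
    return -sum(lp for lp, d in zip(token_log_probs, markings) if d == 1)

# Hypothetical 4-token hypothesis; tokens 0 and 2 were marked correct.
loss = weak_feedback_loss([-0.1, -2.3, -0.4, -1.7], [1, 0, 1, 0])
assert abs(loss - 0.5) < 1e-9  # only 0.1 + 0.4 enter the loss
```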
{
"text": "Self-Supervision (SELF). Instead of querying the teacher for feedback, the learner can also choose to learn from its own output, that is, to learn from self-supervision. The simplest option is to treat the learner's output as if it was correct, but that quickly leads to overconfidence and degeneration. Clark et al. (2018) proposed a cross-view training method: the learner's original prediction is used as a target for a weaker model that shares parameters with the original model. We adopt this strategy by first producing a target sequence\u0177 with beam search and then weaken the decoder through attention dropout with probability p att . The objective is to minimize the negative likelihood of the original target under the weakened model",
"cite_spans": [
{
"start": 304,
"end": 323,
"text": "Clark et al. (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "J SELF (\u03b8) = 1 |D| (x,\u0177)\u2208D \u2212 log p patt \u03b8 (\u0177 | x),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "where the stochastic gradient is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "g SELF \u03b8 (x,\u0177) = \u2212\u2207 \u03b8 log p patt \u03b8 (\u0177 | x) \u2207 \u03b8 J SELF = E (x,\u0177)\u223cD g SELF \u03b8 (x,\u0177) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
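The p_att-weakened decoder boils down to randomly zeroing attention weights and renormalizing. The sketch below shows only that weakening step in isolation, under the assumption of a single attention vector (the paper applies this inside a full LSTM decoder):

```python
import random

def attention_dropout(attn_weights, p_att, rng):
    """Weaken the decoder for the cross-view self-training update by
    zeroing each attention weight with probability p_att and
    renormalizing the remainder."""
    kept = [0.0 if rng.random() < p_att else w for w in attn_weights]
    z = sum(kept)
    # Fall back to uniform attention if everything was dropped.
    return [k / z for k in kept] if z > 0 else [1.0 / len(kept)] * len(kept)

rng = random.Random(0)
weak = attention_dropout([0.7, 0.2, 0.1], p_att=0.3, rng=rng)
assert abs(sum(weak) - 1.0) < 1e-9  # still a valid distribution
```

Training the strong view's beam-search output against this weakened view is what prevents the degenerate fixed point of plain self-training.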
{
"text": "Combination. For self-regulated learning, we also consider a fourth option (NONE): the option to ignore the current input. Figure 2 summarizes the stochastic gradients for all cases. In practice, Seq2Seq learning shows greater stability for mini-batch updates than online updates on single training samples. Mini-batch self-regulated learning can be achieved by accumulating stochastic gradients for a mini-batch of size B before updating \u03b8 with an average of these stochastic gradients, which we denote as g",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "g s \u03b8 (x, y) = \u2212 T t=1 f t \u2022 \u2207 \u03b8 log p drop \u03b8 (y t | x t ; y <t ), with y = y * if s = FULL y otherwise, drop = p att if s = SELF 0 otherwise, and f t = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1 if s \u2208 {FULL, SELF} \u03b4 t if s = WEAK 0 if s = NONE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "s [1:B] \u03b8 (x [1:B] , y [1:B] ) = 1 B B i=1 g s i \u03b8 (x i , y i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seq2Seq Learning",
"sec_num": "3.1"
},
{
"text": "The regulator is another neural model q \u03c6 that is optimized for the quality-cost trade-off of the Seq2Seq learner. Given an input x i and the Seq2Seq's hypothesis\u0177 i , it chooses an action, here a supervision mode",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Self-Regulate",
"sec_num": "3.2"
},
{
"text": "s i \u223c q \u03c6 (s | x i ,\u0177 i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Self-Regulate",
"sec_num": "3.2"
},
{
"text": "This choice of feedback determines the update of the Seq2Seq learner ( Figure 2 ). The regulator is rewarded by the ratio between the cost c i of obtaining the feedback s i and the quality improvement \u2206(\u03b8 i , \u03b8 i\u22121 ) caused by updating the Seq2Seq learner with the feedback:",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 79,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Learning to Self-Regulate",
"sec_num": "3.2"
},
{
"text": "r(s i , x i , \u03b8 i ) = \u2206(\u03b8 i , \u03b8 i\u22121 ) c i + \u03b1 . (1) \u2206(\u03b8 i , \u03b8 i\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Self-Regulate",
"sec_num": "3.2"
},
{
"text": "is measured as the difference in validation score achieved before and after the learner's update (Fang et al., 2017) , and c i as the cost of user edits. Adding a small constant cost \u03b1 to the actual feedback cost ensures numerical stability. This meta-parameter can be interpreted as representing a basic cost for model updates of any kind. The objective for the regulator is to maximize the expected reward defined in Eq. 1:",
"cite_spans": [
{
"start": 97,
"end": 116,
"text": "(Fang et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Self-Regulate",
"sec_num": "3.2"
},
{
"text": "J META (\u03c6) = E x\u223cp(x),s\u223cq \u03c6 (s|x,\u0177) [r(s, x, \u03b8)] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Self-Regulate",
"sec_num": "3.2"
},
{
"text": "The full gradient of this objective is estimated by the stochastic gradient for sampled actions (Williams, 1992) :",
"cite_spans": [
{
"start": 96,
"end": 112,
"text": "(Williams, 1992)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Self-Regulate",
"sec_num": "3.2"
},
{
"text": "g META \u03c6 (x,\u0177, s) = r \u2022 \u2207 \u03c6 log q \u03c6 (s | x,\u0177). (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Self-Regulate",
"sec_num": "3.2"
},
{
"text": "Note that the reward contains the immediate improvement after one update of the Seq2Seq learner and not the overall performance in hindsight. This is an important distinction to classic expected reward objectives in reinforcement learning since it biases the regulator towards actions that have an immediate effect, which is desirable in the case of interaction with a human. However, since Seq2Seq learning requires updates and evaluations based on mini-batches, the regulator update also needs to be based on mini-batches of predictions, leading to the following specification of Eq. (2) for a mini-batch j:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Self-Regulate",
"sec_num": "3.2"
},
{
"text": "g META \u03c6 (x [1:B] ,\u0177 [1:B] , s [1:B] ) (3) = 1 B B i=1 g META \u03c6 (x i ,\u0177 i , s i ) = \u2206(\u03b8 j , \u03b8 j\u22121 ) 1 B B i=1 \u2207 \u03c6 log q \u03c6 (s i | x i ,\u0177 i ) c i + \u03b1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Self-Regulate",
"sec_num": "3.2"
},
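The arithmetic of Eqs. (1) and (3) is simple enough to sketch directly. Below, `grad_log_probs` stands in for the per-example scalar components of grad log q_phi (hypothetical helper names, assuming one shared improvement Delta per mini-batch as in the text):

```python
def regulator_reward(delta, cost, alpha=1.0):
    """Eq. (1): quality improvement per unit of feedback cost; alpha
    keeps the reward finite for free actions (SELF, NONE)."""
    return delta / (cost + alpha)

def minibatch_meta_grad(grad_log_probs, costs, delta, alpha=1.0):
    """Eq. (3) sketch: one shared improvement Delta for the mini-batch;
    each sampled action's score-function term is scaled by 1/(c_i + alpha)."""
    B = len(grad_log_probs)
    return delta * sum(g / (c + alpha)
                       for g, c in zip(grad_log_probs, costs)) / B

r = regulator_reward(delta=0.4, cost=0.0)  # a cost-free SELF update
assert abs(r - 0.4) < 1e-9
g = minibatch_meta_grad([1.0, 1.0], costs=[1.0, 3.0], delta=0.2)
assert abs(g - 0.2 * (0.5 + 0.25) / 2) < 1e-9
```

Because every term in the batch shares the same Delta, cheap actions receive proportionally larger updates, which is exactly the credit-assignment limitation noted in the following paragraph.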
{
"text": "While mini-batch updates are required for stable Seq2Seq learning, they hinder the regulator from assigning credit for model improvement to individual elements within the mini-batch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Self-Regulate",
"sec_num": "3.2"
},
{
"text": "Algorithm 1 presents the proposed online learning algorithm with model updates accumulated over mini-batches. On arrival of a new input, the regulator predicts a feedback type in line 6. According to this prediction, the environment/user is asked for feedback for the Seq2Seq's prediction at cost c_i (line 7). The Seq2Seq model is updated on the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.3"
},
{
"text": "Algorithm 1 Self-Regulated Interactive Seq2Seq. Input: Initial Seq2Seq \u03b8_0, regulator \u03c6_0, mini-batch size B. 1: j \u2190 0 2: while inputs and human available do 3: j \u2190 j + 1 4: for i \u2190 1 to B do 5: Observe input x_i, Seq2Seq output \u0177_i 6: Choose feedback: s_i \u223c q_\u03c6(s | x_i, \u0177_i) 7: Obtain feedback f_i of type s_i at cost c_i 8: Update \u03b8 with g^{s_{[1:B]}}_\u03b8(x_{[1:B]}, \u0177_{[1:B]}) 9: Measure improvement \u2206(\u03b8_j, \u03b8_{j\u22121}) 10: Update \u03c6 with g^{META}_\u03c6(x_{[1:B]}, \u0177_{[1:B]}, s_{[1:B]})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.3"
},
{
"text": "basis of the feedback and mini-batch of stochastic gradients computed as summarized in Figure 2 . In order to reinforce the regulator, the Seq2Seq model's improvement (line 9) is assessed, and the parameters of the regulator are updated (line 10, Eq. 3). Training ends when the data stream or the provision of feedback ends. The intermediate Seq2Seq evaluations can be re-used for model selection (early stopping). In practice, these evaluations can either be performed by validation on a held-out set (as in the simulation experiments below) or by human assessment.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "8:",
"sec_num": null
},
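The control flow of Algorithm 1 can be summarized as a short training-loop skeleton. The callables (`regulator.choose`, `regulator.query`, `seq2seq_update`, `evaluate`) are hypothetical placeholders for the components described in the text; the stream yields pre-generated (input, hypothesis) pairs as discussed under Practical Considerations:

```python
def self_regulated_training(stream, seq2seq_update, regulator, evaluate, B=32):
    """Skeleton of Algorithm 1: sample a feedback type per input, update
    the Seq2Seq on the mini-batch, then reinforce the regulator with the
    measured improvement (line numbers refer to Algorithm 1)."""
    prev_score = evaluate()
    batch, history = [], []
    for x, y_hat in stream:
        s = regulator.choose(x, y_hat)                 # line 6: s_i ~ q_phi
        feedback, cost = regulator.query(x, y_hat, s)  # line 7
        batch.append((x, y_hat, feedback, s))
        history.append((s, cost))
        if len(batch) == B:
            seq2seq_update(batch)                      # line 8
            score = evaluate()                         # line 9
            delta = score - prev_score
            regulator.reinforce(history, delta)        # line 10, Eq. (3)
            prev_score, batch, history = score, [], []
```

As in the paper, `evaluate` can be held-out validation in simulation or human assessment in deployment, and its intermediate scores double as the model-selection signal.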
{
"text": "Practical Considerations. The algorithm does not introduce any additional hyperparameters beyond standard learning rates, architecture design and mini-batch sizes that have to be tuned. As proposed in Petrushkov et al. (2018) or Clark et al. (2018) , targets\u0177 are pre-generated offline with the initial \u03b8 0 , which we found crucial for the stability of the learning process. The evaluation step after the Seq2Seq update is an overhead that comes with meta-learning, incurring costs depending on the decoding algorithm and the evaluation strategy. However, Seq2Seq updates can be performed in mini-batches, and the improvement is assessed after a mini-batch of updates, as discussed above.",
"cite_spans": [
{
"start": 201,
"end": 225,
"text": "Petrushkov et al. (2018)",
"ref_id": "BIBREF31"
},
{
"start": 229,
"end": 248,
"text": "Clark et al. (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "8:",
"sec_num": null
},
{
"text": "The main research questions to be answered in our experiments are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "1. Which strategies does the regulator develop?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "2. How well does a trained regulator transfer across domains?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "3. How do these strategies compare against (active) learning from a single feedback type?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We perform experiments for interactive NMT, where a general-domain NMT model is adapted to a specific domain by learning from the feedback of a human translator. This is a realistic interactive learning scenario where cost-free pre-training on a general domain data is possible, but each feedback generated by the human translator in the personalization step incurs a specific cost. In our experiment, we use human-generated reference translations to simulate both the cost of human feedback and to measure the performance gain achieved by model updates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Seq2Seq Architecture. Both the Seq2Seq learner and the regulator are based on LSTMs (Hochreiter and Schmidhuber, 1997) . The Seq2Seq has four bi-directional encoder and four decoder layers with 1024 units each, embedding layers of size 512. It uses Luong et al. (2015)'s input feeding and output layer, and global attention with a single feed forward layer (Bahdanau et al., 2015) .",
"cite_spans": [
{
"start": 84,
"end": 118,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF14"
},
{
"start": 357,
"end": 380,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Regulator Architecture. The regulator consists of LSTMs on two levels: Inspired by Siamese Networks (Bromley et al., 1994) , a bi-directional LSTM encoder of size 512 separately reads in both the current input sequence and the beam search hypothesis generated by the Seq2Seq. The last state of encoded source and hypothesis sequence and the previous output distribution are concatenated to form the input to a higher-level regulator LSTM of size 256. This LSTM updates its internal state and predicts a score for every feedback type for every input in the mini-batch. The feedback for each input is chosen by sampling from the distribution obtained by softmax normalization of these scores. The embeddings of the regulator are initialized by the Seq2Seq's source embeddings and further tuned during training. The model is implemented in the JoeyNMT 1 framework based on PyTorch. 2 Data. We use three parallel corpora for Germanto-English translation: a general-domain data set from the WMT2017 translation shared task for Seq2Seq pre-training, TED talks from the IWSLT2017 evaluation campaign for training the regulator with simulated feedback, and the Books corpus from the OPUS collection (Tiedemann, 2012) for testing the regulator on another domain. Data pre-processing details and splits are given in \u00a7A.1. The joint vocabulary for Seq2Seq and the regulator consists of 32k BPE sub-words (Sennrich et al., 2016) trained on WMT.",
"cite_spans": [
{
"start": 100,
"end": 122,
"text": "(Bromley et al., 1994)",
"ref_id": "BIBREF3"
},
{
"start": 1191,
"end": 1208,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Training. The Seq2Seq model is first trained on WMT with Adam (Kingma and Ba, 2015) on mini-batches of size 64, an initial learning rate 1 \u00d7 10 \u22124 that is halved when the loss does not decrease for three validation rounds. Training ends when the validation score does not increase any further (scoring 29.08 BLEU on the WMT test). This model is then adapted to IWSLT with selfregulated training for one epoch, with online human feedback simulated from reference translations. The mini-batch size is reduced to 32 for self-regulated training to reduce the credit assignment problem for the regulator. The constant cost \u03b1 (Eq. 1) is set to 1. 3 When multiple runs are reported, the same set of random seeds is used for all models to control the order of the input data. The best run is evaluated on the Books domain for testing the generalization of the regulation strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Simulation of Cost and Performance. In our experiments, human feedback and its cost, and the performance gain achieved by model updates, is simulated by using human reference translations. Inspired by the keystroke mouse-action ratio (KSMR) (Barrachina et al., 2009) , a common metric for measuring human effort in interactive machine translation, we define feedback cost as the sum of costs incurred by character edits and clicks, similar to Peris and Casacuberta (2018) . The cost of a full correction (FULL) is the number of character edits between model output and reference, simulating the cost of a human typing. 4 Error markings (WEAK) are simulated by comparing the hypothesis to the reference and marking the longest common sub-strings as correct, as proposed by Petrushkov et al. (2018) . As an extension to Petrushkov et al. (2018) we mark multiple common sub-strings as correct if all of them have the longest length. The cost is defined as the number of marked words, assuming an interface that allows markings by clicking on words. For selftraining (SELF) and skipping training instances we naively assume zero cost, thus limiting the mea-surement of cost to the effort of the human teacher, and neglecting the effort on the learner's side. Table 1 illustrates the costs per feedback type on a randomly selected set of examples.",
"cite_spans": [
{
"start": 241,
"end": 266,
"text": "(Barrachina et al., 2009)",
"ref_id": "BIBREF2"
},
{
"start": 443,
"end": 471,
"text": "Peris and Casacuberta (2018)",
"ref_id": "BIBREF30"
},
{
"start": 619,
"end": 620,
"text": "4",
"ref_id": null
},
{
"start": 772,
"end": 796,
"text": "Petrushkov et al. (2018)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "We measure the model improvement by evaluating the held-out set translation quality of the learned model at various time steps with corpus BLEU (cased SacreBLEU (Post, 2018) ) and measure the accumulated costs. The best model is considered the one that delivers the highest quality at the lowest cost. This trade-off is important to bear in mind since it differs from the standard evaluation of machine translation models, where the overall best-scoring model, regardless of the supervision cost, is considered best. Finally, we evaluate the strategy learned by the regulator on an unseen domain, where the regulator decides which type of feedback the learner gets, but is not updated itself.",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 173,
"text": "(Post, 2018)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "We compare learning from one type of feedback in isolation against regulators with the following set of actions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "1. Reg2: FULL, WEAK 2. Reg3: FULL, WEAK, SELF",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Cost vs. Quality. Figure 3 compares the improvement in corpus BLEU (Papineni et al., 2002) (corresponding to results in Translation Error Rate (TER, computed by pyTER) (Snover et al., 2006) ) of regulation variants and full feedback over cumulative costs of up to 80k character edits. Using only full feedback (blue) as in standard supervised learning or learning from post-edits, the overall highest improvement can be reached (visible only after the cutoff of 80k edits; see Appendix A.2 for the comparison over a wider window of time). However, it comes at a very high cost (417k characters in total to reach +0.6 BLEU). The regulated variants offer a much cheaper improvement, at least until a cumulative cost between 80k (Reg4) and 120k (Reg2), depending on the feedback options available. The regulators do not reach the quality of the full model since their choice of feedback is oriented towards costs and immediate improvements. By finding a trade-off between feedback types for immediate improvements, the regulators sacrifice long-term improvement. Comparing regulators, Reg2 (orange) reaches the overall SELF 0",
"cite_spans": [
{
"start": 67,
"end": 90,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF29"
},
{
"start": 168,
"end": 189,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Reg4: FULL, WEAK, SELF, NONE",
"sec_num": "3."
},
{
"text": "[Table 1, costs per feedback type on randomly selected examples. SELF, cost 0: x: Sie greift in ihre Geldb\u00f6rse und gibt ihm einen Zwanziger. y: It attacks their wallets and gives him a twist. y*: She reaches into her purse and hands him a 20. | WEAK, cost 9: x: Und als ihr Vater sie sah und sah, wer sie geworden ist, in ihrem vollen M\u00e4dchen-Sein, schlang er seine Arme um sie und brach in Tr\u00e4nen aus. y: And when her father saw them and saw who became them, in their full girl's, he swallowed his arms around them and broke out in tears. y*: When her father saw her and saw who she had become, in her full girl self, he threw his arms around her and broke down crying.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reg4: FULL, WEAK, SELF, NONE",
"sec_num": "3."
},
{
"text": "x Und durch diese zwei Eigenschaften war es mir m\u00f6glich , die Bilder zu erschaffen , die Sie jetzt sehen . y And through these two features , I was able to create the images you now see . y * And it was with those two properties that I was able to create the images that you 're seeing right now . highest improvement over the baseline model, but until the cumulative cost of around 35k character edits, Reg3 (green) offers faster improvement at a lower cost since it has an additional, cheaper feedback option. Adding the option to skip examples (Reg4, red) does not give a benefit. Appendix A.3 lists detailed results for offline evaluation on the trained Seq2Seq models on the IWSLT test set: Self-regulating models achieve improvements of 0.4-0.5 BLEU with costs reduced up to a factor of 23 in comparison to the full feedback model. The reduction in cost is enabled by the use of cheaper feedback, here markings and selfsupervision, which in isolation are very successful as well. Self-supervision works surprisingly well and can be recommended for cheap but effective unsupervised domain adaptation for sequence-tosequence learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FULL 59",
"sec_num": null
},
{
"text": "Self-Regulation Strategies. Figure 4 shows which actions Reg3 chooses over time when trained on IWSLT. Most often it chooses to do self-training on the current input. ing training, with the exception of an initial exploration phase within the first 100 iterations. In general, we observe that all regulators are highly sensitive to balancing cost and performance, and mostly prefer the cheapest option (e.g., Reg4 by choosing mostly NONE) since they are penalized heavily for choosing (or exploring) expensive options (see Eq. 1).",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 36,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "FULL 59",
"sec_num": null
},
{
"text": "A further research question is whether and how the self-regulation module takes the input or output context into account. We therefore compare its decisions to a context-free -greedy strategy. The -greedy algorithm is a successful algorithm for multi-armed bandits (Watkins, 1989) . In our case, the arms are the four feedback types. They are chosen based on their reward statistics, here the average empirical reward per feedback type",
"cite_spans": [
{
"start": 265,
"end": 280,
"text": "(Watkins, 1989)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FULL 59",
"sec_num": null
},
{
"text": "Q i (s) = 1 N i (s) 0,...,i r(s i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FULL 59",
"sec_num": null
},
{
"text": "With probability 1 \u2212 , the algorithm selects the feedback type with the highest empirical reward (exploitation), otherwise picks one of the remaining arms at random (exploration). In contrast to the neural regulator model, -greedy decides solely on the basis of the reward statistics and has no internal contextual state representation. The comparison of Reg3 with -greedy for a range of values for in Figure 5 shows that learned regulator behaves indeed very similar to an -greedy strategy with = 0.25. -greedy variants with higher amounts of exploration show a slower increase in BLEU, while those with more exploitation show an initial steep increase that flattens out, leading to overall lower BLEU scores. The regulator has hence found the best trade-off, which is an advantage over the -greedy algorithm where the hyperparameter requires dedicated tuning. Considering the -greedy-like strategy of the regulator and the strong role of the cost factor shown in Figure 4 , the regulator module does not appear to choose individual actions based e.g., on the difficulty of inputs, but rather composes mini-batches with a feedback ratio according to the feedback type's statistics. This confirms the observations of Peris and Casacuberta (2018) , who find that the subset of instances selected for labeling is secondaryit is rather the mixing ratio of feedback types that matters. This finding is also consistent with the mini-batch update regime that forces the regulator to take a higher-level perspective and optimize the expected improvement at the granularity of (minibatch) updates rather than at the input level.",
"cite_spans": [
{
"start": 1217,
"end": 1245,
"text": "Peris and Casacuberta (2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 402,
"end": 410,
"text": "Figure 5",
"ref_id": "FIGREF4"
},
{
"start": 965,
"end": 973,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "FULL 59",
"sec_num": null
},
{
"text": "Domain Transfer. After training on IWSLT, we evaluate the regulators on the Books domain: Can they choose the best actions for an efficient learning progress without receiving feedback on the new domain? We evaluate the best run of each regulator type (i.e., \u03c6 trained on IWSLT), with the Seq2Seq model reset to the WMT baseline. The regulator is not further adapted to the Books domain, but decides on the feedback types for training the Seq2Seq model for a single epoch on the Books data. Figure 6 visualizes the regulated training process of the Seq2Seq model. As before, Reg3 performs best, outperforming weak, full and self-supervision (reaching 14.75 BLEU, not depicted since zero cost). Learning from full feedback improves much later in training and reaches 14.53 BLEU. 5 One explanation is that the reference translations in the Books corpus are less literal than the ones for IWSLT, such that a weak feedback signal allows the learner to learn more efficiently than from full corrections. Appendix A.4 reports the results for offline evaluation on the trained Seq2Seq models on the Books test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 491,
"end": 499,
"text": "Figure 6",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "FULL 59",
"sec_num": null
},
{
"text": "Comparison to Active Learning. A classic active learning strategy is to sample a subset of the input data for full labeling based on the uncertainty of the model predictions . The size of this subset, i.e. the amount of human labeling effort, has to be known and determined before learning. Figure 7 compares the self-regulators on the Books domain with models that learn from a fixed ratio of fullylabeled instances in every batch. These are chosen according to the model's uncertainty, here measured by the average token entropy of the model's best-scoring beam search hypothesis. The regulated models with a mix of feedback types clearly outperform the active learning strategies, both in terms of cost-efficient learning ( Figure 7 ) as well as in overall quality (See Figure 9 in Appendix A.5). We conclude that mixing feedback types, especially in the case where full feedback is less reliable, offers large improvements over standard stream-based active learning strategies.",
"cite_spans": [],
"ref_spans": [
{
"start": 291,
"end": 299,
"text": "Figure 7",
"ref_id": "FIGREF6"
},
{
"start": 727,
"end": 735,
"text": "Figure 7",
"ref_id": "FIGREF6"
},
{
"start": 773,
"end": 781,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "FULL 59",
"sec_num": null
},
{
"text": "Our experiments were designed as a pilot study to test the possibilities of self-regulated learning in simulation. In order to advance to field studies where human users interact with Seq2Seq models, several design choices have to be adapted with caution. Firstly, we simulate both feedback cost and quality improvement by measuring distances to static reference outputs. The experimental design in a field study has to account for a variation of feedback strength, feedback cost, and performance assessments, across time, across sentences, and across human users . One desideratum for field studies is thus to analyze this variation by analyzing the experimental results in a mixed effects model that accounts for variability across sentences, users, and annotation sessions (Baayen et al., 2008; Karimova et al., 2018) . Secondly, our simulation of costs considers only the effort of the human teacher, not the machine learner. The strong preference for the cheapest feedback option might be a result of overestimating the cost of human post-editing and underestimating the cost of self-training. Thus, a model for field studies where data is limited might greatly benefit from learned estimates of feedback cost and quality improvement .",
"cite_spans": [
{
"start": 776,
"end": 797,
"text": "(Baayen et al., 2008;",
"ref_id": "BIBREF0"
},
{
"start": 798,
"end": 820,
"text": "Karimova et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prospects for Field Studies",
"sec_num": "4.3"
},
{
"text": "We proposed a cost-aware algorithm for interactive sequence-to-sequence learning, with a selfregulation module at its core that learns which type of feedback to query from a human teacher. The empirical study on interactive NMT with simulated human feedback showed that this selfregulated model finds more cost-efficient solutions than models learning from a single feedback type and uncertainty-based active learning models, also under domain shift. While this setup abstracts away from certain confounding variables to be expected in real-life interactive machine learning, it should be seen as a pilot experiment that allows focussing on our central research questions under an exact and noise-free computation of feedback cost and performance gain. The proposed framework can naturally be expanded to integrate more feedback modes suitable for the interaction with humans, e.g., pairwise comparisons or output rankings. Future research directions will involve the development of reinforcement learning model with multi-dimensional rewards, and modeling explicit credit assignment for improving the capabilities of the regulator to make context-sensitive decisions in mini-batch learning. The WMT data is obtained from the WMT 2017 shared task website 6 and pre-processed as described in Hieber et al. (2017) . The preprocessing pipeline is used for IWSLT and Books data as well. IWSLT2017 is obtained from the evaluation campaign website. 7 For validation on WMT, we use the newstest2015 data, for IWSLT tst2014+tst2015, for testing on WMT newstest2017 and tst2017 for IWSLT. Since there is no standard split for the Books corpus, we randomly select 2k sentences for validation and testing each. Table 2 gives an overview of the size of the three resources. Figure 8 displays the development of BLEU over costs and time. Table 3 reports the offline held-out set evaluations for the early stopping points selected on the dev set for all feedback modes. 
All models notably improve over the baseline; only using full feedback leads to the overall best model on IWSLT (+0.6 BLEU / -0.6 TER), but costs a massive amount of edits (417k characters). Self-regulating models still achieve improvements of 0.4-0.5 BLEU/TER with costs reduced by up to a factor of 23. The reduction in cost is enabled by the use of cheaper feedback, here markings and self-supervision, which in isolation are successful as well. Self-supervision works surprisingly well, which makes it attractive for cheap but effective unsupervised domain adaptation. It has to be noted that both weak and self-supervision only worked well when targets were pre-computed with the baseline model and held fixed during training. We suspect that the strong reward signal (f_t = 1) for non-reference outputs otherwise leads to undesired local overfitting effects that a learner with online-generated targets cannot recover from. Table 4 reports test set results on the Books domain for the best model of each type from the IWSLT domain. The baseline was trained on WMT parallel data without any regulation. The regulator was trained on IWSLT and evaluated on Books; the Seq2Seq model is further trained for one epoch on Books. The costs are measured in character edits and clicks. The best result in terms of BLEU and TER is achieved by the Reg2 model, even outperforming the model with full feedback. As observed for the IWSLT domain (cf. Section 4.2), self-training is very effective, but is outperformed by the Reg2 model and roughly on par. Figure 9 shows the development of BLEU over time for the regulators and active learning strategies with a fixed ratio of full feedback per batch (\u03b3 \u2208 [10, 30, 50, 70, 90] ). The decision whether to label an instance in a batch is made based on the average token entropy of the model's current hypothesis. 
Using only 50% of the fully-supervised labels achieves the same quality as 100% using this uncertainty-based active learning sampling strategy. However, the regulated models do not only learn at a lower cost (see Figure 7 ), but also reach an overall higher quality. Figures 10 and 11 show the ratio of feedback types for self-regulation during training with Reg2 and Reg4, respectively.",
"cite_spans": [
{
"start": 1291,
"end": 1311,
"text": "Hieber et al. (2017)",
"ref_id": "BIBREF13"
},
{
"start": 3756,
"end": 3781,
"text": "(\u03b3 \u2208 [10, 30, 50, 70, 90]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1700,
"end": 1707,
"text": "Table 2",
"ref_id": "TABREF6"
},
{
"start": 1762,
"end": 1770,
"text": "Figure 8",
"ref_id": null
},
{
"start": 1825,
"end": 1832,
"text": "Table 3",
"ref_id": "TABREF8"
},
{
"start": 2990,
"end": 2997,
"text": "Table 4",
"ref_id": "TABREF10"
},
{
"start": 3611,
"end": 3619,
"text": "Figure 9",
"ref_id": null
},
{
"start": 4143,
"end": 4153,
"text": "Fig-ure 7)",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/joeynmt/joeynmt 2 Code: https://github.com/juliakreutzer/ joeynmt/tree/acl19",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Values = 1 distort the rewards for self-training too much.4 As computed by the Python library difflib.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "With multiple epochs it would improve further, but we avoid showing the human the same inputs multiple times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their valuable feedback. The research reported in this paper was supported in part by the German research foundation (DFG) under grant RI-2221/4-1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mixed-effects modeling with crossed random effects for subjects and items",
"authors": [
{
"first": "Harald",
"middle": [],
"last": "Baayen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Douglas",
"suffix": ""
},
{
"first": "Douglas M",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bates",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of memory and language",
"volume": "59",
"issue": "4",
"pages": "390--412",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Harald Baayen, Douglas J Davidson, and Douglas M Bates. 2008. Mixed-effects modeling with crossed random effects for subjects and items. Journal of memory and language, 59(4):390-412.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural Machine Translation by Jointly Learning to Align and Translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations (ICLR), San Diego, California, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical approaches to computer-assisted translation. Computational Linguistics",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Barrachina",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Civera",
"suffix": ""
},
{
"first": "Elsa",
"middle": [],
"last": "Cubel",
"suffix": ""
},
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Lagarda",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Jes\u00fas",
"middle": [],
"last": "Tom\u00e1s",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Vidal",
"suffix": ""
},
{
"first": "Juan-Miguel",
"middle": [],
"last": "Vilar",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1162/coli.2008.07-055-R2-06-29"
]
},
"num": null,
"urls": [],
"raw_text": "Sergio Barrachina, Oliver Bender, Francisco Casacu- berta, Jorge Civera, Elsa Cubel, Shahram Khadivi, Antonio Lagarda, Hermann Ney, Jes\u00fas Tom\u00e1s, En- rique Vidal, and Juan-Miguel Vilar. 2009. Statistical approaches to computer-assisted translation. Com- putational Linguistics, 35(1).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Signature verification using a \"siamese\" time delay neural network",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Bromley",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Guyon",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "S\u00e4ckinger",
"suffix": ""
},
{
"first": "Roopak",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 1994,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S\u00e4ckinger, and Roopak Shah. 1994. Signature ver- ification using a \"siamese\" time delay neural net- work. In Advances in Neural Information Process- ing Systems (NeurIPS), Denver, CO, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A fast hybrid reinforcement learning framework with human corrective feedback",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Celemin",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Ruiz-Del Solar",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Kober",
"suffix": ""
}
],
"year": 2019,
"venue": "Autonomous Robots",
"volume": "43",
"issue": "5",
"pages": "1173--1186",
"other_ids": {
"DOI": [
"10.1007/s10514-018-9786-6"
]
},
"num": null,
"urls": [],
"raw_text": "Carlos Celemin, Javier Ruiz-del Solar, and Jens Kober. 2019. A fast hybrid reinforcement learning frame- work with human corrective feedback. Autonomous Robots, 43(5):1173-1186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The best of both worlds: Combining recent advances in neural machine translation",
"authors": [
{
"first": "Mia",
"middle": [],
"last": "Xu Chen",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Australia.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Fast policy learning through imitation and reinforcement",
"authors": [
{
"first": "Ching-An",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Xinyan",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Nolan",
"middle": [],
"last": "Wagener",
"suffix": ""
},
{
"first": "Byron",
"middle": [],
"last": "Boots",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ching-An Cheng, Xinyan Yan, Nolan Wagener, and Byron Boots. 2018. Fast policy learning through imitation and reinforcement. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), Monterey, CA, USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semi-supervised sequence modeling with cross-view training",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Christopher D. Man- ning, and Quoc Le. 2018. Semi-supervised se- quence modeling with cross-view training. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), Brussels, Belgium.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Segment-based interactivepredictive machine translation",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Domingo",
"suffix": ""
},
{
"first": "\u00c1lvaro",
"middle": [],
"last": "Peris",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2017,
"venue": "Machine Translation",
"volume": "31",
"issue": "4",
"pages": "163--185",
"other_ids": {
"DOI": [
"10.1007/s10590-017-9213-3"
]
},
"num": null,
"urls": [],
"raw_text": "Miguel Domingo,\u00c1lvaro Peris, and Francisco Casacuberta. 2017. Segment-based interactive- predictive machine translation. Machine Transla- tion, 31(4):163-185.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning how to active learn: A deep reinforcement learning approach",
"authors": [
{
"first": "Meng",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1063"
]
},
"num": null,
"urls": [],
"raw_text": "Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), Copenhagen, Denmark.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann N",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In International Conference on Machine Learning (ICML), Vancou- ver, Canada.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning strategies: a synthesis and conceptual model",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hattie",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"M"
],
"last": "Donoghue",
"suffix": ""
}
],
"year": 2016,
"venue": "NPJ Science of Learning",
"volume": "1",
"issue": "",
"pages": "16013--16013",
"other_ids": {
"DOI": [
"10.1038/npjscilearn.2016.13"
]
},
"num": null,
"urls": [],
"raw_text": "John Hattie and Gregory M. Donoghue. 2016. Learn- ing strategies: a synthesis and conceptual model. NPJ Science of Learning, 1:16013-16013.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The power of feedback",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hattie",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Timperley",
"suffix": ""
}
],
"year": 2007,
"venue": "Review of Educational Research",
"volume": "77",
"issue": "1",
"pages": "81--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hattie and Helen Timperley. 2007. The power of feedback. Review of Educational Research, 77(1):81-112.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sockeye: A toolkit for neural machine translation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hieber",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Domhan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Sokolov",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Clifton",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A toolkit for neural machine translation. CoRR, abs/1712.05690.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Reinforcement learning with unsupervised auxiliary tasks",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Jaderberg",
"suffix": ""
},
{
"first": "Volodymyr",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [
"Marian"
],
"last": "Czarnecki",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Schaul",
"suffix": ""
},
{
"first": "Joel",
"middle": [
"Z"
],
"last": "Leibo",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Silver",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Sil- ver, and Koray Kavukcuoglu. 2017. Reinforcement learning with unsupervised auxiliary tasks. In Pro- ceedings of the International Conference on Learn- ing Representations (ICLR), Toulon, France.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A user-study on online adaptation of neural machine translation to human post-edits",
"authors": [
{
"first": "Sariya",
"middle": [],
"last": "Karimova",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Simianer",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 2018,
"venue": "Machine Translation",
"volume": "32",
"issue": "4",
"pages": "309--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sariya Karimova, Patrick Simianer, and Stefan Riezler. 2018. A user-study on online adaptation of neural machine translation to human post-edits. Machine Translation, 32(4):309-324.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), San Diego, CA, USA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Reliability and learnability of human bandit feedback for sequence-to-sequence reinforcement learning",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Kreutzer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Uyheng",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Kreutzer, Joshua Uyheng, and Stefan Riezler. 2018. Reliability and learnability of human bandit feedback for sequence-to-sequence reinforcement learning. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (ACL), Melbourne, Australia.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Active reinforcement learning: Observing rewards at a cost",
"authors": [
{
"first": "David",
"middle": [],
"last": "Krueger",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Leike",
"suffix": ""
},
{
"first": "Owain",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Salvatier",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 30th Conference on Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Krueger, Jan Leike, Owain Evans, and John Sal- vatier. 2016. Active reinforcement learning: Ob- serving rewards at a cost. In Proceedings of the 30th Conference on Neural Information Processing Sys- tems (NeurIPS), Barcelona, Spain.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Reinforcement learning based curriculum optimization for neural machine translation",
"authors": [
{
"first": "Gaurav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Minneapolis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaurav Kumar, George Foster, Colin Cherry, and Maxim Krikun. 2019. Reinforcement learning based curriculum optimization for neural machine translation. In Proceedings of the Annual Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics (NAACL), Min- neapolis, MN, USA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A reinforcement learning approach to interactivepredictive neural machine translation",
"authors": [
{
"first": "Tsz Kin",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Kreutzer",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 21st Annual Conference of the European Association for Machine Translation (EAMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsz Kin Lam, Julia Kreutzer, and Stefan Riezler. 2018. A reinforcement learning approach to interactive- predictive neural machine translation. In Proceed- ings of the 21st Annual Conference of the European Association for Machine Translation (EAMT), Ali- cante, Spain.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning to actively learn neural machine translation",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning to actively learn neural machine translation. In Proceedings of the 22nd Confer- ence on Computational Natural Language Learning (CoNLL), Brussels, Belgium.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Interactive learning from policy-dependent human feedback",
"authors": [
{
"first": "",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Littman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taylor, and Michael L. Littman. 2017. Interactive learning from policy-dependent human feedback. In Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Never-ending learning",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hruschka",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kisiel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Lao",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mazaitis",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Platanios",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Samadi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wijaya",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Saparov",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Greaves",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 29th Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, B. Yang, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Pla- tanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never-ending learning. In Proceedings of the 29th Conference on Artificial Intelligence (AAAI), Austin, TX, USA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Annual research review: On the relations among self-regulation, self-control, executive functioning, effortful control, cognitive control, impulsivity, risk-taking, and inhibition for developmental psychopathology",
"authors": [
{
"first": "Joel",
"middle": [
"T"
],
"last": "Nigg",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Child Psychology and Psychiatry",
"volume": "58",
"issue": "4",
"pages": "361--383",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel T. Nigg. 2017. Annual research review: On the relations among self-regulation, self-control, execu- tive functioning, effortful control, cognitive control, impulsivity, risk-taking, and inhibition for develop- mental psychopathology. Journal of Child Psychol- ogy and Psychiatry, 58(4):361-383.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A review of self-regulated learning: Six models and four directions of research",
"authors": [
{
"first": "Ernesto",
"middle": [],
"last": "Panadero",
"suffix": ""
}
],
"year": 2017,
"venue": "Frontiers in Psychology",
"volume": "8",
"issue": "422",
"pages": "1--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ernesto Panadero. 2017. A review of self-regulated learning: Six models and four directions of research. Frontiers in Psychology, 8(422):1-28.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics (ACL), Philadelphia, PA, USA.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Active learning for interactive neural machine translation of data streams",
"authors": [
{
"first": "Alvaro",
"middle": [],
"last": "Peris",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning (CONLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alvaro Peris and Francisco Casacuberta. 2018. Active learning for interactive neural machine translation of data streams. In Proceedings of the 22nd Confer- ence on Computational Natural Language Learning (CONLL), Brussels, Belgium.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning from chunk-based feedback in neural machine translation",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Petrushkov",
"suffix": ""
},
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavel Petrushkov, Shahram Khadivi, and Evgeny Ma- tusov. 2018. Learning from chunk-based feedback in neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (ACL), Melbourne, Australia.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation (WMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation (WMT), Brussels, Belgium.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Sequence level training with recurrent neural networks",
"authors": [
{
"first": "Marc'Aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Conference on Learning Representation (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level train- ing with recurrent neural networks. In Proceedings of the International Conference on Learning Repre- sentation (ICLR), San Juan, Puerto Rico.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Simple principles of metalearning",
"authors": [
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
},
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Wiering",
"suffix": ""
}
],
"year": 1996,
"venue": "Technical Report",
"volume": "69",
"issue": "96",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00fcrgen Schmidhuber, Jieyu Zhao, and Marco Wiering. 1996. Simple principles of metalearning. Technical Report 69 96, IDSIA, Lugano, Switzerland.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (ACL), Berlin, Germany.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "An analysis of active learning strategies for sequence labeling tasks",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Craven",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing (EMNLP), Honolulu, Hawaii.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Active learning with real annotation costs",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Craven",
"suffix": ""
},
{
"first": "Lewis",
"middle": [],
"last": "Friedland",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the NeurIPS Workshop on Cost-Sensitive Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles, Mark Craven, and Lewis Friedland. 2008. Active learning with real annotation costs. In Pro- ceedings of the NeurIPS Workshop on Cost-Sensitive Learning, Vancouver, Canada.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "An inference-based policy gradient method for learning options",
"authors": [
{
"first": "Matthew",
"middle": [
"J",
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Herke",
"middle": [],
"last": "Van Hoof",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew J. A. Smith, Herke Van Hoof, and Joelle Pineau. 2018. An inference-based policy gradient method for learning options. In Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Association for Machine Translation in the Americas (AMTA)",
"volume": "200",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine transla- tion in the Americas (AMTA), volume 200.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems (NeurIPS), Montreal, Canada.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Parallel data, tools and interfaces in OPUS",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in OPUS. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC), Istanbul, Turkey.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Continuous learning from human post-edits for neural machine translation",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "M",
"middle": [
"Amin"
],
"last": "Farajian",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2017,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "108",
"issue": "1",
"pages": "233--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Turchi, Matteo Negri, M Amin Farajian, and Marcello Federico. 2017. Continuous learning from human post-edits for neural machine translation. The Prague Bulletin of Mathematical Linguistics, 108(1):233-244.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems (NeurIPS), Long Beach, CA, USA.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Learning from delayed rewards",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Watkins",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Watkins. 1989. Learning from delayed re- wards. PhD thesis, Cambridge University.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1992,
"venue": "Machine Learning",
"volume": "8",
"issue": "",
"pages": "229--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J. Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Machine Learning, 8:229-256.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Learning to teach with dynamic loss functions",
"authors": [
{
"first": "Lijun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Jianhuang",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lijun Wu, Fei Tian, Yingce Xia, Yang Fan, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018. Learning to teach with dynamic loss functions. In Proceedings of the 32nd Conference on Neural Information Pro- cessing Systems (NeurIPS), Montreal, Canada.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Self-Regulated Learning and Academic Achievement",
"authors": [
{
"first": "Barry",
"middle": [
"J"
],
"last": "Zimmerman",
"suffix": ""
},
{
"first": "Dale",
"middle": [
"H"
],
"last": "Schunk",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-1-4612-3618-4"
]
},
"num": null,
"urls": [],
"raw_text": "Barry J. Zimmerman and Dale H. Schunk, editors. 1989. Self-Regulated Learning and Academic Achievement. Springer, New York, NY, USA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Stochastic gradients for the Seq2Seq learner in dependence of feedback type s."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "BLEU of regulation variants over cumulative costs. BLEU is computed on the tokenized IWSLT validation set with greedy decoding."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Reg3 actions as chosen over time, depicted for each iteration. Counting of iterations starts at the previous iteration count of the baseline model."
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "BLEU and cumulative costs on IWSLT for Reg3 and \u03b5-greedy with \u03b5 \u2208 [0.1, 0.25, 0.5, 0.75, 0.9]."
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Domain transfer of regulators trained on IWSLT to the Books domain in comparison to full and weak feedback only."
},
"FIGREF6": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Learned self-regulation strategies in comparison to uncertainty-based active learning with a fixed percentage of full feedback on the Books domain."
},
"FIGREF7": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Regulation variants evaluated in terms of BLEU over time (a) and cumulative costs (b). Iteration counts start from the iteration count of the baseline model. One iteration on IWSLT equals training on one mini-batch of 32 instances. The BLEU score is computed on the tokenized validation set with greedy decoding. In (b) the lines correspond to the means over three runs, the shaded area depicts the estimated 95% confidence interval. Development of validation BLEU over time for learned regulation strategies in comparison to active learning with a fixed percentage \u03b3 of full feedback with the Reg3 model. Counting of iterations starts at the previous iteration count of the baseline model."
},
"FIGREF9": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Feedback chosen by Reg4 on IWSLT."
},
"TABREF0": {
"type_str": "table",
"text": "Examples from the IWSLT17 training set, cost (2nd column) and feedback decisions made by Reg3. For weak feedback, marked parts are underlined, for full feedback, the corrections are marked by underlining the parts of the reference that got inserted and the parts of the hypothesis that got deleted.",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF6": {
"type_str": "table",
"text": "Number of sentences for parallel corpora used for pre-training (WMT), regulator training (IWSLT) and domain transfer evaluation (Books).",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF7": {
"type_str": "table",
"text": "Full 28.93 \u00b10.02 417k 25.60 \u00b10.02 61.86 \u00b10.03 Weak 28.65 \u00b10.01 32k 25.10 \u00b10.09 62.12 \u00b10.12 Self 28.58 \u00b10.02 -25.33 \u00b10.06 61.96 \u00b10.05 Reg4 28.57 \u00b10.04 68k 25.23 \u00b10.05 62.02 \u00b10.12 Reg3 28.61 \u00b10.03 18k 25.23 \u00b10.09 62.07 \u00b10.06 Reg2 28.66 \u00b10.06 88k 25.27 \u00b10.09 61.91 \u00b10.06",
"html": null,
"content": "<table><tr><td>Model</td><td colspan=\"2\">IWSLT dev</td><td colspan=\"2\">IWSLT test</td></tr><tr><td/><td>BLEU\u2191</td><td>Cost\u2193</td><td>BLEU\u2191</td><td>TER\u2193</td></tr><tr><td>Baseline</td><td>28.28</td><td>-</td><td>24.84</td><td>62.42</td></tr></table>",
"num": null
},
"TABREF8": {
"type_str": "table",
"text": "Evaluation of models at early stopping points. Results for three random seeds on IWSLT are averaged, reporting the standard deviation in the subscript. The translation of the dev set is obtained by greedy decoding (as during validation) and of the test set with beam search of width five. The costs are measured in character edits and clicks, as described in Section 4.",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF10": {
"type_str": "table",
"text": "Evaluation of models at early stopping points on the Books test set (beam search with width five).",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF12": {
"type_str": "table",
"text": "Feedback chosen by Reg2 on IWSLT.",
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">28.7</td><td>full/weak</td><td/><td/><td/><td/></tr><tr><td>BLEU</td><td colspan=\"2\">28.5 28.6</td><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">28.4</td><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">28.3</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td>200</td><td>300</td><td>400</td><td>500 Iterations</td><td>600</td><td>700</td><td>800 +1.001e6</td></tr><tr><td/><td/><td>100</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">% of feedback</td><td>200 0 20 40 60 80</td><td>300 weak full</td><td>400</td><td>500</td><td>600</td><td>700</td><td>+1.001e6 800</td></tr><tr><td colspan=\"3\">28.3 28.4 28.5 Figure 10: 200 BLEU 28.6</td><td colspan=\"2\">300 full/weak/self/none 400</td><td>500 Iterations</td><td>600</td><td>700</td><td>800 +1.001e6</td></tr><tr><td/><td/><td>200</td><td>300</td><td>400</td><td>500</td><td>600</td><td>700</td><td>800 +1.001e6</td></tr></table>",
"num": null
}
}
}
}