| { |
| "paper_id": "Q18-1027", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:10:20.458178Z" |
| }, |
| "title": "Polite Dialogue Generation Without Parallel Data", |
| "authors": [ |
| { |
| "first": "Tong", |
| "middle": [], |
| "last": "Niu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "UNC Chapel Hill", |
| "location": {} |
| }, |
| "email": "tongn@cs.unc.edu" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "UNC Chapel Hill", |
| "location": {} |
| }, |
| "email": "mbansal@cs.unc.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Stylistic dialogue response generation, with valuable applications in personality-based conversational agents, is a challenging task because the response needs to be fluent, contextually-relevant, as well as paralinguistically accurate. Moreover, parallel datasets for regular-to-stylistic pairs are usually unavailable. We present three weakly-supervised models that can generate diverse, polite (or rude) dialogue responses without parallel data. Our late fusion model (Fusion) merges the decoder of an encoder-attention-decoder dialogue model with a language model trained on stand-alone polite utterances. Our label-fine-tuning (LFT) model prepends to each source sequence a politeness-score scaled label (predicted by our state-of-the-art politeness classifier) during training, and at test time is able to generate polite, neutral, and rude responses by simply scaling the label embedding by the corresponding score. Our reinforcement learning model (Polite-RL) encourages politeness generation by assigning rewards proportional to the politeness classifier score of the sampled response. We also present two retrieval-based, polite dialogue model baselines. Human evaluation validates that while the Fusion and the retrieval-based models achieve politeness with poorer context-relevance, the LFT and Polite-RL models can produce significantly more polite responses without sacrificing dialogue quality.", |
| "pdf_parse": { |
| "paper_id": "Q18-1027", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Stylistic dialogue response generation, with valuable applications in personality-based conversational agents, is a challenging task because the response needs to be fluent, contextually-relevant, as well as paralinguistically accurate. Moreover, parallel datasets for regular-to-stylistic pairs are usually unavailable. We present three weakly-supervised models that can generate diverse, polite (or rude) dialogue responses without parallel data. Our late fusion model (Fusion) merges the decoder of an encoder-attention-decoder dialogue model with a language model trained on stand-alone polite utterances. Our label-fine-tuning (LFT) model prepends to each source sequence a politeness-score scaled label (predicted by our state-of-the-art politeness classifier) during training, and at test time is able to generate polite, neutral, and rude responses by simply scaling the label embedding by the corresponding score. Our reinforcement learning model (Polite-RL) encourages politeness generation by assigning rewards proportional to the politeness classifier score of the sampled response. We also present two retrieval-based, polite dialogue model baselines. Human evaluation validates that while the Fusion and the retrieval-based models achieve politeness with poorer context-relevance, the LFT and Polite-RL models can produce significantly more polite responses without sacrificing dialogue quality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Generating stylistic, personality-based language is crucial to developing engaging, convincing, and trustworthy conversational agents, for their effective application in intelligent tutoring, home assistance, online reservations/purchasing, health care, etc. Most current chatbots and conversational models lack any such style, which can be a social issue because human users might learn biased styles from such interactions, e.g., kids learning to be rude because the dialogue system encourages short, curt responses, and also does not itself use politeness to set an example. 1 In this work, we focus on the important and diverse paralinguistic style axis of politeness vs. rudeness (Brown and Levinson, 1987) .", |
| "cite_spans": [ |
| { |
| "start": 578, |
| "end": 579, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 685, |
| "end": 711, |
| "text": "(Brown and Levinson, 1987)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Generating stylistic dialogue responses is a substantially challenging task because the generated response needs to be syntactically and semantically fluent, contextually-relevant to the conversation, as well as convey accurate paralinguistic features. This is further complicated by the fact that content and style are only available in separate unpaired datasets, as opposed to translation-type parallel datasets containing regular-to-stylistic text pairs. Hence, we need indirectly-supervised models that can incorporate style into the generated response in absence of parallel data (i.e., where the training data for the conversation, versus style components, comes from two different datasets or domains), while still maintaining conversation relevance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we present three such weakly-supervised models 2 that can generate diverse, natural, and contextually-relevant polite (and rude) dialogue responses, using data from separate style and dialogue domains: the Stanford Politeness Corpus (Danescu-Niculescu-Mizil et al., 2013) with Wikipedia and Stack Exchange requests, and the MovieTriples Dialogue Corpus (Serban et al., 2016) with IMSDB movie scripts, respectively. Each of our three models is based on a state-of-the-art politeness classifier and a sequence-to-sequence dialogue model. The first model (Fusion) employs a late fusion technique to merge the response generation decoder of the dialogue model with a language model trained on polite utterances chosen by the politeness classifier. The second label-fine-tuning (LFT) model prepends to the input utterance a single politeness label whose embedding is continuously scaled by the politeness score of the target sequence during training. This score is determined by feeding the corresponding ground-truth target sequence to our politeness classifier. During test time, we show that the LFT model is able to control the politeness level of generated responses by simply scaling the label's embedding by the continuous target politeness score of our choice. Our third reinforcement-based model (Polite-RL) encourages politeness generation by using the continuous-scale politeness score of the decoder-sampled sentence as a reward (via mixed-objective policy gradient methods), i.e., polite utterances are encouraged with positive reward, and rude ones discouraged with negative reward.", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 285, |
| "text": "(Danescu-Niculescu-Mizil et al., 2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 291, |
| "end": 388, |
| "text": "Wikipedia and Stack Exchange requests, and the MovieTriples Dialogue Corpus (Serban et al., 2016)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Hence, our models only need a style classifier (without parallel data) to automatically influence and encourage continuous-scale stylistic language generation in a complex dialogue setup, which also requires maintaining relevance to conversational context. Each of these models requires minimal changes to the architecture of either the underlying sequence-to-sequence (Seq2seq) dialogue base model or the style classifier, and hence can modularly update the architecture with the latest state-of-the-art dialogue models or style classifiers (and for diverse styles). In addition, we also employ two retrieval-based models, where we output the response which has the highest match with the input context from a set of classifier-picked polite responses or manually-picked generic polite utter-", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "ances. These two retrieval models serve as parallel investigations on the performance of our three proposed generative models above.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We conducted multiple human evaluations (for style and dialogue quality) on Amazon Mechanical Turk (MTurk) (Buhrmester et al., 2011) for all three models plus the base sequence-to-sequence dialogue model and the retrieval-based models, and show that while the Fusion and the two retrieval models increase the politeness level of responses at the cost of poorer dialogue quality, both our LFT and Polite-RL models can successfully produce polite responses (capturing several politeness strategies discussed by Brown and Levinson (1987) ), without sacrificing dialogue coherence and relevance compared to the base Seq2seq model (hence better balance between politeness and dialogue quality). We also compare the output dialogue politeness levels of the continuous LFT model for three different politeness levels. Finally, we present several detailed qualitative and quantitative analyses, including positive and negative output examples, automatic metric results on output responses, classifier error analysis, and visualization of the RL rewards.", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 132, |
| "text": "(Buhrmester et al., 2011)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 509, |
| "end": 534, |
| "text": "Brown and Levinson (1987)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Style Transfer with Parallel Data There have been multiple works on style transfer with parallel data. These tasks can often be solved by directly applying some variation of the translation-based Seq2seq model discussed in the previous section. For example, Xu et al. (2012) use a phrase-based statistical model, and Jhamtani et al. (2017) use a standard Seq2seq model to convert modern language to Shakespeare-style language by treating style transfer as a translation task. Some labeled sequence transduction methods have also been proposed (Kobus et al., 2017; Yamagishi et al., 2016; Johnson et al., 2017) . For example, Kikuchi et al. (2016) are able to control the length of the summarization text by feeding to the Seq2seq base model a label that indicates the intended output length in addition to the source input. Our LFT model also adopts this labeling idea, and is able to handle a similar situation but without parallel data, because by labeling each target sequence in the training set with its politeness classifier score, we are essentially converting non-parallel data into (noisy) parallel data (by using a classifier with high accuracy).", |
| "cite_spans": [ |
| { |
| "start": 254, |
| "end": 270, |
| "text": "Xu et al. (2012)", |
| "ref_id": "BIBREF63" |
| }, |
| { |
| "start": 539, |
| "end": 559, |
| "text": "(Kobus et al., 2017;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 560, |
| "end": 583, |
| "text": "Yamagishi et al., 2016;", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 584, |
| "end": 605, |
| "text": "Johnson et al., 2017)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 621, |
| "end": 642, |
| "text": "Kikuchi et al. (2016)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models for Style Transfer", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Style Transfer without Parallel Data Several previous works have looked at style transfer without parallel data, in both vision (Gatys et al., 2016; Zhu et al., 2017; Liu and Tuzel, 2016; Liu et al., 2017; Taigman et al., 2016; Kim et al., 2017), and text (Sennrich et al., 2016a; Hu et al., 2017; Ghosh et al., 2017; Zhao et al., 2017; Mueller et al., 2017; Wang et al., 2017; Luan et al., 2017) . Among these models, some are bag-of-words based, i.e., they use style-related keywords to annotate the target sequences in the training set. For example, to control how formal the output sequences are in an EN-DE translation task, Sennrich et al. (2016a) labeled each target sequence based on whether it contains formal or informal verbs and pronouns (honorifics). To build a language model that generates utterances with the desired style, Ficler and Goldberg (2017) annotated their text with meta-data and keyword/POS-tag based heuristics, while Ghosh et al. (2017) also adopted keyword spotting based on a dictionary of emotional words. The basic ideas of their models are similar to that of our LFT model. However, these keyword-spotting approaches do not fully extend to our politeness generation task, because politeness strategies follow complex patterns of grammar, word order, and phrasing (Danescu-Niculescu-Mizil et al., 2013) . For example, the politeness of please depends on where it occurs in a sentence, and what other politeness markers it co-occurs with (e.g., 'could/would you' style counterfactual modals vs. 'can/will you' style indicative modals). Therefore, our novel polite dialogue models are based on an accurate neural classifier, which is better at capturing several compositional paralinguistic features (as visualized in Aubakirova and Bansal (2016) , whose politeness classifier we extend). Moreover, our LFT and Polite-RL models can generate a continuum of style levels based on the continuously-scaled (by the politeness score) label embedding or reinforcement rewards.", |
| "cite_spans": [ |
| { |
| "start": 128, |
| "end": 148, |
| "text": "(Gatys et al., 2016;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 149, |
| "end": 166, |
| "text": "Zhu et al., 2017;", |
| "ref_id": "BIBREF68" |
| }, |
| { |
| "start": 167, |
| "end": 187, |
| "text": "Liu and Tuzel, 2016;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 188, |
| "end": 205, |
| "text": "Liu et al., 2017;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 206, |
| "end": 227, |
| "text": "Taigman et al., 2016;", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 228, |
| "end": 245, |
| "text": "Kim et al., 2017;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 257, |
| "end": 281, |
| "text": "(Sennrich et al., 2016a;", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 282, |
| "end": 298, |
| "text": "Hu et al., 2017;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 299, |
| "end": 318, |
| "text": "Ghosh et al., 2017;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 319, |
| "end": 337, |
| "text": "Zhao et al., 2017;", |
| "ref_id": "BIBREF66" |
| }, |
| { |
| "start": 338, |
| "end": 359, |
| "text": "Mueller et al., 2017;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 360, |
| "end": 378, |
| "text": "Wang et al., 2017;", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 379, |
| "end": 397, |
| "text": "Luan et al., 2017)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 840, |
| "end": 866, |
| "text": "Ficler and Goldberg (2017)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 949, |
| "end": 968, |
| "text": "Ghosh et al. (2017)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 1300, |
| "end": 1338, |
| "text": "(Danescu-Niculescu-Mizil et al., 2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1751, |
| "end": 1779, |
| "text": "Aubakirova and Bansal (2016)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models for Style Transfer", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Lastly, there have also been style transfer models that rely on the latent representation of text and use variational auto-encoders or cross-alignment to disentangle the representation of content and style in text (Hu et al., 2017; Shen et al., 2017; Zhao et al., 2017; Fu et al., 2018) . During inference time, the latent style representation is combined with new content to generate stylized, content-preserving text. Although both fall into the category of style transfer, our task differs in two important aspects from their tasks. First, as opposed to the task of strict content preservation when rephrasing a sentence to a different style, our task is about maintaining good relevance to the context when adding style, especially useful for dialogue-based tasks. Another distinctive trait of our task is that politeness resides in a spectrum rather than a fixed category or topic (e.g., Shakespearean), and our models can treat politeness as a continuum, i.e., controlling the politeness level by adjusting the fusion rate in the Fusion model, the magnitude of the continuous label in the LFT model, or the RL weight in the Polite-RL model.", |
| "cite_spans": [ |
| { |
| "start": 214, |
| "end": 231, |
| "text": "(Hu et al., 2017;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 232, |
| "end": 250, |
| "text": "Shen et al., 2017;", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 251, |
| "end": 269, |
| "text": "Zhao et al., 2017;", |
| "ref_id": "BIBREF66" |
| }, |
| { |
| "start": 270, |
| "end": 286, |
| "text": "Fu et al., 2018)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models for Style Transfer", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In order to obtain a persona-based conversational agent, Luan et al. (2017) proposed a multi-task learning (MTL) based approach: they train a Seq2seq model with conversation data and an autoencoder with non-conversational persona-related data from target speakers, and share the decoder parameters of these two models so that the generated responses can be adapted to the style of the target-speaker. This way of incorporating MTL into Seq2seq learning was first investigated by Dong et al. (2015) and Luong et al. (2016) to achieve multilingual NMT. In addition, Sennrich et al. (2016b) also employed MTL to improve NMT models with monolingual (non-parallel) data. These approaches are related to our Fusion model, because we use our classifier to obtain noisy polite target sequences (non-parallel data) that a polite language model trains on; next, during inference, we combine the parameters of the language model with a generative dialogue model trained on parallel data. In general, our models are also related to previous works like Johnson et al. (2017) , who adopted labeled sequence transduction methods for MTL tasks, because our task also involves adapting generated responses to different politeness styles and optimizing two sub-tasks' (namely response and politeness generation) loss functions (related to a multi-task setup).", |
| "cite_spans": [ |
| { |
| "start": 479, |
| "end": 497, |
| "text": "Dong et al. (2015)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 502, |
| "end": 521, |
| "text": "Luong et al. (2016)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 564, |
| "end": 587, |
| "text": "Sennrich et al. (2016b)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 1040, |
| "end": 1061, |
| "text": "Johnson et al. (2017)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Task Learning and Style Transfer", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Danescu-Niculescu-Mizil et al. (2013) created the Stanford Politeness Corpus and trained an SVM classifier using a list of useful linguistic features based on strategies from Brown and Levinson's theory of politeness (Brown and Levinson, 1987) . Aubakirova and Bansal (2016) recently took an end-to-end neural approach to this politeness classification task by training a CNN model that directly learns to identify polite requests without using any hand-engineered features, while still improving on prediction accuracy. They also visualized what features the CNN model was learning and discovered some new features along the way. Our classifier mainly extends their work by adding a bi-directional LSTM layer (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997) before the CNN layer to capture long-distance relationships in the sentence, which leads to higher cross-domain performance.", |
| "cite_spans": [ |
| { |
| "start": 217, |
| "end": 243, |
| "text": "(Brown and Levinson, 1987)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 246, |
| "end": 274, |
| "text": "Aubakirova and Bansal (2016)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 709, |
| "end": 743, |
| "text": "(Hochreiter and Schmidhuber, 1997;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 744, |
| "end": 771, |
| "text": "Schuster and Paliwal, 1997)", |
| "ref_id": "BIBREF49" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Politeness Studies", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "A related early work in personality-based dialogue is Mairesse and Walker (2007) , who studied introvert/extrovert personality language based on templated content and sentence planning (via personality dimensions such as hedges, tag questions, negations, subject implicitness, etc.). Relatedly, Sennrich et al. (2016a) use an English to German translation task to present a model that can generate target sequences that are either formal or informal, specifically based on honorifics-related verbs and pronouns. Our task is more general, taking into account several politeness-related paralinguistic features of Brown and Levinson (1987) and allowing end-to-end trainable stylistic dialogue generation with a polite-to-rude spectrum (based on a politeness classifier, without relying on parallel data). Moreover, our approaches allow simply replacing the politeness classifier with any other emotion or personality based language classifier to generate stylistic dialogue for that new style dimension.", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 80, |
| "text": "Mairesse and Walker (2007)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 295, |
| "end": 318, |
| "text": "Sennrich et al. (2016a)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 612, |
| "end": 637, |
| "text": "Brown and Levinson (1987)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Politeness Studies", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In order to develop an accurate politeness classifier for effective use in stylistic dialogue response generation, we extend and improve upon the state-of-the-art CNN model of Aubakirova and Bansal (2016) , and propose a bi-directional LSTM followed by a convolutional layer (see Figure 1 ), in order to both capture long-distance relationships in the sentence as well as windowed filter-based features. For a sentence v_{1:n} (where each token v_i is a d-dim word embedding vector), the LSTM layer first produces hidden states h_{1:n} (where h_t is the concatenation of the forward and backward hidden states at time step t). A filter m is then applied to a window of u hidden states. This produces a convolution feature", |
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 203, |
| "text": "Aubakirova and Bansal (2016)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 279, |
| "end": 287, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Politeness Classification Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "c_i = f(m * h_{i:i+u\u22121} + b),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Politeness Classification Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where f is a non-linear function and b is a bias term. The filter is applied to every window of hidden states, yielding a feature map c \u2208 R^{n\u2212u+1}, so that", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Politeness Classification Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "c = [c_1, ..., c_{n\u2212u+1}].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Politeness Classification Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The output of the convolutional layer is then fed to a max-pooling layer (Collobert et al., 2011) which gives C = max{c} for the filter. Filters of various sizes are used to obtain multiple features. The result is then passed to a fully-connected softmax layer that outputs probabilities over two labels, namely Polite and Rude.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Politeness Classification Model", |
| "sec_num": "3" |
| }, |
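The windowed convolution and max-over-time pooling described in Section 3 can be sketched in a few lines. This is a minimal NumPy illustration assuming a single filter and tanh as the non-linearity f; the function names and shapes are illustrative, not from the paper's code.

```python
import numpy as np

def conv_feature_map(h, m, b):
    """Compute c_i = f(m * h_{i:i+u-1} + b) for each window of u
    consecutive BiLSTM hidden states (rows of h), with f = tanh."""
    n = h.shape[0]            # number of time steps
    u = m.shape[0]            # filter spans u hidden states
    return np.array([np.tanh(np.sum(m * h[i:i + u]) + b)
                     for i in range(n - u + 1)])

def pooled_feature(h, m, b):
    """Max-over-time pooling: C = max{c} for one filter."""
    return conv_feature_map(h, m, b).max()
```

In the full classifier, many filters of several window sizes u are applied, and the pooled features feed a fully-connected softmax over the two labels Polite and Rude.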
| { |
| "text": "Our classification model achieves comparable in-domain accuracy and improved cross-domain accuracy over the state-of-the-art results reported in Danescu-Niculescu-Mizil et al. (2013) and Aubakirova and Bansal (2016) . We will discuss these results in detail in Section 6.", |
| "cite_spans": [ |
| { |
| "start": 183, |
| "end": 211, |
| "text": "Aubakirova and Bansal (2016)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Politeness Classification Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this section, we first describe our base dialogue model, i.e., the core (backbone) dialogue architecture upon which the three proposed politeness mod-", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polite-Style Dialogue Models", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Figure 2: Fusion model: the output probability distributions of the decoder and the polite-LM are linearly mixed to generate the final decoded outputs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polite-Style Dialogue Models", |
| "sec_num": null |
| }, |
| { |
| "text": "els are built, and then present these three models that can generate polite dialogue responses. As a parallel investigation on the performance of our proposed models, we also employ two retrieval-based polite dialogue models toward the end.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polite-Style Dialogue Models", |
| "sec_num": null |
| }, |
| { |
| "text": "Our base dialogue model is a simple sequence-to-sequence (Seq2seq) model that consists of a two-layer bi-directional LSTM-RNN encoder to encode the conversation history turns, and a four-layer LSTM-RNN decoder to generate the response. Additive attention from the output of the encoder is applied to the last layer of the decoder. This architecture is almost identical to that proposed by Bahdanau et al. (2015), except with more layers (similar to Shao et al. (2017) ). Our base dialogue model achieves perplexity and word error rate results on par with those reported for the popular hierarchical HRED models in Serban et al. (2016), thus serving as a good base model to incorporate style into. Details will be discussed in Section 6.", |
| "cite_spans": [ |
| { |
| "start": 447, |
| "end": 465, |
| "text": "Shao et al. (2017)", |
| "ref_id": "BIBREF53" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Base Seq2seq Dialogue Model", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Inspired by the 'late fusion' approach in Venugopalan et al. (2016), our Fusion model (Fig. 2 ) combines the response generation decoder of the base Seq2seq dialogue model with a language model (polite-LM) trained exclusively on polite utterances. These utterances are chosen by feeding the classifier all response utterances in the MovieTriples training set, and only keeping those with politeness scores greater than a certain threshold (set to 0.8 in our experiments, as will be discussed in Section 4.5). The polite-LM model is a two-layer LSTM-RNN based on Jozefowicz et al. (2016) . During inference time, we used the language", |
| "cite_spans": [ |
| { |
| "start": 558, |
| "end": 582, |
| "text": "Jozefowicz et al. (2016)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 84, |
| "end": 91, |
| "text": "(Fig. 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Fusion Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Figure 3: Label-Fine-Tuning model: during training, the embedding of the prepended label is scaled by the style classifier's continuous score on the ground-truth (target) sequence. During testing, we scale the embedding of the label by the desired (continuous) politeness score of the generated response.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Fusion Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "model to re-score the final output of the Seq2seq decoder (for each time step) by computing a linear combination of the output vocabulary distributions proposed by the Seq2seq model and polite-LM. Specifically, let p_t^{S2S} and p_t^{LM} denote the output probability distributions proposed by the Seq2seq model and the LM model at time t, respectively. The final 'fused' distribution p_t for that time step is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fusion Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "p_t = \u03b1 p_t^{S2S} + (1 \u2212 \u03b1) p_t^{LM} (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fusion Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where the fusion ratio \u03b1 is a hyperparameter that indicates how much Seq2seq output will influence the final output.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fusion Model", |
| "sec_num": "4.2" |
| }, |
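The late-fusion step of Eq. (1) is a simple convex combination of two vocabulary distributions. A small sketch, assuming both models expose a per-time-step probability distribution over the same vocabulary (the function name is illustrative):

```python
import numpy as np

def fuse_distributions(p_s2s, p_lm, alpha):
    """Late fusion, Eq. (1): p_t = alpha * p_t^{S2S} + (1 - alpha) * p_t^{LM}.
    alpha in [0, 1] controls how much the Seq2seq output influences
    the fused output distribution."""
    p_s2s = np.asarray(p_s2s, dtype=float)
    p_lm = np.asarray(p_lm, dtype=float)
    assert 0.0 <= alpha <= 1.0
    return alpha * p_s2s + (1.0 - alpha) * p_lm
```

Because the result is a convex combination of two probability distributions, it still sums to 1 and can be decoded greedily or with beam search as usual.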
| { |
| "text": "There are at least two drawbacks of the Fusion model. First, half of its output is determined by a polite language model that has not attended to the conversation context, making the response more likely to be irrelevant. Second, the model does not learn politeness during training, but is forced to be polite only during inference time. To address these two issues, we present our label-fine-tuning (LFT) model, which prepends a predicted continuous style label at the beginning of each input sentence to specify the intended politeness level.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Label-Fine-Tuning Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Specifically, we add to the vocabulary a single politeness label and attach with it a trainable word embedding, just like what we would do to a normal token. Then, the way we make it continuous is by scaling its embedding vector with the (intended) politeness score of the target sequence. During training, this score is obtained by feeding the ground-truth target sequence (response) to the politeness classi- Figure 4 : Polite-RL model: upper-right shows max-likelihood (ML) training with generated and ground-truth target sequences; lower-right shows RL training with a randomly sampled response generated by the model and the reward it generates after getting fed into the style classifier. Note that the attention mechanism is not shown here for clarity.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 411, |
| "end": 419, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Label-Fine-Tuning Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "fier (see Figure 3 ), while during test time, we are free to scale the prepended politeness label with different scores of our choice (i.e., when we want the model to generate a polite response, we scale the label's embedding by a score between 0.5 and 1.0, whereas, to generate a rude response, we scale the embedding by a score between 0.0 and 0.5). This approach is related to the 'numerically-grounded' language model (Spithourakis et al., 2016) , except that we scale the politeness label embedding by its corresponding politeness score, rather than concatenating the two as input to the LSTM. 3", |
| "cite_spans": [ |
| { |
| "start": 422, |
| "end": 449, |
| "text": "(Spithourakis et al., 2016)", |
| "ref_id": "BIBREF56" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 10, |
| "end": 18, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Label-Fine-Tuning Model", |
| "sec_num": "4.3" |
| }, |
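A minimal sketch of the label-scaling trick described above, with invented dimensions and values (in the real model the label embedding is learned jointly with the rest of the network):

```python
# The LFT politeness label is an ordinary vocabulary entry whose embedding is
# multiplied by a continuous politeness score before being prepended.
label_embedding = [0.2, -0.4, 0.8]  # hypothetical 3-dim trainable label embedding

def prepend_label(source_embs, score):
    """Prepend the score-scaled label embedding to the source embeddings."""
    scaled = [score * v for v in label_embedding]
    return [scaled] + source_embs

source = [[0.1, 0.1, 0.1]]              # one toy source-token embedding
polite_in = prepend_label(source, 0.9)  # score in (0.5, 1.0] -> polite
rude_in = prepend_label(source, 0.1)    # score in [0.0, 0.5) -> rude
```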
| { |
| "text": "Thus, the LFT model is able to simultaneously produce polite, neutral and rude responses depending on the prepended label, similar to recent multilabel, multi-space, and zero-shot machine translation work using language identity or style labels (Sennrich et al., 2016a; Johnson et al., 2017; Ghosh et al., 2017) . Intuitively, this prepended label serves as the prior for the intended style of the generated response sequence, while the source utterance serves as the prior for the content of the generated sequence. In other words, the label and the source sentence cooperatively determine what the overall response looks like. 4", |
| "cite_spans": [ |
| { |
| "start": 245, |
| "end": 269, |
| "text": "(Sennrich et al., 2016a;", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 270, |
| "end": 291, |
| "text": "Johnson et al., 2017;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 292, |
| "end": 311, |
| "text": "Ghosh et al., 2017)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Label-Fine-Tuning Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The LFT model incorporates style more directly into its training procedure than the fusion model, but it still does not fully exploit the value of the style classifier since it only supervises the dialogue model once by initially classifying the style of all the target sequences in the training set. Ideally we would want the classifier to constantly monitor and influence what style the model produces. Moreover, many contexts do not naturally elicit a polite response, 5 in which case we do not want to force the model to generate an utterance that matches the target politeness score, but rather to ask the model to generate as polite and natural a response as it could. These limitations motivate us to propose the third model: Polite Reinforcement Learning model (Polite-RL), where the style classifier regularly updates the model parameters (via sampling-based policy gradient) with continuous-spectrum rewards that encourage decoder-generated response samples to be polite and discourage them from being rude.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polite Reinforcement Learning Model", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Following work from Paulus et al. (2018) , our loss function consists of two terms. The first term is the traditional maximum likelihood loss (L ML ), which we refer to as the teacher forcing part. The other one is the reinforcement learning loss (L RL ) based on politeness scores, which we refer to as the reinforce part. The total loss L then takes the form:", |
| "cite_spans": [ |
| { |
| "start": 20, |
| "end": 40, |
| "text": "Paulus et al. (2018)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polite Reinforcement Learning Model", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "L = L ML + \u03b2 L RL (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polite Reinforcement Learning Model", |
| "sec_num": "4.4" |
| }, |
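In code form, Eq. 2 is a simple weighted sum; beta = 2.0 is the value reported later in the training details, and the component loss values below are placeholders:

```python
# Combined objective of Eq. 2: total loss = ML loss + beta * RL loss.
def total_loss(l_ml, l_rl, beta=2.0):
    return l_ml + beta * l_rl

loss = total_loss(1.0, 0.5)  # placeholder component losses
```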
| { |
| "text": "where \u03b2 is a hyperparameter indicating how much weight we want to give to the style reward component of the loss. The teacher forcing part minimizes the average of the maximum-likelihood loss at each decoding step. Specifically, let y * = {y * 1 , y * 2 , ..., y * n } be the ground-truth response for a given source (conversation history) utterance sequence x. The maximum-likelihood training objective is the minimization of the loss:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polite Reinforcement Learning Model", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L ML = \u2212 n t=1 log p(y * t |y * 1 , ..., y * t\u22121 , x).", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Polite Reinforcement Learning Model", |
| "sec_num": "4.4" |
| }, |
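Eq. 3 can be sketched on toy numbers as follows; the per-step probabilities are invented stand-ins for p(y*_t | y*_<t, x):

```python
import math

# Teacher-forcing (maximum-likelihood) loss of Eq. 3: negative sum of
# log-probabilities of the ground-truth tokens. Probabilities are toy values.
def ml_loss(step_probs):
    return -sum(math.log(p) for p in step_probs)

probs = [0.5, 0.25, 0.8]  # model's probability of each ground-truth token
loss_ml = ml_loss(probs)  # equals -log(0.5 * 0.25 * 0.8)
```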
| { |
| "text": "We use a policy gradient method (Williams, 1992; Sutton et al., 2000) to calculate the second term in the objective function. Specifically, we sample a generated response for each input sequence (conversation history) x, and assign to it a reward R, which in our case is the politeness classifier's probability of the response classified as polite. Let y s = {y s 1 , y s 2 , ..., y s n } be the sampled response, then the reinforce part of the loss is:", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 48, |
| "text": "(Williams, 1992;", |
| "ref_id": "BIBREF62" |
| }, |
| { |
| "start": 49, |
| "end": 69, |
| "text": "Sutton et al., 2000)", |
| "ref_id": "BIBREF57" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polite Reinforcement Learning Model", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L RL = \u2212 (R \u2212 R b ) n t=1 log p(y s t |y s 1 , ..., y s t\u22121 , x)", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Polite Reinforcement Learning Model", |
| "sec_num": "4.4" |
| }, |
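A toy rendering of Eq. 4 with the constant baseline R_b = 0.5 that the authors report using; the sampled-token probabilities are invented:

```python
import math

# REINFORCE-style loss of Eq. 4: scale the sample's log-likelihood by
# (R - R_b). Minimizing it raises the likelihood of polite samples (R > R_b)
# and lowers that of rude ones (R < R_b).
def rl_loss(sample_probs, reward, baseline=0.5):
    log_prob = sum(math.log(p) for p in sample_probs)
    return -(reward - baseline) * log_prob

polite = rl_loss([0.4, 0.6], reward=0.9)  # positive loss -> push probs up
rude = rl_loss([0.4, 0.6], reward=0.1)    # negative loss -> push probs down
```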
| { |
| "text": "where R b is a baseline that helps reduce variance during training (Ranzato et al., 2016) . Note that we can invert the classifier scores or reward (by flipping the first minus sign in Eq. 4), if we want to encourage rudeness as the style, instead of politeness. This also shows that an advantage of our implementations of the LFT model over the Polite-RL model (at the cost of shallower training) is that the LFT model can multitask to simultaneously produce responses of different style labels at test time, whereas reward-based reinforcement learning can only work in one direction at a time (based on the reward sign). 6", |
| "cite_spans": [ |
| { |
| "start": 67, |
| "end": 89, |
| "text": "(Ranzato et al., 2016)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polite Reinforcement Learning Model", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We employ two retrieval-based baseline models as a sanity check to the proposed approaches' perfor-mance: the first with oracle-level fluency, the second with additional oracle-level politeness.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retrieval-based Models", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "Classifier-based Retrieval Following Lowe et al. (2015) , for a [X 1 , Y, X 2 ] triple, our retrieval model treats the context (X 1 , Y ) and each response (X 2 ) as two documents and converts them to their TF-IDF based vectors (Ramos, 2003) to check for similarity. Specifically, we first obtain all candidate responses in the training set that are polite, 7 and calculate their TF-IDF vectors. Then for each context TF-IDF vector in the test set, we calculate its cosine similarity with that of each such polite-classified candidate response, and output the one with the highest value. Intuitively, for each context we are choosing a response that is both polite and most relevant to (having the most word overlaps with) the context.", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 55, |
| "text": "Lowe et al. (2015)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 228, |
| "end": 241, |
| "text": "(Ramos, 2003)", |
| "ref_id": "BIBREF45" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retrieval-based Models", |
| "sec_num": "4.5" |
| }, |
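The retrieval step can be sketched with a tiny pure-Python TF-IDF, using an invented candidate pool and context (the actual system draws candidates from the full set of polite-classified training responses):

```python
import math
from collections import Counter

# Toy TF-IDF retrieval in the spirit of the classifier-based baseline: pick the
# candidate response whose TF-IDF vector is most similar to the context's.
def tfidf(docs):
    """Return one sparse TF-IDF vector (dict) per whitespace-tokenized doc."""
    df = Counter(w for d in docs for w in set(d.split()))
    n = len(docs)
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        vecs.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vecs

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

candidates = ["thanks so much for your help",
              "could you please check this again",
              "see you at the station tonight"]
context = "please could you check the patch again"
vecs = tfidf(candidates + [context])
best = max(range(len(candidates)), key=lambda i: cosine(vecs[-1], vecs[i]))
```

Here `best` points at the candidate with the largest word overlap with the context, mirroring the intuition in the paragraph above.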
| { |
| "text": "This approach is similar to the one above but uses the 10 manually-chosen most-polite generic utterances as candidate responses for each context. Specifically, we collect all ground-truth polite requests from the Stanford Politeness Corpus, split each one into sentences, and then manually pick the most frequent 10 polite sentences. 8 We then determine which one to retrieve as a response for each input context, based again on the TF-IDF vector similarity method described above.", |
| "cite_spans": [ |
| { |
| "start": 334, |
| "end": 335, |
| "text": "8", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generic-10", |
| "sec_num": null |
| }, |
| { |
| "text": "As discussed above, we propose models that can deal with style data coming from an unpaired, nonparallel domain, different from the domain of the dialogue dataset. For our style (politeness) domain, we use the Stanford Politeness Corpus (Danescu-Niculescu-Mizil et al., 2013), which contains a collection of requests from Wikipedia (WIKI) editor's talk pages and the Stack Exchange (SE) questionanswering communities. Based on scores from human annotators, these requests are labeled with either Polite or Rude, with each class equally consisting of 1,089 requests for the Wikipedia domain and 1,651 requests for the Stack Exchange domain. For the content (dialogue) domain, we use the popular MovieTriples dialogue corpus (Serban et al., 2016), which contains 245K conversations extracted from IMSDB movie scripts in X-Y-X triplet-utterance format, where X and Y correspond to two movie characters (and the model's task is to generate the last response).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Human To evaluate our models' ability to generate polite responses without sacrificing dialogue quality, we conducted several comprehensive human evaluation studies on Amazon Mechanical Turk (MTurk). Specifically, we compare the three stylistic models w.r.t. the base model on both dialogue quality (i.e., context relevance and coherence) and politeness level. 9 For this, we randomly sampled 300 contexts covering all types of conversations and their generated responses from the Seq2seq base model, the three stylistic models, and the retrievalbased models. For each source input, the six responses are randomly shuffled to anonymize model identities. Each response was then annotated by two human evaluators that were located in the US, had an approval rate greater than 98%, and had at least 10, 000 approved HITs (Human Intelligence Tasks) on record (to prevent those who had just started using MTurk and hence unconditionally enjoyed a high acceptance rate.). All our human evaluations are performed by two annotators (for both dialogue quality and politeness level) in order to calculate inter-rater agreement, for which we employ Cohens Kappa \u03ba (Cohen, 1968) , a score that measures the level of inter-rater agreement between two annotators on a classification problem (Artstein and Poe-sio, 2008) . For both dialogue quality and politeness evaluations, the human raters were shown the conversation context (input) and the six shuffled responses (from the six models). Clear instructions (closely following those from Wang et al. (2017)) corresponding to each score were shown in the interface. More specifically, we asked the annotators to first read the context and each of the generated/retrieved responses, and assign a score to each response. They then scored each response on a fivepoint Likert scale (Likert, 1932 ) (for both politeness and dialogue quality), hence providing absolute measurements but in an overall comparative (relative) setting. 
10 We explicitly stated that it is possible for them to find some conversation disconnected or lacking context, and encouraged them to make the best guess when in doubt. Using similar instructions (and a 300-sized sample), we also performed a separate 3-way LFT model comparison by setting its target politeness scores to 1.0, 0.5, and 0.0, respectively.", |
| "cite_spans": [ |
| { |
| "start": 1153, |
| "end": 1166, |
| "text": "(Cohen, 1968)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1277, |
| "end": 1305, |
| "text": "(Artstein and Poe-sio, 2008)", |
| "ref_id": null |
| }, |
| { |
| "start": 1815, |
| "end": 1828, |
| "text": "(Likert, 1932", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 1963, |
| "end": 1965, |
| "text": "10", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Methods", |
| "sec_num": "5.2" |
| }, |
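For reference, the inter-rater agreement statistic can be computed on two toy annotator sequences as below; this sketch computes the simple unweighted form of kappa, and the ratings are invented:

```python
from collections import Counter

# Cohen's kappa: observed agreement corrected for chance agreement.
def cohen_kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)   # expected by chance
    return (po - pe) / (1 - pe)

r1 = [1, 2, 3, 1, 2, 3, 1, 1]  # annotator 1's toy labels
r2 = [1, 2, 3, 1, 3, 3, 2, 1]  # annotator 2's toy labels
kappa = cohen_kappa(r1, r2)
```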
| { |
| "text": "Automatic Since there do not exist ground-truth stylized versions of the response to the MovieTriples conversations, we only use automatic evaluation metrics as complementary and trend-verification information to the primary human perception studies in this work: we compute BLEU (a phrase-matching based metric; (Papineni et al., 2002) ) as an approximation of dialogue quality as used by some previous work (Ritter et al., 2011; Galley et al., 2015; Li et al., 2016c) . Note that we choose to report BLEU scores not to draw any immediate conclusion found that BLEU does not correlate well with human studies on dialogue quality), but rather to check for match with the trends from 10 The Likert scale is a bipolar scaling method that maps each score to a text item that describes the score, e.g., our politeness level interface uses 'Polite', 'Slightly Polite', 'Neutral', 'Slightly Rude', 'Rude'; and our dialogue quality study uses 'Very good', 'Good', 'Acceptable', 'Poor', and 'Very poor', instead of the abstract scores 1-5. Note that we did not adopt pairwise comparisons because first, it will create several independent sets of pairwise results (15 sets in our case), which also raises the cost substantially, and secondly, pairwise comparison does not tell us \"by how much\" a response is better/equal/worse than the other. In contrast, our absolute scores can help future research compare more directly to our results. We will release our detailed instructions and MTurk interfaces, plus our annotation scores on our webpage. human evaluation. We also compute the politeness classifier's scores as an approximation of politeness level. Sec. 6.3 discusses these results.", |
| "cite_spans": [ |
| { |
| "start": 313, |
| "end": 336, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 409, |
| "end": 430, |
| "text": "(Ritter et al., 2011;", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 431, |
| "end": 451, |
| "text": "Galley et al., 2015;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 452, |
| "end": 469, |
| "text": "Li et al., 2016c)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Methods", |
| "sec_num": "5.2" |
| }, |
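As a rough illustration of the phrase-matching idea behind BLEU, here is a tiny sentence-level variant using only unigram and bigram precisions with a brevity penalty; real BLEU-4 uses up to 4-grams, corpus-level counts, and smoothing, and the sentences below are invented:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Clipped n-gram counts for modified precision."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu2(candidate, reference):
    c, r = candidate.split(), reference.split()
    precisions = []
    for n in (1, 2):
        cand, ref = ngrams(c, n), ngrams(r, n)
        overlap = sum(min(cnt, ref[g]) for g, cnt in cand.items())
        precisions.append(overlap / max(1, sum(cand.values())))
    if 0 in precisions:
        return 0.0
    bp = 1.0 if len(c) > len(r) else math.exp(1 - len(r) / max(1, len(c)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

score = bleu2("i am very sorry about that", "i am so sorry about that")
```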
| { |
| "text": "We now present some important training details. 11", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Details", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Embedding Initialization For all our models, we initialized the embedding matrix with word2vec trained on Google News dataset (about 100 billion words) 12 (Mikolov et al., 2013) ; we use Xavier initializer (Glorot and Bengio, 2010) for out-ofvocabulary words.", |
| "cite_spans": [ |
| { |
| "start": 155, |
| "end": 177, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 206, |
| "end": 231, |
| "text": "(Glorot and Bengio, 2010)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Details", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Pretraining Following Serban et al. 2016, we pretrained the Seq2seq base model for 4 epochs with Q-A SubTle corpus (Ameixa et al., 2014) , which contains around 5.5M movie subtitle Q&A pairs. Implementation Details We used 300-dim embeddings, the AdamOptimizer (Kingma and Ba, 2015) with a learning rate of 0.001, and a dropout rate of 0.2. All models were trained with a minibatch of size 96. The classifier was trained for 3 epochs, and the three proposed stylistic models were each trained for 35 epochs. The polite language model used in the Fusion model was trained until there was no improvement for perplexity on a heldout dev-set (all tuning decisions were made on the respective dev-sets). We use a balanced value of 0.5 for the fusion ratio (\u03b1 in Eq. 1), and 2.0 for the RL weight (\u03b2 in Eq. 4) after some light empirical tuning. Due also to the nearly perfect balance between the number of polite and rude examples in the Stanford Politeness Corpus, we set the baseline reward of Polite-RL (R b in Eq. 4) to a constant 0.5 at all times. 13 Note that for effective and non-confusing MTurk studies, for all our models (the base model 11 We will add all reproducibility details and more analysis examples in a post-publication supplement on our webpage.", |
| "cite_spans": [ |
| { |
| "start": 115, |
| "end": 136, |
| "text": "(Ameixa et al., 2014)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1047, |
| "end": 1049, |
| "text": "13", |
| "ref_id": null |
| }, |
| { |
| "start": 1142, |
| "end": 1144, |
| "text": "11", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Details", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "12 https://code.google.com/archive/p/ word2vec/ 13 We also tried using a self-critical baseline as in Rennie et al. (2017), but found that our way of setting the constant-based baseline led to better responses. We speculate that this is because a self-critical approach tries to make an utterance as polite as possible, which usually leads to a few very generic and very polite responses at convergence (because the model gets a positive reward only when the sampled utterance is more polite than the greedy-decoded one). and the three stylistic models), we avoid UNK tokens to appear in the generated response, by not back-propagating the MLE loss for these tokens. We also do the same for a short list (around 10) of very offensive swear words (from Wiktionary).", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 50, |
| "text": "13", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Details", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In this results section, we first briefly present our politeness classifier (Sec. 3) and base dialogue model (Sec. 4.1) results, and then focus on the stylisticdialogue results (retrieval and generative).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Following Danescu-Niculescu-Mizil et al. 2013, we use accuracy (i.e., percentage of correctly labeled messages for binary polite/rude labels) to evaluate our politeness classifier's generalization ability. Specifically, we used data from the training set of WIKI, and test on both the test set of WIKI and the entire SE (Stack Exchange) corpus. We used the same train-validation-test split setup (7:1:2) as in Aubakirova and Bansal (2016) . 14 As shown in Table 1 , our LSTM-CNN model improved crossdomain accuracy (while maintaining comparable indomain accuracy) compared to that of the SVM and CNN models reported in Aubakirova and Bansal (2016) . This is similar to how Zhou et al. (2015) also found that a combination of LSTM-RNNs and CNNs is superior to an LSTM-RNN or CNN alone for sentiment classification, likely because the joint model captures both long-distance relationships as well as local windowed filter-based features, and this could make it easier to separate in-domain and outof-domain properties. We also observe more improvement on cross-domain accuracy because it has much more space for improvement, as opposed to in-domain accuracy which is already very close to human performance. The higher accuracy is also important because we need a cross-domain-accurate style classifier so that it can effectively stylize responses in diverse dialogue corpora domains such as MovieTriples.", |
| "cite_spans": [ |
| { |
| "start": 410, |
| "end": 438, |
| "text": "Aubakirova and Bansal (2016)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 619, |
| "end": 647, |
| "text": "Aubakirova and Bansal (2016)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 673, |
| "end": 691, |
| "text": "Zhou et al. (2015)", |
| "ref_id": "BIBREF67" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 456, |
| "end": 463, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Politeness Classification Results", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Next, in Table 2 , we show that our starting point, base dialogue model is comparable in quality to a popular, representative previous model of Serban et al. 2016, trained on the same corpora with similar model architectures. We use their Perplexity (PPL) and Word Error Rate (WER) metrics. In order to have a meaningful perplexity (i.e., the probability of regenerating a reference response) comparison between two language generation models, they should have the same vocabulary set. Since the vocabulary of our politeness dialogue models is a combination of vocabulary sets drawn from the MovieTriples and Stanford Politeness corpora, for fair comparison in this section, we separately train a base Seq2seq model following exactly the vocabulary (10, 000 most frequent tokens, plus an UNK for the rest) and preprocessing protocols from . We bootstrapped the model with 4 epochs on the SubTle corpus (see Sec. 5.3), and then trained on MovieTriples until there was no improvement on perplexity for the validation set. The comparison for this base model with their hierarchical-encoder HRED models is presented in Table 2 . As shown, we get comparable results overall on all metrics, and hence we have a good starting-point dialogue model, to which we add politeness, via the following three approaches.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 16, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 1115, |
| "end": 1122, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Base Dialogue Model Results", |
| "sec_num": "6.2" |
| }, |
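For reference, perplexity relates to the average per-token negative log-likelihood as PPL = exp(NLL); a one-line sketch with a made-up NLL value:

```python
import math

# Perplexity as the exponential of the average per-token negative
# log-likelihood (in nats); 3.3 is an arbitrary illustrative value.
def perplexity(avg_nll):
    return math.exp(avg_nll)

ppl = perplexity(3.3)
```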
| { |
| "text": "Primary Human Evaluation Results In this section, we present our primary human evaluation Table 3 : MTurk human evaluation results on politeness level and dialogue quality (as well as the absolute value difference between the two, to show balance) of the Retrieval Models, Seq2seq and the three proposed generative models (avg. of two annotators is shown here). Top results are boldfaced.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 90, |
| "end": 97, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Stylistic Dialogue Model Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "(MTurk) results on both politeness level and dialogue quality (context-relevance) of the generated response, based on two annotators and a 300-sized test sample. Table 3 shows the annotator-average scores for each of these two metrics and their absolute difference, based on our Likert scales of 1 to 5 (see Sec. 5.2). We can first see that all three of our stylistic generative models improve on politeness compared to the Seq2seq base model. However, the Fusion model's politeness gain is not statistically significant, 15 and moreover it achieves this minor politeness level improvement at the cost of significantly compromising dialogue quality (because its output is half-determined by a standalone politeness-trained LM that ignores context). Next, we see that the LFT model is the most polite (stat. significance of p < 0.01 over the Seq2seq model), and also has dialogue quality close (statistically equal) to that of Seq2seq. Our final Polite-RL model wins over Seq2seq on politeness (stat. significance of p < 0.01) as well as achieves a small improvement in dialogue quality (though not at stat. significance level; but it is stat. significantly better in quality than Retrieval, Generic-10 and Fusion.). Moreover, the politeness levels of the LFT and Polite-RL models are statistically equal. Therefore, both models, with their training depth and multitasking trade-offs (see Sec. 4), can produce strong levels of stylistic content, without harming contextrelevance.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 162, |
| "end": 169, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Stylistic Dialogue Model Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Lastly, we can also see that our two retrievalbased models are both very polite (but not stat. sig-nificantly better over LFT); and as expected, they both have dialogue quality lower than Seq2seq, Polite-RL and LFT (stat. significance of p < 0.01). They also feature two of the worst balances between average politeness and dialogue quality score. This is the type of sacrifice we want to avoid from imposing on dialogue quality when building a stylistic dialogue model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stylistic Dialogue Model Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "For inter-annotator agreement, the Kappa score was 0.35 (fair 16 ) on Dialogue Quality and 0.46 (moderate) on Politeness. If we employ a collapsed-Likert version, where the more ambiguous and extreme scores of {1, 2} and {4, 5} are bucketed together, 17 we obtained a Kappa score of 0.42 (moderate) on Dialogue Quality and 0.55 (moderate) on Politeness.", |
| "cite_spans": [ |
| { |
| "start": 251, |
| "end": 253, |
| "text": "17", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stylistic Dialogue Model Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Human Evaluation Results on 3-way LFT Models We also present results on a 3-way politeness level comparison MTurk study among the Polite-LFT, Neutral-LFT, and Rude-LFT models, i.e., the LFT model with three levels (scores) of scaling the prepended style label, corresponding to politeness scores 1.0, 0.5 and 0.0, respectively (Table 4 , Continuous-LFT column). The table shows that Polite-LFT is significantly more polite than Neutral-LFT (stat. significance of p < 0.01), and Neutral-LFT is in turn more polite than Rude-LFT (stat. significance of p < 0.01). For inter-annotator agreement on this 3-way LFT study, we get a Kappa of 0.51 (moderate), and 0.61 (substantial) for the collapsed-Likert case.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 327, |
| "end": 335, |
| "text": "(Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Stylistic Dialogue Model Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "We also experimented earlier with a discrete version of LFT, where we treated responses in the [0.8, 1.0] range as polite, [0.2, 0.8] as neutral, and [0.0, 0.2] as rude. Instead of scaling a single label embedding with continuous politeness scores (as described in Section 4.3), we assigned to each response one of these three labels with no scaling, according to its corresponding politeness bin. The human evaluation scores for that model were 3.52, 3.09 and 2.93, respectively, which features less score difference between neutral and rude ( Table 4 : MTurk human evaluation results on politeness level of 3 LFT models, for both the continuous and the discrete versions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 545, |
| "end": 552, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Stylistic Dialogue Model Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "LFT column).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stylistic Dialogue Model Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "As discussed in Sec. 5.2, we also use some automatic evaluation metrics to complement and verify the MTurk human study results. In Table 5 , we present the average politeness classifier and BLEU-4 scores of responses from each model. First, we can see that our politeness classifier agrees reasonably well with the human politeness judgments in Table 3 , since both identify the Retrieval-based models and LFT as the most polite, followed by Polite-RL and Fusion in descending order. We quantified this 'agreement' concretely, and found high correlation between the six human Politeness scores (Table 3 Politeness column) and the six automatic classifier scores (Table 5 Politeness Score column): Pearson correlation is 0.827 (stat. significance p = 0.0422), and Spearman's rank-order correlation is 0.9276 (p = 0.0077). Next, for BLEU scores, although these scores (as percentages) are very low (consistent with the observation in Ritter et al. (2011) and Li et al. (2016b) ), their relative system-ranking still roughly agrees with that of human judgments -we found reasonably high correlation between human Dialogue Quality and BLEU (based on the six scores in Table 3 Quality column and Table 5 BLEU-4 column): Pearson correlation is 0.793 (stat. significance p = 0.0597), and Spearman's rank-order correlation is 0.771 (p = 0.0724).", |
| "cite_spans": [ |
| { |
| "start": 932, |
| "end": 952, |
| "text": "Ritter et al. (2011)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 957, |
| "end": 974, |
| "text": "Li et al. (2016b)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 131, |
| "end": 138, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 345, |
| "end": 352, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 594, |
| "end": 602, |
| "text": "(Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1164, |
| "end": 1171, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1191, |
| "end": 1198, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Automatic Metric Evaluation Results", |
| "sec_num": null |
| }, |
| { |
| "text": "Hence, overall, the automatic metric evaluation again shows that without politeness training, the base dialogue model produces neutral responses on average (0.49 score), while the retrieval-based models and all three proposed generative models improve on politeness score. Also, the BLEU scores show, similar to the human study results in Table 3 , that among the three proposed models, the Fusion model sacrifices the most dialog quality to become more polite, whereas the LFT and RL models main- tain comparable quality with improved politeness over the base model (Seq2seq). For the retrieval models, we again see that their politeness levels are better than LFT and RL models, but with a corresponding loss in dialogue quality.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 339, |
| "end": 346, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Automatic Metric Evaluation Results", |
| "sec_num": null |
| }, |
| { |
| "text": "We start our analysis by providing qualitative examples of how well our politeness classifier performs on the target sequences from MovieTriples train dataset. This is important to check because the classifier is trained on Wikipedia (Wiki) admin request messages, and while our LSTM-CNN performs bet- : Output dialogue response examples by Retrieval, Generic-10, Seq2seq (denoted as S2S) and the 3 generative polite models Fusion, LFT, and RL (shows conversation history turns of X and Y, and then the generated 3rd turn response by X).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Politeness Classifier", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "ter on cross-domain Stack Exchange (SE) data, the MovieTriples dialogue corpus is still quite different and diverse in domain from both Wiki and SE. Hence, it is important to have a reasonably accurate politeness classifier such that it can provide useful labels and rewards for our polite-dialogue models. Table 6 presents some randomly-selected (i.e., noncherry-picked) responses from MovieTriples and their politeness classifier scores. We can see that the classifier provides a reasonably correct score a majority of the time, capturing several psycholinguistic politeness strategies mentioned in Danescu- Niculescu-Mizil et al. 2013, e.g., positive ones such as gratitude, deference, greeting, positive lexicon, indirection, indicative modal, and negative ones such as negative lexicon, direct question, direct start, 2nd person start. However, it does occasionally give strongly polite or rude scores to some mild or neutral responses, e.g., \"they were in a car accident\", showing scope for classifier improvements.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 307, |
| "end": 314, |
| "text": "Table 6", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis of Politeness Classifier", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "Next, we show some output examples of our polite dialogue models w.r.t. the base Seq2seq model as well as the retrieval-based models. We use these examples to demonstrate the politeness strategies our proposed generative models have learned (in Table 7 ). In the first example, our stylistic models use politeness strategies such as indirection, positive lexicon and counterfactual modal (Danescu-Niculescu-Mizil et al., 2013) . This example also illustrates the behavior of the Retrieval model, i.e., most of the time it just outputs an utterance that has word overlap with but totally irrelevant to the context. Thus although all its retrieved responses have oracle-level fluency and grammaticality, its average dialogue quality score in the human evaluation is still not as good as that of Seq2seq. In the second example, Fusion uses indirection, while LFT is being polite even when disagreeing with the abusive language from Y . This example also shows that Generic-10, due to its limited space for retrieval, oftentimes fails to provide a relevant answer, although it is the most polite one since its candidate responses are manually picked. In the third example, Fusion and LFT both use positive lexicon, and RL makes a compliment. In the fourth example, each of the three proposed models uses positive lexicon. It is worth noting that in the last example, while LFT and Polite-RL seem to provide a relevant compliment, they are actually complimenting the wrong person. This kind of issue motivates us toward creating persona-based (Li et al., 2016c) politeness models for future work.", |
| "cite_spans": [ |
| { |
| "start": 388, |
| "end": 426, |
| "text": "(Danescu-Niculescu-Mizil et al., 2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1538, |
| "end": 1556, |
| "text": "(Li et al., 2016c)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 245, |
| "end": 252, |
| "text": "Table 7", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Output Examples of Stylistic Dialogue", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Using derivative saliency (Simonyan et al., 2013; Li et al., 2016a; Aubakirova and Bansal, 2016) , we also visualize how much each token in the sampled response contributes to the classifier's reward during Polite-RL model's training. Fig. 5 shows three such heatmaps that correspond to the magnitudes of the derivative in absolute value with respect to each dimension. The figures clearly show that the classifier has learned to identify multiple politeness strategies, e.g., \"smart\" (deference), \"sir\" (polite address), and the two \"sorry\"s (apologizing).", |
| "cite_spans": [ |
| { |
| "start": 26, |
| "end": 49, |
| "text": "(Simonyan et al., 2013;", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 50, |
| "end": 67, |
| "text": "Li et al., 2016a;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 68, |
| "end": 96, |
| "text": "Aubakirova and Bansal, 2016)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 235, |
| "end": 241, |
| "text": "Fig. 5", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Visualization of Polite-RL Reward", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "We first presented three diverse generative models that can generate rich polite-to-rude spectrum dialogue responses (based on the politeness theories by Brown and Levinson (1987) ), without using any parallel data (which is usually assumed for tasks such as machine translation) and only relying on a style classifier. Via multiple human evaluation studies and automatic metrics, we demonstrated that all three models generate more polite responses (displaying several politeness strategies discussed in previous psycholinguistic works), while LFT and Polite-RL are able to do so without losing dialogue quality, as opposed to the Fusion model as well as the two retrieval-based models.", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 179, |
| "text": "Brown and Levinson (1987)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "8" |
| }, |
| { |
| "text": "In future work, there is still much room for improvement on the politeness as well as dialogue quality side, and one could employ more recent, advanced models such as variational, adversarial, and decoder-regulation techniques.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Though we focused on politeness for the scope of this paper, our models can be easily generalized to other emotion and personality styles (only relying on a style classifier), hopefully contributing towards the valuable paradigm of human-like and engaging intelligent tutors and personal assistants. In future work, our polite-RL model could also be extended to stylistic task-based dialogue generation, where both content preservation and style transfer are needed, potentially by disentangling politeness and content of the generated response and then only feeding the politeness portion to the classifier for RL training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Although we trained the politeness classifier to be binary, its outputs are probabilities ranging from 0.0 to 1.0. This allows us to interpret the outputs as continuous politeness scores.4 Note that the position of the label did not affect the results much (e.g.,Sennrich et al. (2016a) appended the label at the end of the input sequence). Moreover, our models use a bidirectional encoder, which does not distinguish between the beginning and end of the source sequence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For example, it is hard to be polite in answering questions like \"What's your name?\" (The most \"legitimate\" answer would be \"My name is XX.\", rather than \"Thanks for asking! My humble name is XX if you would allow me to say so.\")", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "However, to make the reward-based model capable of multitasking, one could also prepend various politeness labels to each of the context in the training set (thus generating several examples out of one context), and encourage the generated response to be consistent with the given label. We will explore this extension in future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We treat only responses in the higher, more-accurate percentile of [0.8, 1.0] range as polite (and [0.0, 0.2] range as rude).8 The 10 final polite sentences for Generic-10 are \"thanks.\", \"can you help?\", \"can you clarify?\", \"no problem.\", \"you're welcome.\", \"interesting question.\", \"thanks for the answer.\", \"could you help please?\", \"can you elaborate?\" and \"nice.\". The 2 rejected ones are \"what have you tried?\" and \"what do you think?\". This shortlist needed some human filtering because in the Stanford Politeness Corpus, each polite example consists of two sentences, and sometimes not both of them are polite, i.e., one of them could be neutral (more generic and taskbased).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We opted for dialogue quality rather than several separated, fine-grained metrics such as relevance, specificity, informativeness becauseLowe et al. (2017) found that little additional information was provided by adding in more metrics on top of overall dialogue quality, and it also confused MTurkers in many scenarios. We had similar observations in our initial human study on MTurk.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that this train/dev/test split is only for verifying the strength of the classification model. The classifier used for the three proposed polite-dialogue models was trained on the entire Stanford Politeness Corpus (due to the small amount of politeness-labeled data available).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We test stat. significance via the bootstrap test(Noreen, 1989;Efron and Tibshirani, 1994) with 100K samples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "These levels were defined by Landis andKoch (1977); also see https://en.wikipedia.org/wiki/Cohens_kappa 17 As discussed inWeijters et al. (2010),James et al. (1984), and https://en.wikipedia.org/wiki/Likert_ scale, the 'central tendency bias' makes raters avoid using the two extreme response categories.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the action editor and the anonymous reviewers for their helpful comments and discussions. This work was supported by DARPA (YFA17-D17AP00022), Facebook ParlAI Research Award, Google Faculty Research Award, Bloomberg Data Science Research Grant, and Nvidia GPU awards. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Luke, I am your father: Dealing with out-of-domain requests by using movies subtitles", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Ameixa", |
| "suffix": "" |
| }, |
| { |
| "first": "Luisa", |
| "middle": [], |
| "last": "Coheur", |
| "suffix": "" |
| }, |
| { |
| "first": "Pedro", |
| "middle": [], |
| "last": "Fialho", |
| "suffix": "" |
| }, |
| { |
| "first": "Paulo", |
| "middle": [], |
| "last": "Quaresma", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "International Conference on Intelligent Virtual Agents", |
| "volume": "", |
| "issue": "", |
| "pages": "13--21", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Ameixa, Luisa Coheur, Pedro Fialho, and Paulo Quaresma. 2014. Luke, I am your father: Dealing with out-of-domain requests by using movies subti- tles. In International Conference on Intelligent Virtual Agents, pages 13-21. Springer.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Inter-coder agreement for computational linguistics", |
| "authors": [ |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Artstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "4", |
| "pages": "555--596", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computa- tional Linguistics, 34(4):555-596.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Interpreting neural networks to improve politeness comprehension", |
| "authors": [ |
| { |
| "first": "Malika", |
| "middle": [], |
| "last": "Aubakirova", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2035--2041", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Malika Aubakirova and Mohit Bansal. 2016. Interpret- ing neural networks to improve politeness compre- hension. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2035-2041.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "1--15", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of In- ternational Conference on Learning Representations, pages 1-15.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Politeness: Some Universals in Language Usage", |
| "authors": [ |
| { |
| "first": "Penelope", |
| "middle": [], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "C" |
| ], |
| "last": "Levinson", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "", |
| "volume": "4", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Penelope Brown and Stephen C. Levinson. 1987. Polite- ness: Some Universals in Language Usage, volume 4. Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Buhrmester", |
| "suffix": "" |
| }, |
| { |
| "first": "Tracy", |
| "middle": [], |
| "last": "Kwang", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [ |
| "D" |
| ], |
| "last": "Gosling", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Psychological Science", |
| "volume": "6", |
| "issue": "1", |
| "pages": "3--5", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Buhrmester, Tracy Kwang, and Samuel D. Gosling. 2011. Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Per- spectives on Psychological Science, 6(1):3-5.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Weighted Kappa: Nominal scale agreement provision for scaled disagreement or partial credit", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| } |
| ], |
| "year": 1968, |
| "venue": "Psychological Bulletin", |
| "volume": "70", |
| "issue": "4", |
| "pages": "213--220", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Cohen. 1968. Weighted Kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213-220.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Natural language processing (almost) from scratch", |
| "authors": [ |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Karlen", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Kuksa", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2493--2537", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A computational approach to politeness with application to social factors", |
| "authors": [ |
| { |
| "first": "Cristian", |
| "middle": [], |
| "last": "Danescu-Niculescu-Mizil", |
| "suffix": "" |
| }, |
| { |
| "first": "Moritz", |
| "middle": [], |
| "last": "Sudhof", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Jure", |
| "middle": [], |
| "last": "Leskovec", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "250--259", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. A computational approach to politeness with appli- cation to social factors. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 250-259.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Multi-task learning for multiple language translation", |
| "authors": [ |
| { |
| "first": "Daxiang", |
| "middle": [], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hua", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Dianhai", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Haifeng", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1723--1732", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multi- ple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Pa- pers), pages 1723-1732.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "An Introduction to the Bootstrap", |
| "authors": [ |
| { |
| "first": "Bradley", |
| "middle": [], |
| "last": "Efron", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "J" |
| ], |
| "last": "Tibshirani", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bradley Efron and Robert J. Tibshirani. 1994. An Intro- duction to the Bootstrap. CRC press.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Controlling linguistic style aspects in neural language generation", |
| "authors": [ |
| { |
| "first": "Jessica", |
| "middle": [], |
| "last": "Ficler", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Workshop on Stylistic Variation", |
| "volume": "", |
| "issue": "", |
| "pages": "94--104", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pages 94-104.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Style transfer in text: Exploration and evaluation", |
| "authors": [ |
| { |
| "first": "Zhenxin", |
| "middle": [], |
| "last": "Fu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaoye", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nanyun", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Dongyan", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Yan", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)", |
| "volume": "", |
| "issue": "", |
| "pages": "663--670", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 663-670.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets", |
| "authors": [ |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Sordoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Yangfeng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Quirk", |
| "suffix": "" |
| }, |
| { |
| "first": "Margaret", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "445--450", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Process- ing (Short Papers), pages 445-450.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Image style transfer using convolutional neural networks", |
| "authors": [ |
| { |
| "first": "Leon", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gatys", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "S" |
| ], |
| "last": "Ecker", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Bethge", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "2414--2423", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2016. Image style transfer using convolutional neu- ral networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414-2423.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Affect-LM: A neural language model for customizable affective text generation", |
| "authors": [ |
| { |
| "first": "Sayan", |
| "middle": [], |
| "last": "Ghosh", |
| "suffix": "" |
| }, |
| { |
| "first": "Mathieu", |
| "middle": [], |
| "last": "Chollet", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Laksana", |
| "suffix": "" |
| }, |
| { |
| "first": "Louis-Philippe", |
| "middle": [], |
| "last": "Morency", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Scherer", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "634--642", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis- Philippe Morency, and Stefan Scherer. 2017. Affect- LM: A neural language model for customizable affec- tive text generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 634-642.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Understanding the difficulty of training deep feedforward neural networks", |
| "authors": [ |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Glorot", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS'10). Society for Artificial Intelligence and Statistics", |
| "volume": "", |
| "issue": "", |
| "pages": "249--256", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural net- works. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS'10). Society for Artificial Intelligence and Statistics, pages 249-256.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural Computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735- 1780.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Toward controlled generation of text", |
| "authors": [ |
| { |
| "first": "Zhiting", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zichao", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodan", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [ |
| "P" |
| ], |
| "last": "Xing", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 34th International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1587--1596", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning, PMLR 70, pages 1587-1596.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Estimating within-group interrater reliability with and without response bias", |
| "authors": [ |
| { |
| "first": "Lawrence", |
| "middle": [ |
| "R" |
| ], |
| "last": "James", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "G" |
| ], |
| "last": "Demaree", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerrit", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "Journal of Applied Psychology", |
| "volume": "69", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lawrence R. James, Robert G. Demaree, and Gerrit Wolf. 1984. Estimating within-group interrater reliability with and without response bias. Journal of Applied Psychology, 69(1):85.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Shakespearizing modern language using copy-enriched sequence-to-sequence models", |
| "authors": [ |
| { |
| "first": "Harsh", |
| "middle": [], |
| "last": "Jhamtani", |
| "suffix": "" |
| }, |
| { |
| "first": "Varun", |
| "middle": [], |
| "last": "Gangal", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Nyberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Workshop on Stylistic Variation", |
| "volume": "", |
| "issue": "", |
| "pages": "10--19", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Eric Nyberg. 2017. Shakespearizing modern language us- ing copy-enriched sequence-to-sequence models. In Proceedings of the Workshop on Stylistic Variation, pages 10-19.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics", |
| "authors": [ |
| { |
| "first": "Melvin", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Maxim", |
| "middle": [], |
| "last": "Krikun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yonghui", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhifeng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikhil", |
| "middle": [], |
| "last": "Thorat", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernanda", |
| "middle": [], |
| "last": "Vi\u00e9gas", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Wattenberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Macduff", |
| "middle": [], |
| "last": "Hughes", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "339--351", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the Asso- ciation for Computational Linguistics, v. 5, pages 339- 351.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Exploring the limits of language modeling", |
| "authors": [ |
| { |
| "first": "Rafal", |
| "middle": [], |
| "last": "Jozefowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Yonghui", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. CoRR abs/1602.02410.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Controlling output length in neural encoder-decoders", |
| "authors": [ |
| { |
| "first": "Yuta", |
| "middle": [], |
| "last": "Kikuchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryohei", |
| "middle": [], |
| "last": "Sasano", |
| "suffix": "" |
| }, |
| { |
| "first": "Hiroya", |
| "middle": [], |
| "last": "Takamura", |
| "suffix": "" |
| }, |
| { |
| "first": "Manabu", |
| "middle": [], |
| "last": "Okumura", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1328--1338", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Controlling output length in neural encoder-decoders. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1328-1338.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Learning to discover cross-domain relations with generative adversarial networks", |
| "authors": [ |
| { |
| "first": "Taeksoo", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Moonsu", |
| "middle": [], |
| "last": "Cha", |
| "suffix": "" |
| }, |
| { |
| "first": "Hyunsoo", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Jungkwon", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiwon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 34th International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1857--1865", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jungkwon Lee, and Jiwon Kim. 2017. Learning to discover cross-domain relations with generative adversarial net- works. In Proceedings of the 34th International Con- ference on Machine Learning, pages 1857-1865.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representa- tions.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Domain control for neural machine translation", |
| "authors": [ |
| { |
| "first": "Catherine", |
| "middle": [], |
| "last": "Kobus", |
| "suffix": "" |
| }, |
| { |
| "first": "Josep", |
| "middle": [], |
| "last": "Crego", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Senellart", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of Recent Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "372--378", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Catherine Kobus, Josep Crego, and Jean Senellart. 2017. Domain control for neural machine translation. In Proceedings of Recent Advances in Natural Language Processing, pages 372-378.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "The measurement of observer agreement for categorical data", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Landis", |
| "suffix": "" |
| }, |
| { |
| "first": "Gary", |
| "middle": [ |
| "G" |
| ], |
| "last": "Koch", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Biometrics", |
| "volume": "", |
| "issue": "", |
| "pages": "159--174", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Richard Landis and Gary G. Koch. 1977. The mea- surement of observer agreement for categorical data. Biometrics, pages 159-174.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Visualizing and understanding neural models in NLP", |
| "authors": [ |
| { |
| "first": "Jiwei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Xinlei", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of North American Chapter of the Association for Computational Linguistics-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "681--691", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016a. Visualizing and understanding neural models in NLP. In Proceedings of North American Chapter of the Association for Computational Linguistics-HLT, pages 681-691.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "A diversity-promoting objective function for neural conversation models", |
| "authors": [ |
| { |
| "first": "Jiwei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of North American Chapter of the Association for Computational Linguistics-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "110--119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016b. A diversity-promoting objec- tive function for neural conversation models. In Pro- ceedings of North American Chapter of the Associa- tion for Computational Linguistics-HLT, pages 110- 119.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "A persona-based neural conversation model", |
| "authors": [ |
| { |
| "first": "Jiwei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "994--1003", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016c. A persona-based neural con- versation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguis- tics, pages 994-1003.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "A technique for the measurement of attitudes. Archives of Psychology", |
| "authors": [ |
| { |
| "first": "Rensis", |
| "middle": [], |
| "last": "Likert", |
| "suffix": "" |
| } |
| ], |
| "year": 1932, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of Psychology.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Coupled generative adversarial networks", |
| "authors": [ |
| { |
| "first": "Ming-", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Oncel", |
| "middle": [], |
| "last": "Tuzel", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "469--477", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ming-Yu Liu and Oncel Tuzel. 2016. Coupled genera- tive adversarial networks. In Advances in Neural In- formation Processing Systems, pages 469-477.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation", |
| "authors": [ |
| { |
| "first": "Chia-Wei", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Iulian", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurent", |
| "middle": [], |
| "last": "Noseworthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Charlin", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2122--2132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Unsupervised image-to-image translation networks", |
| "authors": [ |
| { |
| "first": "Ming-Yu", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Breuel", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2122--2132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ming-Yu Liu, Thomas Breuel, and Jan Kautz. 2017. Unsupervised image-to-image translation networks. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2122-2132.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn dialogue systems", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| }, |
| { |
| "first": "Nissan", |
| "middle": [], |
| "last": "Pow", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Iulian", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2015)", |
| "volume": "", |
| "issue": "", |
| "pages": "285--294", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Lowe, Nissan Pow, Iulian V. Serban, and Joelle Pineau. 2015. The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn di- alogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2015), pages 285-294.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Towards an automatic turing test: Learning to evaluate dialogue responses", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Noseworthy", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Iulian", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Angelard-Gontier", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1116--1126", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Lowe, Michael Noseworthy, Iulian V. Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1116-1126.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Multi-task learning for speakerrole adaptation in neural conversation models", |
| "authors": [ |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Luan", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 8th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "605--614", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yi Luan, Chris Brockett, Bill Dolan, Jianfeng Gao, and Michel Galley. 2017. Multi-task learning for speaker- role adaptation in neural conversation models. In Pro- ceedings of the 8th International Joint Conference on Natural Language Processing, pages 605-614.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Multi-task sequence to sequence learning", |
| "authors": [ |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Lukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task se- quence to sequence learning. In Proceedings of In- ternational Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Personage: Personality generation for dialogue", |
| "authors": [ |
| { |
| "first": "Fran\u00e7ois", |
| "middle": [], |
| "last": "Mairesse", |
| "suffix": "" |
| }, |
| { |
| "first": "Marilyn", |
| "middle": [], |
| "last": "Walker", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "496--503", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fran\u00e7ois Mairesse and Marilyn Walker. 2007. Person- age: Personality generation for dialogue. In Proceed- ings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 496-503.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of International Conference on Learning Representations Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representa- tions in vector space. In Proceedings of International Conference on Learning Representations Workshop.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Sequence to better sequence: Continuous revision of combinatorial structures", |
| "authors": [ |
| { |
| "first": "Jonas", |
| "middle": [], |
| "last": "Mueller", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Gifford", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "2536--2544", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonas Mueller, David Gifford, and Tommi Jaakkola. 2017. Sequence to better sequence: Continuous revi- sion of combinatorial structures. In International Con- ference on Machine Learning, pages 2536-2544.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Computer-Intensive Methods for Testing Hypotheses", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [ |
| "W" |
| ], |
| "last": "Noreen", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses. Wiley New York.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "BLEU: A method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics, pages 311-318.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "A deep reinforced model for abstractive summarization", |
| "authors": [ |
| { |
| "first": "Romain", |
| "middle": [], |
| "last": "Paulus", |
| "suffix": "" |
| }, |
| { |
| "first": "Caiming", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive sum- marization. In Proceedings of International Confer- ence on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Using TF-IDF to determine word relevance in document queries", |
| "authors": [ |
| { |
| "first": "Juan", |
| "middle": [], |
| "last": "Ramos", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the First Instructional Conference on Machine Learning", |
| "volume": "242", |
| "issue": "", |
| "pages": "133--142", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Juan Ramos. 2003. Using TF-IDF to determine word relevance in document queries. In Proceedings of the First Instructional Conference on Machine Learning, volume 242, pages 133-142.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Sequence level training with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Aurelio", |
| "middle": [], |
| "last": "Marc", |
| "suffix": "" |
| }, |
| { |
| "first": "Sumit", |
| "middle": [], |
| "last": "Ranzato", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Chopra", |
| "suffix": "" |
| }, |
| { |
| "first": "Wojciech", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zaremba", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In Proceedings of In- ternational Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Self-critical sequence training for image captioning", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Steven", |
| "suffix": "" |
| }, |
| { |
| "first": "Etienne", |
| "middle": [], |
| "last": "Rennie", |
| "suffix": "" |
| }, |
| { |
| "first": "Youssef", |
| "middle": [], |
| "last": "Marcheret", |
| "suffix": "" |
| }, |
| { |
| "first": "Jarret", |
| "middle": [], |
| "last": "Mroueh", |
| "suffix": "" |
| }, |
| { |
| "first": "Vaibhava", |
| "middle": [], |
| "last": "Ross", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Goel", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In 2017 IEEE Conference on Computer Vision and Pattern Recogni- tion, page 1197.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Data-driven response generation in social media", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [ |
| "B" |
| ], |
| "last": "Dolan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "583--593", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 583-593.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Bidirectional recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Kuldip", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Paliwal", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "IEEE Transactions on Signal Processing", |
| "volume": "45", |
| "issue": "11", |
| "pages": "2673--2681", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Controlling politeness in neural machine translation via side constraints", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "35--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Controlling politeness in neural machine trans- lation via side constraints. In Proceedings of North American Chapter of the Association for Computa- tional Linguistics, pages 35-40.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Improving neural machine translation models with monolingual data", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "86--96", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 86-96.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "Building end-to-end dialogue systems using generative hierarchical neural network models", |
| "authors": [ |
| { |
| "first": "Iulian", |
| "middle": [], |
| "last": "Vlad Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Sordoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [ |
| "C" |
| ], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "The Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16)", |
| "volume": "", |
| "issue": "", |
| "pages": "3776--3784", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Build- ing end-to-end dialogue systems using generative hier- archical neural network models. In The Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16), pages 3776-3784.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Generating high-quality and informative conversation responses with sequence-to-sequence models", |
| "authors": [ |
| { |
| "first": "Yuanlong", |
| "middle": [], |
| "last": "Shao", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Gouws", |
| "suffix": "" |
| }, |
| { |
| "first": "Denny", |
| "middle": [], |
| "last": "Britz", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Goldie", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Strope", |
| "suffix": "" |
| }, |
| { |
| "first": "Ray", |
| "middle": [], |
| "last": "Kurzweil", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2210--2219", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuanlong Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Gen- erating high-quality and informative conversation re- sponses with sequence-to-sequence models. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 2210- 2219.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Style transfer from non-parallel text by cross-alignment", |
| "authors": [ |
| { |
| "first": "Tianxiao", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Lei", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "6833--6844", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Informa- tion Processing Systems, pages 6833-6844.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", |
| "authors": [ |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Simonyan", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Vedaldi", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Zisserman", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1312.6034" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karen Simonyan, Andrea Vedaldi, and Andrew Zisser- man. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Numerically grounded language models for semantic error correction", |
| "authors": [ |
| { |
| "first": "Georgios", |
| "middle": [ |
| "P" |
| ], |
| "last": "Spithourakis", |
| "suffix": "" |
| }, |
| { |
| "first": "Isabelle", |
| "middle": [], |
| "last": "Augenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "987--992", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Georgios P. Spithourakis, Isabelle Augenstein, and Se- bastian Riedel. 2016. Numerically grounded language models for semantic error correction. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 987-992.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Policy gradient methods for reinforcement learning with function approximation", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [ |
| "S" |
| ], |
| "last": "Sutton", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mcallester", |
| "suffix": "" |
| }, |
| { |
| "first": "Satinder", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Yishay", |
| "middle": [], |
| "last": "Mansour", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Advances in Neural Information Processing Systems 12", |
| "volume": "", |
| "issue": "", |
| "pages": "1057--1063", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard S. Sutton, David Mcallester, Satinder Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Sys- tems 12, pages 1057-1063. MIT Press.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Unsupervised cross-domain image generation", |
| "authors": [ |
| { |
| "first": "Yaniv", |
| "middle": [], |
| "last": "Taigman", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Polyak", |
| "suffix": "" |
| }, |
| { |
| "first": "Lior", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1611.02200" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yaniv Taigman, Adam Polyak, and Lior Wolf. 2016. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "Improving LSTM-based video description with linguistic knowledge mined from text", |
| "authors": [ |
| { |
| "first": "Subhashini", |
| "middle": [], |
| "last": "Venugopalan", |
| "suffix": "" |
| }, |
| { |
| "first": "Lisa", |
| "middle": [ |
| "Anne" |
| ], |
| "last": "Hendricks", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| }, |
| { |
| "first": "Kate", |
| "middle": [], |
| "last": "Saenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1961--1966", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Subhashini Venugopalan, Lisa Anne Hendricks, Ray- mond J. Mooney, and Kate Saenko. 2016. Improving LSTM-based video description with linguistic knowl- edge mined from text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1961-1966.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Steering output style and topic in neural response generation", |
| "authors": [ |
| { |
| "first": "Di", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Nebojsa", |
| "middle": [], |
| "last": "Jojic", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Nyberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2140--2150", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Di Wang, Nebojsa Jojic, Chris Brockett, and Eric Ny- berg. 2017. Steering output style and topic in neural response generation. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, pages 2140-2150.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "The effect of rating scale format on response styles: The number of response categories and response category labels", |
| "authors": [ |
| { |
| "first": "Bert", |
| "middle": [], |
| "last": "Weijters", |
| "suffix": "" |
| }, |
| { |
| "first": "Elke", |
| "middle": [], |
| "last": "Cabooter", |
| "suffix": "" |
| }, |
| { |
| "first": "Niels", |
| "middle": [], |
| "last": "Schillewaert", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "International Journal of Research in Marketing", |
| "volume": "27", |
| "issue": "3", |
| "pages": "236--247", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bert Weijters, Elke Cabooter, and Niels Schillewaert. 2010. The effect of rating scale format on response styles: The number of response categories and re- sponse category labels. International Journal of Re- search in Marketing, 27(3):236-247.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning", |
| "authors": [ |
| { |
| "first": "Ronald", |
| "middle": [ |
| "J" |
| ], |
| "last": "Williams", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Reinforcement Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "5--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforcement learning. In Reinforcement Learning, pages 5-32. Springer.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "Paraphrasing for style", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 24th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "2899--2914", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In Proceedings of the 24th International Conference on Computational Linguistics, pages 2899-2914.", |
| "links": null |
| }, |
| "BIBREF64": { |
| "ref_id": "b64", |
| "title": "Controlling the voice of a sentence in Japanese-to-English neural machine translation", |
| "authors": [ |
| { |
| "first": "Hayahide", |
| "middle": [], |
| "last": "Yamagishi", |
| "suffix": "" |
| }, |
| { |
| "first": "Shin", |
| "middle": [], |
| "last": "Kanouchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Takayuki", |
| "middle": [], |
| "last": "Sato", |
| "suffix": "" |
| }, |
| { |
| "first": "Mamoru", |
| "middle": [], |
| "last": "Komachi", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 3rd Workshop on Asian Translation (WAT2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "203--210", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hayahide Yamagishi, Shin Kanouchi, Takayuki Sato, and Mamoru Komachi. 2016. Controlling the voice of a sentence in Japanese-to-English neural machine trans- lation. In Proceedings of the 3rd Workshop on Asian Translation (WAT2016), pages 203-210.", |
| "links": null |
| }, |
| "BIBREF65": { |
| "ref_id": "b65", |
| "title": "DualGAN: Unsupervised dual learning for image-toimage translation", |
| "authors": [ |
| { |
| "first": "Zili", |
| "middle": [], |
| "last": "Yi", |
| "suffix": "" |
| }, |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ping", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Minglun", |
| "middle": [], |
| "last": "Gong", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of International Conference on Computer Vision", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. 2017. DualGAN: Unsupervised dual learning for image-to- image translation. In Proceedings of International Conference on Computer Vision.", |
| "links": null |
| }, |
| "BIBREF66": { |
| "ref_id": "b66", |
| "title": "Learning discourse-level diversity for neural dialog models using conditional variational autoencoders", |
| "authors": [ |
| { |
| "first": "Tiancheng", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ran", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Maxine", |
| "middle": [], |
| "last": "Eskenazi", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "654--664", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1: Long Papers), pages 654-664.", |
| "links": null |
| }, |
| "BIBREF67": { |
| "ref_id": "b67", |
| "title": "A C-LSTM neural network for text classification", |
| "authors": [ |
| { |
| "first": "Chunting", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Chonglin", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiyuan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [], |
| "last": "Lau", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1511.08630" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Francis Lau. 2015. A C-LSTM neural network for text classi- fication. arXiv preprint arXiv:1511.08630.", |
| "links": null |
| }, |
| "BIBREF68": { |
| "ref_id": "b68", |
| "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", |
| "authors": [ |
| { |
| "first": "Jun-Yan", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Taesung", |
| "middle": [], |
| "last": "Park", |
| "suffix": "" |
| }, |
| { |
| "first": "Phillip", |
| "middle": [], |
| "last": "Isola", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexei", |
| "middle": [ |
| "A" |
| ], |
| "last": "Efros", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of International Conference on Computer Vision", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired image-to-image translation us- ing cycle-consistent adversarial networks. In Proceed- ings of International Conference on Computer Vision.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Our LSTM-CNN politeness classifier.", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Saliency heatmaps of the classifier's attention (reward for sampled responses in Polite-RL model).", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF3": { |
| "text": "PPL, WER results computed on {U 1 , U 2 , U 3 } and PPL@L, WER@L computed on {U 3 } conditioned on {U 1 , U 2 }. Lower is better for all metrics. Top results are boldfaced.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF5": { |
| "text": "Discrete-", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Polite Neutral Rude</td><td>Continuous-LFT Discrete-LFT 3.70 3.52 3.15 3.09 1.19 2.93</td></tr></table>", |
| "html": null |
| }, |
| "TABREF7": { |
| "text": "Analysis of politeness classifier: sampled responses from MovieTriples dialogue corpus (without cherry-picking) and their politeness score. The double line separates polite and rude classified responses.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF8": { |
| "text": "InputX: for what ? Y: for forcing me to face my life honestly for the first time .RetrievalX: what 's the c . w . for ? Generic-10 X: thanks for the answer .S2SX: that 's not true . Fusion X: i do n't know . I mean , it 's just a feeling , but that 's not what i 'm saying .", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>LFT RL Input</td><td>X: you 'll be fine . X: i wish i could . X: no, but i wanted to. Y: are you a total lying , scumsucking pig ,</td></tr><tr><td colspan=\"2\">person ? X: so person , what did it total out at ? Generic-10 X: no problem . Retrieval S2S X: oh , that 's ridiculous . Fusion X: well , you 're not a little funny . LFT X: i do n't think so , sir . X: you 're hurting me ! RL X: you 're sweet to say so . Input Y: pretty song . Retrieval X: is there a song you do n't know ? Generic-10 X: can you clarify ? S2S X: person , i know . Fusion X: you 're a good man . LFT X: i love this song . X: you sound like a goddess . RL X: i find it hard to believe myself . Input Y: his lips , unused as they are to such a</td></tr><tr><td/><td>contortion , twisted and stretched . into a</td></tr><tr><td colspan=\"2\">smile . X: thank you . and always smile . Generic-10 X: nice . Retrieval S2S X: how can you be so sure ? Fusion X: you 're a good man , mr . LFT X: your lips are well . RL X: your lips are so beautiful .</td></tr></table>", |
| "html": null |
| }, |
| "TABREF9": { |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null |
| } |
| } |
| } |
| } |