ACL-OCL / Base_JSON /prefixN /json /nlp4convai /2020.nlp4convai-1.14.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:54:59.707110Z"
},
"title": "Sketch-Fill-A-R: A Persona-Grounded Chit-Chat Generation Framework",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Shum",
"suffix": "",
"affiliation": {},
"email": "mshum@mit.edu"
},
{
"first": "Stephan",
"middle": [],
"last": "Zheng",
"suffix": "",
"affiliation": {},
"email": "stephan.zheng@salesforce.com"
},
{
"first": "Wojciech",
"middle": [],
"last": "Kry\u015bci\u0144ski",
"suffix": "",
"affiliation": {},
"email": "kryscinski@salesforce.com"
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": "",
"affiliation": {},
"email": "cxiong@salesforce.com"
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": "",
"affiliation": {},
"email": "rsocher@salesforce.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Human-like chitchat conversation requires agents to generate responses that are fluent, engaging and consistent. We propose Sketch-Fill-A-R, a framework that uses a personamemory to generate chitchat responses in three phases. First, it generates dynamic sketch responses with open slots. Second, it generates candidate responses by filling slots with parts of its stored persona traits. Lastly, it ranks and selects the final response via a language model score. Sketch-Fill-A-R outperforms a state-of-the-art baseline both quantitatively (10-point lower perplexity) and qualitatively (preferred by 55% in head-to-head single-turn studies and 20% higher in consistency in multi-turn user studies) on the Persona-Chat dataset. Finally, we extensively analyze Sketch-Fill-A-R's responses and human feedback, and show it is more consistent and engaging by using more relevant responses and questions. * Work done as an intern at Salesforce Research. Bot Persona traits: 1. i love to drink wine and dance in the moonlight 2. i am very strong for my age 3. i'm 100 years old 4. i feel like i might live forever hi , how are you doing today ? Human user Bot good ! you ? celebrating with fellow centenarians Bot sketch response that is great ! i love to @persona and @persona. Human user nice. i'm playing some card games with my family. Bot selected response with lowest language model perplexity that is great ! i love to drink and dance. [LM-score = 1.24]",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Human-like chitchat conversation requires agents to generate responses that are fluent, engaging and consistent. We propose Sketch-Fill-A-R, a framework that uses a personamemory to generate chitchat responses in three phases. First, it generates dynamic sketch responses with open slots. Second, it generates candidate responses by filling slots with parts of its stored persona traits. Lastly, it ranks and selects the final response via a language model score. Sketch-Fill-A-R outperforms a state-of-the-art baseline both quantitatively (10-point lower perplexity) and qualitatively (preferred by 55% in head-to-head single-turn studies and 20% higher in consistency in multi-turn user studies) on the Persona-Chat dataset. Finally, we extensively analyze Sketch-Fill-A-R's responses and human feedback, and show it is more consistent and engaging by using more relevant responses and questions. * Work done as an intern at Salesforce Research. Bot Persona traits: 1. i love to drink wine and dance in the moonlight 2. i am very strong for my age 3. i'm 100 years old 4. i feel like i might live forever hi , how are you doing today ? Human user Bot good ! you ? celebrating with fellow centenarians Bot sketch response that is great ! i love to @persona and @persona. Human user nice. i'm playing some card games with my family. Bot selected response with lowest language model perplexity that is great ! i love to drink and dance. [LM-score = 1.24]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Chit-chat is a rich domain that challenges machine learning models to express fluent natural language and to successfully interact with other agents. Chitchat stands in contrast to goal-oriented dialogue, such as when a customer has the explicit goal of booking a flight ticket. When agents communicate, they each have internal state (e.g., their knowledge, intent) and typically have limited knowledge of the state of other agents (Chen et al., 2017) . As a result, human-like chit-chat requires agents to be fluent, engaging and consistent with what has been said and their persona (Zhang et al., 2018) .",
"cite_spans": [
{
"start": 432,
"end": 451,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 584,
"end": 604,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These requirements make learning generative chit-chat models a complex task. First, given an existing conversation history, there may be a large number of valid responses (Vinyals and Le, 2015) . Hence, supervised learning of chit-chat models that cover a large number of topics and styles requires a significant amount of data (Zhou et al., 2018) . Second, as conversations progress and more opportunities for contradiction arise, maintaining consistency becomes more difficult (Serban et al., , 2017 . Third, engaging chit-chat responses follow conversational structures that are not captured well by perplexity (Dinan et al., 2019; Liu et al., 2016) . Indeed, our human user studies show that both consistency and engagingness are only weakly correlated with perplexity, and fluency is not at all.",
"cite_spans": [
{
"start": 171,
"end": 193,
"text": "(Vinyals and Le, 2015)",
"ref_id": "BIBREF30"
},
{
"start": 328,
"end": 347,
"text": "(Zhou et al., 2018)",
"ref_id": "BIBREF38"
},
{
"start": 479,
"end": 501,
"text": "(Serban et al., , 2017",
"ref_id": "BIBREF23"
},
{
"start": 614,
"end": 634,
"text": "(Dinan et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 635,
"end": 652,
"text": "Liu et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose Sketch-Fill-A-R, a dialogue agent framework that can learn to generate fluent, consistent and engaging chit-chat responses. Our key motivation is the hypothesis that human-like chit-chat responses often 1) follow common conversational patterns with insertions of agent-specific traits, and 2) condition explicitly on those persona traits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sketch-Fill-A-R decomposes response generation into three phases: sketching, filling and ranking, see Figure 1 . First, Sketch-Fill-A-R dynamically generates a sketch response with slots, which enables it to learn response patterns that are compatible with many specific persona traits. Second, it generates candidate responses by filling in slots with words stored in memory. This enables Sketch-Fill-A-R's responses to adhere to its persona. Third, the candidate responses are ranked by perplexity under a pre-trained language model (LM), which encourages the final response (with lowest LM perplexity) to be fluent.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
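The fill phase (phase two) can be illustrated in a few lines of Python. This is a toy sketch using the example dialogue from Figure 1, not the authors' implementation; the function and variable names are ours, and the sketch string is hard-coded where the paper would produce it with the learned sketch decoder:

```python
from itertools import product

def fill_sketch(sketch, persona_words):
    """Fill each @persona slot with every combination of persona rare words."""
    slots = sketch.count("@persona")
    candidates = []
    for combo in product(persona_words, repeat=slots):
        words = iter(combo)
        filled = " ".join(next(words) if tok == "@persona" else tok
                          for tok in sketch.split())
        candidates.append(filled)
    return candidates

# Phase-1 output (hard-coded here; the paper generates it with the sketch decoder).
sketch = "that is great ! i love to @persona and @persona ."
persona_words = ["wine", "dance", "moonlight"]  # rare words from the persona traits
candidates = fill_sketch(sketch, persona_words)
```

Phase three then scores every candidate with a pre-trained language model and keeps the lowest-perplexity string.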
{
"text": "In sum, our contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We describe Sketch-Fill-A-R and how its multi-phase generation process encourages fluency, consistency and engagingness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that Sketch-Fill-A-R significantly improves hold-out perplexity by \u223c 10 points on the Persona-Chat dataset over state-ofthe-art baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show Sketch-Fill-A-R is rated higher on conversational metrics and preferred over baselines in single and multi-turn user studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We extensively analyze Sketch-Fill-A-R's response statistics and human feedback, and show that it is more consistent by using a narrower set of responses, and more engaging, by asking more questions than baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Chit-chat Dialogue Dialogue agents such as Amazon Alexa, Apple Siri, and Google Home are commonplace today, and are mainly task-oriented: they help users achieve specific tasks. On the other hand, Microsoft XiaoIce (Zhou et al., 2018) is an example of an undirected chit-chat dialogue agent. Historically task-oriented dialogue systems are composed via components such as dialogue state tracking and natural language generation (Jurafsky and Martin, 2009) . Even now, the natural language generation component often uses handcrafted templates and rules defined by domain experts that are filled via heuristics (Gao et al., 2019) . More recently task-oriented dialogue systems have been trained end-to-end (Bordes et al., 2016) , but these systems have specific user intents they aim to fulfill, and so represent a more constrained task. Early conversational dialogue systems such as ELIZA (Weizenbaum et al., 1966) and Alice (Wallace, 2009) were also based on hand-crafted rules and thus brittle. To alleviate this rigidity, more recent neural seq2seq models (Sutskever et al., 2014) are trained end-to-end (Vinyals and Le, 2015; Sordoni et al., 2015; Serban et al., 2017; Li et al., 2016) . To help guide conversation (Ghazvininejad et al., 2018; Gopalakrishnan et al., 2019) incorporated knowledge-grounded datasets, while (Zhang et al., 2018) created the Persona-Chat dataset used in this work. Sketch-Fill-A-R dynamically generates slot sketches and bears resemblance to (Wu et al., 2019) which assumed data are structured domain-specific triplets and contexts follow templates. However, Sketch-Fill-A-R does not assume the personas and responses have rigid syntactic structure, and introduces a ranking procedure. Converse to our sketch-and-fill procedure, (Qian et al., 2017) train a model to select a persona trait and decode around the trait. Finally, (Welleck et al., 2018 ) also re-rank by scoring utterances with Natural Language Inference to improve consistency.",
"cite_spans": [
{
"start": 215,
"end": 234,
"text": "(Zhou et al., 2018)",
"ref_id": "BIBREF38"
},
{
"start": 428,
"end": 455,
"text": "(Jurafsky and Martin, 2009)",
"ref_id": "BIBREF13"
},
{
"start": 610,
"end": 628,
"text": "(Gao et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 705,
"end": 726,
"text": "(Bordes et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 889,
"end": 914,
"text": "(Weizenbaum et al., 1966)",
"ref_id": "BIBREF32"
},
{
"start": 919,
"end": 940,
"text": "Alice (Wallace, 2009)",
"ref_id": null
},
{
"start": 1059,
"end": 1083,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF28"
},
{
"start": 1107,
"end": 1129,
"text": "(Vinyals and Le, 2015;",
"ref_id": "BIBREF30"
},
{
"start": 1130,
"end": 1151,
"text": "Sordoni et al., 2015;",
"ref_id": null
},
{
"start": 1152,
"end": 1172,
"text": "Serban et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 1173,
"end": 1189,
"text": "Li et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 1219,
"end": 1247,
"text": "(Ghazvininejad et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 1248,
"end": 1276,
"text": "Gopalakrishnan et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 1325,
"end": 1345,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF37"
},
{
"start": 1475,
"end": 1492,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 1762,
"end": 1781,
"text": "(Qian et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 1860,
"end": 1881,
"text": "(Welleck et al., 2018",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Neural Sequence Models Sketch-Fill-A-R extends a neural encoder-decoder structure (Sutskever et al., 2014) but is agnostic to the chosen form of encoder-decoder. In this work we use recurrent models and attention , which auto-regressively embed and generate sequences, but our framework is general, allowing non-recurrent encoders and decoders like Transformer networks with non-recurrent self-attention (Vaswani et al., 2017; Devlin et al., 2018) to be substituted for the recurrent encoder and decoder.",
"cite_spans": [
{
"start": 82,
"end": 106,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF28"
},
{
"start": 404,
"end": 426,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 427,
"end": 447,
"text": "Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Sketch-Fill-A-R uses a simple memory module to store words from personas, which act as context for generation. Weston et al. (2014) ; Sukhbaatar et al. (2015) introduced learned Key-Value Memory Networks, while Kumar et al. (2016) introduced Dynamic Memory Nets for question-answering via an iterative attention over memory. Also, Sketch-Fill-A-R decodes responses using a re-ranking strategy based on language model scores, which complements strategies in (Kulikov et al., 2018) .",
"cite_spans": [
{
"start": 111,
"end": 131,
"text": "Weston et al. (2014)",
"ref_id": null
},
{
"start": 134,
"end": 158,
"text": "Sukhbaatar et al. (2015)",
"ref_id": "BIBREF27"
},
{
"start": 211,
"end": 230,
"text": "Kumar et al. (2016)",
"ref_id": "BIBREF15"
},
{
"start": 457,
"end": 479,
"text": "(Kulikov et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our key motivation is to generate human-like chitchat responses that are conditioned on personarelevant information. Sketch-Fill-A-R generates chit-chat using a persona-memory to dynamically generate sketches that capture conversational patterns, and inserting persona-relevant information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "To set notation: capitals W, V, . . . denote matrices, i, j, k are vector-matrix indices and x, y, . . . denote vectors. The model input at time t is x t and t . It stores word-embeddings for selected rare words from persona traits in a personamemory. The final encoder hidden state h e,c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "T produces a read-out vector h mem . Lastly, the decoder outputs a sketch response with @persona slots using h mem , encoder hidden states and attention over personas and conversation. During inference, Sketch-Fill-A-R uses beam search to generate B sketch responses (step 1, depicted B = 2). In step 2, each beam with @persona slots produces candidate responses by filling it with rare words selected from the persona traits. Finally, a language model scores each candidate and Sketch-Fill-A-R returns the lowest-perplexity candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "the output at time u is y u . We denote the conversation by x c t and persona trait words by x p t . Both input and output words x t , y u \u2208 {0, 1} V are 1-hot vectors, where V denotes the vocabulary size. The vocabulary contains all unique words, punctuation and special symbols (e.g., EOS, @persona). x 0:T denotes a sequence (x 0 , . . . , x T ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "Formally, we aim to learn a response generation model that predicts words y u using a probability distribution P (y 0:U |x 0:T ; \u03b8) over sequences of T words and N persona traits with R rare words. Here U is the output sequence length and \u03b8 are the model weights. We use deep neural networks, a model class that has recently seen success in language generation tasks ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "Sketch-Fill-A-R uses several components to generate sketch responses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "\u2022 An encoder h e 0:T = Enc (x 0:T ; \u03b8) that computes hidden representations e t of the input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "\u2022 A memory module h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "mem = Mem (x 0:R ; \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "that stores all rare words from persona traits (constructed by removing stop words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "\u2022 A language model P LM (x t+1 |x 0:t ; \u03b8) that computes a distribution over next words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "\u2022 A sketch decoder",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "h d 0:U = Dec h e 0:T , h mem ; \u03b8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "that synthesizes both the encoded input and memory readouts, and predicts the next word y u in the sketch response.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch-Fill-A-R",
"sec_num": "3"
},
{
"text": "Encoder We instantiate both encoder and decoder using recurrent neural networks. In this work, we use LSTMs (Hochreiter and Schmidhuber, 1997 ), although other choices are possible (Elman, 1990; . The encoder computes hidden states h 0:T \u2208 R d hid auto-regressively:",
"cite_spans": [
{
"start": 108,
"end": 141,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF12"
},
{
"start": 181,
"end": 194,
"text": "(Elman, 1990;",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h e t+1 = LSTM (e(x t ), h e t ; \u03b8) ,",
"eq_num": "(1)"
}
],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "where e(x t ) are word-embedding representations of the raw input tokens x t . For relevant context in decoding, Sketch-Fill-A-R encodes both conversation history x c 0:T and individual persona traits Memory Module Sketch-Fill-A-R selects a subset of rare words, x p r from the persona traits by removing stop-words, punctuation, and other symbols. After encoding the input dialogue, Sketch-Fill-A-R does a memory readout using the final conversation encoder hidden state h conv T as a query:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h mem = h conv T + o,",
"eq_num": "(2)"
}
],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o = r p r x p r C 2 ,",
"eq_num": "(3)"
}
],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p r = \u03c3(((h conv T ) T x p r C 1 ))",
"eq_num": "(4)"
}
],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "where r is a vector index over the rare word memory, \u03c3 is a softmax activation function creating attention weights p i \u2208 R d hid , and C k are trainable embedding matrices where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "C k \u2208 R V \u00d7d hid .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
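Because x^p_r is 1-hot, the product x^p_r C_k simply selects row r of C_k, so Equations 2-4 amount to one round of dot-product attention over the rare-word memory. A minimal NumPy sketch (the dimensions, indices, and random initialization are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
V, d_hid, R = 50, 8, 5                     # vocab size, hidden size, rare-word count

C1 = rng.normal(size=(V, d_hid))           # trainable key embeddings
C2 = rng.normal(size=(V, d_hid))           # trainable value embeddings
rare_ids = np.array([3, 10, 17, 24, 31])   # vocabulary indices of rare persona words

def memory_readout(h_conv_T, rare_ids, C1, C2):
    keys = C1[rare_ids]            # x^p_r C1 for each rare word r: (R, d_hid)
    values = C2[rare_ids]          # x^p_r C2: (R, d_hid)
    logits = keys @ h_conv_T       # (h^conv_T)^T x^p_r C1, as in Eq. (4)
    p = np.exp(logits - logits.max())
    p /= p.sum()                   # softmax attention weights over the R words
    o = p @ values                 # weighted value sum, Eq. (3)
    return h_conv_T + o, p         # residual readout, Eq. (2)

h_conv_T = rng.normal(size=d_hid)  # final conversation encoder state (the query)
h_mem, p = memory_readout(h_conv_T, rare_ids, C1, C2)
```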
{
"text": "Attention Decoder The decoder is an LSTM which recursively computes hidden states h d u that are mapped into a distribution over output words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h d u+1 = LSTM y u , h d u ; \u03b8 ,",
"eq_num": "(5)"
}
],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h d 0 = f W d [h e T , h mem ] + b d ,",
"eq_num": "(6)"
}
],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y u+1 |x 0:T , y 0:u ) = \u03c3(c u W emb ).",
"eq_num": "(7)"
}
],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "At decoding time u + 1 the decoder computes the next hidden state h d u+1 using the previous predicted word y u and decoder hidden state h d u , in addition to attention over the context of the response (the previous utterances and the agent's persona traits).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch Response Generation",
"sec_num": "3.1"
},
{
"text": "T , h mem ] down to size d hid and uses it as the initial hidden state of the decoder. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The decoder projects [h e",
"sec_num": null
},
{
"text": "W emb \u2208 R d hid",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The decoder projects [h e",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c u = f W ac [h d u , c conv u , c pers u ] + b ac ,",
"eq_num": "(8)"
}
],
"section": "The decoder projects [h e",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c conv u = \u03c3( W a [h d u ] + b a , h conv 0:T ), h conv 0:T , (9) c pers u = \u03c3( W a [h d u ] + b a , h pers 0:N ), h pers 0:N ,",
"eq_num": "(10)"
}
],
"section": "The decoder projects [h e",
"sec_num": null
},
{
"text": "where f is a tanh, W ac \u2208 R 3 * d hid \u00d7d hid , W a \u2208 R d hid \u00d7d hid and \u03c3 is the softmax activation function. In Equations 9 and 10 the softmax is over the encoder time dimension and \u2022, \u2022 is an inner product.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The decoder projects [h e",
"sec_num": null
},
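Equations 8-10 describe two standard attention reads (over conversation states and over persona states) fused with the decoder state through a tanh layer. A NumPy sketch under assumed toy dimensions (the shapes and random weights are ours, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, N = 8, 6, 4                          # hidden size, conv. length, #personas

W_a  = rng.normal(size=(d, d)); b_a  = rng.normal(size=d)
W_ac = rng.normal(size=(3 * d, d)); b_ac = rng.normal(size=d)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(h_dec, H):
    """Eqs. (9)-(10): score each encoder state, softmax over time, weighted sum."""
    scores = H @ (W_a @ h_dec + b_a)       # <W_a h^d_u + b_a, h_t> for each t
    return softmax(scores) @ H             # attention-weighted context vector

def context(h_dec, H_conv, H_pers):
    """Eq. (8): fuse decoder state with both contexts through f = tanh."""
    c_conv, c_pers = attend(h_dec, H_conv), attend(h_dec, H_pers)
    return np.tanh(np.concatenate([h_dec, c_conv, c_pers]) @ W_ac + b_ac)

h_dec = rng.normal(size=d)
c_u = context(h_dec, rng.normal(size=(T, d)), rng.normal(size=(N, d)))
```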
{
"text": "Sketch-Fill-A-R trains the sketch-decoder outputs (Equation 7) by minimizing cross-entropy loss with ground truths y * u . However, during inference, Sketch-Fill-A-R uses an iterative generateand-score approach to produce the final response:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Reranking Strategy",
"sec_num": "3.2"
},
{
"text": "1. Perform beam search with beam size B to generate B sketch responses \u0177 b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Reranking Strategy",
"sec_num": "3.2"
},
{
"text": "0:U x b b=1,...,B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Reranking Strategy",
"sec_num": "3.2"
},
{
"text": "that may contain @persona tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Reranking Strategy",
"sec_num": "3.2"
},
{
"text": "2. For each sketch with tags, select the persona i * with the highest attention weight w u * ,i * (h c T ) from the first sketch tag location u * , and construct B candidate responses by filling each @persona slot with words selected from i * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Reranking Strategy",
"sec_num": "3.2"
},
{
"text": "3. Compute the perplexity s b of all B candidate responses using a pre-trained language model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Reranking Strategy",
"sec_num": "3.2"
},
{
"text": "s k = exp 1 T b T b t=0 \u2212 log P LM y b u |y b 0:u\u22121 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Reranking Strategy",
"sec_num": "3.2"
},
{
"text": "4. Choose response b * = min b s b with the lowest LM-likelihood score as the final response.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Reranking Strategy",
"sec_num": "3.2"
},
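Steps 3-4 reduce to an arg-min over exponentiated average negative log-likelihoods. A self-contained illustration in Python; the candidate strings and per-token log-probabilities below are invented stand-ins for what a real pre-trained LM would supply:

```python
import math

def lm_score(token_logprobs):
    """Perplexity of one candidate: exp of the mean negative log-likelihood."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log P_LM(y_u | y_{0:u-1}) for three filled candidates.
candidates = {
    "i love to drink and dance .": [-0.1, -0.3, -0.2, -0.4, -0.2, -0.3, -0.1],
    "i love to wine and wine .":   [-0.1, -0.3, -0.2, -2.5, -0.2, -2.7, -0.1],
    "i love to age and forever .": [-0.1, -0.3, -0.2, -3.1, -0.2, -3.3, -0.1],
}
scores = {resp: lm_score(lps) for resp, lps in candidates.items()}
best = min(scores, key=scores.get)   # step 4: arg min_b s_b
```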
{
"text": "For Sketch-Fill variants that do not use reranking to fill slots, we follow the methodology of (Wu et al., 2019) in using a memory pointer network in order to fill slots. For detail, see the Appendix.",
"cite_spans": [
{
"start": 95,
"end": 112,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference Reranking Strategy",
"sec_num": "3.2"
},
{
"text": "To validate Sketch-Fill-A-R, we first show that it achieves better supervised learning performance than baselines on a chit-chat dialogue dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Validation",
"sec_num": "4"
},
{
"text": "Persona-Chat Dataset We trained Sketch-Fill-A-R to generate single-turn agent responses on the Persona-Chat dataset (Zhang et al., 2018) , which contains 10,907 dialogues. Here, a dialogue consists of multiple turns: a single turn contains the utterance of a single agent. We processed this dataset into training examples that each consist of the conversation history x c t , set of persona traits x p t of the model, and the ground truth sketch response y u . This process yielded 131,438 training examples. Rare words were identified by removing all punctuation and stop words from the set of persona traits (see Appendix for more information). Ground truth sketch responses were then constructed by replacing all rare word instances in ground truth responses with @persona tags.",
"cite_spans": [
{
"start": 116,
"end": 136,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Validation",
"sec_num": "4"
},
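The preprocessing above (mask persona rare words in a ground-truth response to obtain the sketch target) can be sketched as follows. The stop-word list here is a toy assumption; the paper's actual list is in its Appendix:

```python
STOP_WORDS = {"i", "to", "and", "in", "the", "am", "my", "for", "love", "very"}  # toy list

def rare_words(persona_traits):
    """Rare words: persona tokens minus stop words and punctuation."""
    toks = {t for trait in persona_traits for t in trait.split()}
    return {t for t in toks if t.isalpha() and t not in STOP_WORDS}

def to_sketch(response, rare):
    """Replace every rare-word occurrence with the @persona tag."""
    return " ".join("@persona" if t in rare else t for t in response.split())

persona = ["i love to drink wine and dance in the moonlight",
           "i am very strong for my age"]
rare = rare_words(persona)
sketch = to_sketch("that is great ! i love to drink and dance .", rare)
```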
{
"text": "Language Model Pre-training Sketch-Fill-A-R uses a Transformer-based GPT (Radford et al., 2018) pre-trained on the Books text corpus (Zhu et al., 2015) to rank candidate responses with filled @persona slots according to their LM-perplexity scores. For model details, see the Appendix. (Zhang et al., 2018) showed not only that models trained on Persona-Chat outperform models trained on other dialogue datasets (movies, Twitter) in engagingness but also that KVMemNet outperforms vanilla Seq2Seq on Persona-Chat. As a result we omit comparison with Seq2Seq. KVMemNet is the strongest of the few public baselines available to compare against on chitchat with personas. All Sketch-Fill-A-R models use language model reranking (see Section 3.2). All input tokens x c t , x p",
"cite_spans": [
{
"start": 73,
"end": 95,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 133,
"end": 151,
"text": "(Zhu et al., 2015)",
"ref_id": "BIBREF39"
},
{
"start": 285,
"end": 305,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Validation",
"sec_num": "4"
},
{
"text": "All models were trained by minimizing loss on the ground truth sketch response y * 0:U :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Validation",
"sec_num": "4"
},
{
"text": "min \u03b8 \u2212 U u=0 y * u , log P (y u |x 0:T , y 0:u\u22121 ; \u03b8) . (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Validation",
"sec_num": "4"
},
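Equation (11) is standard per-token cross-entropy: with 1-hot targets y*_u, the inner product simply picks out the log-probability of the ground-truth token. A NumPy sketch (the toy distributions and vocabulary size are invented for the example):

```python
import numpy as np

def sketch_nll(pred_probs, target_ids):
    """-sum_u <y*_u, log P(y_u | ...)>: NLL of the ground-truth sketch tokens."""
    rows = np.arange(len(target_ids))
    return -np.log(pred_probs[rows, target_ids]).sum()

# Toy setup: vocabulary of 4 tokens, a 3-token ground-truth sketch.
pred = np.array([[0.7, 0.1, 0.1, 0.1],
                 [0.1, 0.8, 0.05, 0.05],
                 [0.25, 0.25, 0.25, 0.25]])
targets = np.array([0, 1, 3])
loss = sketch_nll(pred, targets)   # -(log 0.7 + log 0.8 + log 0.25)
```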
{
"text": "For training details, see the Appendix. The results are shown in Table 1 . Sketch-Fill models outperform KVMemNet on validation perplexity, while using significantly fewer weights than KVMemNet. This suggests the structure of Sketch-Fill models fits well with chit-chat dialogue.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Empirical Validation",
"sec_num": "4"
},
{
"text": "Although Sketch-Fill models perform well quantitatively, a crucial test is to evaluate how well they perform when judged by human users on conversational quality, which is not explicitly captured by perplexity. We performed single and multi-turn dialogue user studies to assess the quality of Sketch-Fill-A-R, rated along several dimensions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Study and Qualitative Analysis",
"sec_num": "5"
},
{
"text": "\u2022 Fluency: whether responses are grammatically correct and sound natural.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Study and Qualitative Analysis",
"sec_num": "5"
},
{
"text": "\u2022 Consistency: whether responses do not contradict the previous conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Study and Qualitative Analysis",
"sec_num": "5"
},
{
"text": "\u2022 Engagingness: how well responses fit the previous conversation and how likely the conversation would continue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Study and Qualitative Analysis",
"sec_num": "5"
},
{
"text": "Our definition of engagingness includes relevance, defined in pragmatics and relevance theory (Wilson and Sperber, 2002; Grice, 1991) as a statement leading to positive cognitive effect. However an engaging statement may be ironic (Sperber and Wilson, 1981) , humorous, or specific to individuals. We also explore which qualities of Sketch-Fill-A-R's outputs are correlated with human ratings and perplexity scores. Our results suggest that:",
"cite_spans": [
{
"start": 94,
"end": 120,
"text": "(Wilson and Sperber, 2002;",
"ref_id": "BIBREF35"
},
{
"start": 121,
"end": 133,
"text": "Grice, 1991)",
"ref_id": "BIBREF11"
},
{
"start": 231,
"end": 257,
"text": "(Sperber and Wilson, 1981)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "User Study and Qualitative Analysis",
"sec_num": "5"
},
{
"text": "\u2022 Conditioning on persona-memory provides more consistency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Study and Qualitative Analysis",
"sec_num": "5"
},
{
"text": "\u2022 Sketch-Fill-A-R poses more questions, which correlates with higher engagingness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Study and Qualitative Analysis",
"sec_num": "5"
},
{
"text": "\u2022 Responses need to be fluent in order to be consistent or engaging. In addition, more consistent responses are more likely to be engaging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Study and Qualitative Analysis",
"sec_num": "5"
},
{
"text": "\u2022 Perplexity is not correlated with high-quality responses. Table 3 : User study ratings of single-turn responses (score range where 1 is low and 5 is high). Each row shows ratings from a head-to-head experiment where responses from Sketch-Fill-A-R-variants and KVMemNet over 100 different conversations were shown to 5 human raters. Sketch-Fill with reranking show a small gain over KVMemNet on all qualitative metrics, but the variance in the ratings is high. Sketch-Fill variants without reranking perform much worse, due to their responses not being fluent, despite achieving low perplexity (see Figure 1) . ",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 3",
"ref_id": null
},
{
"start": 600,
"end": 609,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "User Study and Qualitative Analysis",
"sec_num": "5"
},
{
"text": "A/B Experiment KVMemNet Sketch-Fill-x (ours) vs Sketch-Fill 380 120 vs Sketch-Fill-A 396 103 vs Sketch-Fill-R 225 275 vs Sketch-Fill-A-R 232 266",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Study and Qualitative Analysis",
"sec_num": "5"
},
{
"text": "The studies were conducted on 100 random examples sampled from the validation set, where each example was rated by 5 judges. Each example contained a conversation with multiple lines of history and a single KVMemNet or Sketch-Fill response. Judges came from English speaking countries and were calibrated with examples of good/bad responses in all metrics before judging. The study was executed in two settings, finegrained, where the judges rated the responses on a scale from 1 (lowest) to 5 (highest) for each of the mentioned dimensions, and binary, where they chose which response best fit the conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},
{
"text": "The results of the fine-grained survey are presented in Table 3 , where each row corresponds to a separate head-to-head experiments in which the KVMemNet model was paired with one of the versions of Sketch-Fill-A-R. The study showed small gains on all metrics for all Sketch-Fill-A-R variations, however, the variance of results was high. We believe that this artifact could be caused by a number of factors, including subjective preferences of raters and potential ambiguities in the experi-ments description. We notice that Sketch-Fill and Sketch-Fill-A reach lower perplexity values than KVMemNet, but comparatively have lower evaluations across the board. Conversely, ranking models like Sketch-Fill-R and Sketch-Fill-A-R have higher scores on all metrics. We observe that the difference is due to the ranker giving more fluent outputs via better selection of persona words to use. Table 4 shows the results of the human study in a binary setting. In these experiments the base and attention-augmented versions of Sketch-Fill-A-R outperformed KVMemNet by a clear margin.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 63,
"text": "Table 3",
"ref_id": null
},
{
"start": 886,
"end": 893,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},
{
"text": "The following subsections present in-depth analyses of the human study. They focus on the Sketch-Fill-A-R model, since it yielded both the best perplexity and user study results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},
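{
"text": "The selection step analyzed above, in which Sketch-Fill-A-R ranks candidate responses by language-model perplexity and keeps the lowest-scoring one, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the toy unigram freqs table and log_prob function below stand in for the pretrained language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},

```python
import math

def perplexity(response, log_prob):
    """Per-token perplexity of a candidate, given a function that returns
    token log-probabilities. log_prob is a stand-in for a real LM."""
    tokens = response.split()
    avg_nll = -sum(log_prob(t) for t in tokens) / len(tokens)
    return math.exp(avg_nll)

def rerank(candidates, log_prob):
    """Select the candidate response with the lowest LM perplexity."""
    return min(candidates, key=lambda c: perplexity(c, log_prob))

# Toy unigram "language model": common, well-fitting words score higher.
freqs = {"i": 0.1, "love": 0.05, "to": 0.08, "drink": 0.01,
         "and": 0.07, "dance": 0.01, "papaya": 0.0001, "food": 0.02}
log_prob = lambda t: math.log(freqs.get(t, 1e-6))

candidates = ["i love to drink and dance", "i love papaya food"]
best = rerank(candidates, log_prob)  # -> "i love to drink and dance"
```

A filled sketch containing an ill-fitting persona word ("papaya") receives higher perplexity and is rejected, mirroring how the ranker improves fluency in Table 3.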
{
"text": "Correlation between ratings To study and better understand the reasoning behind the ratings assigned by annotators, we look at the correlation between the different dimensions in which responses where scored. Figure 5 shows Kernel-Density-Estimation plots of the data points and associated Pearson correlation coefficients \u03c1. The data shows weak (\u03c1 = 0.397) to moderate (\u03c1 = 0.462) correlation between fluency and consistency, and fluency and engagingness ratings respectively. The data shows \u03c1 value of 0.670 between engagingness and consistency ratings, suggesting strong correlation between those dimensions. See appendix for more detailed information. The numbers were obtained on human ratings of the Sketch-Fill-A-R model, but comparable numbers were also obtained for the KVMemNet model. The mentioned results follow intuition, as fluency of a response is a notion that can be easily defined and identified. On the other hand consistency and engagingness are ambiguous, and (possibly) partially overlapping, concepts.",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 217,
"text": "Figure 5",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},
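{
"text": "The rating correlations above use the standard Pearson coefficient, which can be computed directly from per-response rating lists. The sketch below is illustrative only: the engagingness and consistency lists are made-up 1-5 ratings, not the study's data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-response human ratings on a 1-5 scale.
engagingness = [5, 4, 3, 2, 4, 1]
consistency  = [4, 5, 3, 2, 3, 2]
rho = pearson(engagingness, consistency)  # positive: the two move together
```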
{
"text": "To associate quantitative metrics from Table 1 with human ratings, we computed correlation between perplexity values from the sketch decoder of the Sketch-Fill-A-R model with human scores Table 8 : Multi-turn user study ratings (score range 1 (lowest) -5 (highest)). We collected 30 conversations with 20 turns between human raters and models. KVMemNet is more fluent, but Sketch-Fill-A-R is more engaging and significantly more consistent. across different dimensions. The study showed no correlation for fluency (\u03c1 = -0.015), and weak correlations for consistency (\u03c1 = -0.190) and engagingness (\u03c1 = -0.147).",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 188,
"end": 195,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},
{
"text": "Model vocabulary analysis To assess the diversity of responses generated by the models, we calculated the percentage of unique n-grams and full responses present in the model outputs. Table 2 presents these values for KVMemNet and Sketch-Fill-A-R computed on the full validation set. The numbers show that the KVMemNet model clearly outperforms our model in terms of generating diverse and unique outputs by a factor of 3-4x. How-ever, we hypothesize that this additional diversity may lead to lower engagingness scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 191,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},
{
"text": "Consistency over time In order to evaluate the models capacity to stay consistent with its previous statements, and thus implicitly its ability to utilize information present in the chat history, we compared how the consistency rating changed as the number of lines of the conversation increased. Figure 4 visualizes this metric both for our model and KVMemNet. In the case of both models, the consistency decreases as the chat history get longer, indicating that models have problems keeping track of their previous statements. When analyzing the linear trend we noticed that the decrease in performance is slower for the Sketch-Fill-A-R model. We hypothesize that this effect can be partially caused by the high diversity of sequences generated by the KVMemNet, which in turn affects the models ability to generate consistent conversation.",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 305,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},
{
"text": "Effect of question responses (See et al., 2019) note that for a conversation to be engaging, responses in chit-chat dialogue should be a mix of statements and questions, where the model inquires about certain traits and information of the other agent. We expand on this by evaluating the effect of a question's presence in the response has on the ratings coming from the judges. The results are presented in Figure 4c . The study showed that there is a strong correlation between the model asking a question and the users rating the response as 125 Figure 4 : Impact of conversation length on the consistency of outputs generated by the KVMemNet (left) and Sketch-Fill-A-R (middle). As conversation length increases (more dialogue turns) both models become less consistent, but KVMemNet degrades faster than Sketch-Fill-A-R. Right: impact of response containing a question on human ratings. Responses including questions tend to receive higher human ratings.",
"cite_spans": [
{
"start": 29,
"end": 47,
"text": "(See et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 408,
"end": 417,
"text": "Figure 4c",
"ref_id": null
},
{
"start": 549,
"end": 557,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},
{
"text": "more engaging. Asking questions has a small but positive influence on engagingness and fluency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},
{
"text": "To further analyze this aspect, we measured the frequency of questions in the set of 100 responses coming from the Sketch-Fill-A-R and KVMemNet models. We found that our model produced 49 question responses out of which 25 had both a statement and a question. In the same setting the KVMemNet produced 15 questions out of which only 1 contained a statement and a question. This insight could explain the gains on the engagingness ratings found by our human study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single-turn Experiments",
"sec_num": "5.1"
},
{
"text": "To evaluate both models in the more challenging multi-turn setting, we collected 30 conversations that lasted 20 turns, between each model and human users. Users were asked to score their conversations with the models on a scale from 1 (lowest) to 5 (highest) across the same dimensions as in the single-turn experiments. Table 8 shows the human ratings for both Sketch-Fill-A-R and KVMemNet. Both were judged as less fluent (scores \u2248 3) than in the single-turn case (scores \u2265 4). This is likely due to the models having to respond to a range of conversation histories unseen during training.",
"cite_spans": [],
"ref_spans": [
{
"start": 322,
"end": 329,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-turn User Study",
"sec_num": "5.2"
},
{
"text": "Notably, Sketch-Fill-A-R beat KVMemNet on consistency by a significantly larger margin (3.72 vs 2.15) than in the single-turn setting. This suggests that Sketch-Fill-A-R benefits from conditioning response generation on its persona-memory thus adhering more closely to responses compatible with its persona.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-turn User Study",
"sec_num": "5.2"
},
{
"text": "Further, Sketch-Fill-A-R is more engaging. This suggests that in the multi-turn setting, there also is a positive correlation between engagingness and con-sistency as in the single-turn case (see Appendix): consistent models can be more engaging as well. Table 7 shows an example of KVMemNet's inconsistency. While every model utterance is fluent individually, KVMemNet noticeably contradicts itself in the context of previous utterances and frequently ignores the human responses (e.g \"i do not have any myself\" after \"my little girl\"). We believe the lack of structure inherent in models built on vanilla Seq2Seq make KVMemNet prone to this mistake. Table 6 shows Sketch-Fill-A-R conducts a more engaging conversation, with pertinent responses and questions. However, this structure can restrict Sketch-Fill-A-R, as sketches may be filled with incorrect persona traits (e.g \"i love papaya food.\"). See the Appendix for more examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 7",
"ref_id": "TABREF9"
},
{
"start": 652,
"end": 659,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Multi-turn User Study",
"sec_num": "5.2"
},
{
"text": "In our study we have identified several paths for future work. First, our results reinforce that perplexity does not strongly correlate with human judgment of response quality. It is crucial to develop an automated metric that correlates well with human judgment as human evaluation is expensive, time consuming, and prone to inconsistencies. Secondly, despite outperforming other models in the multiturn dialogue setting on consistency and engagement, our model has not reached human-like fluency. In order to demonstrate complex higher-level traits such as empathy, models must first master these lower-level abilities. Finally, correct use of rare words and proper nouns leads to higher human scores. Existing models are unable to deal with outof-vocabulary tokens and rare words gracefully, and incorporation of commonsense via methods like external knowledge bases will be useful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "During experiments, we identified a number of ethical implications for future work. The Persona-Chat dataset was noted by some raters to contain potentially inappropriate statements (e.g., \"my wife spends all my money\") and is based in US culture (e.g., food, music, cars, names). It also lacked content to fail gracefully when it didn't have an appropriate response (e.g., \"I'm sorry I don't understand,\" \"I don't know\"). As such, learned model responses were occasionally insensitive and confusing to human users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Implications",
"sec_num": "7.1"
},
{
"text": "In all models we used single-layer LSTMs with hidden sizes of 300 throughout, and used GloVe embeddings of size 300. All Sketch-and-Fill models were trained with Adam initialized with learning rate 0.0001. We used batch sizes of 32. In single-turn experiments we used beam sizes of 7, and in multi-turn experiments we used beam sizes of 10. Dropout was applied for all models with probability 0.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture and Training Parameters",
"sec_num": "7.2"
},
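{
"text": "For reference, the reported hyperparameters can be collected in a single configuration. The dict layout and key names below are illustrative assumptions; only the values come from the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture and Training Parameters",
"sec_num": "7.2"
},

```python
# Hyperparameters reported for the Sketch-Fill models. The key names are
# illustrative (assumed); the values are the ones stated in the paper.
HPARAMS = {
    "lstm_layers": 1,            # single-layer LSTMs throughout
    "hidden_size": 300,
    "embedding": "GloVe",
    "embedding_dim": 300,
    "optimizer": "Adam",
    "learning_rate": 1e-4,
    "batch_size": 32,
    "beam_size_single_turn": 7,
    "beam_size_multi_turn": 10,
    "dropout": 0.4,
}
```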
{
"text": "Training: 124,298 words were converted to persona tags out of 1,505,395 words total. Validation: 8,307 words were converted to persona tags out of 92,586 words total.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Persona Tags",
"sec_num": "7.4"
},
{
"text": "7.5 Global-to-Local Memory Pointer Networks (Wu et al., 2019 ) construct a global memory distribution that acts as a mask over the memory and is concatenated with encoded dialogue history and memory information before initializing as the decoder's hidden state. They also construct a local memory pointer that identifies the word to retrieve. These auxiliary tasks are trained using cross-entropy loss. The global pointer label is defined G label = (g l 0 , ..., g l i ) as a vector where g l i is 1 if the word is expected in y * t and 0 otherwise. Using the same notation as in Section 3.1, we compute the global pointer as follows:",
"cite_spans": [
{
"start": 44,
"end": 60,
"text": "(Wu et al., 2019",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Persona Tags",
"sec_num": "7.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g i = Sigmoid(((y t , h d t ) T e i )",
"eq_num": "(12)"
}
],
"section": "Number of Persona Tags",
"sec_num": "7.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Loss g = \u2212 T i=1 [g l i \u00d7 log(g i ) + (1 \u2212 g l i ) \u00d7 log(1 \u2212 g i ]",
"eq_num": "(13)"
}
],
"section": "Number of Persona Tags",
"sec_num": "7.4"
},
{
"text": "This global pointer is used as a mask on the memory module before the decoding procedure e i = e i \u00d7 g i . The local pointer label is used at every time step to identify which memory index (and thus word) to point to. If at y * t a persona trait is expected, L label t holds corresponding index, and is m otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Persona Tags",
"sec_num": "7.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Loss l = m t=1 \u2212log(L t (L label t )",
"eq_num": "(15)"
}
],
"section": "Number of Persona Tags",
"sec_num": "7.4"
},
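{
"text": "The global pointer of Eqs. (12)-(13) is a per-slot sigmoid trained with binary cross-entropy and then applied as a multiplicative mask on the memory. The sketch below illustrates the shape of that computation only: scalar scores stand in for the dot products (y_t, h_t^d)^T e_i, and the memory is a toy list of embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Persona Tags",
"sec_num": "7.4"
},

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def global_pointer(scores):
    """g_i = Sigmoid(score_i); each score stands in for the dot product
    (y_t, h_t^d)^T e_i from Eq. (12)."""
    return [sigmoid(s) for s in scores]

def global_pointer_loss(g, g_label):
    """Binary cross-entropy over memory slots, as in Eq. (13)."""
    return -sum(l * math.log(gi) + (1 - l) * math.log(1 - gi)
                for gi, l in zip(g, g_label))

def mask_memory(memory, g):
    """Mask each memory embedding with its pointer value: e_i = e_i * g_i."""
    return [[v * gi for v in e_i] for e_i, gi in zip(memory, g)]

# Toy example: three memory slots with scalar attention scores.
scores = [2.0, -1.0, 0.5]
g = global_pointer(scores)
loss = global_pointer_loss(g, [1, 0, 1])
masked = mask_memory([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], g)
```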
{
"text": "OpenAI GPT consists of a 12 layer Transformer and is pre-trained on the BooksCorpus dataset. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Pretraining",
"sec_num": "7.6"
},
{
"text": "We visualize the three sets of attention weights in our model: the context weights in Figure 7 , and memory weights and persona trait weights in Figure 8. Figure 7 's x-axis shows a conversation ending with a question reflected by the user about hobbies. The response has high attention weights on hobbies and the user's own garden hobby in the previous context. Figure 8 (right) shows that in response to this hobbies question, attention is first distributed over hobby-related personas before converging on the mountain biking persona trait over time. Finally, we observe in Figure 8 (left) that the memory attention is most heavily weighted on coffee, which may explain why the coffee persona begins with such high weights. 7.8 Example Dialogues from KVMemNet 7.9 Example Dialogues from Sketch-Fill-A-R Model Trait 1: I played football for a division a college Model Trait 2: I enjoy poetry Model Trait 3: I try various coffees as a hobby Model Trait 4: I'm a huge star wars fan Human: Hi. What do you do you for fun?. Model: hello ! i love music i am in school to be a music teacher Human: That sounds exciting! I left school a long time ago.. Model: i teach french Human: You must be talented to teach both music and french. Model: do you speak other languages ? Human: No, just English. What about you?. Model: i can only speak one language ! Human: Me too. Luckily we speak the same language. Model: no do you speak another language ? Human: No I don't speak any other languages. Model: i do not . i do not even speak the 5 love languages Human: I wasn't aware that there were 5 love languages. Human: Tell me more.. Model: i speak six other languages , pick one and i will speak it Human: I pick language number 3. 
Model: only speak english but that is pretty cool Model Trait 1: I enjoy coloring books Model Trait 2: My husband is adopted Model Trait 3: I've a german shepherd dog Model Trait 4: I like to watch NHL hockey Model Trait 5: I live is a rural farming community Human: yo. what's up?. Model: i am great how about you ? Human: i am depressed. Model: yes ! i love to dance to it . i am pretty amazing at it Human: oh... that's odd. and rude... Human: you want to dance to my depression? Model: i love to dance i have moves Human: describe your depression Model: dance to me. i love to dance , Model: not to brag but i am an incredible dancer ! Human: ok. i get it. but how. alone or with others?. Model: that is nice . no do not dance but i like to sing so... Human: no dancing after all? you're just messing with me? Model: yeah , dancing with my friends is my favorite . Human: whatever man. cool . Model: can we dance together ? i have no brothers or sisters . ",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 94,
"text": "Figure 7",
"ref_id": "FIGREF11"
},
{
"start": 145,
"end": 151,
"text": "Figure",
"ref_id": null
},
{
"start": 155,
"end": 163,
"text": "Figure 7",
"ref_id": "FIGREF11"
},
{
"start": 363,
"end": 371,
"text": "Figure 8",
"ref_id": "FIGREF13"
},
{
"start": 577,
"end": 585,
"text": "Figure 8",
"ref_id": "FIGREF13"
}
],
"eq_spans": [],
"section": "Visualizing Model Attention",
"sec_num": "7.7"
},
{
"text": "twere first encoded using 300-dimensional GLoVe word embeddings e(x t )(Pennington et al., 2014).1 A number of chit-chat models posted results in the Con-vAI2 competition. However, we could not reproduce these, as all competitive methods rely on extensive pre-training with large models, or do not have code or trained models available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning end-to-end goal-oriented dialog",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.07683"
]
},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A survey on dialogue systems: Recent advances and new frontiers",
"authors": [
{
"first": "Hongshen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaorui",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dawei",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Jiliang",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM SIGKDD Explorations Newsletter",
"volume": "19",
"issue": "2",
"pages": "25--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. ACM SIGKDD Explo- rations Newsletter, 19(2):25-35.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Fethi Bougares, Holger Schwenk, and Yoshua Bengio",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
}
],
"year": 2014,
"venue": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. arXiv e-prints",
"volume": "",
"issue": "",
"pages": "1406--1078",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learn- ing Phrase Representations using RNN Encoder- Decoder for Statistical Machine Translation. arXiv e-prints, page arXiv:1406.1078.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The second conversational intelligence challenge (convai2)",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Malykh",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.00098"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2019. The second conversational intelligence challenge (convai2). arXiv preprint arXiv:1902.00098.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Wizard of wikipedia: Knowledge-powered conversational agents",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.01241"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Finding structure in time",
"authors": [
{
"first": "",
"middle": [],
"last": "Jeffrey L Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive science",
"volume": "14",
"issue": "",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L Elman. 1990. Finding structure in time. Cog- nitive science, 14(2):179-211.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural approaches to conversational ai. Foundations and Trends R in Information Retrieval",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "13",
"issue": "",
"pages": "127--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfeng Gao, Michel Galley, Lihong Li, et al. 2019. Neural approaches to conversational ai. Founda- tions and Trends R in Information Retrieval, 13(2- 3):127-298.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A knowledge-grounded neural conversation model",
"authors": [
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Confer- ence on Artificial Intelligence.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Topical-chat: Towards knowledge-grounded open-domain conversations",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Gopalakrishnan",
"suffix": ""
},
{
"first": "Behnam",
"middle": [],
"last": "Hedayatnia",
"suffix": ""
},
{
"first": "Qinlang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Gottardi",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Kwatra",
"suffix": ""
},
{
"first": "Anu",
"middle": [],
"last": "Venkatesh",
"suffix": ""
},
{
"first": "Raefer",
"middle": [],
"last": "Gabriel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "1891--1895",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Gopalakrishnan, Behnam Hedayatnia, Qin- lang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, Dilek Hakkani-T\u00fcr, and Amazon Alexa AI. 2019. Topical-chat: To- wards knowledge-grounded open-domain conversa- tions. Proc. Interspeech 2019, pages 1891-1895.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Studies in the Way of Words",
"authors": [
{
"first": "",
"middle": [],
"last": "H Paul Grice",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H Paul Grice. 1991. Studies in the Way of Words. Har- vard University Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Jurafsky and James H. Martin. 2009. Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition, 2nd Edition. Prentice Hall se- ries in artificial intelligence. Prentice Hall, Pearson Education International.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Importance of a search strategy in neural dialogue modelling",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Kulikov",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Alexander",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.00907"
]
},
"num": null,
"urls": [],
"raw_text": "Ilya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. 2018. Importance of a search strategy in neural dialogue modelling. arXiv preprint arXiv:1811.00907.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Ask me anything: Dynamic memory networks for natural language processing",
"authors": [
{
"first": "Ankit",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ozan",
"middle": [],
"last": "Irsoy",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Ondruska",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Ishaan",
"middle": [],
"last": "Gulrajani",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1378--1387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning, pages 1378-1387.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A persona-based neural conversation model",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Georgios",
"middle": [
"P"
],
"last": "Spithourakis",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.06155"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation",
"authors": [
{
"first": "Chia-Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Iulian",
"middle": [
"V"
],
"last": "Serban",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Noseworthy",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.08023"
]
},
"num": null,
"urls": [],
"raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Assigning personality/identity to a chatting machine for coherent conversation generation",
"authors": [
{
"first": "Qiao",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jingfang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2017. Assigning personality/identity to a chatting machine for coherent conversation generation.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. OpenAI.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "What makes a good conversation? how controllable attributes affect human judgments",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.08654"
]
},
"num": null,
"urls": [],
"raw_text": "Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. arXiv preprint arXiv:1902.08654.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Generative deep neural networks for dialogue: A short review",
"authors": [
{
"first": "Iulian",
"middle": [
"Vlad"
],
"last": "Serban",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.06216"
]
},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, and Joelle Pineau. 2016. Generative deep neural networks for dialogue: A short review. arXiv preprint arXiv:1611.06216.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A hierarchical latent variable encoder-decoder model for generating dialogues",
"authors": [
{
"first": "Iulian",
"middle": [
"Vlad"
],
"last": "Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A neural network approach to context-sensitive generation of conversational responses",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.06714"
]
},
"num": null,
"urls": [],
"raw_text": "A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Irony and the use-mention distinction",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 1981,
"venue": "Philosophy",
"volume": "3",
"issue": "",
"pages": "143--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Sperber and Deirdre Wilson. 1981. Irony and the use-mention distinction. Philosophy, 3:143-184.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "End-to-end memory networks",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2440--2448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440-2448.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104-3112.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A neural conversational model",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.05869"
]
},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The anatomy of alice",
"authors": [
{
"first": "Richard",
"middle": [
"S"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2009,
"venue": "Parsing the Turing Test",
"volume": "",
"issue": "",
"pages": "181--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard S Wallace. 2009. The anatomy of alice. In Parsing the Turing Test, pages 181-210. Springer.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Eliza-a computer program for the study of natural language communication between man and machine",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Weizenbaum",
"suffix": ""
}
],
"year": 1966,
"venue": "Communications of the ACM",
"volume": "9",
"issue": "1",
"pages": "36--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Weizenbaum et al. 1966. Eliza-a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36-45.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Dialogue natural language inference",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Welleck",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.00671"
]
},
"num": null,
"urls": [],
"raw_text": "Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2018. Dialogue natural language inference. arXiv preprint arXiv:1811.00671.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Relevance theory",
"authors": [
{
"first": "Deirdre",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Sperber",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deirdre Wilson and Dan Sperber. 2002. Relevance theory.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Global-to-local memory pointer networks for task-oriented dialogue",
"authors": [
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2019. Global-to-local memory pointer networks for task-oriented dialogue. In International Conference on Learning Representations.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Personalizing dialogue agents: I have a dog",
"authors": [
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.07243"
]
},
"num": null,
"urls": [],
"raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "The design and implementation of xiaoice, an empathetic social chatbot",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heung-Yeung",
"middle": [],
"last": "Shum",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.08989"
]
},
"num": null,
"urls": [],
"raw_text": "Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2018. The design and implementation of xiaoice, an empathetic social chatbot. arXiv preprint arXiv:1812.08989.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.06724"
]
},
"num": null,
"urls": [],
"raw_text": "Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. arXiv preprint arXiv:1506.06724.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "'i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', 'her', 'hers', 'herself', 'it', 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what",
"authors": [],
"year": null,
"venue": "Persona Preprocessing Persona traits were pre-processed to remove stopwords. These were initialized with the defaults from NLTK and augmented with top commonly seen words in persona traits",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Persona Preprocessing Persona traits were pre-processed to remove stopwords. These were initialized with the defaults from NLTK and augmented with top commonly seen words in persona traits. [\"and\", \"my\", \"i\", \"very\", \"is\", \"favorite\", \"to\", \"like\", \"go\", \"also\", \"i'm\", \"am\", \"a\", \"lot\", \"at\", \"the\", \"for\", \"when\", \"are\", \"this\", \"on\", \"just\", \".\", \",\", \"!\", \"?\", \"help\", \"play\", \"in\", \"have\", \"of\", \"by\", \"do\", \"one\", \"it\", \"an\", \"was\", \"me\", \"could\", \"be\", \"with\", \"but\", \"before\", \"after\", \"from\", \"ive\", \"dont\", \"only\", \"love\", \"had\", \"i\u1e3f\", \"over\", \"what\", \"as\", \"want\", \"into\", \"try\", \"whatever\", \"get\", \"t\", \"s\", \"no\", \"own\", 'i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', 'her', 'hers', 'herself', 'it', 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', 'should', 'now']",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "User A: what do you do for work ? User B: i'm a life coach and i'm good at it User A: i could use your services . User B: i'll run right over and help User A: that is very kind of you",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "User A: what do you do for work ? User B: i'm a life coach and i'm good at it User A: i could use your services . User B: i'll run right over and help User A: that is very kind of you .",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "User B: i run marathons and that is serious business",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "User B: i run marathons and that is serious business",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "User A: i work out but it is just to hang with friends",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "User A: i work out but it is just to hang with friends",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "KVMEMNet: that is sick i go running a lot Fluency",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "KVMEMNet: that is sick i go running a lot Fluency: 4.80 \u00b1 0.40",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Model: i am doing well . how about you ? Human: i'm doing ok, i used to be an alcoholic. Model: oh . i am sorry to hear that . Human: tell me something about yourself. Model: i'm a librarian",
"authors": [],
"year": null,
"venue": "Model Trait 1: I'm a librarian Model Trait 2: I really like to travel Model Trait 3: I have visited Spain a times Model",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Model Trait 1: I'm a librarian Model Trait 2: I really like to travel Model Trait 3: I have visited Spain a times Model Trait 4: I think I will retire in a few years Model Trait 5: I am 60 years old Human: hi , how are you doing?. Model: i am doing well . how about you ? Human: i'm doing ok, i used to be an alcoholic. Model: oh . i am sorry to hear that . Human: tell me something about yourself. Model: i'm a librarian . Human: neat , what do you do as a librarian?. Model: i work at a library . Human: cool, what do you do for fun?. Model: i love to read . what about you ? Human: i like to play basketball. Model: that sounds like fun . Human: it is! where are you from?. Model: i'm in the librarian .",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Trait 2: I dye my hair every three months Model Trait 3: I went to school to be a veterinarian but hated it Model Trait 4: I have an internet addiction and spend a lot of time playing roleplaying games Human: Ciao. Model: how are you doing tonight ? Human: Good. Can you speak Italian",
"authors": [],
"year": null,
"venue": "Model: i am a veterinarian . Human: Do you have a dog? Model: i do not have any pets . Human: How do you become a vet? Model: i am a vet . i am a vet . Human: Do you have emergency service? Human: Do you need to take on call? Model: what do you like to do for fun ? Human: Call of duty. Model: i like to games and roleplaying time",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Model Trait 1: My dream in life is to work from home Model Trait 2: I dye my hair every three months Model Trait 3: I went to school to be a veterinarian but hated it Model Trait 4: I have an internet addiction and spend a lot of time playing roleplaying games Human: Ciao. Model: how are you doing tonight ? Human: Good. Can you speak Italian?. Model: what do you do for a living Human: I'm a teacher. How about you?. Model: i am a veterinarian . Human: Do you have a dog? Model: i do not have any pets . Human: How do you become a vet? Model: i am a vet . i am a vet . Human: Do you have emergency service? Human: Do you need to take on call? Model: what do you like to do for fun ? Human: Call of duty. Model: i like to games and roleplaying time . Human: Me too. Model: do you have any hobbies ? Human: Not really. Model: i love to addiction internet .",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Trait 2: I love cooking but I also enjoy fishing Model Trait 3: Spiderman is my favorite Model Trait 4: I also love comic books Human: hi i'm [Human name]. how are you doing today ? Model: good, has been very busy lately. Model: what are you going to school for ? Human: i don't go to school, i'm working. Model: what do you do in your free time ? Human: i play piano. Model: i love fishing and cooking . Human: great. Model: what is your favorite spiderman ? Human: i don't like spiderman. Model: i like to enjoy . Human: like what? Model: spiderman is my favorite",
"authors": [],
"year": null,
"venue": "Model Trait 1: If I am not cooking in the kitchen then I am definitely on the water Model",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Model Trait 1: If I am not cooking in the kitchen then I am definitely on the water Model Trait 2: I love cooking but I also enjoy fishing Model Trait 3: Spiderman is my favorite Model Trait 4: I also love comic books Human: hi i'm [Human name]. how are you doing today ? Model: good, has been very busy lately. Model: what are you going to school for ? Human: i don't go to school, i'm working. Model: what do you do in your free time ? Human: i play piano. Model: i love fishing and cooking . Human: great. Model: what is your favorite spiderman ? Human: i don't like spiderman. Model: i like to enjoy . Human: like what? Model: spiderman is my favorite . what is your favorite ? Table 11: Multi-turn examples from Sketch-Fill-A-R. Model persona traits are listed before the example con- versation.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Chit-chat generation with Sketch-Fill-A-R."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "never broke a bone in my body ever in my life i am a stunt double as my second job i read twenty books a year i was raised in a single parent household i only eat kosher persona-encoder A: hello there , how are you today ? B: hello ! how has your day been ? A: great so far preparing my kindergarten lesson for tomorrow ."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Sketch-Fill-A-R generates sketch responses in 4 steps. First, it encodes the conversation history and personas into hidden states h e,\u2022"
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "i never broke a bone in my body ever in my life i am a stunt double as my second job i read twenty books a year i was raised in a single parent household i only eat kosher that is great ! i am a stunt double .that is great ! i am a twenty double . awesome ! i like to stunt book . awesome ! i like to stunt double . like to do @persona @persona ."
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Sketch-Fill-A-R inference strategy."
},
"FIGREF5": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "x p 0:T into hidden states h conv 0:T and h pers 0:T . We denote final hidden states for all N personas as h pers 0:N ."
},
"FIGREF6": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Experimental Setup We compared 4 variations of Sketch-Fill-A-R with a strong baseline: 1 \u2022 Key-Value Memory Network (KVMemNet) (Zhang et al., 2018), \u2022 Sketch-Fill (SF) \u2022 Sketch-Fill-A: SF + attention \u2022 Sketch-Fill-R: SF + reranking \u2022 Sketch-Fill-A-R: SF + attention + reranking"
},
"FIGREF8": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Correlations between different dimensions in which model responses where rated. Plots include Pearson correlation coefficients (\u03c1) for each dimension pair. The data suggests weak to moderate correlation between fluency and consistency, and fluency and engagingness respectively, and strong correlation between engagingness and consistency."
},
"FIGREF9": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Example conversations from the human user studies."
},
"FIGREF11": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Attention weights over the previous context. Decoder timesteps are on the y-axis and encoder timesteps are on the x-axis."
},
"FIGREF12": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "hate"
},
"FIGREF13": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Attention over memory (left) and persona traits (right). After observing \"love\", Sketch-Fill-A-R attends to personas 3, 5 with related personas."
},
"FIGREF14": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Average ratings by 5 users for one example."
},
"TABREF2": {
"html": null,
"content": "<table><tr><td colspan=\"3\">Sequence size KVMemNet Sketch-Fill-A-R (ours)</td></tr><tr><td>Unigram</td><td>5.39%</td><td>1.72%</td></tr><tr><td>Bigram</td><td>32.65 %</td><td>7.32 %</td></tr><tr><td>Trigram</td><td>54.95 %</td><td>13.97 %</td></tr><tr><td>Full responses</td><td>70.16 %</td><td>50.60 %</td></tr></table>",
"type_str": "table",
"text": "Sketch-Fill-A-R achieves significantly lower out-of-sample perplexity than KVMemNet. Perplexity for Sketch-Fill-A-R is measured over the sketch template responses. The number of persona tag occurrences is very small, constituting 8% of the total words. See Appendix for more information.",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Percentage of novel n-grams and full responses generated by the KVMemNet and Sketch-Fill-A-R models computed on the full validation set.",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>Baseline</td><td>Consistency</td><td>Engagingness</td><td>Fluency</td><td>Ours</td><td>Consistency</td><td>Engagingness</td><td>Fluency</td></tr><tr><td>Test I</td><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"text": "KVMemNet 3.60 \u00b1 0.84 3.81 \u00b1 0.66 4.49 \u00b1 0.45 Sketch-Fill 2.51 \u00b1 1.16 2.57 \u00b1 1.10 2.98 \u00b1 1.29 Test II KVMemNet 3.57 \u00b1 0.86 3.77 \u00b1 0.62 4.54 \u00b1 0.47 Sketch-Fill-A 2.49 \u00b1 1.04 2.51 \u00b1 1.03 2.75 \u00b1 1.20 Test III KVMemNet 3.18 \u00b1 1.16 3.51 \u00b1 0.85 4.41 \u00b1 0.48 Sketch-Fill-R 3.34 \u00b1 1.02 3.89 \u00b1 0.79 4.45 \u00b1 0.78 Test IV KVMemNet 3.31 \u00b1 1.03 3.56 \u00b1 0.78 4.43 \u00b1 0.48 Sketch-Fill-A-R 3.54 \u00b1 1.01 3.69 \u00b1 0.92 4.43 \u00b1 0.71",
"num": null
},
"TABREF5": {
"html": null,
"content": "<table><tr><td colspan=\"5\">: Human A/B-preferences between KVMemNet</td></tr><tr><td colspan=\"5\">and Sketch-Fill-A-R variations on 100 conversations,</td></tr><tr><td colspan=\"5\">each shown to 5 users. Two Sketch-Fill-A-R variations</td></tr><tr><td colspan=\"3\">are preferred over KVMemNet.</td><td/><td/></tr><tr><td/><td colspan=\"4\">Fluency Consistency Engagingness Perplexity</td></tr><tr><td>Fluency</td><td>1</td><td>0.40</td><td>0.46</td><td>-0.01</td></tr><tr><td>Consistency</td><td>-</td><td>1</td><td>0.67</td><td>-0.20</td></tr><tr><td>Engagingness</td><td>-</td><td>-</td><td>1</td><td>-0.15</td></tr><tr><td>Perplexity</td><td>-</td><td>-</td><td>-</td><td>1</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF6": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF7": {
"html": null,
"content": "<table><tr><td>Model Trait 1: I married a super model from Italy</td></tr><tr><td>Model Trait 2: I've zero family that I'm close to</td></tr><tr><td>Model Trait 3: My name is George</td></tr><tr><td>Model Trait 4: I'm a bee farmer</td></tr><tr><td>Model Trait 5: My favorite food is papaya</td></tr><tr><td>Human: hi what's up</td></tr><tr><td>Model: do you like papaya ?</td></tr><tr><td>Human: eh, i'm more into mango</td></tr><tr><td>Model: what is your favorite papaya ?</td></tr><tr><td>Human: probably just the normal ones.</td></tr><tr><td>Human: what do you like to do?</td></tr><tr><td>Model: i am a bee farmer .</td></tr><tr><td>Human: woah , that's cool</td></tr><tr><td>Model: how are you ?</td></tr><tr><td>Human: i'm doing well ,</td></tr><tr><td>Human: just trying to get some work done</td></tr><tr><td>Model: do you have any hobbies ?</td></tr><tr><td>Human: i like to play basketball , what about you?</td></tr><tr><td>Model: i love papaya food .</td></tr><tr><td>Human: what is your name?</td></tr><tr><td>Model: george . what is your favorite name ?</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF8": {
"html": null,
"content": "<table><tr><td>Model Trait 1: I work for our local supermarket</td></tr><tr><td>Model Trait 2: My favorite band is The Who</td></tr><tr><td>Model Trait 3: I have never been out of Ohio</td></tr><tr><td>Model Trait 4: My favorite food is pizza with black olives</td></tr><tr><td>Human: what are you doing today?</td></tr><tr><td>Model: i am ok what do you do for fun ?</td></tr><tr><td>Human: I like biking. Do you bike?</td></tr><tr><td>Model: cool ! i am watching a movie with my little girl</td></tr><tr><td>Human: You have a daughter? How old is she?</td></tr><tr><td>Model: oh that is cool . i do not have any myself .</td></tr><tr><td>Human: Oh. do you have a girlfriend then?</td></tr><tr><td>Model: she is an ex singer from the early 2000s ,</td></tr><tr><td>Model: mostly does pop songs and the like</td></tr><tr><td>Human: What kind of movies do you like?</td></tr><tr><td>Model: little girl and my mother lives with us</td></tr><tr><td>Model: she's a concert pianist, she in between jobs</td></tr><tr><td>Human: who is this little girl?</td></tr><tr><td>Model: she also likes karate</td></tr><tr><td>Model: she is why i got a brown belt !</td></tr><tr><td>Human: what is her name?</td></tr><tr><td>Model: she had a green thumb and loved flowers</td></tr></table>",
"type_str": "table",
"text": "Example multi-turn dialog between a human and Sketch-Fill-A-R. Model persona traits are shown above; the human was not given specific persona traits.",
"num": null
},
"TABREF9": {
"html": null,
"content": "<table><tr><td>Metric</td><td>KVMemNet</td><td>Sketch-Fill-A-R (ours)</td></tr><tr><td>Fluency</td><td>3.27 \u00b1 1.19</td><td>2.83 \u00b1 1.31</td></tr><tr><td>Consistency</td><td>2.15 \u00b1 1.18</td><td>3.72 \u00b1 1.26</td></tr><tr><td>Engagement</td><td>1.93 \u00b1 0.92</td><td>2.41 \u00b1 1.33</td></tr></table>",
"type_str": "table",
"text": "Example multi-turn dialog between a human and KVMemNet. Model persona traits are shown above; the human was not given specific persona traits.",
"num": null
},
"TABREF11": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF12": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Conversation history</td></tr><tr><td colspan=\"2\">User A: hi there . how are ou</td></tr><tr><td colspan=\"2\">User B: hi good just writing some music and you</td></tr><tr><td colspan=\"2\">User A: i am well . just took my dog for a walk</td></tr><tr><td colspan=\"2\">KVMEMNet: sorry , i trying to become the next</td></tr><tr><td/><td>big thing in music , i sing</td></tr><tr><td>Fluency:</td><td>4.00 \u00b1 1.10</td></tr><tr><td colspan=\"2\">Consistency: 2.60 \u00b1 1.62</td></tr><tr><td colspan=\"2\">Engagingness: 3.20 \u00b1 1.33</td></tr><tr><td colspan=\"2\">Preferred by 1 out of 5 raters</td></tr><tr><td>Fluency:</td><td>4.80 \u00b1 0.40</td></tr><tr><td colspan=\"2\">Consistency: 4.00 \u00b1 0.63</td></tr><tr><td colspan=\"2\">Engagingness: 3.60 \u00b1 1.50</td></tr><tr><td colspan=\"2\">Preferred by 4 out of 5 raters</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF14": {
"html": null,
"content": "<table><tr><td>Model Trait 1: I have zero family that I'm close to</td></tr><tr><td>Model Trait 2: My name is George</td></tr><tr><td>Model Trait 3: My favorite food is papaya</td></tr><tr><td>Model Trait 4: I am a bee farmer</td></tr><tr><td>Model Trait 5: I married a super model from Italy</td></tr><tr><td>Human: hi.</td></tr><tr><td>Model: i am good how about you</td></tr><tr><td>Human: i didn't ask you that?</td></tr><tr><td>Model: its pretty cool lol , tell me about you ?</td></tr><tr><td>Human: my favorite color is orange.</td></tr><tr><td>Model: oh i see that sounds cool then</td></tr><tr><td>Human: what about you?</td></tr><tr><td>Model: i hate organ lol</td></tr><tr><td>Human: what do u do for fun i like to go hiking.</td></tr><tr><td>Model: i like blue my favorite disney princess is ariel .</td></tr><tr><td>Human: cool, what other disney princesses do you like?</td></tr><tr><td>Model: yeah disney is great . i named my 4 dogs after disney characters .</td></tr><tr><td>Model: i like ariel becuase i have a pet fish.</td></tr><tr><td>Human: i love watching disney movies ! how old are you?</td></tr><tr><td>Model: cool my favorite color is red</td></tr><tr><td>Human: you're just ignoring me huh.</td></tr><tr><td>Model: ariel has always been my favorite</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF15": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Multi-turn examples from KVMemNet. Model persona traits are listed before the example conversation.",
"num": null
}
}
}
}