{
"paper_id": "D16-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:36:01.006723Z"
},
"title": "Globally Coherent Text Generation with Neural Checklist Models",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Kiddon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recurrent neural networks can generate locally coherent text but often have difficulties representing what has already been generated and what still needs to be said-especially when constructing long texts. We present the neural checklist model, a recurrent neural network that models global coherence by storing and updating an agenda of text strings which should be mentioned somewhere in the output. The model generates output by dynamically adjusting the interpolation among a language model and a pair of attention models that encourage references to agenda items. Evaluations on cooking recipes and dialogue system responses demonstrate high coherence with greatly improved semantic coverage of the agenda.",
"pdf_parse": {
"paper_id": "D16-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "Recurrent neural networks can generate locally coherent text but often have difficulties representing what has already been generated and what still needs to be said-especially when constructing long texts. We present the neural checklist model, a recurrent neural network that models global coherence by storing and updating an agenda of text strings which should be mentioned somewhere in the output. The model generates output by dynamically adjusting the interpolation among a language model and a pair of attention models that encourage references to agenda items. Evaluations on cooking recipes and dialogue system responses demonstrate high coherence with greatly improved semantic coverage of the agenda.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recurrent neural network (RNN) architectures have proven to be well suited for many natural language generation tasks (Mikolov et al., 2010; Mikolov et al., 2011; Sordoni et al., 2015; Xu et al., 2015; Wen et al., 2015; Mei et al., 2016) . Previous neural generation models typically generate locally coherent language that is on topic; however, they can miss information that should have been introduced or introduce duplicated or superfluous content. These errors are particularly common when there are multiple distinct sources of input or when the output text is long. In this paper, we present a new recurrent neural model that maintains coherence while improving coverage.",
"cite_spans": [
{
"start": 118,
"end": 140,
"text": "(Mikolov et al., 2010;",
"ref_id": "BIBREF17"
},
{
"start": 141,
"end": 162,
"text": "Mikolov et al., 2011;",
"ref_id": "BIBREF18"
},
{
"start": 163,
"end": 184,
"text": "Sordoni et al., 2015;",
"ref_id": "BIBREF22"
},
{
"start": 185,
"end": 201,
"text": "Xu et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 202,
"end": 219,
"text": "Wen et al., 2015;",
"ref_id": "BIBREF29"
},
{
"start": 220,
"end": 237,
"text": "Mei et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: The model is trained to interpolate an RNN (e.g., encode \"pico de gallo\" and decode a recipe) with attention models over new (left column) and used (middle column) items that identify likely items for each time step (shaded boxes; \"tomatoes,\" etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We improve coverage by globally tracking what has been said and what is still left to be said in complete texts. For example, consider the challenge of generating a cooking recipe, where the title and ingredient list are provided as inputs and the system must generate a complete text that describes how to produce the desired dish. Existing RNN models may lose track of which ingredients have already been mentioned, especially during the generation of a long recipe with many ingredients. Recent work has focused on adapting neural network architectures to improve coverage (Wen et al., 2015) with application to generating customer service responses, such as hotel information, where a single sentence is generated to describe a few key ideas. Our focus is instead on developing a model that maintains coherence while producing longer texts or covering longer input specifications (e.g., a long ingredient list).",
"cite_spans": [
{
"start": 569,
"end": 587,
"text": "(Wen et al., 2015)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More specifically, our neural checklist model generates a natural language description for achieving a goal, such as generating a recipe for a particular dish, while using a new checklist mechanism to keep track of an agenda of items that should be mentioned, such as a list of ingredients (see Fig. 1 ). The checklist model learns to interpolate among three components at each time step: (1) an encoder-decoder language model that generates goal-oriented text, (2) an attention model that tracks remaining agenda items that need to be introduced, and (3) an attention model that tracks the used, or checked, agenda items. Together, these components allow the model to learn representations that best predict which words should be included in the text and when references to agenda items should be checked off the list (see check marks in Fig. 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 301,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 839,
"end": 845,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our approach on a new cooking recipe generation task and the dialogue act generation task of Wen et al. (2015). In both cases, the model must correctly describe a list of agenda items: an ingredient list or a set of facts, respectively. Generating recipes additionally tests the ability to maintain coherence in long procedural texts. Experiments in dialogue generation demonstrate that our approach outperforms previous work with up to a 4 point BLEU improvement. Our model also scales to cooking recipes, where both automated and manual evaluations demonstrate that it maintains the strong local coherence of baseline RNN techniques while significantly improving the global coverage by effectively integrating the agenda items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a goal g and an agenda E = {e 1 , . . . , e |E| }, our task is to generate a goal-oriented text x by making use of items on the agenda. For example, in the cooking recipe domain, the goal is the recipe title (\"pico de gallo\" in Fig. 1 ), and the agenda is the ingredient list (e.g., \"lime,\" \"salt\"). For dialogue systems, the goal is the dialogue type (e.g., inform or query) and the agenda contains information to be mentioned (e.g., a hotel name and address). For example, if g =\"inform\" and E = {name(Hotel Stratford), has internet(no)}, an output text might be x =\"Hotel Stratford does not have internet.\"",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 240,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Task",
"sec_num": "2"
},
{
"text": "Attention models have been used for many NLP tasks such as machine translation (Balasubramanian et al., 2013; Bahdanau et al., 2014) , abstractive sentence summarization (Rush et al., 2015) , machine reading (Cheng et al., 2016) , and image caption generation (Xu et al., 2015) . Our model uses new types of attention to record what has been said and to select new agenda items to be referenced.",
"cite_spans": [
{
"start": 79,
"end": 109,
"text": "(Balasubramanian et al., 2013;",
"ref_id": "BIBREF2"
},
{
"start": 110,
"end": 132,
"text": "Bahdanau et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 170,
"end": 189,
"text": "(Rush et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 208,
"end": 228,
"text": "(Cheng et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 260,
"end": 277,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Recently, other researchers have developed new ways to use attention mechanisms for related generation challenges. Most closely related, Wen et al. (2015) and Wen et al. (2016) present neural network models for generating dialogue system responses given a set of agenda items. They focus on generating short texts (1-2 sentences) in a relatively small vocabulary setting and assume a fixed set of possible agenda items. Our model composes substantially longer texts, such as recipes, with a more varied and open-ended set of possible agenda items. We also compare our model's performance on their data.",
"cite_spans": [
{
"start": 137,
"end": 154,
"text": "Wen et al. (2015)",
"ref_id": "BIBREF29"
},
{
"start": 159,
"end": 176,
"text": "Wen et al. (2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Maintaining coherence and avoiding duplication have been recurring challenges when generating text using RNNs for other applications, including image captioning (Jia et al., 2015; Xu et al., 2015) and machine translation (Tu et al., 2016b; Tu et al., 2016a) . A variety of solutions have been developed to address infrequent or out-of-vocabulary words in particular (G\u00fcl\u00e7ehre et al., 2016; Jia and Liang, 2016) . Instead of directly copying input words or deterministically selecting output, our model can learn how to generate agenda item references (e.g., it might prefer to produce the word \"steaks\" when the original recipe ingredient was \"ribeyes\"). Finally, recent work in machine translation has introduced new training objectives to encourage attention to all input words (Luong et al., 2015) , but these models do not accumulate attention while decoding.",
"cite_spans": [
{
"start": 161,
"end": 179,
"text": "(Jia et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 180,
"end": 196,
"text": "Xu et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 221,
"end": 239,
"text": "(Tu et al., 2016b;",
"ref_id": "BIBREF27"
},
{
"start": 240,
"end": 257,
"text": "Tu et al., 2016a)",
"ref_id": "BIBREF26"
},
{
"start": 366,
"end": 389,
"text": "(G\u00fcl\u00e7ehre et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 390,
"end": 410,
"text": "Jia and Liang, 2016)",
"ref_id": "BIBREF11"
},
{
"start": 769,
"end": 789,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Generating recipes was an early task in planning (Hammond, 1986) and in referring expression generation research (Dale, 1988) . These can be seen as key steps in classic approaches to generating natural language text: a formal meaning representation is provided as input and the model first does content selection to determine the non-linguistic concepts to be conveyed by the output text (i.e., what to say) and then does realization to describe those concepts",
"cite_spans": [
{
"start": 49,
"end": 64,
"text": "(Hammond, 1986)",
"ref_id": "BIBREF10"
},
{
"start": 110,
"end": 122,
"text": "(Dale, 1988)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "in natural language text (i.e., how to say it) (Thompson, 1977; Reiter and Dale, 2000) . More recently, machine learning methods have focused on parts of this approach (Barzilay and Lapata, 2005; Liang et al., 2009) or the full two-stage approach (Angeli et al., 2010; Konstas and Lapata, 2013) . Most of these models handle shorter texts, although Mori et al. (2014) did consider longer cooking recipes. Our approach is a joint model that instead operates with textual input and tries to cover all of the content it is given. Fig. 2 shows a graphical representation of the neural checklist model. At a high level, our model uses a recurrent neural network (RNN) language model that encodes the goal as a bag-of-words and then generates output text token by token. It additionally stores a vector that acts as a soft checklist of what agenda items have been used so far during generation. This checklist is updated every time an agenda item reference is generated and is used to compute the available agenda items at each time step. The available items are used as an input to the language model and to constrain which agenda items can still be referenced during generation. Agenda embeddings are also used when generating item references.",
"cite_spans": [
{
"start": 47,
"end": 63,
"text": "(Thompson, 1977;",
"ref_id": "BIBREF25"
},
{
"start": 64,
"end": 86,
"text": "Reiter and Dale, 2000)",
"ref_id": "BIBREF20"
},
{
"start": 168,
"end": 195,
"text": "(Barzilay and Lapata, 2005;",
"ref_id": "BIBREF3"
},
{
"start": 196,
"end": 215,
"text": "Liang et al., 2009)",
"ref_id": "BIBREF15"
},
{
"start": 247,
"end": 268,
"text": "(Angeli et al., 2010;",
"ref_id": "BIBREF0"
},
{
"start": 269,
"end": 294,
"text": "Konstas and Lapata, 2013)",
"ref_id": "BIBREF14"
},
{
"start": 342,
"end": 360,
"text": "Mori et al. (2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 520,
"end": 526,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "We assume the goal g and agenda items E (see Sec.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input variable definitions",
"sec_num": "4.1"
},
{
"text": "2) are each defined by a set of tokens. Goal tokens come from a fixed vocabulary V goal , the item tokens come from a fixed vocabulary V agenda , and the tokens of the text x t come from a fixed vocabulary V text . In an abuse of notation, we represent each goal g, agenda item e i , and text token x t as a k-dimensional word embedding vector. We compute these embeddings by creating indicator vectors of the vocabulary token (or set of tokens for goals and agenda items) and embed those vectors using a trained k \u00d7 |V z | projection matrix, where z \u2208 {goal, agenda, text} depending whether we are generating a goal, agenda item, or text token. Given a goal embedding g \u2208 R k , a matrix of L agenda items E \u2208 R L\u00d7k , a checklist soft record of what items have been used a t\u22121 \u2208 R L , a previous hidden state h t\u22121 \u2208 R k , and the current input word embedding x t \u2208 R k , our architecture computes the next hidden state h t , an embedding used to generate the output word o t , and the updated checklist a t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input variable definitions",
"sec_num": "4.1"
},
{
"text": "To generate the output token probability distribution (see \"Generate output\" box in Fig. 2 ), w t \u2208 R |Vtext| , we project the output hidden state o t into the vocabulary space and apply a softmax:",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 90,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": "w t = softmax(W o o t ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": "where W o \u2208 R |Vtext|\u00d7k is a trained projection matrix. The output hidden state is the linear interpolation of (1) content c gru t from a Gated Recurrent Unit (GRU) language model, (2) an encoding c new t generated from the new agenda item reference model (Sec. 4.3), and (3) an encoding c used t generated from the previously used item model (Sec. 4.4):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": "o t = f gru t c gru t + f new t c new t + f used t c used t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": "The interpolation weights,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": "f gru t , f new t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": ", and f used t , are probabilities representing how much the output token should reflect the current state of the language model or a chosen agenda item. f gru t is the probability of a non-agenda-item token, f new t is the probability of a new item reference token, and f used t is the probability of a used item reference. In the Fig. 1 example, f new t is high in the first row when new ingredient references \"tomatoes\" and \"onion\" are generated; f used t is high when the reference back to \"tomatoes\" is made in the second row, and f gru t is high the rest of the time.",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 339,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": "To generate these weights, our model uses a three-way probabilistic classifier, ref-type(h t ), to determine whether the hidden state of the GRU h t will generate non-agenda tokens, new agenda item references, or used item references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": "ref-type(h t ) generates a probability distribution f t \u2208 R 3 as f t = ref-type(h t ) = softmax(\u03b2Sh t ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": "where S \u2208 R 3\u00d7k is a trained projection matrix and \u03b2 is a temperature hyper-parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": "f gru t = f 1 t , f new t = f 2 t , and f used t = f 3 t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": "ref-type() does not use the agenda, only the hidden state h t : h t must encode when to use the agenda, and ref-type() is trained to identify that in h t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating output token probabilities",
"sec_num": "4.2"
},
{
"text": "The two key features of our model are that it (1) predicts which agenda item is being referred to, if any, at each time step and (2) stores those predictions for use during generation. These components allow for improved output texts that are more likely to mention agenda items while avoiding repetition and references to irrelevant items not in the agenda.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New agenda item reference model",
"sec_num": "4.3"
},
{
"text": "These features are enabled by a checklist vector a t \u2208 R L that represents the probability each agenda item has been introduced into the text. The checklist vector is initialized to all zeros at t = 1, representing that all items have yet to be introduced. The checklist vector is a soft record with each a t,i \u2208 [0, 1]. 1 We introduce the remaining items as a matrix E new t \u2208 R L\u00d7k , where each row is an agenda item embedding weighted by how likely it is to still need to be referenced. For example, in Fig. 1 , after the first \"tomatoes\" is generated, the row representing \"chopped tomatoes\" in the agenda will be weighted close to 0. We calculate E new t using the checklist vector (see \"Update [...] items\" box in Fig. 2 ):",
"cite_spans": [
{
"start": 321,
"end": 322,
"text": "1",
"ref_id": null
},
{
"start": 700,
"end": 705,
"text": "[...]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 506,
"end": 512,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 720,
"end": 726,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "New agenda item reference model",
"sec_num": "4.3"
},
{
"text": "E new t = ((1 L \u2212 a t\u22121 ) \u2297 1 k ) \u2022 E,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New agenda item reference model",
"sec_num": "4.3"
},
{
"text": "where 1 L = {1} L , 1 k = {1} k , and the outer product \u2297 replicates 1 L \u2212 a t\u22121 for each dimension of the embedding space. \u2022 is the Hadamard product (i.e., element-wise multiplication) of two matrices with the same dimensions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New agenda item reference model",
"sec_num": "4.3"
},
{
"text": "The model predicts when an agenda item will be generated using ref -type() (see Sec. 4.2 for details). When it does, the encoding c new t approximates which agenda item is most likely. c new t is computed using an attention model that generates a learned soft alignment \u03b1 new t \u2208 R L between the hidden state h t and the rows of E new t (i.e., available items). The alignment is a probability distribution representing how close h t is to each item:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New agenda item reference model",
"sec_num": "4.3"
},
{
"text": "\u03b1 new t \u221d exp(\u03b3E new t P h t ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New agenda item reference model",
"sec_num": "4.3"
},
{
"text": "where P \u2208 R k\u00d7k is a learned projection matrix and \u03b3 is a temperature hyper-parameter. In Fig. 1 , the shaded squares in the top line (i.e., the first \"tomatoes\" and the onion references) represent this alignment. The attention encoding c new t is then the attention-weighted sum of the agenda items:",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 96,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "New agenda item reference model",
"sec_num": "4.3"
},
{
"text": "c new t = E T \u03b1 new t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New agenda item reference model",
"sec_num": "4.3"
},
{
"text": "At each step, the model updates the checklist vector based on the probability of generating a new agenda item reference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New agenda item reference model",
"sec_num": "4.3"
},
{
"text": "We also allow references to be generated for previously used agenda items through the previously used item encoding c used t . This is useful in longer texts -when agenda items can be referred to more than once -so that the agenda is always responsible for generating its own referring expressions. The example in Fig. 1 refers back to the tomatoes when describing what the diced onion should be added to.",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 320,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Previously used item reference model",
"sec_num": "4.4"
},
{
"text": "At each time step t, we use a second attention model to compare h t to a used items matrix E used t \u2208 R L\u00d7k . Like the remaining agenda item matrix E new t , E used t is calculated using the checklist vector generated at the previous time step:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previously used item reference model",
"sec_num": "4.4"
},
{
"text": "E used t = (a t\u22121 \u2297 1 k ) \u2022 E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previously used item reference model",
"sec_num": "4.4"
},
{
"text": "The attention over the used items, \u03b1 used t \u2208 R L , and the used attention encoding c used t are calculated in the same way as those over the available items (see Sec. 4.3 for comparison):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previously used item reference model",
"sec_num": "4.4"
},
{
"text": "\u03b1 used t \u221d exp(\u03b3E used t P h t ), c used t = E T \u03b1 used t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previously used item reference model",
"sec_num": "4.4"
},
{
"text": "Our decoder RNN adapts a Gated Recurrent Unit (GRU). Given an input x t \u2208 R k at time step t and the previous hidden state h t\u22121 \u2208 R k , a GRU computes the next hidden state h t as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "h t = (1 \u2212 z t ) h t\u22121 + z t h t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "The update gate, z t , interpolates between h t\u22121 and new content, h t , defined respectively as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "z t = \u03c3(W z x t + U z h t\u22121 ), h t = tanh(W x t + r t U h t\u22121 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "Multiplication by r t is element-wise, and the reset gate, r t , is calculated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "r t = \u03c3(W r x t + U r h t\u22121 ). W z , U z , W , U , W r , U r \u2208 R k\u00d7k are trained projec- tion matrices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "We adapted a GRU to allow extra inputs, namely the goal g and the available agenda items E new t (see \"GRU language model\" box in Fig. 2 ). These extra inputs help the language model stay on topic. Our adapted GRU changes the computation of the new content h t as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 136,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "h t = tanh(W h x t + r t U h h t\u22121 + s t Y g + q t (1 T L ZE new t ) T ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "where s t is a goal select gate and q t is an item select gate, respectively defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "s t = \u03c3(W s x t + U s h t\u22121 ), q t = \u03c3(W q x t + U q h t\u22121 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "Multiplying by 1 T L sums the rows of the available item matrix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "E new t . Y , Z, W s , U s , W q , U q \u2208 R k\u00d7k are trained projection matrices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "The goal select gate controls when the goal should be taken into account during generation: for example, the recipe title may be used to decide what the imperative verb for a new step should be. The item select gate controls when the available agenda items should be taken into account (e.g., when generating a list of ingredients to combine). The GRU hidden state is initialized with a projection of the goal:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "h 0 = U g g, where U g \u2208 R k\u00d7k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "The content vector c gru t that is used to compute the output hidden state o t is a linear projection of the GRU hidden state, c gru t = P h t , where P is the same learned projection matrix used in the computation of the attention weights (see Sections 4.3 and 4.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU language model",
"sec_num": "4.5"
},
{
"text": "Given a training set of (goal, agenda, output text) triples {(g (1) , E (1) , x (1) ), . . . , (g (J) , E (J) , x (J) )}, we train model parameters by minimizing negative log-likelihood:",
"cite_spans": [
{
"start": 64,
"end": 67,
"text": "(1)",
"ref_id": null
},
{
"start": 98,
"end": 101,
"text": "(J)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.6"
},
{
"text": "NLL(\u03b8) = \u2212 \u2211 J j=1 \u2211 N j i=2 log p(x (j) i |x (j) 1 , . . . , x (j) i\u22121 , g (j) , E (j) ; \u03b8), where x (j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.6"
},
{
"text": "1 is the start symbol. We use mini-batch stochastic gradient descent, and back-propagate through the goal, agenda, and text embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.6"
},
{
"text": "It is sometimes the case that weak heuristic supervision on latent variables can be easily gathered to improve training. For example, for recipe generation, we can approximate the linear interpolation weights f t and the attention updates a new t and a used t using string match heuristics comparing tokens in the text to tokens in the ingredient list. 2 When this extra signal is available, we add mean squared loss terms to NLL(\u03b8) to encourage the latent variables to take those values; for example, if f * t is the true value and f t is the predicted value, a loss term (f * t \u2212 f t ) 2 is added. When this signal is not available, as is the case with our dialogue generation task, we instead introduce a mean squared loss term that encourages the final checklist a (j) N j to be a vector of 1s (i.e., every agenda item is accounted for).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.6"
},
{
"text": "We generate text using beam search, which has been shown to be fast and accurate for RNN decoding (Graves, 2012; Sutskever et al., 2014) . When the beam search completes, we select the highest probability sequence that uses the most agenda items. This is the count of how many times the three-way classifier, ref-type(h t ), chose to generate a new item reference with high probability (i.e., > 50%).",
"cite_spans": [
{
"start": 98,
"end": 112,
"text": "(Graves, 2012;",
"ref_id": "BIBREF8"
},
{
"start": 113,
"end": 136,
"text": "Sutskever et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "4.7"
},
{
"text": "Our model was implemented and trained using the Torch scientific computing framework for Lua. 3 We evaluated neural checklist models on two natural language generation tasks. The first task is cooking recipe generation. Given a recipe title (i.e., the name of the dish) as the goal and the list of ingredients as the agenda, the system must generate the correct recipe text. Our second evaluation is based on the task from Wen et al. (2015) for generating dialogue responses for hotel and restaurant information systems. The task is to generate a natural language response given a query type (e.g., informing or querying) and a list of facts to convey (e.g., a hotel's name and address).",
"cite_spans": [
{
"start": 89,
"end": 95,
"text": "Lua. 3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "Parameters We constrain the gradient norm to 5.0 and initialize parameters uniformly on [\u22120.35, 0.35] . We used a beam of size 10 for generation. Based on dev set performance, a learning rate of 0.1 was chosen, and the temperature hyperparameters (\u03b2, \u03b3) were (5, 2) for the recipe task and (1, 10) for the dialogue task. The models for the recipe task had a hidden state size of k = 256; the 3 http://torch.ch/ models for the dialogue task had k = 80 to compare to previous models. We use a batch size 30 for the recipe task and 10 for the dialogue task.",
"cite_spans": [
{
"start": 88,
"end": 101,
"text": "[\u22120.35, 0.35]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "Recipe data and pre-processing We use the Now You're Cooking! recipe library: the data set contains over 150,000 recipes in the Meal-Master TM format. 4 We heuristically removed sentences that were not recipe steps (e.g., author notes, nutritional information, publication information). 82,590 recipes were used for training, and 1,000 each for development and testing. We filtered out recipes to avoid exact duplicates between training and dev (test) sets.",
"cite_spans": [
{
"start": 151,
"end": 152,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "We collapsed multi-word ingredient names into single tokens using word2phrase 5 ran on the training data ingredient lists. Titles and ingredients were cleaned of non-word tokens. Ingredients additionally were stripped of amounts (e.g., \"1 tsp\"). As mentioned in Sec. 4.6, we approximate true values for the interpolation weights and attention updates for recipes based on string match between the recipe text and the ingredient list. The first ingredient reference in a sentence cannot be the first token or after a comma (e.g., the bold tokens cannot be ingredients in \"oil the pan\" and \"in a large bowl, mix [...]\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
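The string-match heuristic with the positional constraints above can be sketched as follows. This is a simplified, hypothetical rendering (exact-token matching only), not the paper's actual heuristic code; it rejects a would-be first ingredient reference when it is sentence-initial or directly follows a comma.

```python
def heuristic_ingredient_tokens(sentence_tokens, ingredient_tokens):
    """Heuristically mark which tokens refer to ingredients.

    A token matches if it appears in the ingredient list, but the first
    match in a sentence is rejected when it is the sentence-initial
    token or directly follows a comma (e.g., the verb in "oil the pan"
    or "mix" in "in a large bowl, mix ...").
    """
    ingredients = set(ingredient_tokens)
    labels = [False] * len(sentence_tokens)
    seen_first = False
    for i, tok in enumerate(sentence_tokens):
        if tok not in ingredients:
            continue
        if not seen_first and (i == 0 or sentence_tokens[i - 1] == ","):
            continue  # cannot be the sentence's first ingredient reference
        labels[i] = True
        seen_first = True
    return labels
```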
{
"text": "Recipe data statistics Automatic recipe generation is difficult due to the length of recipes, the size of the vocabulary, and the variety of possible dishes. In our training data, the average recipe length is 102 tokens, and the longest recipe has 814 tokens. The vocabulary of the recipe text from the training data (i.e., the text of the recipe not including the title or ingredient list) has 14,103 unique tokens. About 31% of tokens in the recipe vocabulary occur at least 100 times in the training data; 8.6% of the tokens occur at least 1000 times. The training data also represents a wide variety of recipe types, defined by the recipe titles. Of 3793 title tokens, only 18.9% of the title tokens in the title vocabulary occur at least 100 times in the training data, which demonstrates the large variability in the titles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
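The vocabulary-coverage statistics reported above (e.g., ~31% of recipe-text types occurring at least 100 times) can be computed with a short sketch like this; the function name is illustrative.

```python
from collections import Counter

def vocab_coverage(tokens, threshold):
    """Fraction of vocabulary types occurring at least `threshold`
    times in `tokens`, as in the corpus statistics reported above."""
    counts = Counter(tokens)
    frequent = sum(1 for c in counts.values() if c >= threshold)
    return frequent / len(counts)
```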
{
"text": "Dialogue system data and processing We used the hotel and restaurant dialogue system corpus and the same train-development-test split from Wen et al. (2015) . We used the same pre-processing, sets of reference samples, and baseline output, and we were given model output to compare against. 6 For training, slot values (e.g., \"Red Door Cafe\") were replaced by generic tokens (e.g., \"NAME TOKEN\"). After generation, generic tokens were swapped back to specific slot values. Minor post-processing included removing duplicate determiners from the relexicalization and merging plural \"-s\" tokens onto their respective words. After replacing specific slot values with generic tokens, the training data vocabulary size of the hotel corpus is 445 tokens, and that of the restaurant corpus is 365 tokens. The task has eight goals (e.g., inform, confirm).",
"cite_spans": [
{
"start": 150,
"end": 156,
"text": "(2015)",
"ref_id": null
},
{
"start": 291,
"end": 292,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
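The delexicalization step described above, replacing slot values with generic tokens for training and swapping them back after generation, can be sketched as below. The placeholder naming scheme ("NAME_TOKEN" etc.) is an assumption for illustration and not necessarily the corpus's exact token inventory.

```python
def delexicalize(text, slots):
    """Replace slot values with generic placeholders before training,
    e.g. "Red Door Cafe" -> "NAME_TOKEN"."""
    for slot, value in slots.items():
        text = text.replace(value, slot.upper() + "_TOKEN")
    return text

def relexicalize(text, slots):
    """Swap generic placeholders back to specific slot values after
    generation."""
    for slot, value in slots.items():
        text = text.replace(slot.upper() + "_TOKEN", value)
    return text
```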
{
"text": "Models Our main baseline EncDec is a model using the RNN Encoder-Decoder framework proposed by and Sutskever et al. (2014) . The model encodes the goal and then each agenda item in sequence and then decodes the text using GRUs. The encoder has two sets of parameters: one for the goal and the other for the agenda items. For the dialogue task, we also compare against the SC-LSTM system from Wen et al. 2015and the handcrafted rule-based generator described in that paper.",
"cite_spans": [
{
"start": 99,
"end": 122,
"text": "Sutskever et al. (2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "For the recipe task, we also compare against three other baselines. The first is a basic attention model, Attention, that generates an attention encoding by comparing the hidden state h t to the agenda. That encoding is added to the hidden state, and a nonlinear transformation is applied to the result before projecting into the output space. We also present a nearest neighbor baseline (NN) that simply copies over an existing recipe text based on the input similarity computed using cosine similarity over the title and the ingredient list. Finally, we present a hybrid approach (NN-Swap) that revises a nearest neighbor recipe using the neural checklist model. The neural checklist model is forced to generate the returned recipe nearly verbatim, except that it can generate new strings to replace any extraneous ingredients.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "Our neural checklist model is labeled Checklist. We also present the Checklist+ model, which interactively re-writes a recipe to better cover the input agenda: if the generated text does not use every agenda item, embeddings corresponding to missing items are multiplied by increasing weights and a new recipe is generated. This process repeats until the 6 We thank the authors for sharing their system outputs. Table 1 : Quantitative results on the recipe task. The line with ot = ht has the results for the non-interpolation ablation.",
"cite_spans": [
{
"start": 355,
"end": 356,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 412,
"end": 419,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "new recipe does not contain new items. We also report the performance of our checklist model without the additional weak supervision of heuristic ingredient references (-no supervision) (see Sec. 4.6). 7 we also evaluate two ablations of our checklist model on the recipe task. First, we remove the linear interpolation and instead use h t as the output (see Sec. 4.2). Second, we remove the previously used item reference model by changing ref -type() to a 2-way classifier between new ingredient references and all other tokens (see Sec. 4.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "We include commonly used metrics like BLEU-4, 8 and METEOR (Denkowski and Lavie, 2014) . Because neither of these metrics can measure how well the generated recipe follows the input goal and the agenda, we also define two additional metrics. The first measures the percentage of the agenda items corrected used, while the second measures the number of extraneous items incorrectly introduced. Both these metrics are computed based on simple string match and can miss certain referring expressions (e.g., \"meat\" to refer to \"pork\"). Because of the approximate nature of these automated metrics, we also report a human evaluation.",
"cite_spans": [
{
"start": 59,
"end": 86,
"text": "(Denkowski and Lavie, 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": null
},
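The two string-match metrics can be sketched as follows. This is a rough illustration with hypothetical names, using simple substring matching, which, as noted above, misses referring expressions like "meat" for "pork".

```python
def agenda_metrics(generated_text, agenda, all_items):
    """String-match agenda metrics (rough sketch).

    Returns (percentage of agenda items mentioned in the text, number
    of known items mentioned that are NOT on the agenda).  Substring
    matching only, so paraphrased references are missed.
    """
    used = sum(1 for item in agenda if item in generated_text)
    extraneous = sum(
        1 for item in all_items
        if item not in agenda and item in generated_text
    )
    coverage = 100.0 * used / len(agenda) if agenda else 0.0
    return coverage, extraneous
```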
{
"text": "6 Recipe generation results Fig. 1 results for recipe generation. All BLEU and METEOR scores are low, which is expected for long texts. Our checklist model performs better than both neural network baselines (Attention and EncDec) in all metrics. Nearest neighbor baselines (NN and NN-Swap) Figure 3 : Counts of the most used vocabulary tokens (sorted by count) in the true dev set recipes and in generated recipes.",
"cite_spans": [
{
"start": 273,
"end": 289,
"text": "(NN and NN-Swap)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 28,
"end": 34,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 290,
"end": 298,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Metrics",
"sec_num": null
},
{
"text": "METEOR; this is due to a number of recipes that have very similar text but make different dishes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": null
},
{
"text": "However, NN baselines are not successful in generating a goal-oriented text that follows the given agenda: compared to Checklist+ (83.4%), they use substantially less % of the given ingredients (40% -58.2%) while also introducing extra ingredients not provided. EncDec and Attention baselines similarly generate recipes that are not relevant to the given input, using only 22.8% -26.9% of the agenda items. Checklist models rarely introduce extraneous ingredients not provided (0.6 -0.8), while other baselines make a few mistakes on average (2.0 -4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": null
},
{
"text": "The ablation study demonstrates the empirical contribution of different model components. (o t = h t ) shows the usefulness of the attention encodings when generating the agenda references, while (-no used) shows the need for separate attention mechanisms between new and used ingredient references for more accurate use of the agenda items. Similarly, (-no supervision) demonstrates that the weak supervision encourages the model to learn more accurate management of the agenda items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": null
},
{
"text": "Human evaluation Because neither BLEU nor METEOR is suitable for evaluating generated text in terms of their adherence to the provided goal and the agenda, we also report human evaluation using Amazon Mechanical Turk. We evaluate the generated recipes on (1) grammaticality, (2) how well the recipe adheres to the provided ingredient list, and (3) how well the generated recipe accomplishes the desired dish. We selected 100 random test recipes. For each question we used a Likert scale (\u2208 [1, 5] ) and report averaged ratings among five turkers. Table 2 shows the averaged scores over the responses. The checklist models outperform all baselines in generating recipes that follow the provided agenda closely and accomplish the desired goal, where NN in particular often generates the wrong dish. Perhaps surprisingly, both the Attention and EncDec baselines and the Checklist model beat the true recipes in terms of having better grammar. This can partly be attributed to noise in the parsing of the true recipes, and partly because the neural models tend to generate shorter, simpler texts. Fig. 3 shows the counts of the most used vocabulary tokens in the true dev set recipes compared to the recipes generated by EncDec and Checklist+. Using the vocabulary from the training data, the true dev recipes use 5206 different tokens. The EncDec's vocabulary is only \u223c16% of that size, while the Checklist+ model is a third of the size.",
"cite_spans": [
{
"start": 490,
"end": 493,
"text": "[1,",
"ref_id": null
},
{
"start": 494,
"end": 496,
"text": "5]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 547,
"end": 554,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1093,
"end": 1099,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Metrics",
"sec_num": null
},
{
"text": "An error analysis on the dev set shows that the EncDec baseline over-generates catch-all phrases like \"all ingredients\" or \"the ingredients,\" used in 21% of the generated recipes, whereas only 7.8% of true recipes use that construction. This phrase type simplifies the recipe, but using all ingredients in one step reduces the chance of accomplishing the desired dish correctly. The Checklist model only generates those phrases 13% of the time. Fig. 4 shows two dev set recipes with generations from the EncDec and Checklist+ models. The EncDec model is much more likely to both use incorrect ingredients and to introduce ingredients more than once (e.g., \"baking power\" and \"salt\" in the bottom example are not in the ingredient list, and \"milk\" in the top example is duplicated). In the top example, the Checklist+ model refers to both Parmesean and Swiss cheese as \"cheese\"; generating more precise referring expressions is an important area for future work. The Checklist+ recipes generate the correct dishes to an extent: for example, the top recipe makes a casserole but does not cook the ingredients together before baking and mixes in biscuits instead of putting Title: Casino royale casserole Ingredients: 10 oz chopped broccoli, 2 tbsp butter, 2 tbsp flour, 1/2 tsp salt, 1/4 tsp black pepper, 1/4 tsp ground nutmeg, 1 cup milk, 1 1/2 cup shredded swiss cheese, 2 tsp lemon juice, 2 cup cooked cubed turkey, 4 oz mushrooms, 1/4 cup grated Parmesan cheese, 1 can refrigerated biscuits Mix the turkey with the flour, salt, pepper, milk and lemon juice. Stir in the cheese. Add broccoli, mushrooms, nutmeg, and cheese. Mix well. Add the butter and mix well. Add the biscuits and mix well. Spread in a greased 13x9 inch baking pan. Bake in a 350f oven for 35 to 40 minutes, or until pick inserted in the center comes out clean. Serve hot.",
"cite_spans": [],
"ref_spans": [
{
"start": 445,
"end": 451,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Metrics",
"sec_num": null
},
{
"text": "Title: Kathie Frieman's cheese cake",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative analysis",
"sec_num": null
},
{
"text": "Ingredients: 2 lb cream cheese, 1 3/4 cups sugar, 5 eggs, 15 oreo cookies, 1/2 tsp vanilla, 1/2 tsp almond extract",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative analysis",
"sec_num": null
},
{
"text": "Butter pan well. Whip cheese until soft and add sugar, combine until fluffy. Add extracts and eggs. Chop oreos into pieces and fold into mixture. Pour into pan and place in a pan of 1 or more inches of water. Bake at 350 1 1/2 hours ( check after 1 1/4 hours ) until center is set. Cool in pan of water. When cool turn upside-down on plate. Sprinkle with cookie crumbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Truth EncDec Checklist+",
"sec_num": null
},
{
"text": "Beat cream cheese and sugar together until light and fluffy. Add eggs, one at a time, beating well after each addition. Blend in vanilla. Sift flour, measure, and sift with baking power and salt. Add to creamed mixture alternately with milk. Fold in stiffly beaten egg whites.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Truth EncDec Checklist+",
"sec_num": null
},
{
"text": "Pour into 9x13 pan. Bake at 350 degrees for 25 to 30 minutes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Truth EncDec Checklist+",
"sec_num": null
},
{
"text": "Cream sugar and cream cheese until smooth. Add eggs, one at a time, beating well after each addition. Add vanilla, almond extract, and cookies. Mix well. Pour into greased and floured tube pan. Bake at 350 degrees for 30 minutes. Figure 4 : Example dev set generated recipes. Tokenization, newlines, and capitalization changed for space and readability. Bolded ingredient references are either ingredients not in the list and/or duplicated initial ingredient references. Table 3 : Quantitative evaluation of the top generations in the hotel and restaurant domains them on top. Future work could better model the full set of steps needed to achieve the overall goal.",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 238,
"text": "Figure 4",
"ref_id": null
},
{
"start": 471,
"end": 478,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Truth EncDec Checklist+",
"sec_num": null
},
{
"text": "7 Dialogue system results Figure 3 shows our results on the hotel and restaurant dialogue system generation tasks. HDC is the rule-based baseline from Wen et al. (2015). For both domains, the checklist model achieved the highest BLEU-4 and METEOR scores, but both neural systems performed very well. The power of our model is in generating long texts, but this experiment shows that our model can generalize well to other tasks with different kinds of agenda items and goals.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Truth EncDec Checklist+",
"sec_num": null
},
{
"text": "We present the neural checklist model that generates globally coherent text by keeping track of what has been said and still needs to be said from a provided agenda. Future work includes incorporating referring expressions for sets or compositions of agenda items (e.g., \"vegetables\"). The neural checklist model is sensitive to hyperparameter initialization, which should be investigated in future work. The neural checklist model can also be adapted to handle multiple checklists, such as checklists over composite entities created over the course of a recipe (see Kiddon (2016) for an initial proposal).",
"cite_spans": [
{
"start": 567,
"end": 580,
"text": "Kiddon (2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future work and conclusions",
"sec_num": "8"
},
{
"text": "By definition, at is non-negative. We truncate any values greater than 1 using a hard tanh function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Recipes and format at http://www.ffts.com/recipes.htm 5 See https://code.google.com/p/word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For this model, parameters were initialized on [-0.2, 0.2] to maximize development accuracy. 8 See Moses system (http://www.statmt.org/moses/)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by the Intel Science and Technology Center for Pervasive Computing (ISTC-PC), NSF (IIS-1252835 and IIS-1524371), DARPA under the CwC program through the ARO (W911NF-15-1-0543), and gifts by Google and Facebook. We thank our anonymous reviewers for their comments and suggestions, as well as Yannis Konstas, Mike Lewis, Mark Yatskar, Antoine Bosselut, Luheng He, Eunsol Choi, Victoria Lin, Kenton Lee, and Nicholas FitzGerald for helping us read and edit. We also thank Mirella Lapata and Annie Louis for their suggestions for baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A simple domain-independent probabilistic approach to generation",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "502--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 502-512.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. In ICLR 2015.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generating coherent event schemas at scale",
"authors": [
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1721--1731",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niranjan Balasubramanian, Stephen Soderland, Mausam, and Oren Etzioni. 2013. Generating coherent event schemas at scale. In Proceedings of the 2013 Con- ference on Empirical Methods on Natural Language Processing, pages 1721-1731.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Collective content selection for concept-to-text generation",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "331--338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Mirella Lapata. 2005. Collec- tive content selection for concept-to-text generation. In Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing, pages 331- 338.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Long short-term memory-networks for machine reading",
"authors": [
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine read- ing. In Proceedings of the 2016 Conference on Empir- ical Methods in Natural Language Processing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Aglar G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7 aglar G\u00fcl\u00e7ehre, Fethi Bougares, Holger Schwenk, and Yoshua Ben- gio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine trans- lation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Generating Referring Expressions in a Domain of Objects and Processes",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Dale. 1988. Generating Referring Expressions in a Domain of Objects and Processes. Ph.D. the- sis, Centre for Cognitive Science, University of Ed- inburgh.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Meteor universal: Language specific translation evaluation for any target language",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the EACL 2014 Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "376--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor uni- versal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation, pages 376-380.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sequence transduction with recurrent neural networks. Representation Learning Worksop",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2012. Sequence transduction with recur- rent neural networks. Representation Learning Work- sop, ICML.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pointing the unknown words",
"authors": [
{
"first": "Sungjin",
"middle": [],
"last": "\u00c7 Aglar G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "140--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c7 aglar G\u00fcl\u00e7ehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguis- tics, pages 140-149.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "CHEF: A model of casebased planning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kristian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hammond",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86)",
"volume": "",
"issue": "",
"pages": "267--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristian J. Hammond. 1986. CHEF: A model of case- based planning. In Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86), pages 267-271.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Data recombination for neural semantic parsing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "12--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Jia and P. Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguis- tics, pages 12-22.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Guiding long-short term memory for image caption generation",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Efstratios",
"middle": [],
"last": "Gavves",
"suffix": ""
},
{
"first": "Basura",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Tinne",
"middle": [],
"last": "Tuytelaars",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "2407--2415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Jia, Efstratios Gavves, Basura Fernando, and Tinne Tuytelaars. 2015. Guiding long-short term memory for image caption generation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2407-2415.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning to Interpret and Generate Instructional Recipes",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Kiddon",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chlo\u00e9 Kiddon. 2016. Learning to Interpret and Generate Instructional Recipes. Ph.D. thesis, Computer Science & Engineering, University of Washington.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A global model for concept-to-text generation",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research (JAIR)",
"volume": "48",
"issue": "",
"pages": "305--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioannis Konstas and Mirella Lapata. 2013. A global model for concept-to-text generation. Journal of Ar- tificial Intelligence Research (JAIR), 48:305-346.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning semantic correspondences with less supervision",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "1",
"issue": "",
"pages": "91--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervi- sion. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th Interna- tional Joint Conference on Natural Language Process- ing of the AFNLP: Volume 1 -Volume 1, pages 91-99.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "What to talk about and how? Selective generation using lstms with coarse-to-fine alignment",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning. ; Hongyuan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"R"
],
"last": "Bansal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Walter",
"suffix": ""
}
],
"year": 2015,
"venue": "The 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "720--730",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention- based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, September. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? Selective genera- tion using lstms with coarse-to-fine alignment. In The 15th Annual Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 720-730.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1s",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of INTERSPEECH 2010, the 11th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "1045--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Luk\u00e1s Burget, Jan Cer- nock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neu- ral network based language model. In Proceedings of INTERSPEECH 2010, the 11th Annual Conference of the International Speech Communication Association, pages 1045-1048.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Extensions of recurrent neural network language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Kombrink",
"suffix": ""
},
{
"first": "Luk\u00e1s",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "5528--5531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Stefan Kombrink, Luk\u00e1s Burget, Jan Cernock\u00fd, and Sanjeev Khudanpur. 2011. Exten- sions of recurrent neural network language model. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, (ICASSP 2011), pages 5528-5531.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "FlowGraph2Text: Automatic sentence skeleton compilation for procedural text generation",
"authors": [
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "Hirokuni",
"middle": [],
"last": "Maeta",
"suffix": ""
},
{
"first": "Tetsuro",
"middle": [],
"last": "Sasada",
"suffix": ""
},
{
"first": "Koichiro",
"middle": [],
"last": "Yoshino",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Funatomi",
"suffix": ""
},
{
"first": "Yoko",
"middle": [],
"last": "Yamakata",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "118--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shinsuke Mori, Hirokuni Maeta, Tetsuro Sasada, Koichiro Yoshino, Atsushi Hashimoto, Takuya Fu- natomi, and Yoko Yamakata. 2014. FlowGraph2Text: Automatic sentence skeleton compilation for proce- dural text generation. In Proceedings of the 8th In- ternational Natural Language Generation Conference, pages 118-122.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Building Natural Language Generation Systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge University Press, New York, NY, USA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "379--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 379-389.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A neural network approach to context-sensitive generation of conversational responses",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Meg",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2015,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "196--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversa- tional responses. In Conference of the North American Chapter of the Association for Computational Linguis- tics Human Language Technologies (NAACL-HLT), pages 196-205.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Advances in Neural Information Processing Systems",
"authors": [
{
"first": "N",
"middle": [
"D"
],
"last": "Lawrence",
"suffix": ""
},
{
"first": "K",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "27",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Strategy and tactics: a model for language production",
"authors": [
{
"first": "Henry",
"middle": [
"S"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1977,
"venue": "Papers from the Thirteenth Regional Meeting of the Chicago Linguistics Society",
"volume": "",
"issue": "",
"pages": "89--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henry S. Thompson. 1977. Strategy and tactics: a model for language production. In Papers from the Thir- teenth Regional Meeting of the Chicago Linguistics Society, pages 89-95. Chicago Linguistics Society.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Context gates for neural machine translation",
"authors": [
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, and Hang Li. 2016a. Context gates for neural machine translation. CoRR, abs/1608.06043.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modeling coverage for neural machine translation",
"authors": [
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "76--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016b. Modeling coverage for neu- ral machine translation. In Proceedings of the 54th",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "76--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 76-85.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Semantically conditioned LSTM-based natural language generation for spoken dialogue systems",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Gasic",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrksic",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1711--1721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-hao Su, David Vandyke, and Steve J. Young. 2015. Se- mantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceed- ings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711-1721.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Multi-domain neural network language generation for spoken dialogue systems",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Gasic",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrksic",
"suffix": ""
},
{
"first": "Lina",
"middle": [
"Maria"
],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "120--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-Barahona, Pei-hao Su, David Vandyke, and Steve J. Young. 2016. Multi-domain neural network language generation for spoken dia- logue systems. In Proceedings of the 15th Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics: Human Lan- guage Technologies, pages 120-129.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2048--2057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual at- tention. Proceedings of the 32nd International Con- ference on Machine Learning, pages 2048-2057.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Example checklist recipe generation. A checklist (right dashed column) tracks which agenda items (top boxes;"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "A diagram of the neural checklist model. The bottom portion depicts how the model generates the output embedding ot."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Then, the new checklist a_t is a_t = a_{t-1} + a_t^{new}."
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Similar to a_t^{new}, a_t^{used} = f_t^{used} \u2022 \u03b1_t^{used}."
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>Diagram labels: hidden state projected into agenda space; available items; used items; probability of using new item; hidden state classifier.</td></tr></table>",
"num": null,
"type_str": "table",
"text": ""
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>Model</td><td>Syntax</td><td>Ingredient use</td><td>Follows goal</td></tr><tr><td>Attention</td><td>4.47</td><td>3.02</td><td>3.47</td></tr><tr><td>EncDec</td><td>4.58</td><td>3.29</td><td>3.61</td></tr><tr><td>NN</td><td>4.22</td><td>3.02</td><td>3.36</td></tr><tr><td>NN-Swap</td><td>4.11</td><td>3.51</td><td>3.78</td></tr><tr><td>Checklist</td><td>4.58</td><td>3.80</td><td>3.94</td></tr><tr><td>Checklist+</td><td>4.39</td><td>3.95</td><td>4.10</td></tr><tr><td>Truth</td><td>4.39</td><td>4.03</td><td>4.34</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Human evaluation results on the generated and true recipes. Scores range in [1, 5]."
},
"TABREF3": {
"html": null,
"content": "<table><tr><td>Plot: token counts in dev recipes (log scale, y-axis) over tokens sorted by count (x-axis); series: True recipes, EncDec, Checklist+.</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Token counts in dev recipes (log scale), sorted by count, for true recipes and the EncDec and Checklist+ generations."
}
}
}
}