{
"paper_id": "E17-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:52:08.437978Z"
},
"title": "May I take your order? A Neural Model for Extracting Structured Information from Conversations",
"authors": [
{
"first": "Baolin",
"middle": [],
"last": "Peng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": {
"settlement": "Shatin",
"region": "N.T. Hong Kong"
}
},
"email": "blpeng@se.cuhk.edu.hk"
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Seltzer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"addrLine": "One Microsoft Way",
"settlement": "Redmond",
"region": "WA",
"country": "USA"
}
},
"email": "mseltzer@microsoft.com"
},
{
"first": "Yun-Cheng",
"middle": [],
"last": "Ju",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"addrLine": "One Microsoft Way",
"settlement": "Redmond",
"region": "WA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"addrLine": "One Microsoft Way",
"settlement": "Redmond",
"region": "WA",
"country": "USA"
}
},
"email": "gzweig@microsoft.com"
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": {
"settlement": "Shatin",
"region": "N.T. Hong Kong"
}
},
"email": "kfwong@se.cuhk.edu.hk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we tackle a unique and important problem of extracting a structured order from the conversation a customer has with an order taker at a restaurant. This is motivated by an actual system under development to assist in the order taking process. We develop a sequence-to-sequence model that is able to map from unstructured conversational input to the structured form that is conveyed to the kitchen and appears on the customer receipt. This problem is critically different from other tasks like machine translation where sequence-to-sequence models have been used: the input includes two sides of a conversation; the output is highly structured; and logical manipulations must be performed, for example when the customer changes his mind while ordering. We present a novel sequence-to-sequence model that incorporates a special attention-memory gating mechanism and conversational role markers. The proposed model improves performance over both a phrase-based machine translation approach and a standard sequence-to-sequence model.",
"pdf_parse": {
"paper_id": "E17-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we tackle a unique and important problem of extracting a structured order from the conversation a customer has with an order taker at a restaurant. This is motivated by an actual system under development to assist in the order taking process. We develop a sequence-to-sequence model that is able to map from unstructured conversational input to the structured form that is conveyed to the kitchen and appears on the customer receipt. This problem is critically different from other tasks like machine translation where sequence-to-sequence models have been used: the input includes two sides of a conversation; the output is highly structured; and logical manipulations must be performed, for example when the customer changes his mind while ordering. We present a novel sequence-to-sequence model that incorporates a special attention-memory gating mechanism and conversational role markers. The proposed model improves performance over both a phrase-based machine translation approach and a standard sequence-to-sequence model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Extracting structured information from unstructured text is a critically important problem in natural language processing. In this paper, we attack a deceptively simple form of the problem: understanding what a customer wants when ordering at a restaurant. In this problem, we seek to convert the conversation between the customer and the order taker, i.e. the waiter or waitress, into the structured form that is conveyed to the kitchen to prepare the food, and which appears on the customer receipt. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Item | Size | Qty | Modifiers\nPizza | large | 1 | add pepperoni\nCaesar Salad | small | 1 | side dressing\nDiet Coke | medium | 3 |\nTable 1 : An example of the structured data record corresponding to the conversation in Figure 1 . We develop this system to analyze real-time interactions with the aim of discovering errors in the order-entry process. Note that the objective is to analyze the interaction and suggest corrections to the human order-taker. Thus, we take both sides of the order-taking interaction as input, and are not attempting to predict the order-taker's side of the conversation.",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 108,
"text": "Table 1",
"ref_id": null
},
{
"start": 189,
"end": 197,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Item",
"sec_num": null
},
{
"text": "While we focus on the restaurant domain in this work, this problem is relevant in any scenario in which a conversation results in the creation of structured information. Other examples include a sales interaction which results in a purchase order, a call to a help desk which results in a service record, or a conversation with a travel agent that results in an itinerary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Item",
"sec_num": null
},
{
"text": "An example of the problem of interest is shown in Figure 1 . The structured data record that corresponds to this conversation is shown in Table 1 . There are several things to note about this example:",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 58,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 138,
"end": 145,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Item",
"sec_num": null
},
{
"text": "\u2022 The output is a stylized and structured representation of the input",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Item",
"sec_num": null
},
{
"text": "\u2022 The items in the structured order may appear in a different sequence than they are mentioned",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Item",
"sec_num": null
},
{
"text": "\u2022 Inference occurs across turns, for example that \"medium\" applies to the coke and not the pizza whose size was earlier specified",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Item",
"sec_num": null
},
{
"text": "\u2022 Logical manipulations must be done, for example changing the number of cokes from two to three",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Item",
"sec_num": null
},
{
"text": "\u2022 In contrast to machine translation, we do not wish to create a verbatim \"translation\" of the input, but instead a logical distillation of it",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Item",
"sec_num": null
},
{
"text": "To attack this problem, we implemented two baselines and several sequence-to-sequence models. The first baseline is an information-retrieval approach based on a TF-IDF match (Salton et al., 1975) which finds the most similar conversation in the training data, and returns the associated order. The second uses phrase-based machine translation (Koehn et al., 2003) to \"translate\" from the conversational input to the tokens in the structured order. We compare these to a sequence-to-sequence (s2s) model with attention (Chan et al., 2016; Devlin et al., 2015; Sutskever et al., 2014; Mei et al., 2016) , and then extend the s2s model with the addition of a gating mechanism on the attention memory and with an auxiliary input that indicates the conversational role of the speaker (customer or ordertaker). We show that it is in fact possible to extract the orders from conversations recorded at a real restaurant 1 , and achieve an F measure of over 70 from raw text and 65 from ASR transcriptions.",
"cite_spans": [
{
"start": 174,
"end": 195,
"text": "(Salton et al., 1975)",
"ref_id": "BIBREF26"
},
{
"start": 343,
"end": 363,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF15"
},
{
"start": 518,
"end": 537,
"text": "(Chan et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 538,
"end": 558,
"text": "Devlin et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 559,
"end": 582,
"text": "Sutskever et al., 2014;",
"ref_id": "BIBREF28"
},
{
"start": 583,
"end": 600,
"text": "Mei et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Item",
"sec_num": null
},
{
"text": "The precise problem setting in this paper is as follows. The training data consists of input/output pairs of examples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2"
},
{
"text": "(X 1 , Y 1 ), . . . , (X N , Y N ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2"
},
{
"text": "where X k is a conversation consisting of several utterances, similar to the example shown in Figure 1 , and Y k is the corresponding structured data record such as the one in Table 1 . Given a conversation X k , the goal of our model is to extract the structured data record Y k so that:",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 102,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 176,
"end": 183,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Y k = argmax Y log P (Y |X k )",
"eq_num": "(1)"
}
],
"section": "Problem Formulation",
"sec_num": "2"
},
{
"text": "We cast this task as a sequence modeling problem which aims to map the sequence of words in a conversation X k to the sequence of tokens in the corresponding structured data record Y k . The input sequence is formed by concatenating the utterances in the conversation, while the output sequence is formed by concatenating the rows in the structured data record. For example, the utterances in the conversation shown in Figure 1 are concatenated to predict the sequence y = Pizza, size=large, qty=1, modifiers=(add pepperoni) | Diet Coke, size=medium, qty=3 | Caesar Salad, size=small, qty=1, modifiers=(side dressing) which is derived from Table 1 . Under this sequential model, the conditional probability of the structured data record Y given the observed conversation X can be written as",
"cite_spans": [],
"ref_spans": [
{
"start": 419,
"end": 428,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 641,
"end": 648,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2"
},
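The linearization described above is straightforward to sketch; the dictionary field names and the `linearize` helper below are illustrative assumptions for exposition, not the authors' code:

```python
def linearize(order):
    """Flatten a structured order (list of item dicts) into the target
    token string used as the decoder's output sequence, with items
    separated by ' | ' as in the example in the text."""
    parts = []
    for item in order:
        fields = [item["name"], f"size={item['size']}", f"qty={item['qty']}"]
        if item.get("modifiers"):
            fields.append("modifiers=(" + " ".join(item["modifiers"]) + ")")
        parts.append(", ".join(fields))
    return " | ".join(parts)

order = [
    {"name": "Pizza", "size": "large", "qty": 1, "modifiers": ["add", "pepperoni"]},
    {"name": "Diet Coke", "size": "medium", "qty": 3, "modifiers": []},
    {"name": "Caesar Salad", "size": "small", "qty": 1, "modifiers": ["side", "dressing"]},
]
print(linearize(order))
# Pizza, size=large, qty=1, modifiers=(add pepperoni) | Diet Coke, size=medium, qty=3 | Caesar Salad, size=small, qty=1, modifiers=(side dressing)
```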
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (Y |X, \u03b8) = \u220f T t=1 P (y t |y 1:t\u22121 , X, \u03b8)",
"eq_num": "(2)"
}
],
"section": "Problem Formulation",
"sec_num": "2"
},
{
"text": "where y 1:t\u22121 denotes the first t \u2212 1 terms in the structured data record and \u03b8 represents the model parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "2"
},
{
"text": "The proposed model is based on an encoder-decoder architecture with attention, as shown in Figure 2 . The encoder network reads the input conversation X one word at a time and updates its hidden state h t according to the current input w t and the previous hidden state h t\u22121 ,",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 99,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = f e (w t , h t\u22121 ), t \u2208 {1, \u2022 \u2022 \u2022 , M }",
"eq_num": "(3)"
}
],
"section": "Model",
"sec_num": "3"
},
{
"text": "where f e is a nonlinear function which is elaborated in the following section. After reading all the tokens, the encoder network yields a context vector c as the representation of the entire conversation. The decoder then processes this representation and generates a hypothesized structured data record Y as an output sequence, word by word given the context vector c and all previous predicted tokens. The conditional probability can be expressed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y t |y 1 , \u2022 \u2022 \u2022 , y t\u22121 , X) = f d (y t\u22121 , s t , c)",
"eq_num": "(4)"
}
],
"section": "Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s t = g(y t\u22121 , s t\u22121 , c) t \u2208 {1, \u2022 \u2022 \u2022 , N }",
"eq_num": "(5)"
}
],
"section": "Model",
"sec_num": "3"
},
{
"text": "where f d and g are nonlinear functions and s t is the hidden state of the decoder at time t. Critically, our decoder also utilizes an attention mechanism, which stores the intermediate encoder representations of each input word for use by the decoder. Two improvements to the conventional encoder-decoder model architecture are proposed in this work. First, we incorporate gates controlled by the encoder into the neural attention memory to adaptively modulate the representations in the memory based on their semantic importance. Second, we propose a way to incorporate conversational role information into the model to reflect the fact that different participants in a multi-party interaction have different roles and the meaning of certain utterances may be dependent on the speaker's role.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "A detailed illustration of the proposed model is shown in Figure 3 . We elaborate on each component of this model in the following sections.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 66,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The encoder network is designed to generate a semantically meaningful representation of unstructured conversations. Several neural network architectures have been proposed for this purpose, including CNNs (Kalchbrenner et al., 2014; Hu et al., 2014) , RNNs (Sutskever et al., 2014) and LSTMs (Hochreiter and Schmidhuber, 1997) . In this work, we use an encoder constructed from a recurrent neural network with gated recurrent units (GRU) . The GRU has been shown to alleviate the vanishing gradient problem of RNNs, enabling the model to learn long-term dependencies in the input sequence. GRUs have been shown to perform comparably to LSTMs (Chung et al., 2014) .",
"cite_spans": [
{
"start": 205,
"end": 232,
"text": "(Kalchbrenner et al., 2014;",
"ref_id": "BIBREF13"
},
{
"start": 233,
"end": 249,
"text": "Hu et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 257,
"end": 281,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF28"
},
{
"start": 292,
"end": 326,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF10"
},
{
"start": 636,
"end": 656,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder Network",
"sec_num": "3.1"
},
{
"text": "At time t, the new state of a GRU is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder Network",
"sec_num": "3.1"
},
{
"text": "Figure 3: Graphical structure of the memory-gated encoder-decoder model with attention mechanism. w 1 represents the input; \u2212 \u2192 h 1 and \u2190 \u2212 h 1 are the hidden states of the forward and backward GRUs, respectively. g 1 , \u03b1 1 represent the context gates and attention weights, respectively. The small dot node denotes element-wise product.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder Network",
"sec_num": "3.1"
},
{
"text": "z t = \u03c3(W z x t + U z h t\u22121 + b z ) (6) r t = \u03c3(W r x t + U r h t\u22121 + b r ) (7) \u0125 t = tanh(W h x t + U h (r t \u2299 h t\u22121 )) (8) h t = (1 \u2212 z t ) \u2299 h t\u22121 + z t \u2299 \u0125 t (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder Network",
"sec_num": "3.1"
},
{
"text": "where \u2299 stands for element-wise multiplication. W and U are weight matrices applied to the input and the previous hidden state, respectively. h t is a linear combination of the previous state h t\u22121 and the hypothesis state \u0125 t , which is computed from the new input. The update gate, z t , controls to what extent past information is kept and how much new information is added. The reset gate, r t , controls to what extent the history state contributes to the hypothesis state. If r t is zero, the GRU ignores all the history information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder Network",
"sec_num": "3.1"
},
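Equations (6)-(9) can be sketched directly in NumPy. The shapes, random initialization, and parameter names below are illustrative assumptions, not the paper's configuration (which uses 600 hidden units and 300-dimensional embeddings):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    """One GRU update following equations (6)-(9); p holds the weights."""
    z = sigmoid(p["Wz"] @ x_t + p["Uz"] @ h_prev + p["bz"])   # update gate, eq. (6)
    r = sigmoid(p["Wr"] @ x_t + p["Ur"] @ h_prev + p["br"])   # reset gate, eq. (7)
    h_hat = np.tanh(p["Wh"] @ x_t + p["Uh"] @ (r * h_prev))   # hypothesis state, eq. (8)
    return (1.0 - z) * h_prev + z * h_hat                     # interpolation, eq. (9)

rng = np.random.default_rng(0)
hidden, inp = 4, 3   # toy sizes for illustration only
p = {name: rng.normal(scale=0.1, size=shape) for name, shape in [
    ("Wz", (hidden, inp)), ("Uz", (hidden, hidden)), ("bz", (hidden,)),
    ("Wr", (hidden, inp)), ("Ur", (hidden, hidden)), ("br", (hidden,)),
    ("Wh", (hidden, inp)), ("Uh", (hidden, hidden)),
]}
h = np.zeros(hidden)
for x in rng.normal(size=(5, inp)):   # run a short input sequence
    h = gru_step(x, h, p)
print(h.shape)  # (4,)
```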
{
"text": "The conversation encoding is obtained by concatenating the GRU hidden state vectors from the forward and backward directions. Thus the encoder operation can be summarized as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder Network",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x t = W e w t , t \u2208 [1, T ] (10) \u2212 \u2192 h t = \u2212 \u2212\u2212 \u2192 GRU (x t ), t \u2208 [1, T ] (11) \u2190 \u2212 h t = \u2190 \u2212\u2212 \u2212 GRU (x t ), t \u2208 [T, 1]",
"eq_num": "(12)"
}
],
"section": "Encoder Network",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h + t = \u2212 \u2192 h t \u2295 \u2190 \u2212 h t",
"eq_num": "(13)"
}
],
"section": "Encoder Network",
"sec_num": "3.1"
},
{
"text": "where w t is the one-hot input vector, W e is the embedding matrix, and x t is the word embedding for w t . The functions \u2212 \u2212\u2212 \u2192 GRU (x t ) and \u2190 \u2212\u2212 \u2212 GRU (x t ) represent the GRU operating in the forward and backward directions, respectively, with processing defined by equations 6-9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder Network",
"sec_num": "3.1"
},
{
"text": "This produces a sequence of context vectors h + t , which are subsequently consumed by the attention mechanism in the decoder. We use the final context vector h + T to initialize the hidden state of the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder Network",
"sec_num": "3.1"
},
{
"text": "In most sequence-to-sequence tasks such as machine translation, every word in the input is important. However, in our scenario, where the input to the system is conversational speech, not all the words in the conversation contribute to the prediction of structured data record. For example, it is reasonable to ignore the chit-chat that is present in many conversations. Further, in other tasks, gating mechanisms have been shown to be useful to dynamically select important information Hochreiter and Schmidhuber, 1997; Tu et al., 2016) .",
"cite_spans": [
{
"start": 487,
"end": 520,
"text": "Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF10"
},
{
"start": 521,
"end": 537,
"text": "Tu et al., 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Gate",
"sec_num": "3.2"
},
{
"text": "In light of this, we propose the use of an additional memory gate to select important information from the memory vector. The memory gate we use consists of a single-layer feed-forward neural network",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Gate",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g t = \u03c3(W g h + t + b g )",
"eq_num": "(14)"
}
],
"section": "Memory Gate",
"sec_num": "3.2"
},
{
"text": "where \u03c3 is a sigmoid activation function, W g and b g are the weight matrix and bias, respectively, and h + t is the context vector at time t defined in equation 13. The gate is then applied to the context vector h + t using an element-wise multiplication operation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Gate",
"sec_num": "3.2"
},
{
"text": "c t = g t h + t (15)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Gate",
"sec_num": "3.2"
},
{
"text": "After applying the memory gate, the gated context vector c t is fed into the attention memory of the decoder network in place of the original context vector h + t . Figure 4 illustrates an example of the gating weights for a sample utterance. The darker colors indicate values close to 1 while the lighter colors indicate values close to 0. As the figure shows, the network learns to suppress semantically unimportant words.",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 170,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Memory Gate",
"sec_num": "3.2"
},
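A minimal sketch of the memory gate of equations (14)-(15), applied to a whole matrix of context vectors at once; the shapes and random inputs below are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_memory(H, Wg, bg):
    """Gate a matrix of bidirectional context vectors.

    H:  (T, d) matrix whose rows are the context vectors h_t^+
    Wg: (d, d) gate weights; bg: (d,) bias
    Returns the gated context vectors c_t = g_t * h_t^+.
    """
    G = sigmoid(H @ Wg.T + bg)   # g_t = sigma(Wg h_t^+ + bg), eq. (14)
    return G * H                 # element-wise product, eq. (15)

rng = np.random.default_rng(1)
T, d = 6, 8   # toy sizes
H = rng.normal(size=(T, d))
C = gate_memory(H, rng.normal(scale=0.1, size=(d, d)), np.zeros(d))
print(C.shape)  # (6, 8)
```

Because each gate value lies in (0, 1), every component of a gated vector is attenuated relative to the original, which is how semantically unimportant words get suppressed.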
{
"text": "In many sequence-to-sequence models, there is no notion of different speakers with different roles. Inspired by the work in dialog generation (Li et al., 2016) and spoken language understanding (Hori et al., 2016) , we propose the addition of speaker information into the encoder network to explicitly model the interaction patterns of the customer and order-taker.",
"cite_spans": [
{
"start": 142,
"end": 159,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 194,
"end": 213,
"text": "(Hori et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Role Information",
"sec_num": "3.2.1"
},
{
"text": "Specifically we learn separate word and role embeddings, and concatenate them to form the input. The input to the encoder network becomes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Role Information",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x w t = W e w t , t \u2208 [1, T ]",
"eq_num": "(16)"
}
],
"section": "Role Information",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x r t = W r r t , t \u2208 [1, T ]",
"eq_num": "(17)"
}
],
"section": "Role Information",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x t = x w t \u2295 x r t , t \u2208 [1, T ]",
"eq_num": "(18)"
}
],
"section": "Role Information",
"sec_num": "3.2.1"
},
{
"text": "The decoder network is used to predict the next word given all the previously predicted words and the context vectors from the encoder network (Luong et al., 2015; . We use an RNN with GRU units to predict each word y t sequentially based on the previously predicted word y t\u22121 and the output of the attention process a t that computes a weighted combination of the context vectors in memory.",
"cite_spans": [
{
"start": 143,
"end": 163,
"text": "(Luong et al., 2015;",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Network",
"sec_num": "3.3"
},
{
"text": "If we define s t as the hidden layer of the decoder at time t, the decoder's operation can be expressed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Network",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s t = \u2212 \u2212\u2212 \u2192 GRU (y t\u22121 \u2295 a t )",
"eq_num": "(19)"
}
],
"section": "Decoder Network",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y t = softmax(W o s t + b o )",
"eq_num": "(20)"
}
],
"section": "Decoder Network",
"sec_num": "3.3"
},
{
"text": "where y t\u22121 \u2295 a t is the concatenation of the previously predicted output y t\u22121 and the output of the attention process a t , and \u2212 \u2212\u2212 \u2192 GRU (\u2022) is defined by equations 6-9, as before.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Network",
"sec_num": "3.3"
},
{
"text": "The attention vector a t is computed as a linear combination of the gated context vectors generated by the encoder network. This can be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Network",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a t = \u2211 M j=1 \u03b1 ij c j",
"eq_num": "(21)"
}
],
"section": "Decoder Network",
"sec_num": "3.3"
},
{
"text": "where the weights \u03b1 ij are computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Network",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 ij = exp(e ij ) / \u2211 N k=1 exp(e ik )",
"eq_num": "(22)"
}
],
"section": "Decoder Network",
"sec_num": "3.3"
},
{
"text": "A single-layer feed-forward neural network is used to compute e ij as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Network",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e ij = V T a tanh(W a s t\u22121 + U a c j )",
"eq_num": "(23)"
}
],
"section": "Decoder Network",
"sec_num": "3.3"
},
{
"text": "where V a , W a , and U a are weight matrices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder Network",
"sec_num": "3.3"
},
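The attention computation of equations (21)-(23) can be sketched as follows; the dimensions and random inputs are illustrative assumptions:

```python
import numpy as np

def attention(s_prev, C, Va, Wa, Ua):
    """Additive attention over gated context vectors.

    s_prev: (d,) previous decoder state s_{t-1}; C: (M, d) gated context
    vectors. Returns the attention vector a_t and the weights alpha.
    """
    e = np.tanh(s_prev @ Wa.T + C @ Ua.T) @ Va          # scores e_ij, eq. (23)
    alpha = np.exp(e - e.max()); alpha /= alpha.sum()   # softmax, eq. (22)
    return alpha @ C, alpha                             # weighted sum, eq. (21)

rng = np.random.default_rng(2)
M, d = 5, 8   # toy sizes
C = rng.normal(size=(M, d))
a_t, alpha = attention(rng.normal(size=d), C, rng.normal(size=d),
                       rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(a_t.shape)  # (8,)
```

Subtracting the maximum score before exponentiating is a standard numerical-stability trick and does not change the softmax result.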
{
"text": "The model is trained to maximize the log probability of the structured data records given the corresponding conversation,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 (Y k ,X k )\u2208D log P (Y k |X k )",
"eq_num": "(24)"
}
],
"section": "Model Training",
"sec_num": "3.4"
},
{
"text": "where D is the set containing all the training pairs and P (Y k |X k ) is computed with equation 2. The standard adadelta algorithm (Zeiler, 2012) is used for parameter updates. Gradients are clipped to 1 to avoid exponentially increasing values (Pascanu et al., 2013) .",
"cite_spans": [
{
"start": 246,
"end": 268,
"text": "(Pascanu et al., 2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.4"
},
{
"text": "In this section, we evaluate our proposed model on two data sets and compare performance with several baseline systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We conducted experiments on a corpus of conversations between a customer and an order taker (waiter or waitress) captured in a real restaurant environment. The conversations were manually transcribed by professional annotators. There are 4823 examples in the training set, 543 in the development (dev) set, and 843 in the test set. There are approximately 260 unique items in the record and 150 unique modifiers on these items, but not all modifiers apply to all items. We experimented with two versions of the dev and test sets. The first is manually transcribed in the same manner as the training set, while the second is generated by a speech recognition decoder that was trained on the conversations in the training set. We denote the second set as ASR-dev and ASR-test. Table 2 lists the statistics of the data sets. Note that the audio of a conversation was collected as a single file and then automatically segmented into turns for ASR decoding. This process was not perfect and likely introduced some errors. Thus, the average length and number of turns differ between the ASR transcriptions and the manual transcriptions. ",
"cite_spans": [],
"ref_spans": [
{
"start": 774,
"end": 781,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data sets",
"sec_num": "4.1"
},
{
"text": "All words are lower-cased and an unknown word token is used for words which appear less than four times in the training set. The word embedding matrix is initialized by randomly sampling from a normal distribution, and scaled by 0.01. The recurrent connections of the GRU are initialized with orthogonal matrices (Saxe et al., 2013) and biases are initialized to zero. A single layer GRU is used for both the encoder and decoder. The network has 600 hidden units and uses 300-dimensional word embeddings. The dropout rate is set to 0.5. We did not tune hyper-parameters except for the dimension of the role embedding, which is selected from {3, 5, 10} on the dev set. During inference, we use beam search decoding with a beam of 5 to generate the structured records. In order to decode without a length bias, the log probability of decoded results is normalized by the number of tokens.",
"cite_spans": [
{
"start": 313,
"end": 332,
"text": "(Saxe et al., 2013)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.2"
},
{
"text": "A typical metric to evaluate a generation system is BLEU score (Papineni et al., 2002) which uses ngram overlap to quantify the degree to which a hypothesis matches the reference. However, our scenario is more demanding: order items are either correct or incorrect. Therefore, we adopt precision and recall at the item level as our evaluation metric. Note that an item is defined as a row in the structured data record and typically includes multiple fields. Using Table 1 as an example, there are three items to be scored. Only when the model produces an item that is exactly the same as the reference item do we count it as correct. As an additional measure, we report accuracy of the entire order, in which every item in an order must be correct for the order to be counted as correct. ",
"cite_spans": [
{
"start": 63,
"end": 86,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 465,
"end": 472,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
{
"text": "We compare the performance of our neural model with baseline models that employ information retrieval (IR) and phrase-based machine translation (PBMT) approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "4.4"
},
{
"text": "The IR method treats the training set of transcriptions as a collection of documents, each mapped to a corresponding order. The test conversation is used as a query to find the most similar training-set conversation, and that conversation's order is returned as the estimated order. In our experiments, we use TF-IDF to compute the similarity score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR:",
"sec_num": null
},
{
"text": "PBMT: The goal of a phrase-based translation model is to map a conversation into its structured record using alignment and language models. In our experiments, we use the Moses decoder, a state-of-the-art phrase-based MT system available for research purposes. We use GIZA++ (Och and Ney, 2003) to learn word alignments and IRSTLM to learn the language model. The models are trained on the conversation/order pairs in the training set and used to predict the structured data record given a conversation.",
"cite_spans": [
{
"start": 274,
"end": 293,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IR:",
"sec_num": null
},
{
"text": "First we discuss the performance of our models on manually transcribed data and then examine the results on ASR-recognized data. Table 3 lists the experimental results on the manually transcribed dev and test sets. We refer to our model as the neural attention model (NAM). We see that the NAM is superior to both the IR and PBMT methods by a large margin. Both the proposed memory gate and role modifications yield improvements over the basic NAM. When combined, these produce the best performance in terms of accuracy on the dev set, and both F1 and accuracy on the test set. While there are only small differences in the scores among some of the NAM methods, we are unaware of a measure of statistical significance suitable for this task.",
"cite_spans": [
{
"start": 261,
"end": 266,
"text": "(NAM)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.5"
},
{
"text": "Though not reported, we also found that a basic encoder-decoder s2s model without attention performs poorly; it cannot summarize information across multiple turns into a single vector. The attention mechanism, acting on the entire encoding sequence, is critical to our task. We next examine performance on the ASR-recognized data, which has a word error rate around 25%. With this noisy data, we find that the memory gate and role additions consistently improve performance. When combined, both F1 and accuracy improve. Figure 6 shows a sample input and the output from each model. We see that the NAM augmented with memory gates and role information successfully captures the interaction and generates the correct record.",
"cite_spans": [],
"ref_spans": [
{
"start": 442,
"end": 450,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.5"
},
{
"text": "To better understand the proposed model, we visualize the attention weights at each time step in Figure 5. The figure compares the attention weights produced by a conventional context memory and the proposed gated context memory. We see that both models are able to learn good soft alignment between the input conversation and the output structured data record. However, the attention weights in 5(b), with our proposed gated attention mechanism, are sparser than those in 5(a) and better able to ignore uninformative terms in the input.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 104,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative analysis",
"sec_num": "4.6"
},
{
"text": "There has been much work on information extraction from single utterances. Kate and Mooney (2006) proposed the use of SVM classifiers based on string kernels to parse natural language into a formal meaning representation. Wong and Figure 6 : Examples of outputs generated by each model for the conversation in the first row.",
"cite_spans": [
{
"start": 75,
"end": 97,
"text": "Kate and Mooney (2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 228,
"end": 236,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Mooney (2006) used a syntax-based statistical machine translation method for semantic parsing. Translation of natural language to a formal meaning representation is captured by a synchronous context-free grammar in (Wu, 1997; Chiang et al., 2006). Quirk et al. (2015) created models to map natural language descriptions to executable code using productions from the formal language. Beltagy and Quirk (2016) improved the performance of semantic parsing on If-Then statements by using neural networks to model derivation trees, and leveraged techniques such as synthetic training data from paraphrases and grammar combinations to improve generalization and reduce overfitting. In addition, other research has focused on text generation from structured data records. Angeli et al. (2010) proposed a domain-independent probabilistic approach to content selection and surface realization, treating text generation as a local decision process. Konstas and Lapata (2013) created a global model to generate text from structured records, jointly modeling content selection and surface realization with a probabilistic context-free grammar. In contrast, in this paper we focus on generating structured data records from text descriptions.",
"cite_spans": [
{
"start": 215,
"end": 225,
"text": "(Wu, 1997;",
"ref_id": "BIBREF31"
},
{
"start": 226,
"end": 246,
"text": "Chiang et al., 2006)",
"ref_id": "BIBREF4"
},
{
"start": 249,
"end": 268,
"text": "Quirk et al. (2015)",
"ref_id": "BIBREF25"
},
{
"start": 788,
"end": 808,
"text": "Angeli et al. (2010)",
"ref_id": "BIBREF0"
},
{
"start": 974,
"end": 999,
"text": "Konstas and Lapata (2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Spoken language understanding techniques (Mesnil et al., 2015) tag each word in a sentence with a predefined slot label. Dialog modeling approaches (Young et al., 2013) are also relevant to our task. However, they require the definition of semantic slot names and human labeling of dialog acts in each utterance.",
"cite_spans": [
{
"start": 48,
"end": 69,
"text": "(Mesnil et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 149,
"end": 169,
"text": "(Young et al., 2013)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "There are a number of relevant applications of neural attention models. Nallapati et al. (2016) proposed a sequence-to-sequence model for text summarization; they used an LSTM as the encoder and an attentional LSTM as the decoder to jointly learn content selection and realization. Dong and Lapata (2016) presented a sequence-to-sequence model with a tree-structured decoder to map natural language to its logical form; the tree-structured decoder shows superior performance on data with nested output structure. Neural attention models have also been used in other domains, including machine translation (Sutskever et al., 2014) and image caption generation. From this perspective, the most related work is (Mei et al., 2016), which proposed a sequence-to-sequence model to map navigational instructions in natural language to actions, and is conceptually similar to our work. However, we start from conversations, and our structured data records are more complex.",
"cite_spans": [
{
"start": 72,
"end": 95,
"text": "Nallapati et al. (2016)",
"ref_id": "BIBREF21"
},
{
"start": 307,
"end": 329,
"text": "Dong and Lapata (2016)",
"ref_id": "BIBREF8"
},
{
"start": 609,
"end": 633,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF28"
},
{
"start": 715,
"end": 733,
"text": "(Mei et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper we have presented an end-to-end method for extracting structured information from unstructured conversations using an encoder-decoder neural network. The restaurant-ordering domain we study is distinguished from past work by its conversational nature and the need to handle user corrections and modifications. We incorporate a memory gate and role information into the encoder network to selectively keep important information and capture interaction patterns between conversation participants. Experimental results on both a human-transcribed data set and an ASR-recognized data set demonstrate the feasibility and effectiveness of our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The restaurant will remain anonymous for business reasons, and we have changed the names of menu items in our examples accordingly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was done while Baolin Peng was an intern at Microsoft Research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A simple domain-independent probabilistic approach to generation",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "502--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proceedings of the 2010 Confer- ence on Empirical Methods in Natural Language Processing, pages 502-512, Cambridge, MA, Oc- tober. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improved semantic parsers for if-then statements",
"authors": [
{
"first": "I",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "726--736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Beltagy and Chris Quirk. 2016. Improved semantic parsers for if-then statements. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 726-736, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition",
"authors": [
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "4960--4964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Con- ference on Acoustics, Speech and Signal Process- ing, ICASSP 2016, Shanghai, China, March 20-25, 2016, pages 4960-4964. IEEE.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Parsing arabic dialects",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Mona",
"middle": [
"T"
],
"last": "Diab",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Safiullah",
"middle": [],
"last": "Shareef",
"suffix": ""
}
],
"year": 2006,
"venue": "EACL 2006, 11st Conference of the European Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang, Mona T. Diab, Nizar Habash, Owen Rambow, and Safiullah Shareef. 2006. Parsing ara- bic dialects. In Diana McCarthy and Shuly Wintner, editors, EACL 2006, 11st Conference of the Euro- pean Chapter of the Association for Computational Linguistics, Proceedings of the Conference, April 3- 7, 2006, Trento, Italy. The Association for Computer Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734, Doha, Qatar, October. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of NIPS Deep Learning and Representation Learning Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence mod- eling. In Proceedings of NIPS Deep Learning and Representation Learning Workshop.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Language models for image captioning: The quirks and what works",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "100--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Mar- garet Mitchell. 2015. Language models for image captioning: The quirks and what works. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 2: Short Papers), pages 100-105, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Language to logical form with neural attention",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "33--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong and Mirella Lapata. 2016. Language to logi- cal form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 33-43, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "From captions to visual concepts and back",
"authors": [
{
"first": "Saurabh",
"middle": [],
"last": "Hao Fang",
"suffix": ""
},
{
"first": "Forrest",
"middle": [
"N"
],
"last": "Gupta",
"suffix": ""
},
{
"first": "Rupesh",
"middle": [
"Kumar"
],
"last": "Iandola",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Mitchell",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Platt",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1473--1482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Fang, Saurabh Gupta, Forrest N. Iandola, Ru- pesh Kumar Srivastava, Li Deng, Piotr Doll\u00e1r, Jian- feng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. 2015. From captions to visual concepts and back. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 1473-1482.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Context-sensitive and role-dependent spoken language understanding using bidirectional and attention lstms",
"authors": [
{
"first": "Chiori",
"middle": [],
"last": "Hori",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Hori",
"suffix": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "John",
"middle": [
"R"
],
"last": "Hershey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "3236--3240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiori Hori, Takaaki Hori, Shinji Watanabe, and John R. Hershey. 2016. Context-sensitive and role-dependent spoken language understanding us- ing bidirectional and attention lstms. In Interspeech 2016, pages 3236-3240.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Convolutional neural network architectures for matching natural language sentences",
"authors": [
{
"first": "Baotian",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "2042--2050",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network archi- tectures for matching natural language sentences. In Advances in Neural Information Processing Systems 27, pages 2042-2050.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A convolutional neural network for modelling sentences",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "655--665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 655-665, Baltimore, Maryland, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Using string-kernels for learning semantic parsers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rohit",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Kate",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "913--920",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit J. Kate and Raymond J. Mooney. 2006. Us- ing string-kernels for learning semantic parsers. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meet- ing of the Association for Computational Linguis- tics, pages 913-920, Sydney, Australia, July. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2003. The Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Hu- man Language Technology Conference of the North American Chapter of the Association for Computa- tional Linguistics, HLT-NAACL 2003. The Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A global model for concept-to-text generation",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2013,
"venue": "J. Artif. Intell. Res. (JAIR)",
"volume": "48",
"issue": "",
"pages": "305--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioannis Konstas and Mirella Lapata. 2013. A global model for concept-to-text generation. J. Artif. Intell. Res. (JAIR), 48:305-346.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A persona-based neural conversation model",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Spithourakis",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "994--1003",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Georgios Sp- ithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 994-1003, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lisbon, Portugal, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Listen, attend, and walk: Neural mapping of navigational instructions to action sequences",
"authors": [
{
"first": "Hongyuan",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"R"
],
"last": "Walter",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2772--2778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyuan Mei, Mohit Bansal, and Matthew R. Wal- ter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action se- quences. In Proceedings of the Thirtieth AAAI Con- ference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA., pages 2772-2778.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Using recurrent neural networks for slot filling in spoken language understanding",
"authors": [
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Mesnil",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Dilek",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "Larry",
"middle": [
"P"
],
"last": "He",
"suffix": ""
},
{
"first": "G\u00f6khan",
"middle": [],
"last": "Heck",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE/ACM Trans. Audio, Speech & Language Processing",
"volume": "23",
"issue": "3",
"pages": "530--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gr\u00e9goire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Z. Hakkani-T\u00fcr, Xi- aodong He, Larry P. Heck, G\u00f6khan T\u00fcr, Dong Yu, and Geoffrey Zweig. 2015. Using recurrent neural networks for slot filling in spoken language under- standing. IEEE/ACM Trans. Audio, Speech & Lan- guage Processing, 23(3):530-539.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sequence-to-sequence rnns for text summarization",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bing Xiang, and Bowen Zhou. 2016. Sequence-to-sequence rnns for text summa- rization. CoRR, abs/1602.06023.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A sys- tematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA, July. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "On the difficulty of training recurrent neural networks",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 30th International Conference on Machine Learning, ICML 2013",
"volume": "",
"issue": "",
"pages": "1310--1318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, At- lanta, GA, USA, 16-21 June 2013, pages 1310-1318.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Language to code: Learning semantic parsers for if-this-then-that recipes",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "878--888",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 878-888, Beijing, China, July. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A vector space model for automatic indexing",
"authors": [
{
"first": "Gerard",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "C",
"middle": [
"S"
],
"last": "Yang",
"suffix": ""
}
],
"year": 1975,
"venue": "Commun. ACM",
"volume": "18",
"issue": "11",
"pages": "613--620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard Salton, A. Wong, and C. S. Yang. 1975. A vec- tor space model for automatic indexing. Commun. ACM, 18(11):613-620.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks",
"authors": [
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Saxe",
"suffix": ""
},
{
"first": "James",
"middle": [
"L"
],
"last": "Mcclelland",
"suffix": ""
},
{
"first": "Surya",
"middle": [],
"last": "Ganguli",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew M. Saxe, James L. McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dy- namics of learning in deep linear neural networks. CoRR, abs/1312.6120.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger, editors, Advances in Neural Informa- tion Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Context gates for neural machine translation",
"authors": [
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, and Hang Li. 2016. Context gates for neural ma- chine translation. CoRR, abs/1608.06043.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Learning for semantic parsing with statistical machine translation",
"authors": [
{
"first": "Yuk",
"middle": [
"Wah"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference",
"volume": "",
"issue": "",
"pages": "439--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuk Wah Wong and Raymond Mooney. 2006. Learn- ing for semantic parsing with statistical machine translation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Con- ference, pages 439-446, New York City, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Sequenceto-sequence neural net models for grapheme-tophoneme conversion",
"authors": [
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2015,
"venue": "INTERSPEECH 2015, 16th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "3330--3334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaisheng Yao and Geoffrey Zweig. 2015. Sequence- to-sequence neural net models for grapheme-to- phoneme conversion. In INTERSPEECH 2015, 16th Annual Conference of the International Speech Communication Association, Dresden, Germany, September 6-10, 2015, pages 3330-3334. ISCA.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Attention with intention for a neural network conversation model",
"authors": [
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Baolin",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaisheng Yao, Geoffrey Zweig, and Baolin Peng. 2015. Attention with intention for a neural network conversation model. CoRR, abs/1510.08565.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Pomdp-based statistical spoken dialog systems: A review",
"authors": [
{
"first": "Steve",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Gasic",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the IEEE",
"volume": "101",
"issue": "5",
"pages": "1160--1179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steve J. Young, Milica Gasic, Blaise Thomson, and Ja- son D. Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160-1179.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "ADADELTA: an adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "A conversation example of an ordertaking interaction at a restaurant.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "An input unstructured conversation and the corresponding structured record.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Example of memory gate weights at each time stamp.(a) Attention weight of NAM (b) Attention weight of NAM with memory gate Figure 5: Examples of attention weights of models (a) without memory gate and (b) with memory gate. (b) shows sparse and more focused attention weights. (Better viewed in color.)",
"uris": null,
"num": null
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table><tr><td/><td>Model</td><td/><td/><td>Dev</td><td/><td/><td/><td>Test</td></tr><tr><td/><td colspan=\"4\">Gate Role Recall Prec F1</td><td>Accy</td><td colspan=\"2\">Recall Prec</td><td>F1 Accy</td></tr><tr><td>IR</td><td>-</td><td>-</td><td>21.9</td><td>18.9 20.3</td><td>6.9</td><td>25.7</td><td>19.3</td><td>23.8 10.2</td></tr><tr><td>PBMT</td><td>-</td><td>-</td><td>56.8</td><td>20.4 34.1</td><td>23.3</td><td>56.9</td><td>21.5</td><td>35.0 24.7</td></tr><tr><td>NAM</td><td>-</td><td>-</td><td>56.7</td><td>63.5 60.0</td><td>36.9</td><td>60.3</td><td>66.7</td><td>63.4 40.6</td></tr><tr><td>NAM</td><td>-</td><td/><td>57.1</td><td>64.7 60.8</td><td>38.1</td><td>62.5</td><td>67.4</td><td>64.9 42.5</td></tr><tr><td>NAM</td><td/><td>-</td><td>57.0</td><td>64.6 60.7</td><td>39.2</td><td>60.3</td><td>68.3</td><td>64.2 40.8</td></tr><tr><td>NAM</td><td/><td/><td>58.5</td><td>65.2 61.8</td><td>40.5</td><td>63.0</td><td>68.4</td><td>65.7 45.9</td></tr></table>",
"type_str": "table",
"text": "Results of different methods on dev and test set. Human transcriptions are used."
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Results of different methods on ASR-dev and ASR-test set."
},
"TABREF4": {
"num": null,
"html": null,
"content": "<table><tr><td>shows results on the ASR-dev and ASR-</td></tr><tr><td>test sets. These data sets are quite noisy since the</td></tr><tr><td>speech recognizer in this domain has a word error</td></tr></table>",
"type_str": "table",
"text": ""
}
}
}
}