| { |
| "paper_id": "P16-1002", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:59:44.806056Z" |
| }, |
| "title": "Data Recombination for Neural Semantic Parsing", |
| "authors": [ |
| { |
| "first": "Robin", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "robinjia@stanford.edu" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "pliang@cs.stanford.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Modeling crisp logical regularities is crucial in semantic parsing, making it difficult for neural models with no task-specific prior knowledge to achieve good results. In this paper, we introduce data recombination, a novel framework for injecting such prior knowledge into a model. From the training data, we induce a highprecision synchronous context-free grammar, which captures important conditional independence properties commonly found in semantic parsing. We then train a sequence-to-sequence recurrent network (RNN) model with a novel attention-based copying mechanism on datapoints sampled from this grammar, thereby teaching the model about these structural properties. Data recombination improves the accuracy of our RNN model on three semantic parsing datasets, leading to new state-of-the-art performance on the standard GeoQuery dataset for models with comparable supervision.", |
| "pdf_parse": { |
| "paper_id": "P16-1002", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Modeling crisp logical regularities is crucial in semantic parsing, making it difficult for neural models with no task-specific prior knowledge to achieve good results. In this paper, we introduce data recombination, a novel framework for injecting such prior knowledge into a model. From the training data, we induce a highprecision synchronous context-free grammar, which captures important conditional independence properties commonly found in semantic parsing. We then train a sequence-to-sequence recurrent network (RNN) model with a novel attention-based copying mechanism on datapoints sampled from this grammar, thereby teaching the model about these structural properties. Data recombination improves the accuracy of our RNN model on three semantic parsing datasets, leading to new state-of-the-art performance on the standard GeoQuery dataset for models with comparable supervision.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Semantic parsing-the precise translation of natural language utterances into logical forms-has many applications, including question answering (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Liang et al., 2011; Berant et al., 2013) , instruction following (Artzi and Zettlemoyer, 2013b) , and regular expression generation (Kushman and Barzilay, 2013) . Modern semantic parsers (Artzi and Zettlemoyer, 2013a; Berant et al., 2013) are complex pieces of software, requiring handcrafted features, lexicons, and grammars.", |
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 167, |
| "text": "(Zelle and Mooney, 1996;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 168, |
| "end": 198, |
| "text": "Zettlemoyer and Collins, 2005;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 199, |
| "end": 229, |
| "text": "Zettlemoyer and Collins, 2007;", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 230, |
| "end": 249, |
| "text": "Liang et al., 2011;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 250, |
| "end": 270, |
| "text": "Berant et al., 2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 295, |
| "end": 325, |
| "text": "(Artzi and Zettlemoyer, 2013b)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 362, |
| "end": 390, |
| "text": "(Kushman and Barzilay, 2013)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 417, |
| "end": 447, |
| "text": "(Artzi and Zettlemoyer, 2013a;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 448, |
| "end": 468, |
| "text": "Berant et al., 2013)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Meanwhile [utah] ] ?", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 16, |
| "text": "[utah]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Figure 1: An overview of our system. Given a dataset, we induce a high-precision synchronous context-free grammar. We then sample from this grammar to generate new \"recombinant\" examples, which we use to train a sequence-to-sequence RNN.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recombinant Examples", |
| "sec_num": null |
| }, |
| { |
| "text": "have made swift inroads into many structured prediction tasks in NLP, including machine translation (Sutskever et al., 2014; Bahdanau et al., 2014) and syntactic parsing Dyer et al., 2015) . Because RNNs make very few domain-specific assumptions, they have the potential to succeed at a wide variety of tasks with minimal feature engineering. However, this flexibility also puts RNNs at a disadvantage compared to standard semantic parsers, which can generalize naturally by leveraging their built-in awareness of logical compositionality. In this paper, we introduce data recombination, a generic framework for declaratively inject-", |
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 124, |
| "text": "(Sutskever et al., 2014;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 125, |
| "end": 147, |
| "text": "Bahdanau et al., 2014)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 170, |
| "end": 188, |
| "text": "Dyer et al., 2015)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recombinant Examples", |
| "sec_num": null |
| }, |
| { |
| "text": "x: \"what is the population of iowa ?\" y: _answer ( NV , ( _population ( NV , V1 ) , _const ( V0 , _stateid ( iowa ) ) ) ) ATIS x: \"can you list all flights from chicago to milwaukee\" y: ( _lambda $0 e ( _and ( _flight $0 ) ( _from $0 chicago : _ci ) ( _to $0 milwaukee : _ci ) ) ) Overnight x: \"when is the weekly standup\" y: ( call listValue ( call getProperty meeting.weekly_standup ( string start_time ) ) )", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 55, |
| "text": "( NV ,", |
| "ref_id": null |
| }, |
| { |
| "start": 56, |
| "end": 76, |
| "text": "( _population ( NV ,", |
| "ref_id": null |
| }, |
| { |
| "start": 77, |
| "end": 81, |
| "text": "V1 )", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GEO", |
| "sec_num": null |
| }, |
| { |
| "text": "Figure 2: One example from each of our domains. We tokenize logical forms as shown, thereby casting semantic parsing as a sequence-to-sequence task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GEO", |
| "sec_num": null |
| }, |
| { |
| "text": "ing prior knowledge into a domain-general structured prediction model. In data recombination, prior knowledge about a task is used to build a high-precision generative model that expands the empirical distribution by allowing fragments of different examples to be combined in particular ways. Samples from this generative model are then used to train a domain-general model. In the case of semantic parsing, we construct a generative model by inducing a synchronous context-free grammar (SCFG), creating new examples such as those shown in Figure 1 ; our domain-general model is a sequence-to-sequence RNN with a novel attention-based copying mechanism. Data recombination boosts the accuracy of our RNN model on three semantic parsing datasets. On the GEO dataset, data recombination improves test accuracy by 4.3 percentage points over our baseline RNN, leading to new state-of-the-art results for models that do not use a seed lexicon for predicates.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 540, |
| "end": 548, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "GEO", |
| "sec_num": null |
| }, |
| { |
| "text": "We cast semantic parsing as a sequence-tosequence task. The input utterance x is a sequence of words x 1 , . . . , x m \u2208 V (in) , the input vocabulary; similarly, the output logical form y is a sequence of tokens y 1 , . . . , y n \u2208 V (out) , the output vocabulary. A linear sequence of tokens might appear to lose the hierarchical structure of a logical form, but there is precedent for this choice: showed that an RNN can reliably predict tree-structured outputs in a linear fashion. We evaluate our system on three existing semantic parsing datasets. Figure 2 shows sample input-output pairs from each of these datasets.", |
| "cite_spans": [ |
| { |
| "start": 123, |
| "end": 127, |
| "text": "(in)", |
| "ref_id": null |
| }, |
| { |
| "start": 235, |
| "end": 240, |
| "text": "(out)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 554, |
| "end": 562, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Problem statement", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 GeoQuery (GEO) contains natural language questions about US geography paired with corresponding Prolog database queries. We use the standard split of 600 training examples and 280 test examples introduced by Zettlemoyer and Collins (2005) . We preprocess the logical forms to De Brujin index notation to standardize variable naming.", |
| "cite_spans": [ |
| { |
| "start": 210, |
| "end": 240, |
| "text": "Zettlemoyer and Collins (2005)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem statement", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 ATIS (ATIS) contains natural language queries for a flights database paired with corresponding database queries written in lambda calculus. We train on 4473 examples and evaluate on the 448 test examples used by Zettlemoyer and Collins (2007) .", |
| "cite_spans": [ |
| { |
| "start": 214, |
| "end": 244, |
| "text": "Zettlemoyer and Collins (2007)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem statement", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Overnight (OVERNIGHT) contains logical forms paired with natural language paraphrases across eight varied subdomains. constructed the dataset by generating all possible logical forms up to some depth threshold, then getting multiple natural language paraphrases for each logical form from workers on Amazon Mechanical Turk. We evaluate on the same train/test splits as .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem statement", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this paper, we only explore learning from logical forms. In the last few years, there has an emergence of semantic parsers learned from denotations (Clarke et al., 2010; Liang et al., 2011; Berant et al., 2013; Artzi and Zettlemoyer, 2013b ). While our system cannot directly learn from denotations, it could be used to rerank candidate derivations generated by one of these other systems.", |
| "cite_spans": [ |
| { |
| "start": 151, |
| "end": 172, |
| "text": "(Clarke et al., 2010;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 173, |
| "end": 192, |
| "text": "Liang et al., 2011;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 193, |
| "end": 213, |
| "text": "Berant et al., 2013;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 214, |
| "end": 242, |
| "text": "Artzi and Zettlemoyer, 2013b", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem statement", |
| "sec_num": "2" |
| }, |
| { |
| "text": "3 Sequence-to-sequence RNN Model", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem statement", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our sequence-to-sequence RNN model is based on existing attention-based neural machine translation models (Bahdanau et al., 2014; Luong et al., 2015a) , but also includes a novel attention-based copying mechanism. Similar copying mechanisms have been explored in parallel by Gu et al. (2016) and Gulcehre et al. (2016) .", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 129, |
| "text": "(Bahdanau et al., 2014;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 130, |
| "end": 150, |
| "text": "Luong et al., 2015a)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 275, |
| "end": 291, |
| "text": "Gu et al. (2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 296, |
| "end": 318, |
| "text": "Gulcehre et al. (2016)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem statement", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Encoder. The encoder converts the input sequence x 1 , . . . , x m into a sequence of context-sensitive embeddings b 1 , . . . , b m using a bidirectional RNN (Bahdanau et al., 2014) . First, a word embedding function \u03c6 (in) maps each word x i to a fixed-dimensional vector. These vectors are fed as input to two RNNs: a forward RNN and a backward RNN. The forward RNN starts with an initial hidden state h F 0 , and generates a sequence of hidden states h F 1 , . . . , h F m by repeatedly applying the recurrence", |
| "cite_spans": [ |
| { |
| "start": 159, |
| "end": 182, |
| "text": "(Bahdanau et al., 2014)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h F i = LSTM(\u03c6 (in) (x i ), h F i\u22121 ).", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Basic Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The recurrence takes the form of an LSTM (Hochreiter and Schmidhuber, 1997) . The backward RNN similarly generates hidden states h B m , . . . , h B 1 by processing the input sequence in reverse order. Finally, for each input position i, we define the context-sensitive embedding b i to be the concatenation of", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 75, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "h F i and h B i Decoder.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The decoder is an attention-based model (Bahdanau et al., 2014; Luong et al., 2015a) that generates the output sequence y 1 , . . . , y n one token at a time. At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j . Formally, the decoder is defined by the following equations:", |
| "cite_spans": [ |
| { |
| "start": 40, |
| "end": 63, |
| "text": "(Bahdanau et al., 2014;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 64, |
| "end": 84, |
| "text": "Luong et al., 2015a)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s 1 = tanh(W (s) [h F m , h B 1 ]).", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Basic Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "e ji = s j W (a) b i .", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Basic Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03b1 ji = exp(e ji ) m i =1 exp(e ji ) .", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Basic Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "c j = m i=1 \u03b1 ji b i .", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Basic Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (y j = w | x, y 1:j\u22121 ) \u221d exp(U w [s j , c j ]). (6) s j+1 = LSTM([\u03c6 (out) (y j ), c j ], s j ).", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Basic Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "When not specified, i ranges over {1, . . . , m} and j ranges over {1, . . . , n}. Intuitively, the \u03b1 ji 's define a probability distribution over the input words, describing what words in the input the decoder is focusing on at time j. They are computed from the unnormalized attention scores e ji . The matrices W (s) , W (a) , and U , as well as the embedding function \u03c6 (out) , are parameters of the model.", |
| "cite_spans": [ |
| { |
| "start": 324, |
| "end": 327, |
| "text": "(a)", |
| "ref_id": null |
| }, |
| { |
| "start": 374, |
| "end": 379, |
| "text": "(out)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In the basic model of the previous section, the next output word y j is chosen via a simple softmax over all words in the output vocabulary. However, this model has difficulty generalizing to the long tail of entity names commonly found in semantic parsing datasets. Conveniently, entity names in the input often correspond directly to tokens in the output (e.g., \"iowa\" becomes iowa in Figure 2 ). 1 To capture this intuition, we introduce a new attention-based copying mechanism. At each time step j, the decoder generates one of two types of actions. As before, it can write any word in the output vocabulary. In addition, it can copy any input word x i directly to the output, where the probability with which we copy x i is determined by the attention score on x i . Formally, we define a latent action a j that is either Write[w] for some w \u2208 V (out) or Copy[i] for some i \u2208 {1, . . . , m}.", |
| "cite_spans": [ |
| { |
| "start": 851, |
| "end": 856, |
| "text": "(out)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 387, |
| "end": 395, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Attention-based Copying", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We then have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention-based Copying", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (a j = Write[w] | x, y 1:j\u22121 ) \u221d exp(U w [s j , c j ]),", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Attention-based Copying", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (a j = Copy[i] | x, y 1:j\u22121 ) \u221d exp(e ji ).", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Attention-based Copying", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The decoder chooses a j with a softmax over all these possible actions; y j is then a deterministic function of a j and x. During training, we maximize the log-likelihood of y, marginalizing out a.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention-based Copying", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Attention-based copying can be seen as a combination of a standard softmax output layer of an attention-based model (Bahdanau et al., 2014 ) and a Pointer Network (Vinyals et al., 2015a) ; in a Pointer Network, the only way to generate output is to copy a symbol from the input.", |
| "cite_spans": [ |
| { |
| "start": 116, |
| "end": 138, |
| "text": "(Bahdanau et al., 2014", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 163, |
| "end": 186, |
| "text": "(Vinyals et al., 2015a)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention-based Copying", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The main contribution of this paper is a novel data recombination framework that injects important prior knowledge into our oblivious sequence-tosequence RNN. In this framework, we induce a high-precision generative model from the training data, then sample from it to generate new training examples. The process of inducing this generative model can leverage any available prior knowledge, which is transmitted through the generated examples to the RNN model. A key advantage of our two-stage approach is that it allows us to declare desired properties of the task which might be hard to capture in the model architecture.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Examples (\"what states border texas ?\", answer(NV, (state(V0), next_to(V0, NV), const(V0, stateid(texas))))) (\"what is the highest mountain in ohio ?\", answer(NV, highest(V0, (mountain(V0), loc(V0, NV), const(V0, stateid(ohio)))))) Rules created by ABSENTITIES ROOT \u2192 \"what states border STATEID ?\", answer(NV, (state(V0), next_to(V0, NV), const(V0, stateid(STATEID )))) STATEID \u2192 \"texas\", texas ROOT \u2192 \"what is the highest mountain in STATEID ?\", answer(NV, highest(V0, (mountain(V0), loc(V0, NV), const(V0, stateid(STATEID ))))) STATEID \u2192 \"ohio\", ohio Rules created by ABSWHOLEPHRASES ROOT \u2192 \"what states border STATE ?\", answer(NV, (state(V0), next_to(V0, NV), STATE )) STATE \u2192 \"states border texas\", state(V0), next_to(V0, NV), const(V0, stateid(texas))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "ROOT \u2192 \"what is the highest mountain in STATE ?\", answer(NV, highest(V0, (mountain(V0), loc(V0, NV), STATE ))) Rules created by CONCAT-2 ROOT \u2192 SENT1 </s> SENT2, SENT1 </s> SENT2 SENT \u2192 \"what states border texas ?\", answer(NV, (state(V0), next_to(V0, NV), const(V0, stateid(texas)))) SENT \u2192 \"what is the highest mountain in ohio ?\", answer(NV, highest(V0, (mountain(V0), loc(V0, NV), const(V0, stateid(ohio)))))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Figure 3: Various grammar induction strategies illustrated on GEO. Each strategy converts the rules of an input grammar into rules of an output grammar. This figure shows the base case where the input grammar has rules ROOT \u2192 x, y for each (x, y) pair in the training dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Our approach generalizes data augmentation, which is commonly employed to inject prior knowledge into a model. Data augmentation techniques focus on modeling invariancestransformations like translating an image or adding noise that alter the inputs x, but do not change the output y. These techniques have proven effective in areas like computer vision and speech recognition (Jaitly and Hinton, 2013) .", |
| "cite_spans": [ |
| { |
| "start": 376, |
| "end": 401, |
| "text": "(Jaitly and Hinton, 2013)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In semantic parsing, however, we would like to capture more than just invariance properties. Consider an example with the utterance \"what states border texas ?\". Given this example, it should be easy to generalize to questions where \"texas\" is replaced by the name of any other state: simply replace the mention of Texas in the logical form with the name of the new state. Underlying this phenomenon is a strong conditional independence principle: the meaning of the rest of the sentence is independent of the name of the state in question. Standard data augmentation is not sufficient to model such phenomena: instead of holding y fixed, we would like to apply simultaneous transformations to x and y such that the new x still maps to the new y. Data recombination addresses this need.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In the general setting of data recombination, we start with a training set D of (x, y) pairs, which defines the empirical distributionp(x, y). We then fit a generative modelp(x, y) top which generalizes beyond the support ofp, for example by splicing together fragments of different examples. We refer to examples in the support ofp as recombinant examples. Finally, to train our actual model p \u03b8 (y | x), we maximize the expected value of log p \u03b8 (y | x), where (x, y) is drawn fromp.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Setting", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For semantic parsing, we induce a synchronous context-free grammar (SCFG) to serve as the backbone of our generative modelp. An SCFG consists of a set of production rules X \u2192 \u03b1, \u03b2 , where X is a category (non-terminal), and \u03b1 and \u03b2 are sequences of terminal and non-terminal symbols. Any non-terminal symbols in \u03b1 must be aligned to the same non-terminal symbol in \u03b2, and vice versa. Therefore, an SCFG defines a set of joint derivations of aligned pairs of strings. In our case, we use an SCFG to represent joint deriva-tions of utterances x and logical forms y (which for us is just a sequence of tokens). After we induce an SCFG G from D, the corresponding generative modelp(x, y) is the distribution over pairs (x, y) defined by sampling from G, where we choose production rules to apply uniformly at random.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SCFGs for Semantic Parsing", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "It is instructive to compare our SCFG-based data recombination with WASP (Wong and Mooney, 2006; Wong and Mooney, 2007) , which uses an SCFG as the actual semantic parsing model. The grammar induced by WASP must have good coverage in order to generalize to new inputs at test time. WASP also requires the implementation of an efficient algorithm for computing the conditional probability p(y | x). In contrast, our SCFG is only used to convey prior knowledge about conditional independence structure, so it only needs to have high precision; our RNN model is responsible for boosting recall over the entire input space. We also only need to forward sample from the SCFG, which is considerably easier to implement than conditional inference.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 96, |
| "text": "(Wong and Mooney, 2006;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 97, |
| "end": 119, |
| "text": "Wong and Mooney, 2007)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SCFGs for Semantic Parsing", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Below, we examine various strategies for inducing a grammar G from a dataset D. We first encode D as an initial grammar with rules ROOT \u2192 x, y for each (x, y) \u2208 D. Next, we will define each grammar induction strategy as a mapping from an input grammar G in to a new grammar G out . This formulation allows us to compose grammar induction strategies (Section 4.3.4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SCFGs for Semantic Parsing", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Our first grammar induction strategy, ABSENTI-TIES, simply abstracts entities with their types. We assume that each entity e (e.g., texas) has a corresponding type e.t (e.g., state), which we infer based on the presence of certain predicates in the logical form (e.g. stateid). For each grammar rule X \u2192 \u03b1, \u03b2 in G in , where \u03b1 contains a token (e.g., \"texas\") that string matches an entity (e.g., texas) in \u03b2, we add two rules to G out : (i) a rule where both occurrences are replaced with the type of the entity (e.g., state), and (ii) a new rule that maps the type to the entity (e.g., STATEID \u2192 \"texas\", texas ; we reserve the category name STATE for the next section). Thus, G out generates recombinant examples that fuse most of one example with an entity found in a second example. A concrete example from the GEO domain is given in Figure 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 839, |
| "end": 847, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Abstracting Entities", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "Our second grammar induction strategy, ABSW-HOLEPHRASES, abstracts both entities and whole phrases with their types. For each grammar rule X \u2192 \u03b1, \u03b2 in G in , we add up to two rules to G out . First, if \u03b1 contains tokens that string match to an entity in \u03b2, we replace both occurrences with the type of the entity, similarly to rule (i) from AB-SENTITIES. Second, if we can infer that the entire expression \u03b2 evaluates to a set of a particular type (e.g. state) we create a rule that maps the type to \u03b1, \u03b2 . In practice, we also use some simple rules to strip question identifiers from \u03b1, so that the resulting examples are more natural. Again, refer to Figure 3 for a concrete example.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 653, |
| "end": 661, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Abstracting Whole Phrases", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "This strategy works because of a more general conditional independence property: the meaning of any semantically coherent phrase is conditionally independent of the rest of the sentence, the cornerstone of compositional semantics. Note that this assumption is not always correct in general: for example, phenomena like anaphora that involve long-range context dependence violate this assumption. However, this property holds in most existing semantic parsing datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstracting Whole Phrases", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "The final grammar induction strategy is a surprisingly simple approach we tried that turns out to work. For any k \u2265 2, we define the CONCAT-k strategy, which creates two types of rules. First, we create a single rule that has ROOT going to a sequence of k SENT's. Then, for each rootlevel rule ROOT \u2192 \u03b1, \u03b2 in G in , we add the rule SENT \u2192 \u03b1, \u03b2 to G out . See Figure 3 for an example.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 359, |
| "end": 367, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Concatenation", |
| "sec_num": "4.3.3" |
| }, |
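The two rule types of CONCAT-k can be written down directly. A sketch under our own representation (rules as `(lhs, alpha, beta)` string triples; the paper's grammar objects are richer):

```python
def concat_k(root_rules, k):
    """root_rules: list of (alpha, beta) pairs from G_in's root-level rules."""
    out = []
    # One rule: ROOT -> SENT_1 ... SENT_k, identical on source and target sides.
    sents = " ".join("SENT" for _ in range(k))
    out.append(("ROOT", sents, sents))
    # Each original root-level rule becomes a SENT rule.
    for alpha, beta in root_rules:
        out.append(("SENT", alpha, beta))
    return out

rules = concat_k([("what states border texas", "( next_to texas )")], k=2)
# rules[0] == ("ROOT", "SENT SENT", "SENT SENT")
```

Sampling from this grammar then concatenates k independently chosen training examples, producing the longer recombinant inputs discussed below.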
| { |
| "text": "Unlike ABSENTITIES and ABSWHOLEPHRASES, concatenation is very general, and can be applied to any sequence transduction problem. Of course, it also does not introduce additional information about compositionality or independence properties present in semantic parsing. However, it does generate harder examples for the attention-based RNN, since the model must learn to attend to the correct parts of the now-longer input sequence. Related work has shown that training a model on more difficult examples can improve generalization, the most canonical case being dropout (Wager et al., 2013). Figure 4: The training procedure with data recombination. We first induce an SCFG, then sample new recombinant examples from it at each epoch.",
| "cite_spans": [ |
| { |
| "start": 570, |
| "end": 589, |
| "text": "Wager et al., 2013)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 592, |
| "end": 600, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Concatenation", |
| "sec_num": "4.3.3" |
| }, |
| { |
| "text": "We note that grammar induction strategies can be composed, yielding more complex grammars. Given any two grammar induction strategies f_1 and f_2, the composition f_1 \u2218 f_2 is the grammar induction strategy that takes in G_in and returns f_1(f_2(G_in)). For the strategies we have defined, we can perform this operation symbolically on the grammar rules, without having to sample from the intermediate grammar f_2(G_in).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Composition", |
| "sec_num": "4.3.4" |
| }, |
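Since each strategy maps a grammar to a grammar, composition is plain function composition. A toy sketch (the two example "strategies" and the list-of-rules grammar representation are illustrative, not the paper's):

```python
def compose(f1, f2):
    """(f1 o f2)(G_in) = f1(f2(G_in)): apply f2's rewrite first, then f1's."""
    return lambda grammar: f1(f2(grammar))

# Two toy strategies over a grammar represented as a list of rule tuples.
add_marker = lambda g: [("MARK",) + r for r in g]  # tag every rule
duplicate = lambda g: g + g                        # double the rule set

strategy = compose(add_marker, duplicate)
out = strategy([("ROOT", "a", "b")])
# out == [("MARK", "ROOT", "a", "b"), ("MARK", "ROOT", "a", "b")]
```

The key point from the paragraph above is that composition happens symbolically on rules, so no samples need to be drawn from the intermediate grammar.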
| { |
| "text": "We evaluate our system on three domains: GEO, ATIS, and OVERNIGHT. For ATIS, we report logical form exact match accuracy. For GEO and OVERNIGHT, we determine correctness based on denotation match, following Liang et al. (2011) and the work that introduced OVERNIGHT, respectively.",
| "cite_spans": [ |
| { |
| "start": 203, |
| "end": 222, |
| "text": "Liang et al. (2011)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We note that not all grammar induction strategies make sense for all domains. In particular, we only apply ABSWHOLEPHRASES to GEO and OVERNIGHT; we do not apply it to ATIS, as that dataset has little nesting structure.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Choice of Grammar Induction Strategy", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We tokenize logical forms in a domain-specific manner, based on the syntax of the formal language being used. On GEO and ATIS, we disallow copying of predicate names to ensure a fair comparison to previous work, as string matching between input words and predicate names is not commonly used. We prevent copying by prepending underscores to predicate tokens; see Figure 2 for examples.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 363, |
| "end": 371, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Implementation Details", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "On ATIS alone, when doing attention-based copying and data recombination, we leverage an external lexicon that maps natural language phrases (e.g., \"kennedy airport\") to entities (e.g., jfk:ap). When we copy a word that is part of a phrase in the lexicon, we write the entity associated with that lexicon entry. When performing data recombination, we identify entity alignments based on matching phrases and entities from the lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation Details", |
| "sec_num": "5.2" |
| }, |
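The lexicon lookup described above amounts to checking, at copy time, whether the copied word lies inside a phrase the lexicon knows about. A minimal sketch; the lexicon contents and the `copy_with_lexicon` helper name are our illustrative assumptions:

```python
# Maps natural-language phrases (token tuples) to entity identifiers.
lexicon = {("kennedy", "airport"): "jfk:ap"}

def copy_with_lexicon(tokens, i):
    """Return the entity for any lexicon phrase covering position i, else the word itself."""
    for phrase, entity in lexicon.items():
        n = len(phrase)
        # Try every window of the phrase's length that contains position i.
        for start in range(max(0, i - n + 1), i + 1):
            if tuple(tokens[start:start + n]) == phrase:
                return entity
    return tokens[i]

toks = "flights to kennedy airport".split()
print(copy_with_lexicon(toks, 2))  # jfk:ap
```

The same phrase-entity matches are what the data recombination step uses to identify entity alignments on ATIS.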
| { |
| "text": "We run all experiments with 200 hidden units and 100-dimensional word vectors. We initialize all parameters uniformly at random within the interval [\u22120.1, 0.1]. We maximize the log-likelihood of the correct logical form using stochastic gradient descent. We train the model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs, starting after epoch 15. We replace word vectors for words that occur only once in the training set with a universal <unk> word vector. Our model is implemented in Theano (Bergstra et al., 2010).",
| "cite_spans": [ |
| { |
| "start": 551, |
| "end": 574, |
| "text": "(Bergstra et al., 2010)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation Details", |
| "sec_num": "5.2" |
| }, |
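The learning-rate schedule above (0.1 for the first 15 epochs, then halved every 5 epochs) can be written as a small helper. The 1-based epoch numbering and exact halving boundaries are our reading of the text, not something the paper spells out:

```python
def learning_rate(epoch, init=0.1):
    """LR for a 1-based epoch: `init` through epoch 15, halved every 5 epochs after."""
    if epoch <= 15:
        return init
    halvings = (epoch - 15 + 4) // 5  # epochs 16-20 -> 1 halving, 21-25 -> 2, ...
    return init * 0.5 ** halvings

# learning_rate(15) -> 0.1, learning_rate(16) -> 0.05, learning_rate(30) -> 0.0125
```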
| { |
| "text": "When performing data recombination, we sample a new round of recombinant examples from our grammar at each epoch. We add these examples to the original training dataset, randomly shuffle all examples, and train the model for the epoch. Figure 4 gives pseudocode for this training procedure. One important hyperparameter is how many examples to sample at each epoch: we found that a good rule of thumb is to sample as many recombinant examples as there are examples in the training dataset, so that half of the examples the model sees at each epoch are recombinant.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 236, |
| "end": 244, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Implementation Details", |
| "sec_num": "5.2" |
| }, |
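The per-epoch procedure above (Figure 4) can be sketched as a training loop. `induce_grammar`, `sample`, and `train_epoch` are placeholders for the paper's SCFG induction, grammar sampling, and SGD step, not real APIs:

```python
import random

def train_with_recombination(data, induce_grammar, sample, train_epoch, epochs=30):
    """Induce an SCFG once, then mix in freshly sampled recombinant examples each epoch."""
    grammar = induce_grammar(data)
    for _ in range(epochs):
        # Rule of thumb from the text: sample as many recombinant examples as
        # there are original ones, so half of each epoch's data is recombinant.
        recombinant = [sample(grammar) for _ in range(len(data))]
        batch = data + recombinant
        random.shuffle(batch)
        train_epoch(batch)
```

Resampling every epoch means the model rarely sees the same recombinant example twice, which is the mechanism by which the grammar's structural knowledge is injected.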
| { |
| "text": "At test time, we use beam search with beam size 5. We automatically balance missing right parentheses by adding them at the end. On GEO and OVERNIGHT, we then pick the highest-scoring logical form that does not yield an executor error when the corresponding denotation is computed. On ATIS, we just pick the top prediction on the beam.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation Details", |
| "sec_num": "5.2" |
| }, |
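The parenthesis fix-up described above is a one-pass depth count. A sketch over token lists (the helper name is ours):

```python
def balance_parens(tokens):
    """Append right parentheses so every '(' in the prediction is closed."""
    depth = 0
    for t in tokens:
        if t == "(":
            depth += 1
        elif t == ")":
            depth = max(0, depth - 1)  # ignore spurious extra ')'
    return tokens + [")"] * depth

print(balance_parens("( answer ( state".split()))
# ['(', 'answer', '(', 'state', ')', ')']
```

Applied to each beam candidate before execution, this lets the denotation check on GEO and OVERNIGHT pick the highest-scoring executable logical form.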
| { |
| "text": "First, we measure the contribution of the attention-based copying mechanism to the model's overall performance. [Table 1: Test accuracy on GEO, ATIS, and OVERNIGHT, both with and without copying. On OVERNIGHT, we average across all eight domains.] [Table 2: Test accuracy using different data recombination strategies on GEO and ATIS. AE is ABSENTITIES, AWP is ABSWHOLEPHRASES, C2 is CONCAT-2, and C3 is CONCAT-3. Previous work: Zettlemoyer and Collins (2007) 84.6; Kwiatkowski et al. (2010) 88.9; Liang et al. (2011) 91.1; Kwiatkowski et al. (2011) 88.6, 82.8; Poon (2013) 83.5; Zhao and Huang (2015) 88.9, 84.] On each task, we train and evaluate two models: one with the copying mechanism, and one without. Training is done without data recombination. The results are shown in Table 1. On GEO and ATIS, the copying mechanism helps significantly: it improves test accuracy by 10.4 percentage points on GEO and 6.4 points on ATIS. However, on OVERNIGHT, adding the copying mechanism actually makes our model perform slightly worse. This result is somewhat expected, as the OVERNIGHT dataset contains a very small number of distinct entities. It is also notable that both systems surpass the previous best system on OVERNIGHT by a wide margin.",
| "cite_spans": [ |
| { |
| "start": 233, |
| "end": 263, |
| "text": "Zettlemoyer and Collins (2007)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 269, |
| "end": 294, |
| "text": "Kwiatkowski et al. (2010)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 326, |
| "end": 351, |
| "text": "Kwiatkowski et al. (2011)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 362, |
| "end": 373, |
| "text": "Poon (2013)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 379, |
| "end": 400, |
| "text": "Zhao and Huang (2015)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 98, |
| "end": 105, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 410, |
| "end": 417, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 757, |
| "end": 764, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Impact of the Copying Mechanism", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We choose to use the copying mechanism in all subsequent experiments, as it has a large advantage in realistic settings where there are many distinct entities in the world. The concurrent work of Gu et al. (2016) and Gulcehre et al. (2016) , both of whom propose similar copying mechanisms, provides additional evidence for the utility of copying on a wide range of NLP tasks.", |
| "cite_spans": [ |
| { |
| "start": 196, |
| "end": 212, |
| "text": "Gu et al. (2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 217, |
| "end": 239, |
| "text": "Gulcehre et al. (2016)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GEO ATIS Previous Work", |
| "sec_num": null |
| }, |
| { |
| "text": "2 The method of Liang et al. (2011) is not comparable to ours, as they used a seed lexicon mapping words to predicates. For our main results, we train our model with a variety of data recombination strategies on all three datasets. These results are summarized in Tables 2 and 3. We compare our system to the baseline of not using any data recombination, as well as to state-of-the-art systems on all three datasets.",
| "cite_spans": [ |
| { |
| "start": 16, |
| "end": 35, |
| "text": "Liang et al. (2011)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 201, |
| "end": 216, |
| "text": "Tables 2 and 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Main Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We find that data recombination consistently improves accuracy across the three domains we evaluated on, and that the strongest results come from composing multiple strategies. Combining ABSWHOLEPHRASES, ABSENTITIES, and CONCAT-2 yields a 4.3 percentage point improvement over the baseline without data recombination on GEO, and an average of 1.7 percentage points on OVERNIGHT. In fact, on GEO, we achieve test accuracy of 89.3%, which surpasses the previous state-of-the-art, excluding Liang et al. (2011), which used a seed lexicon for predicates. On ATIS, we experiment with concatenating more than 2 examples, to make up for the fact that we cannot apply ABSWHOLEPHRASES, which generates longer examples. We obtain a test accuracy of 83.3% with ABSENTITIES composed with CONCAT-3, which beats the baseline by 7 percentage points and is competitive with the state-of-the-art.",
| "cite_spans": [ |
| { |
| "start": 488, |
| "end": 507, |
| "text": "Liang et al. (2011)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Main Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Data recombination without copying. For completeness, we also investigated the effects of data recombination on the model without attention-based copying. We found that recombination helped significantly on GEO and ATIS, but hurt the model slightly on OVERNIGHT. On GEO, the best data recombination strategy yielded test accuracy of 82.9%, for a gain of 8.3 percentage points over the baseline with no copying and no recombination; on ATIS, data recombination gives test accuracies as high as 74.6%, a 4.7 point gain over the same baseline. However, no data recombination strategy improved average test accuracy on OVERNIGHT; the best one resulted in a 0.3 percentage point decrease in test accuracy. We hypothesize that data recombination helps less on OVERNIGHT in general because the space of possible logical forms is very limited, making it more like a large multiclass classification task. Therefore, it is less important for the model to learn good compositional representations that generalize to new logical forms at test time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Main Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We explicitly avoid using such prior knowledge in our system. Table 3: Test accuracy using different data recombination strategies on the OVERNIGHT tasks.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 133, |
| "end": 140, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Main Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "[Figure 5] Depth-2 (same length):",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Main Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "x: \"rel:12 of rel:17 of ent:14\"; y: ( _rel:12 ( _rel:17 _ent:14 ) ). Depth-4 (longer):",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Main Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "x: \"rel:23 of rel:36 of rel:38 of rel:10 of ent:05\"; y: ( _rel:23 ( _rel:36 ( _rel:38 ( _rel:10 _ent:05 ) ) ) ). Figure 6: The results of our artificial data experiments. We see that the model learns more from longer examples than from same-length examples.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 111, |
| "end": 119, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Main Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Interestingly, strategies like ABSWHOLEPHRASES and CONCAT-2 help the model even though the resulting recombinant examples are generally not in the support of the test distribution.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effect of Longer Examples", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "In particular, these recombinant examples are on average longer than those in the actual dataset, which makes them harder for the attention-based model. Indeed, for every domain, our best accuracy numbers involved some form of concatenation, and often involved ABSWHOLEPHRASES as well. In comparison, applying ABSENTITIES alone, which generates examples of the same length as those in the original dataset, was generally less effective. We conducted additional experiments on artificial data to investigate the importance of adding longer, harder examples. We experimented with adding new examples via data recombination, as well as adding new independent examples (e.g. to simulate the acquisition of more training data). We constructed a simple world containing a set of entities and a set of binary relations. For any n, we can generate a set of depth-n examples, which involve the composition of n relations applied to a single entity. Example data points are shown in Figure 5 . We train our model on various datasets, then test it on a set of 500 randomly chosen depth-2 examples. The model always has access to a small seed training set of 100 depth-2 examples. We then add one of four types of examples to the training set:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 973, |
| "end": 981, |
| "text": "Figure 5", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Effect of Longer Examples", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "\u2022 Same length, independent: New randomly chosen depth-2 examples. 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effect of Longer Examples", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "\u2022 Longer, independent: Randomly chosen depth-4 examples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effect of Longer Examples", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "\u2022 Same length, recombinant: Depth-2 examples sampled from the grammar induced by applying ABSENTITIES to the seed dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effect of Longer Examples", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "\u2022 Longer, recombinant: Depth-4 examples sampled from the grammar induced by applying ABSWHOLEPHRASES followed by AB-SENTITIES to the seed dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effect of Longer Examples", |
| "sec_num": "5.5" |
| }, |
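A depth-n artificial example, in the x/y format shown in Figure 5, composes n relations applied to one entity. A generator sketch; the identifier ranges and zero-padded formatting are illustrative assumptions matching the figure's style:

```python
import random

def make_example(depth, n_rels=40, n_ents=20, rng=random):
    """Build one depth-`depth` example: x is the utterance, y the logical form."""
    rels = [f"rel:{rng.randrange(n_rels):02d}" for _ in range(depth)]
    ent = f"ent:{rng.randrange(n_ents):02d}"
    x = " of ".join(rels + [ent])          # e.g. "rel:12 of rel:17 of ent:14"
    y = ""
    for r in rels:                          # nest each relation around the entity
        y += f"( _{r} "
    y += f"_{ent}" + " )" * depth
    return x, y

x, y = make_example(2)
```

Datasets of depth-2 and depth-4 examples built this way correspond to the "same length" and "longer" conditions in the experiment above.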
| { |
| "text": "To maintain consistency between the independent and recombinant experiments, we fix the recombinant examples across all epochs, instead of resampling at every epoch. In Figure 6, we plot accuracy on the test set versus the number of additional examples added of each of these four types. As expected, independent examples are more helpful than the recombinant ones, but both help the model improve considerably. In addition, we see that even though the test dataset only has short examples, adding longer examples helps the model more than adding shorter ones, in both the independent and recombinant cases. These results underscore the importance of training on longer, harder examples.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 169, |
| "end": 177, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Effect of Longer Examples", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "In this paper, we have presented a novel framework we term data recombination, in which we generate new training examples from a highprecision generative model induced from the original training dataset. We have demonstrated its effectiveness in improving the accuracy of a sequence-to-sequence RNN model on three semantic parsing datasets, using a synchronous context-free grammar as our generative model. There has been growing interest in applying neural networks to semantic parsing and related tasks. Dong and Lapata (2016) concurrently developed an attention-based RNN model for semantic parsing, although they did not use data recombination. Grefenstette et al. (2014) proposed a non-recurrent neural model for semantic parsing, though they did not run experiments. Mei et al. (2016) use an RNN model to perform a related task of instruction following.", |
| "cite_spans": [ |
| { |
| "start": 649, |
| "end": 675, |
| "text": "Grefenstette et al. (2014)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 773, |
| "end": 790, |
| "text": "Mei et al. (2016)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Our proposed attention-based copying mechanism bears a strong resemblance to two models that were developed independently by other groups. Gu et al. (2016) apply a very similar copying mechanism to text summarization and singleturn dialogue generation. Gulcehre et al. (2016) propose a model that decides at each step whether to write from a \"shortlist\" vocabulary or copy from the input, and report improvements on machine translation and text summarization. Another piece of related work is Luong et al. (2015b) , who train a neural machine translation system to copy rare words, relying on an external system to generate alignments.", |
| "cite_spans": [ |
| { |
| "start": 139, |
| "end": 155, |
| "text": "Gu et al. (2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 253, |
| "end": 275, |
| "text": "Gulcehre et al. (2016)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 493, |
| "end": 513, |
| "text": "Luong et al. (2015b)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Prior work has explored using paraphrasing for data augmentation on NLP tasks. Zhang et al. (2015) augment their data by swapping out words for synonyms from WordNet. Wang and Yang (2015) use a similar strategy, but identify similar words and phrases based on cosine distance between vector space embeddings. Unlike our data recombination strategies, these techniques only change inputs x, while keeping the labels y fixed. Additionally, these paraphrasing-based transformations can be described in terms of grammar induction, so they can be incorporated into our framework.", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 98, |
| "text": "Zhang et al. (2015)", |
| "ref_id": "BIBREF37" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In data recombination, data generated by a high-precision generative model is used to train a second, domain-general model. Generative oversampling (Liu et al., 2007) learns a generative model in a multiclass classification setting, then uses it to generate additional examples from rare classes in order to combat label imbalance. Uptraining (Petrov et al., 2010) uses data labeled by an accurate but slow model to train a computationally cheaper second model. Other work generates a large dataset of constituency parse trees by taking sentences that multiple existing systems parse in the same way, and trains a neural model on this dataset.",
| "cite_spans": [ |
| { |
| "start": 147, |
| "end": 164, |
| "text": "(Liu et al., 2007", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 343, |
| "end": 364, |
| "text": "(Petrov et al., 2010)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Some of our induced grammars generate examples that are not in the test distribution, but nonetheless aid in generalization. Related work has also explored the idea of training on altered or out-of-domain data, often interpreting it as a form of regularization. Dropout training has been shown to be a form of adaptive regularization (Wager et al., 2013). Guu et al. (2015) showed that encouraging a knowledge base completion model to handle longer path queries acts as a form of structural regularization.",
| "cite_spans": [ |
| { |
| "start": 334, |
| "end": 353, |
| "text": "Wager et al., 2013)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 356, |
| "end": 373, |
| "text": "Guu et al. (2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Language is a blend of crisp regularities and soft relationships. Our work takes RNNs, which excel at modeling soft phenomena, and uses a highly structured tool, synchronous context-free grammars, to infuse them with an understanding of crisp structure. We believe this paradigm for simultaneously modeling the soft and hard aspects of language should have broader applicability beyond semantic parsing.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "On GEO and ATIS, we make a point not to rely on orthography to map non-entity words such as \"state\" to predicates such as _state, since this leverages information not available to previous models (Zettlemoyer and Collins, 2005) and is much less language-independent.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Technically, these are not completely independent, as we sample these new examples without replacement. The same applies to the longer \"independent\" examples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "UW SPF: The University of Washington semantic parsing framework", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Artzi", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1311.3011" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Artzi and L. Zettlemoyer. 2013a. UW SPF: The University of Washington semantic parsing frame- work. arXiv preprint arXiv:1311.3011.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Weakly supervised learning of semantic parsers for mapping instructions to actions", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Artzi", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association for Computational Linguistics (TACL)", |
| "volume": "1", |
| "issue": "", |
| "pages": "49--62", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Artzi and L. Zettlemoyer. 2013b. Weakly super- vised learning of semantic parsers for mapping in- structions to actions. Transactions of the Associ- ation for Computational Linguistics (TACL), 1:49- 62.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1409.0473" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Bahdanau, K. Cho, and Y. Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Semantic parsing on Freebase from question-answer pairs", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Berant", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Chou", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Frostig", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Theano: a CPU and GPU math expression compiler", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Bergstra", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Breuleux", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Bastien", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Lamblin", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Pascanu", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Desjardins", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Turian", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Warde-Farley", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Python for Scientific Computing Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pas- canu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Python for Scientific Com- puting Conference.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Driving semantic parsing from the world's response", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Clarke", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Goldwasser", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Computational Natural Language Learning (CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "18--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world's re- sponse. In Computational Natural Language Learn- ing (CoNLL), pages 18-27.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Language to logical form with neural attention", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Dong and M. Lapata. 2016. Language to logical form with neural attention. In Association for Com- putational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Transition-based dependency parsing with stack long short-term memory", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Matthews", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Dyer, M. Ballesteros, W. Ling, A. Matthews, and N. A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In As- sociation for Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A deep architecture for semantic parsing", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Grefenstette", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Freitas", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "M" |
| ], |
| "last": "Hermann", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACL Workshop on Semantic Parsing", |
| "volume": "", |
| "issue": "", |
| "pages": "22--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Grefenstette, P. Blunsom, N. de Freitas, and K. M. Hermann. 2014. A deep architecture for seman- tic parsing. In ACL Workshop on Semantic Parsing, pages 22-27.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Incorporating copying mechanism in sequence-to-sequence learning", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "O" |
| ], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Gu, Z. Lu, H. Li, and V. O. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learn- ing. In Association for Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Pointing the unknown words", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Ahn", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Nallapati", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Gulcehre, S. Ahn, R. Nallapati, B. Zhou, and Y. Ben- gio. 2016. Pointing the unknown words. In Associ- ation for Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Traversing knowledge graphs in vector space", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Guu", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Guu, J. Miller, and P. Liang. 2015. Travers- ing knowledge graphs in vector space. In Em- pirical Methods in Natural Language Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Improving neural networks by preventing co-adaptation of feature detectors", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Srivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Krizhevsky", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "R" |
| ], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1207.0580" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. 2012. Improving neural networks by preventing co- adaptation of feature detectors. arXiv preprint arXiv:1207.0580.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural Computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Hochreiter and J. Schmidhuber. 1997. Long short- term memory. Neural Computation, 9(8):1735- 1780.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Vocal tract length perturbation (VTLP) improves speech recognition", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Jaitly", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "International Conference on Machine Learning (ICML)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N. Jaitly and G. E. Hinton. 2013. Vocal tract length perturbation (vtlp) improves speech recog- nition. In International Conference on Machine Learning (ICML).", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Imagenet classification with deep convolutional neural networks", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Krizhevsky", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Advances in Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "1097--1105", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Krizhevsky, I. Sutskever, and G. E. Hinton. 2012. Imagenet classification with deep convolutional neu- ral networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097-1105.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Using semantic unification to generate regular expressions from natural language", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Kushman", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "826--836", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N. Kushman and R. Barzilay. 2013. Using semantic unification to generate regular expressions from nat- ural language. In Human Language Technology and North American Association for Computational Lin- guistics (HLT/NAACL), pages 826-836.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Inducing probabilistic CCG grammars from logical form with higher-order unification", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Kwiatkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1223--1233", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order uni- fication. In Empirical Methods in Natural Language Processing (EMNLP), pages 1223-1233.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Lexical generalization in CCG grammar induction for semantic parsing", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Kwiatkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1512--1523", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Empirical Methods in Natural Language Processing (EMNLP), pages 1512-1523.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Learning dependency-based compositional semantics", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "I" |
| ], |
| "last": "Jordan", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "590--599", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Liang, M. I. Jordan, and D. Klein. 2011. Learn- ing dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590-599.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Generative oversampling for mining imbalanced datasets", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Ghosh", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Martin", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "International Conference on Data Mining (DMIN)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Liu, J. Ghosh, and C. Martin. 2007. Generative oversampling for mining imbalanced datasets. In In- ternational Conference on Data Mining (DMIN).", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Effective approaches to attention-based neural machine translation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1412--1421", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Luong, H. Pham, and C. D. Manning. 2015a. Effective approaches to attention-based neural ma- chine translation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1412-1421.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Addressing the rare word problem in neural machine translation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Zaremba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "11--19", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In Associa- tion for Computational Linguistics (ACL), pages 11- 19.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Listen, attend, and walk: Neural mapping of navigational instructions to action sequences", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Mei", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "R" |
| ], |
| "last": "Walter", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Mei, M. Bansal, and M. R. Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Association for the Advancement of Artificial Intelligence (AAAI).", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Uptraining for accurate deterministic question parsing", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ringgaard", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Alshawi", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Petrov, P. Chang, M. Ringgaard, and H. Alshawi. 2010. Uptraining for accurate deterministic ques- tion parsing. In Empirical Methods in Natural Lan- guage Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Grounded unsupervised semantic parsing", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Poon", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Poon. 2013. Grounded unsupervised semantic pars- ing. In Association for Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Sequence to sequence learning with neural networks", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "3104--3112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "I. Sutskever, O. Vinyals, and Q. V. Le. 2014. Se- quence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems (NIPS), pages 3104-3112.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Pointer networks", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Fortunato", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Jaitly", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "2674--2682", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "O. Vinyals, M. Fortunato, and N. Jaitly. 2015a. Pointer networks. In Advances in Neural Information Pro- cessing Systems (NIPS), pages 2674-2682.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Grammar as a foreign language", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "2755--2763", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. 2015b. Grammar as a foreign lan- guage. In Advances in Neural Information Process- ing Systems (NIPS), pages 2755-2763.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Dropout training as adaptive regularization", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Wager", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "I" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Wager, S. I. Wang, and P. Liang. 2013. Dropout training as adaptive regularization. In Advances in Neural Information Processing Systems (NIPS).", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "That's so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W. Y. Wang and D. Yang. 2015. That's so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets. In Empirical Methods in Natural Language Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Building a semantic parser overnight", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Berant", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Wang, J. Berant, and P. Liang. 2015. Building a semantic parser overnight. In Association for Com- putational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Learning for semantic parsing with statistical machine translation", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "W" |
| ], |
| "last": "Wong", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "North American Association for Computational Linguistics (NAACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "439--446", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. W. Wong and R. J. Mooney. 2006. Learning for se- mantic parsing with statistical machine translation. In North American Association for Computational Linguistics (NAACL), pages 439-446.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Learning synchronous grammars for semantic parsing with lambda calculus", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "W" |
| ], |
| "last": "Wong", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "960--967", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. W. Wong and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL), pages 960-967.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Learning to parse database queries using inductive logic programming", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zelle", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "1050--1055", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic pro- gramming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050-1055.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Uncertainty in Artificial Intelligence (UAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "658--666", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classifica- tion with probabilistic categorial grammars. In Un- certainty in Artificial Intelligence (UAI), pages 658- 666.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Online learning of relaxed CCG grammars for parsing to logical form", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "678--687", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. S. Zettlemoyer and M. Collins. 2007. Online learn- ing of relaxed CCG grammars for parsing to log- ical form. In Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning (EMNLP/CoNLL), pages 678-687.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Character-level convolutional networks for text classification", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "LeCun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "X. Zhang, J. Zhao, and Y. LeCun. 2015. Character- level convolutional networks for text classification. In Advances in Neural Information Processing Sys- tems (NIPS).", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Type-driven incremental semantic parsing with polymorphism", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "North American Association for Computational Linguistics (NAACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Zhao and L. Huang. 2015. Type-driven incremen- tal semantic parsing with polymorphism. In North American Association for Computational Linguis- tics (NAACL).", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "A sample of our artificial data.", |
| "num": null |
| }, |
| "TABREF0": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Original Examples</td></tr><tr><td>what are the major cities in utah ?</td></tr><tr><td>what states border maine ?</td></tr><tr><td>Induce Grammar</td></tr><tr><td>Synchronous CFG</td></tr><tr><td>Sample New Examples</td></tr><tr><td>Train Model</td></tr><tr><td>Sequence-to-sequence RNN</td></tr></table>", |
| "text": ", recurrent neural networks (RNNs)" |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Sample new example (x , y ) from G</td></tr><tr><td>Add (x , y ) to Dt</td></tr><tr><td>end for</td></tr><tr><td>Shuffle Dt</td></tr><tr><td>for each example (x, y) in Dt do</td></tr><tr><td>\u03b8 \u2190 \u03b8 + \u03b7t\u2207 log p \u03b8 (y | x)</td></tr><tr><td>end for</td></tr><tr><td>end for</td></tr><tr><td>end function</td></tr></table>", |
| "text": "function TRAIN(dataset D, number of epochs T , number of examples to sample n) Induce grammar G from D Initialize RNN parameters \u03b8 randomly for each iteration t = 1, . . . , T do Compute current learning rate \u03b7t Initialize current dataset Dt to D for i = 1, . . . , n do" |
| }, |
| "TABREF4": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Previous Work</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Wang et al. (2015)</td><td>46.3</td><td>41.9</td><td>74.4</td><td>54.0</td><td>59.0</td><td>70.8</td><td>75.9</td><td>48.2</td><td>58.8</td></tr><tr><td>Our Model</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>No Recombination</td><td>85.2</td><td>58.1</td><td>78.0</td><td>71.4</td><td>76.4</td><td>79.6</td><td>76.2</td><td>81.4</td><td>75.8</td></tr><tr><td>ABSENTITIES</td><td>86.7</td><td>60.2</td><td>78.0</td><td>65.6</td><td>73.9</td><td>77.3</td><td>79.5</td><td>81.3</td><td>75.3</td></tr><tr><td>ABSWHOLEPHRASES</td><td>86.7</td><td>55.9</td><td>79.2</td><td>69.8</td><td>76.4</td><td>77.8</td><td>80.7</td><td>80.9</td><td>75.9</td></tr><tr><td>CONCAT-2</td><td>84.7</td><td>60.7</td><td>75.6</td><td>69.8</td><td>74.5</td><td>80.1</td><td>79.5</td><td>80.8</td><td>75.7</td></tr><tr><td>AWP + AE</td><td>85.2</td><td>54.1</td><td>78.6</td><td>67.2</td><td>73.9</td><td>79.6</td><td>81.9</td><td>82.1</td><td>75.3</td></tr><tr><td>AWP + AE + C2</td><td>87.5</td><td>60.2</td><td>81.0</td><td>72.5</td><td>78.3</td><td>81.0</td><td>79.5</td><td>79.6</td><td>77.5</td></tr></table>", |
| "text": "BASKETBALL BLOCKS CALENDAR HOUSING PUBLICATIONS RECIPES RESTAURANTS SOCIAL Avg." |
| } |
| } |
| } |
| } |