{ "paper_id": "P19-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:28:40.042547Z" }, "title": "Semantic Parsing with Dual Learning", "authors": [ { "first": "Ruisheng", "middle": [], "last": "Cao", "suffix": "", "affiliation": { "laboratory": "MoE Key Lab of Artificial Intelligence SpeechLab", "institution": "Jiao Tong University", "location": { "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Su", "middle": [], "last": "Zhu", "suffix": "", "affiliation": { "laboratory": "MoE Key Lab of Artificial Intelligence SpeechLab", "institution": "Jiao Tong University", "location": { "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Chen", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "MoE Key Lab of Artificial Intelligence SpeechLab", "institution": "Jiao Tong University", "location": { "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Jieyu", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "MoE Key Lab of Artificial Intelligence SpeechLab", "institution": "Jiao Tong University", "location": { "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "MoE Key Lab of Artificial Intelligence SpeechLab", "institution": "Jiao Tong University", "location": { "settlement": "Shanghai", "country": "China" } }, "email": "kai.yu@sjtu.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Semantic parsing converts natural language queries into structured logical forms. The paucity of annotated training samples is a fundamental challenge in this field. In this work, we develop a semantic parsing framework with the dual learning algorithm, which enables a semantic parser to make full use of data (labeled and even unlabeled) through a dual-learning game. 
This game between a primal model (semantic parsing) and a dual model (logical form to query) forces them to regularize each other, and can achieve feedback signals from some prior-knowledge. By utilizing the prior-knowledge of logical form structures, we propose a novel reward signal at the surface and semantic levels which tends to generate complete and reasonable logical forms. Experimental results show that our approach achieves new state-of-the-art performance on ATIS dataset and gets competitive performance on OVERNIGHT dataset.", "pdf_parse": { "paper_id": "P19-1007", "_pdf_hash": "", "abstract": [ { "text": "Semantic parsing converts natural language queries into structured logical forms. The paucity of annotated training samples is a fundamental challenge in this field. In this work, we develop a semantic parsing framework with the dual learning algorithm, which enables a semantic parser to make full use of data (labeled and even unlabeled) through a dual-learning game. This game between a primal model (semantic parsing) and a dual model (logical form to query) forces them to regularize each other, and can achieve feedback signals from some prior-knowledge. By utilizing the prior-knowledge of logical form structures, we propose a novel reward signal at the surface and semantic levels which tends to generate complete and reasonable logical forms. Experimental results show that our approach achieves new state-of-the-art performance on ATIS dataset and gets competitive performance on OVERNIGHT dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic parsing is the task of mapping a natural language query into a logical form (Zelle and Mooney, 1996; Wong and Mooney, 2007; Zettlemoyer and Collins, 2007; Lu et al., 2008; Zettlemoyer and Collins, 2005) . 
A logical form is one type of meaning representation understood by computers, which can usually be executed by an executor to obtain the answer.", "cite_spans": [ { "start": 85, "end": 109, "text": "(Zelle and Mooney, 1996;", "ref_id": "BIBREF54" }, { "start": 110, "end": 132, "text": "Wong and Mooney, 2007;", "ref_id": "BIBREF44" }, { "start": 133, "end": 163, "text": "Zettlemoyer and Collins, 2007;", "ref_id": "BIBREF55" }, { "start": 164, "end": 180, "text": "Lu et al., 2008;", "ref_id": "BIBREF24" }, { "start": 181, "end": 211, "text": "Zettlemoyer and Collins, 2005)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The successful application of recurrent neural networks (RNN) in a variety of NLP tasks (Bahdanau et al., 2014; Sutskever et al., 2014; Vinyals et al., 2015) has provided strong impetus to treat semantic parsing as a sequence-to-sequence (Seq2seq) problem (Jia and Liang, 2016; Dong and Lapata, 2016) . This approach generates a logical form based on the input query in an end-to-end manner but still leaves two main issues: (1) lack of labeled data and (2) constrained decoding.", "cite_spans": [ { "start": 88, "end": 111, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF0" }, { "start": 112, "end": 135, "text": "Sutskever et al., 2014;", "ref_id": "BIBREF35" }, { "start": 136, "end": 157, "text": "Vinyals et al., 2015)", "ref_id": "BIBREF41" }, { "start": 255, "end": 276, "text": "(Jia and Liang, 2016;", "ref_id": "BIBREF15" }, { "start": 277, "end": 299, "text": "Dong and Lapata, 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Firstly, semantic parsing relies on sufficient labeled data. However, data annotation for semantic parsing is a labor-intensive and time-consuming task. 
In particular, logical forms are unfriendly for human annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Secondly, unlike natural language sentences, a logical form is strictly structured. For example, the lambda expression of \"show flight from ci0 to ci1\" is ( lambda $0 e ( and ( from $0 ci0 ) ( to $0 ci1 ) ( flight $0 ) ) ). If we impose no constraints on decoding, the generated logical form may be invalid or incomplete at the surface and semantic levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Surface The generated sequence should be structured as a complete logical form. For example, left and right parentheses should be matched to force the generated sequence to be a valid tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Semantic Although the generated sequence may be a legal logical form at the surface level, it may still be meaningless or semantically ill-formed. For example, the predefined binary predicate from takes no more than two arguments. The first argument must represent a flight and the second argument should be a city.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To avoid producing incomplete or semantically ill-formed logical forms, the output space must be constrained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we introduce a semantic parsing framework (see Figure 1 ) by incorporating dual learning (He et al., 2016) to tackle the problems mentioned above. In this framework, we have a primal task (query to logical form) and a dual task (logical form to query). They can form a closed loop, and generate informative feedback signals to train the primal and dual models even without supervision. 
In this loop, the primal and dual models restrict or regularize each other by generating intermediate output in one model and then checking it in the other. In fact, this can be viewed as a method of data augmentation. Thus it can leverage unlabeled data (queries or synthesized logical forms) more effectively, which helps alleviate the lack of annotated data.", "cite_spans": [ { "start": 104, "end": 121, "text": "(He et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 62, "end": 70, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the dual learning framework, the primal and dual models are represented as two agents and they teach each other through a reinforcement learning process. To force the generated logical forms to be complete and well-formed, we propose a novel validity reward that checks the output of the primal model at the surface and semantic levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate our approach on two standard datasets: ATIS and OVERNIGHT. The results show that our method can obtain significant improvements over strong baselines on both datasets with fully labeled data, and even outperforms state-of-the-art results on ATIS. With additional logical forms synthesized from rules or templates, our method is competitive with state-of-the-art systems on OVERNIGHT. Furthermore, our method is compatible with various semantic parsing models. 
We also conduct extensive experiments to further investigate our framework in semi-supervised settings, trying to figure out why it works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of this paper are summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 An innovative semantic parsing framework based on dual learning is introduced, which can fully exploit data (labeled or unlabeled) and incorporate various prior-knowledge as feedback signals. We are the first to introduce dual learning in semantic parsing to the best of our knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We further propose a novel validity reward focusing on the surface and semantics of logical forms, which is a feedback signal indicating whether the generated logical form is well-formed. It involves the prior-knowledge about structures of logical forms predefined in a domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We conduct extensive experiments on ATIS and OVERNIGHT benchmarks. The results show that our method achieves new stateof-the-art performance (test accuracy 89.1%) on ATIS dataset and gets competitive performance on OVERNIGHT dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Before discussing the dual learning algorithm for semantic parsing, we first present the primal and dual tasks (as mentioned before) in detail. The primal and dual tasks are modeled on the attention-based Encoder-Decoder architectures (i.e. Seq2seq) which have been successfully applied in neural semantic parsing (Jia and Liang, 2016; Dong and Lapata, 2016) . 
We also include a copy mechanism (See et al., 2017) to tackle unknown tokens.", "cite_spans": [ { "start": 314, "end": 335, "text": "(Jia and Liang, 2016;", "ref_id": "BIBREF15" }, { "start": 336, "end": 358, "text": "Dong and Lapata, 2016)", "ref_id": "BIBREF7" }, { "start": 392, "end": 409, "text": "See et al., 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Primal and Dual Tasks of Semantic Parsing", "sec_num": "2" }, { "text": "The primal task is semantic parsing, which converts queries into logical forms (Q2LF ). Let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "x = x 1 \u2022 \u2022 \u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "x |x| denote the query, and y = y 1 \u2022 \u2022 \u2022 y |y| denote the logical form. An encoder is exploited to encode the query x into vector representations, and a decoder learns to generate the logical form y depending on the encoding vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "Encoder Each word x i is mapped to a fixed-dimensional vector by a word embedding function \u03c8(\u2022) and then fed into a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) . 
The hidden vectors are recursively computed at the i-th time step via:", "cite_spans": [ { "start": 134, "end": 168, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2212 \u2192 h i =f LSTM (\u03c8(x i ), \u2212 \u2192 h i\u22121 ), i = 1, \u2022 \u2022 \u2022 , |x| (1) \u2190 \u2212 h i =f LSTM (\u03c8(x i ), \u2190 \u2212 h i+1 ), i = |x|, \u2022 \u2022 \u2022 , 1 (2) h i =[ \u2212 \u2192 h i ; \u2190 \u2212 h i ]", "eq_num": "(3)" } ], "section": "Primal Task", "sec_num": "2.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "[\u2022; \u2022] denotes vector concatenation, h i \u2208 R 2n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": ", n is the number of hidden cells and f LSTM is the LSTM function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "Decoder Decoder is an unidirectional LSTM with the attention mechanism (Luong et al., 2015) . The hidden vector at the t-th time step is computed by", "cite_spans": [ { "start": 71, "end": 91, "text": "(Luong et al., 2015)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "s t = f LSTM (\u03c6(y t\u22121 ), s t\u22121 ), where \u03c6(\u2022)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "is the token embedding function for logical forms and s t \u2208 R n . The hidden vector of the first time step is initialized as s 0 = \u2190 \u2212 h 1 . 
The attention weight for the current step t of the decoder, with the i-th step in the encoder is a t", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "i = exp(u t i ) |x| j=1 exp(u t j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "u t i =v T tanh(W 1 h i + W 2 s t + b a )", "eq_num": "(4)" } ], "section": "Primal Task", "sec_num": "2.1" }, { "text": "Figure 1: An overview of dual semantic parsing framework. The primal model (Q2LF ) and dual model (LF 2Q) can form a closed cycle. But there are two different directed loops, depending on whether they start from a query or logical form. Validity reward is used to estimate the quality of the middle generation output, and reconstruction reward is exploited to avoid information loss. The primal and dual models can be pre-trained and fine-tuned with labeled data to keep the models effective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primal Task", "sec_num": "2.1" }, { "text": "where v, b a \u2208 R n , and W 1 \u2208 R n\u00d72n , W 2 \u2208 R n\u00d7n are parameters. Then we compute the vocabulary distribution P gen (y t |y logical_form->query starts from a query, generates possible logical forms by agent Q2LF and tries to reconstruct the original query by LF 2Q.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dual Model", "sec_num": "2.2" }, { "text": "The other loop logical_form->query->logical_form starts from the opposite side. 
Each agent will obtain quality feedback depending on reward functions defined in the directed loops.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dual Model", "sec_num": "2.2" }, { "text": "Suppose we have a fully labeled dataset T = { x, y }, an unlabeled dataset Q with only queries if available, and an unlabeled dataset LF with only logical forms if available. We first pre-train the primal model Q2LF and the dual model LF 2Q on T by maximum likelihood estimation (MLE). Let \u0398 Q2LF and \u0398 LF 2Q denote all the parameters of Q2LF and LF 2Q respectively. Our learning algorithm in each iteration consists of three parts:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning algorithm", "sec_num": "3.1" }, { "text": "As shown in Figure 1 (a), we sample a query x from Q \u222a T randomly. Given x, the Q2LF model can generate k possible logical forms y 1 , y 2 , \u2022 \u2022 \u2022 , y k via beam search (k is the beam size). For each y i , we can obtain a validity reward R val q (y i ) (a scalar) computed by a specific reward function which will be discussed in Section 3.2.1. 
After feeding y i into LF 2Q, we finally get a reconstruction reward R rec q (x, y i ), which forces the generated query to be as similar to x as possible and will be discussed in Section 3.2.2.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Loop starts from a query", "sec_num": "3.1.1" }, { "text": "A hyper-parameter \u03b1 is exploited to balance these two rewards in r q", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loop starts from a query", "sec_num": "3.1.1" }, { "text": "i = \u03b1R val q (y i ) + (1 \u2212 \u03b1)R rec q (x, y i ), where \u03b1 \u2208 [0, 1].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loop starts from a query", "sec_num": "3.1.1" }, { "text": "By utilizing policy gradient (Sutton et al., 2000) , the stochastic gradients of \u0398 Q2LF and \u0398 LF 2Q are computed as:", "cite_spans": [ { "start": 29, "end": 50, "text": "(Sutton et al., 2000)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Loop starts from a query", "sec_num": "3.1.1" }, { "text": "\u2207\u0398 Q2LF\u00ca [r] = 1 k k i=1 r q i \u2207\u0398 Q2LF log P (yi|x; \u0398Q2LF ) \u2207\u0398 LF 2Q\u00ca [r] = 1 \u2212 \u03b1 k k i=1 \u2207\u0398 LF 2Q log P (x|yi; \u0398LF 2Q)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loop starts from a query", "sec_num": "3.1.1" }, { "text": "As shown in Figure 1 (b), we sample a logical form y from LF \u222a T randomly. 
Given y, the LF 2Q model generates k possible queries", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Loop starts from a logical form", "sec_num": "3.1.2" }, { "text": "x 1 , x 2 , \u2022 \u2022 \u2022 , x k via beam search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loop starts from a logical form", "sec_num": "3.1.2" }, { "text": "For each x i , we can obtain a validity reward R val lf (x i ) (a scalar) which will also be discussed in Section 3.2.1. After feeding x i into Q2LF , we can also get a reconstruction reward R rec lf (y, x i ), which forces the generated logical form to be as similar to y as possible and will be discussed in Section 3.2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loop starts from a logical form", "sec_num": "3.1.2" }, { "text": "A hyper-parameter \u03b2 is also exploited to balance these two rewards by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loop starts from a logical form", "sec_num": "3.1.2" }, { "text": "r lf i = \u03b2R val lf (x i )+(1\u2212 \u03b2)R rec lf (y, x i ), where \u03b2 \u2208 [0, 1]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loop starts from a logical form", "sec_num": "3.1.2" }, { "text": ". By utilizing policy gradient, the stochastic gradients of \u0398 Q2LF and \u0398 LF 2Q are computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loop starts from a logical form", "sec_num": "3.1.2" }, { "text": "\u2207\u0398 LF 2Q\u00ca [r] = 1 k k i=1 r lf i \u2207\u0398 LF 2Q log P (xi|y; \u0398LF 2Q) \u2207\u0398 Q2LF\u00ca [r] = 1 \u2212 \u03b2 k k i=1 \u2207\u0398 Q2LF log P (y|xi; \u0398Q2LF )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loop starts from a logical form", "sec_num": "3.1.2" }, { "text": "The previous two stages are unsupervised learning processes, which need no labeled data. 
If there is no supervision for the primal and dual models after pre-training, these two models would degenerate, especially when T is limited. To keep the learning process stable and prevent the models from crashing, we randomly select samples from T to fine-tune the primal and dual models by maximum likelihood estimation (MLE). Details about the dual learning algorithm for semantic parsing are provided in Appendix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervisor guidance", "sec_num": "3.1.3" }, { "text": "As mentioned in Section 3.1, there are two types of reward functions in each loop: validity reward (R val q , R val lf ) and reconstruction reward (R rec q , R rec lf ). But each type of reward function may differ between the two loops.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reward design", "sec_num": "3.2" }, { "text": "Validity reward is used to evaluate the quality of intermediate outputs in a loop (see Figure 1 ). In the loop starting from a query, it indicates whether the generated logical forms are well-formed at the surface and semantic levels, while in the loop starting from a logical form, it indicates how natural and fluent the intermediate queries are. Loop starts from a query: We estimate the quality of the generated logical forms at two levels, i.e. surface and semantics. Firstly, we check whether the logical form is a complete tree without parenthesis mismatches. As for semantics, we check whether the logical form is understandable without errors like type inconsistency. It can be formulated as R val q (y) = grammar_error_indicator(y) (9) which returns 1 when y has no error at the surface and semantic levels, and returns 0 otherwise. If there exists an executing program or search engine for logical form y, e.g. 
the OVERNIGHT dataset (Wang et al., 2015) , a grammar_error_indicator(\u2022) is already included.", "cite_spans": [ { "start": 934, "end": 953, "text": "(Wang et al., 2015)", "ref_id": "BIBREF43" } ], "ref_spans": [ { "start": 87, "end": 95, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Validity reward", "sec_num": "3.2.1" }, { "text": "Otherwise, we construct a grammar error indicator based on the ontology of the corresponding dataset. For example, a specification of ATIS can be extracted by clarifying all (1) entities paired with their corresponding types, and (2) unary/binary predicates with argument constraints (see Table 1 ). Accordingly, Algorithm 1 abstracts the procedure of checking the surface and semantics of a logical form candidate y based on the specification. Loop starts from a logical form: A language model (LM) is exploited to evaluate the quality of intermediate queries (Mikolov et al., 2010) . We", "cite_spans": [ { "start": 559, "end": 581, "text": "(Mikolov et al., 2010)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 286, "end": 293, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Validity reward", "sec_num": "3.2.1" }, { "text": "Algorithm 1 Grammar error indicator on ATIS. Input: logical form string y; specification D. Output: 1/0, whether y is valid. 1: if to_lisp_tree(y) succeeds then 2: lispTree \u2190 to_lisp_tree(y), using Depth-First-Search over lispTree, 3: if type_consistent(lispTree, D) then 4: return 1 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validity reward", "sec_num": "3.2.1" }, { "text": "end if 6: end if 7: return 0 apply 
length-normalization (Wu et al., 2016) to make a fair comparison between short and long queries.", "cite_spans": [ { "start": 56, "end": 73, "text": "(Wu et al., 2016)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Validity reward", "sec_num": "3.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R val lf (x) = log LM q (x)/Length(x),", "eq_num": "(10)" } ], "section": "Validity reward", "sec_num": "3.2.1" }, { "text": "where LM q (\u2022) is a language model pre-trained on all the queries of Q \u222a T (referred to in Section 3.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validity reward", "sec_num": "3.2.1" }, { "text": "Reconstruction reward is used to estimate how similar the output of a loop is to its input. We take log likelihood as the reconstruction reward for both the loop starting from a query and the loop starting from a logical form. Thus,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reconstruction reward", "sec_num": "3.2.2" }, { "text": "R rec q (x, y i ) = log P (x|y i ; \u0398 LF 2Q ) R rec lf (y, x i ) = log P (y|x i ; \u0398 Q2LF )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reconstruction reward", "sec_num": "3.2.2" }, { "text": "where y i and x i are intermediate outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reconstruction reward", "sec_num": "3.2.2" }, { "text": "In this section, we evaluate our framework on the ATIS and OVERNIGHT datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "4" }, { "text": "ATIS We use the preprocessed version provided by Dong and Lapata (2018) , where natural language queries are lowercased and stemmed with NLTK (Loper and Bird, 2002) , and entity mentions are replaced by numbered markers. 
We also leverage an external lexicon that maps word phrases (e.g., first class) to entities (e.g., first:cl), as Jia and Liang (2016) did.", "cite_spans": [ { "start": 49, "end": 71, "text": "Dong and Lapata (2018)", "ref_id": "BIBREF8" }, { "start": 142, "end": 164, "text": "(Loper and Bird, 2002)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "OVERNIGHT It contains natural language paraphrases paired with logical forms across eight domains. We follow the traditional 80%/20% train/valid splits of Wang et al. (2015) to choose the best model during training.", "cite_spans": [ { "start": 155, "end": 173, "text": "Wang et al. (2015)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Neither ATIS nor OVERNIGHT provides unlabeled queries. To test our method in semi-supervised learning, we keep part of the training set as fully labeled data and leave the rest as unpaired queries and logical forms, which simulate unlabeled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Although most semantic parsing benchmarks provide no unlabeled queries, it should be easy to synthesize logical forms. Since a logical form is strictly structured and can be modified from an existing one or created from simple grammars, it is much cheaper to obtain than collected queries. Our synthesized logical forms are public 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthesis of logical forms", "sec_num": "4.2" }, { "text": "On ATIS, we randomly sample a logical form from the training set, and select one entity or predicate for replacement according to the specification in Table 1 . If the new logical form after replacement is valid and never seen, it is added to the unsupervised set. 4592 new logical forms are created for ATIS. An example is shown in Figure 2 . 
", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 158, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 333, "end": 341, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Modification based on ontology", "sec_num": "4.2.1" }, { "text": "Wang et al. (2015) proposed an underlying grammar to generate logical forms along with their corresponding canonical utterances on OVERNIGHT, which can be found in SEMPRE 3 . We reorder the entity instances (e.g., ENTITYNP) of one type (e.g., TYPENP) in the grammar files to generate new logical forms. We could include new entity instances if we wanted more unseen logical forms, but we did not do so. Finally, we get about 500 new logical forms for each domain on average. More examples can be found in Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generation based on grammar", "sec_num": "4.2.2" }, { "text": "We use 200 hidden units and 100-dimensional word vectors for all encoders and decoders of the Q2LF and LF 2Q models. All LSTMs are single-layer. Word embeddings on the query side are initialized by Glove6B (Pennington et al., 2014) . Out-of-vocabulary words are replaced with a special token unk . Other parameters are initialized by uniformly sampling within the interval [\u22120.2, 0.2]. The language model we use is also a single-layer LSTM with 200 hidden units and a 100-dim word embedding layer.", "cite_spans": [ { "start": 206, "end": 231, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Base models", "sec_num": "4.3.1" }, { "text": "We individually pre-train the Q2LF /LF 2Q models using only labeled data, and the language model LM q using both labeled and unlabeled queries. The language model is fixed for calculating rewards. The hyper-parameters \u03b1 and \u03b2 are selected according to the validation set (0.5 is used), and the beam size k is selected from {3, 5}. The batch size is selected from {10, 20}. 
We use optimizer Adam (Kingma and Ba, 2014) with learning rate 0.001 for all experiments. Finally, we evaluate the primal model (Q2LF , semantic parsing) and report test accuracy on each dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and decoding", "sec_num": "4.3.2" }, { "text": "We perform a PSEUDO baseline following the setup in Sennrich et al. (2016) and Guo et al. (2018) . The pre-trained LF 2Q or Q2LF model is used to generate pseudo query, logical f orm pairs from unlabeled logical forms or unlabeled queries, which extends the training set. The pseudo-labeled data is used carefully with a discount factor (0.5) in loss function (Lee, 2013) , when we train Q2LF by supervised training.", "cite_spans": [ { "start": 52, "end": 74, "text": "Sennrich et al. (2016)", "ref_id": "BIBREF32" }, { "start": 79, "end": 96, "text": "Guo et al. (2018)", "ref_id": "BIBREF11" }, { "start": 360, "end": 371, "text": "(Lee, 2013)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Results and analysis", "sec_num": "4.4" }, { "text": "The results are illustrated in Table 2 and 3. ATT and ATTPTR represent that the primal/dual models are attention-based Seq2seq and attention-based Seq2seq with copy mechanism respectively. We train models with the dual learning algorithm if DUAL is included, otherwise we only train the primal model by supervised training. LF refers to the synthesized logical forms. PSEUDO uses the (Wang et al., 2015) 46.3 41.9 74.4 54.0 59.0 70.8 75.9 48.2 58.8 DSP-C (Xiao et al., 2016) 80 Method ATIS ZC07 (Zettlemoyer and Collins, 2007) 84.6 FUBL (Kwiatkowski et al., 2011) 82.8 GUSP++ (Poon, 2013) 83.5 TISP (Zhao and Huang, 2015) 84.2 SEQ2TREE (Dong and Lapata, 2016) 84.6 ASN+SUPATT (Rabinovich et al., 2017) 85.9 TRANX 86.2 COARSE2FINE (Dong and Lapata, 2018) 87 LF 2Q model and LF to generate pseudo-labeled data. 
From the overall results, we can see that:", "cite_spans": [ { "start": 384, "end": 403, "text": "(Wang et al., 2015)", "ref_id": "BIBREF43" }, { "start": 455, "end": 474, "text": "(Xiao et al., 2016)", "ref_id": "BIBREF48" }, { "start": 495, "end": 526, "text": "(Zettlemoyer and Collins, 2007)", "ref_id": "BIBREF55" }, { "start": 537, "end": 563, "text": "(Kwiatkowski et al., 2011)", "ref_id": "BIBREF20" }, { "start": 576, "end": 588, "text": "(Poon, 2013)", "ref_id": "BIBREF29" }, { "start": 599, "end": 621, "text": "(Zhao and Huang, 2015)", "ref_id": "BIBREF57" }, { "start": 636, "end": 659, "text": "(Dong and Lapata, 2016)", "ref_id": "BIBREF7" }, { "start": 676, "end": 701, "text": "(Rabinovich et al., 2017)", "ref_id": "BIBREF30" }, { "start": 730, "end": 753, "text": "(Dong and Lapata, 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 31, "end": 38, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Main results", "sec_num": "4.4.1" }, { "text": "1) Even without the additional synthesized logical forms, the dual learning based semantic parser can outperform our baselines trained with supervised learning; e.g., \"ATT + DUAL\" performs much better than \"ATT + PSEUDO(LF)\" in Tables 2 and 3. We think the Q2LF and LF2Q models can teach each other in dual learning: each model sends informative signals that help regularize the other. It can also be viewed as a data augmentation procedure; e.g., Q2LF can generate samples utilized by LF2Q and vice versa. 
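One step of the Q2LF half of this game can be sketched as below (a schematic, not the paper's implementation: the validity and reconstruction rewards are stubbed with toy values, while r = \u03b1 R_val + (1 \u2212 \u03b1) R_rec and the reward-weighted 1/k scaling follow the reward definition in Appendix A):

```python
def total_reward(alpha, r_val, r_rec):
    """Combine validity and reconstruction rewards: r = alpha*R_val + (1-alpha)*R_rec."""
    return alpha * r_val + (1.0 - alpha) * r_rec

def policy_gradient_weights(rewards):
    """REINFORCE over k beam candidates: each candidate's log-prob gradient
    is scaled by its total reward divided by the beam size k."""
    k = len(rewards)
    return [r / k for r in rewards]

# Two beam candidates: one passes the hard grammar check (R_val = 1), one fails (0);
# reconstruction rewards stand in for log P(x | y_i) from the dual LF2Q model.
rewards = [total_reward(0.5, 1.0, -1.0), total_reward(0.5, 0.0, -3.0)]
weights = policy_gradient_weights(rewards)  # [0.0, -0.75]
```

The ill-formed candidate receives a strongly negative weight, which is the signal that pushes the primal model toward complete and reasonable logical forms.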
In contrast, PSEUDO depends heavily on the quality of the pseudo-samples, even when a discount factor is applied.", "cite_spans": [], "ref_spans": [ { "start": 233, "end": 240, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Main results", "sec_num": "4.4.1" }, { "text": "2) Involving the synthesized logical forms LF in dual learning for each domain improves performance further. We achieve state-of-the-art performance (89.1%) 4 on ATIS, as shown in Table 3 . On the OVERNIGHT dataset, we achieve competitive performance on average (80.2%). The best average accuracy is from Su and Yan (2017) , which benefits from cross-domain training. We believe our method could yield further improvements with stronger primal models (e.g., with domain adaptation), as it is compatible with various models.", "cite_spans": [ { "start": 332, "end": 349, "text": "Su and Yan (2017)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 209, "end": 216, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Main results", "sec_num": "4.4.1" }, { "text": "3) The copy mechanism remarkably improves accuracy on ATIS, but not on OVERNIGHT. The average accuracy even decreases from 80.2% to 79.9% when the copy mechanism is used. We argue that the OVERNIGHT dataset contains so few distinct entities that copying is not essential, and it also contains fewer training samples than ATIS. This phenomenon is also observed in Jia and Liang (2016) .", "cite_spans": [ { "start": 360, "end": 380, "text": "Jia and Liang (2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Main results", "sec_num": "4.4.1" }, { "text": "Semi-supervised learning We randomly keep part of the training set as labeled data T and leave the rest as unpaired queries (Q) and logical forms (LF) to validate our method in a semi-supervised setting. The ratio of labeled data is 50%. 
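The 50% semi-supervised setup can be sketched as a random split of the training pairs (an illustrative helper with names of our own choosing, not the released data-preparation code):

```python
import random

def semi_split(pairs, labeled_ratio=0.5, seed=0):
    """Keep a random fraction of (query, lf) pairs as labeled data T;
    strip the rest into unpaired queries Q and unpaired logical forms LF."""
    rng = random.Random(seed)
    shuffled = pairs[:]
    rng.shuffle(shuffled)
    n = int(len(shuffled) * labeled_ratio)
    labeled = shuffled[:n]
    queries = [q for q, _ in shuffled[n:]]
    logical_forms = [lf for _, lf in shuffled[n:]]
    return labeled, queries, logical_forms

pairs = [(f"q{i}", f"lf{i}") for i in range(10)]
labeled, Q, LF = semi_split(pairs)  # 5 labeled pairs, 5 unpaired queries/LFs
```

Note that Q and LF come from the same held-out pairs here; in the experiments they are simply treated as two independent unpaired pools.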
PSEUDO here uses the Q2LF model and Q to generate pseudo-labeled data, as well as the LF2Q model and LF. From Table 4 , we can see that the dual learning method dramatically outperforms the PSEUDO baseline on both datasets. The dual learning method is more efficient at exploiting unlabeled data. In general, both unpaired queries and logical forms could boost the performance of semantic parsers with dual learning. Table 4 : Semi-supervised learning experiments. We keep 50% of the training set as labeled data randomly, and leave the rest as unpaired queries (Q) and logical forms (LF) to simulate an unsupervised dataset.", "cite_spans": [], "ref_spans": [ { "start": 347, "end": 354, "text": "Table 4", "ref_id": null }, { "start": 574, "end": 581, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Ablation study", "sec_num": "4.4.2" }, { "text": "Different ratios To investigate the efficiency of our method in semi-supervised learning, we vary the ratio of labeled data kept on ATIS from 1% to 90%. In Figure 3 , we can see that the dual learning strategy enhances semantic parsing at all proportions. The most prominent gap occurs when the ratio is between 0.2 and 0.4. Generally, the more unlabeled data we have, the more remarkable the leap is. However, if labeled data is extremely limited, too little supervision can be exploited to keep the primal and dual models reasonable; for example, when the ratio of labeled data is only 1% to 10%, the improvement is not that obvious. Does more unlabeled data give better results?", "cite_spans": [], "ref_spans": [ { "start": 156, "end": 164, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Ablation study", "sec_num": "4.4.2" }, { "text": "We also fix the ratio of labeled data at 30%, and vary the ratio of unlabeled samples drawn from the rest of the data on ATIS, as illustrated in Figure 4 . Figure 4 : Test accuracies on ATIS. 
The ratio of labeled data is fixed at 30%, and the ratio of unlabeled samples drawn from the rest of the data is varied.", "cite_spans": [], "ref_spans": [ { "start": 132, "end": 140, "text": "Figure 4", "ref_id": null }, { "start": 148, "end": 156, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Ablation study", "sec_num": "4.4.2" }, { "text": "Even without unlabeled data (i.e., the ratio of unlabeled data is zero), the dual learning based semantic parser can outperform our baselines. However, the performance of our method does not improve consistently as the amount of unlabeled data increases. We think the power of the primal and dual models is constrained by the limited amount of labeled data. When complex queries or logical forms are involved, the two models may converge to an equilibrium where the rewards are high but the intermediate value loses some implicit semantic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation study", "sec_num": "4.4.2" }, { "text": "Choice for validity reward We conduct another experiment by replacing the validity reward in Eq. 9 with a length-normalized LM score (i.e., a language model over logical forms), as in Eq. 10. Results (Table 5) show that the \"hard\" surface/semantic check is more suitable than the \"soft\" probability of a logical form LM. Table 5 : Test accuracies on ATIS and OVERNIGHT in the semi-supervised learning setting (the ratio of labeled data is 50%). On OVERNIGHT, we average across all eight domains. LM lf means using a logical form language model for the validity reward, while \"grammar check\" means using the surface and semantic check.", "cite_spans": [], "ref_spans": [ { "start": 286, "end": 293, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Ablation study", "sec_num": "4.4.2" }, { "text": "We think that simple language models may suffer from long-range dependency and data imbalance issues, and it is hard for a sequential model to capture the inner structure of logical forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation study", "sec_num": "4.4.2" }, { "text": "Lack of data A semantic parser can be trained from labeled logical forms or weakly supervised samples (Krishnamurthy and Mitchell, 2012; Berant et al., 2013; Liang et al., 2017; Goldman et al., 2018) . Yih et al. (2016) demonstrate that logical forms can be collected efficiently and are more useful than mere answers to queries. Wang et al. (2015) construct a semantic parsing dataset starting from grammar rules and crowdsourcing paraphrases. Jia and Liang (2016) induce a synchronous context-free grammar (SCFG) and create new \"recombinant\" examples accordingly. Su and Yan (2017) use multiple source domains to reduce the cost of collecting data for the target domain. Guo et al. (2018) pre-train a question generation model to produce pseudo-labeled data as a supplement. In this paper, we introduce dual learning to make full use of data (both labeled and unlabeled). introduce a variational auto-encoding model for semi-supervised semantic parsing. Beyond semantic parsing, semi-supervised and adaptive learning are also typical in natural language understanding (Tur et al., 2005; Bapna et al., 2017; Zhu et al., 2014) .", "cite_spans": [ { "start": 102, "end": 136, "text": "(Krishnamurthy and Mitchell, 2012;", "ref_id": "BIBREF19" }, { "start": 137, "end": 157, "text": "Berant et al., 2013;", "ref_id": "BIBREF2" }, { "start": 158, "end": 177, "text": "Liang et al., 2017;", "ref_id": "BIBREF22" }, { "start": 178, "end": 199, "text": "Goldman et al., 2018)", "ref_id": "BIBREF9" }, { "start": 320, "end": 338, "text": "Wang et al. (2015)", "ref_id": "BIBREF43" }, { "start": 553, "end": 570, "text": "Su and Yan (2017)", "ref_id": "BIBREF33" }, { "start": 660, "end": 677, "text": "Guo et al. 
(2018)", "ref_id": "BIBREF11" }, { "start": 1065, "end": 1083, "text": "(Tur et al., 2005;", "ref_id": "BIBREF40" }, { "start": 1084, "end": 1103, "text": "Bapna et al., 2017;", "ref_id": "BIBREF1" }, { "start": 1104, "end": 1120, "text": "Zhu et al., 2014", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Constrained decoding To avoid invalid parses, additional restrictions must be considered in the decoding. Dong and Lapata (2016) propose SEQ2TREE method to ensure the matching of parentheses, which can generate syntactically valid output. Cheng et al. (2017) and Dong and Lapata (2018) both try to decode in two steps, from a coarse rough sketch to a finer structure hierarchically. Krishnamurthy et al. (2017) define a grammar of production rules such that only welltyped logical forms can be generated. Yin and Neubig (2017) and Chen et al. (2018a) both transform the generation of logical forms into query graph construction. Zhao et al. (2019) propose a hierarchical parsing model following the structure of semantic representations, which is predefined by domain developers. We introduce a validity reward at the surface and semantic levels in the dual learning algorithm as a constraint signal. Dual learning Dual learning framework is first proposed to improve neural machine translation (NMT) (He et al., 2016) . Actually, the primal and dual tasks are symmetric in NMT, while not in semantic parsing. The idea of dual learning has been applied in various tasks (Xia et al., 2017) , such as Question Answering/Generation (Tang et al., 2017 , Image-to-Image Translation (Yi et al., 2017) and Open-domain Information Extraction/Narration . We are the first to introduce dual learning in semantic parsing to the best of our knowledge.", "cite_spans": [ { "start": 106, "end": 128, "text": "Dong and Lapata (2016)", "ref_id": "BIBREF7" }, { "start": 239, "end": 258, "text": "Cheng et al. 
(2017)", "ref_id": "BIBREF6" }, { "start": 263, "end": 285, "text": "Dong and Lapata (2018)", "ref_id": "BIBREF8" }, { "start": 383, "end": 410, "text": "Krishnamurthy et al. (2017)", "ref_id": "BIBREF18" }, { "start": 505, "end": 526, "text": "Yin and Neubig (2017)", "ref_id": "BIBREF51" }, { "start": 531, "end": 550, "text": "Chen et al. (2018a)", "ref_id": "BIBREF3" }, { "start": 629, "end": 647, "text": "Zhao et al. (2019)", "ref_id": "BIBREF58" }, { "start": 1001, "end": 1018, "text": "(He et al., 2016)", "ref_id": "BIBREF12" }, { "start": 1170, "end": 1188, "text": "(Xia et al., 2017)", "ref_id": "BIBREF47" }, { "start": 1229, "end": 1247, "text": "(Tang et al., 2017", "ref_id": "BIBREF38" }, { "start": 1277, "end": 1294, "text": "(Yi et al., 2017)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this paper, we develop a semantic parsing framework based on dual learning algorithm, which enables a semantic parser to fully utilize labeled and even unlabeled data through a duallearning game between the primal and dual models. We also propose a novel reward function at the surface and semantic levels by utilizing the prior-knowledge of logical form structures. Thus, the primal model tends to generate complete and reasonable semantic representation. Experimental results show that semantic parsing based on dual learning improves performance across datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "In the future, we want to incorporate this framework with much refined primal and dual models, and design more informative reward signals to make the training more efficient. 
It would be appealing to apply graph neural networks (Chen et al., 2018b, 2019) . for each possible logical form y_i do", "cite_spans": [ { "start": 228, "end": 247, "text": "(Chen et al., 2018b", "ref_id": "BIBREF5" }, { "start": 248, "end": 268, "text": "(Chen et al., , 2019", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "Obtain validity reward for y_i as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11:", "sec_num": null }, { "text": "R^{val}_q(y_i) = grammar_error_indicator(y_i) 12:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11:", "sec_num": null }, { "text": "Get reconstruction reward for y_i as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11:", "sec_num": null }, { "text": "R^{rec}_q(x, y_i) = log P(x|y_i; \u0398_{LF2Q})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11:", "sec_num": null }, { "text": "13:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11:", "sec_num": null }, { "text": "Compute total reward for y_i as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11:", "sec_num": null }, { "text": "r^q_i = \u03b1 R^{val}_q(y_i) + (1 \u2212 \u03b1) R^{rec}_q(x, y_i) 14:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11:", "sec_num": null }, { "text": "end for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11:", "sec_num": null }, { "text": "Compute stochastic gradient of \u0398_{Q2LF}:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "\u2207_{\u0398_{Q2LF}} \u00ca[r] = (1/k) \u03a3_{i=1}^{k} r^q_i \u2207_{\u0398_{Q2LF}} log P(y_i|x; \u0398_{Q2LF})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "16:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "Compute stochastic gradient of \u0398 LF 
2Q :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "\u2207\u0398 LF 2Q\u00ca [r] = 1 \u2212 \u03b1 k k i=1 \u2207\u0398 LF 2Q log P (x|yi; \u0398LF 2Q)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "17:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "Model updates:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "\u0398 Q2LF \u2190\u0398 Q2LF + \u03b7 1 \u2022 \u2207 \u0398 Q2LF\u00ca [r] \u0398 LF 2Q \u2190\u0398 LF 2Q + \u03b7 2 \u2022 \u2207 \u0398 LF 2Q\u00ca [r]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "18:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "Sample a logical form y from LF \u222a T ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "19:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "LF 2Q model generates k queries x 1 , x 2 , \u2022 \u2022 \u2022 , x k via beam search; 20: for each possible query x i do 21: Obtain validity reward for x i as R val lf (x i ) = log LM q (x i )/Length(x i ) 22: Get reconstruction reward for x i as R rec lf (y, x i ) = log P (y|x i ; \u0398 Q2LF ) 23:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "Compute total reward for x i as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "r lf i = \u03b2R val lf (x i ) + (1 \u2212 \u03b2)R rec lf (y, x i ) 24:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "end for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15:", "sec_num": null }, { "text": "Compute stochastic gradient of \u0398 
LF 2Q :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "25:", "sec_num": null }, { "text": "\u2207\u0398 LF 2Q\u00ca [r] = 1 k k i=1 r lf i \u2207\u0398 LF 2Q log P (xi|y; \u0398LF 2Q)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "25:", "sec_num": null }, { "text": "Compute stochastic gradient of \u0398 Q2LF :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "26:", "sec_num": null }, { "text": "\u2207\u0398 Q2LF\u00ca [r] = 1 \u2212 \u03b2 k k i=1 \u2207\u0398 Q2LF log P (y|xi; \u0398Q2LF )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "26:", "sec_num": null }, { "text": "27:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "26:", "sec_num": null }, { "text": "Model updates:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "26:", "sec_num": null }, { "text": "\u0398 LF 2Q \u2190\u0398 LF 2Q + \u03b7 2 \u2022 \u2207 \u0398 LF 2Q\u00ca [r] \u0398 Q2LF \u2190\u0398 Q2LF + \u03b7 1 \u2022 \u2207 \u0398 Q2LF\u00ca [r]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "26:", "sec_num": null }, { "text": "After reinforcement learning process, use labeled data to fine-tune models ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "26:", "sec_num": null }, { "text": "Update \u0398 Q2LF by \u0398 Q2LF \u2190 \u0398 Q2LF + \u03b7 1 \u2022 \u2207 \u0398 Q2LF log P (y|x; \u0398 Q2LF ) 30: Update \u0398 LF 2Q by \u0398 LF 2Q \u2190 \u0398 LF 2Q + \u03b7 2 \u2022 \u2207 \u0398 LF 2Q log P (x|y; \u0398 LF 2Q ) 31: until Q2LF model converges B Examples of synthesized logical forms Original", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "26:", "sec_num": null }, { "text": "After Modification Entity Replacement ( lambda $0 e ( and ( flight $0 ) ( meal $0 lunch:me ) ( from $0 ci0 ) ( to $0 ci1 ) ) ) ( lambda $0 e ( and ( flight $0 ) ( meal $0 dinner:me ) ( from $0 ci0 ) ( to $0 ci1 ) ) ) ( = al0 ( abbrev delta:al ) ) ( = al0 ( abbrev usair:al 
) ) ( lambda $0 e ( and ( flight $0 ) ( class_type $0 thrift:cl ) ( from $0 ci1 ) ( to $0 ci0 ) ) ) ( lambda $0 e ( and ( flight $0 ) ( class_type $0 business:cl ) ( from $0 ci1 ) ( to $0 ci0 ) ) ) Unary Replacement ( lambda $0 e ( exists $1 ( and ( round_trip $1 ) ( from $1 ci0 ) ( to $1 ci1 ) ( = ( fare $1 ) $0 ) ) ) ) ( lambda $0 e ( exists $1 ( and ( oneway $1 ) ( from $1 ci0 ) ( to $1 ci1 ) ( = ( fare $1 ) $0 ) ) ) ) ( lambda $0 e ( and ( ground_transport $0 ) ( to_city $0 ci0 ) ) ) ( lambda $0 e ( and ( has_meal $0 ) ( to_city $0 ci0 ) ) ) ( lambda $0 e ( and ( taxi $0 ) ( to_city $0 ci0 ) ( from_airport $0 ap0 ) ) ) ( lambda $0 e ( and ( limousine $0 ) ( to_city $0 ci0 ) ( from_airport $0 ap0 ) ) ) Binary Replacement ( lambda $0 e ( and ( flight $0 ) ( airline $0 al0 ) ( approx_departure_time $0 ti0 ) ( from $0 ci0 ) ( to $0 ci1 ) ) ) ( lambda $0 e ( and ( flight $0 ) ( airline $0 al0 ) ( approx_arrival_time $0 ti0 ) ( from $0 ci0 ) ( to $0 ci1 ) ) ) ( lambda $0 e ( and ( flight $0 ) ( from $0 ci0 ) ( to $0 ci1 ) ( day_return $0 da0 ) ( day_number_return $0 dn0 ) ( month_return $0 mn0 ) ) ) ( lambda $0 e ( and ( flight $0 ) ( from $0 ci0 ) ( to $0 ci1 ) ( day_arrival $0 da0 ) ( day_number_arrival $0 dn0 ) ( month_arrival $0 mn0 ) ) ) ( lambda $0 e ( and ( flight $0 ) ( airline $0 al0 ) ( stop $0 ci0 ) ) ) ( lambda $0 e ( and ( flight $0 ) ( airline $0 al0 ) ( from $0 ci0 ) ) ) = ) en.field.history ) ) ( call SW.domain ( string student ) ) ) ( string student ) ) ) Table 7 : Examples of synthesized logical forms on OVERNIGHT.", "cite_spans": [], "ref_spans": [ { "start": 1680, "end": 1683, "text": "= )", "ref_id": null }, { "start": 1768, "end": 1775, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "26:", "sec_num": null }, { "text": "KB \u22121 (\u2022) is the inverse operation of KB(\u2022), which returns the set of all corresponding noun phrases given a KB entity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", 
"sec_num": null }, { "text": "https://github.com/RhythmCao/ Synthesized-Logical-Forms 3 https://github.com/percyliang/sempre", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Although there is another published result that achieved better performance by using word class information from Wiktionary(Wang et al., 2014), it is unfair to compare it with our results and other previous systems which only exploit data resources of ATIS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has been supported by the National Key Research and Development Program of China (Grant No.2017YFB1002102) and the China NSFC projects (No. 61573241).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. 
arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Towards zero-shot frame semantic parsing for domain scaling", "authors": [ { "first": "Ankur", "middle": [], "last": "Bapna", "suffix": "" }, { "first": "G\u00f6khan", "middle": [], "last": "T\u00fcr", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "Larry", "middle": [ "P" ], "last": "Heck", "suffix": "" } ], "year": 2017, "venue": "18th Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "2476--2480", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankur Bapna, G\u00f6khan T\u00fcr, Dilek Hakkani-T\u00fcr, and Larry P. Heck. 2017. Towards zero-shot frame semantic parsing for domain scaling. In Interspeech 2017, 18th Annual Conference of the International Speech Communication Association, pages 2476-2480.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Semantic parsing on freebase from question-answer pairs", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Chou", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Frostig", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1533--1544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533-1544.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Sequence-to-action: End-to-end semantic graph generation for semantic parsing", "authors": [ { "first": "Bo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Le", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Xianpei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "766--777", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Chen, Le Sun, and Xianpei Han. 2018a. Sequence-to-action: End-to-end semantic graph generation for semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 766-777.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Agentgraph: Towards universal dialogue management with structured deep reinforcement learning", "authors": [ { "first": "Lu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Sishan", "middle": [], "last": "Long", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.11259" ] }, "num": null, "urls": [], "raw_text": "Lu Chen, Zhi Chen, Bowen Tan, Sishan Long, Milica Gasic, and Kai Yu. 2019. Agentgraph: Towards universal dialogue management with structured deep reinforcement learning. 
arXiv preprint arXiv:1905.11259.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Structured dialogue policy with graph neural networks", "authors": [ { "first": "Lu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Sishan", "middle": [], "last": "Long", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1257--1268", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lu Chen, Bowen Tan, Sishan Long, and Kai Yu. 2018b. Structured dialogue policy with graph neural networks. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1257-1268, Santa Fe, New Mexico, USA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning structured natural language representations for semantic parsing", "authors": [ { "first": "Jianpeng", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Vijay", "middle": [], "last": "Saraswat", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "44--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2017. Learning structured natural language representations for semantic parsing. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 44-55.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Language to logical form with neural attention", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "33--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33-43.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Coarse-to-fine decoding for neural semantic parsing", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "731--742", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731-742.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Weakly supervised semantic parsing with abstract examples", "authors": [ { "first": "Omer", "middle": [], "last": "Goldman", "suffix": "" }, { "first": "Veronica", "middle": [], "last": "Latcinnik", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Nave", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Globerson", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1809--1819", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Goldman, Veronica Latcinnik, Ehud Nave, Amir Globerson, and Jonathan Berant. 2018. Weakly supervised semantic parsing with abstract examples. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1809-1819.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Pointing the unknown words", "authors": [ { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Ahn", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "140--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 140-149.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Question generation from sql queries improves neural semantic parsing", "authors": [ { "first": "Daya", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Yibo", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Chi", "suffix": "" }, { "first": "James", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1597--1607", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daya Guo, Yibo Sun, Duyu Tang, Nan Duan, Jian Yin, Hong Chi, James Cao, Peng Chen, and Ming Zhou. 2018. Question generation from sql queries improves neural semantic parsing. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1597-1607.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Dual learning for machine translation", "authors": [ { "first": "Di", "middle": [], "last": "He", "suffix": "" }, { "first": "Yingce", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Nenghai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wei-Ying", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "820--828", "other_ids": {}, "num": null, "urls": [], "raw_text": "Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820-828.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Neural semantic parsing over multiple knowledge-bases", "authors": [ { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "623--628", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Herzig and Jonathan Berant. 2017. Neural semantic parsing over multiple knowledge-bases.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 623-628.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Data recombination for neural semantic parsing", "authors": [ { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "12--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12-22.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Resource description framework (rdf): Concepts and abstract syntax", "authors": [ { "first": "Graham", "middle": [], "last": "Klyne", "suffix": "" }, { "first": "Jeremy", "middle": [ "J" ], "last": "Carroll", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graham Klyne and Jeremy J Carroll. 2006. Resource description framework (rdf): Concepts and abstract syntax.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural semantic parsing with type constraints for semi-structured tables", "authors": [ { "first": "Jayant", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1516--1526", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables.
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516-1526.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Weakly supervised training of semantic parsers", "authors": [ { "first": "Jayant", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "M", "middle": [], "last": "Tom", "suffix": "" }, { "first": "", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "754--765", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jayant Krishnamurthy and Tom M Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 754-765.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Lexical generalization in ccg grammar induction for semantic parsing", "authors": [ { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1512--1523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in ccg grammar induction for semantic parsing.
In Proceedings of the conference on empirical methods in natural language processing, pages 1512-1523.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "authors": [ { "first": "Dong-Hyun", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2013, "venue": "Workshop on Challenges in Representation Learning, ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dong-Hyun Lee. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural symbolic machines: Learning semantic parsers on freebase with weak supervision", "authors": [ { "first": "Chen", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Kenneth", "middle": [ "D" ], "last": "Forbus", "suffix": "" }, { "first": "Ni", "middle": [], "last": "Lao", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "23--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 23-33.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "NLTK: The natural language toolkit", "authors": [ { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Loper and Steven Bird. 2002. NLTK: The natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A generative model for parsing natural language to meaning representations", "authors": [ { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Wee", "middle": [], "last": "Hwee Tou Ng", "suffix": "" }, { "first": "Luke", "middle": [ "S" ], "last": "Sun Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "783--792", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations.
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 783-792.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Recurrent neural network based language model", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Karafi\u00e1t", "suffix": "" }, { "first": "Luk\u00e1\u0161", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Ja\u0148", "middle": [], "last": "Cernock\u1ef3", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2010, "venue": "Eleventh annual conference of the international speech communication association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Ja\u0148 Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model.
In Eleventh annual conference of the international speech communication association.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Abstractive text summarization using sequence-to-sequence rnns and beyond", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Cicero Dos Santos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "280--290", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation.
In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Grounded unsupervised semantic parsing", "authors": [ { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "933--943", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hoifung Poon. 2013. Grounded unsupervised semantic parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 933-943.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Abstract syntax networks for code generation and semantic parsing", "authors": [ { "first": "Maxim", "middle": [], "last": "Rabinovich", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Stern", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1139--1149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1139-1149.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Get to the point: Summarization with pointer-generator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1073--1083", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "86--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Cross-domain semantic parsing via paraphrasing", "authors": [ { "first": "Yu", "middle": [], "last": "Su", "suffix": "" }, { "first": "Xifeng", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1235--1246", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Su and Xifeng Yan. 2017. Cross-domain semantic parsing via paraphrasing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1235-1246.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Logician and orator: Learning from the duality between language and knowledge in open domain", "authors": [ { "first": "Mingming", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ping", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2119--2130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingming Sun, Xu Li, and Ping Li. 2018. Logician and orator: Learning from the duality between language and knowledge in open domain.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2119-2130.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104-3112.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Reinforcement learning: An introduction", "authors": [ { "first": "S", "middle": [], "last": "Richard", "suffix": "" }, { "first": "Andrew", "middle": [ "G" ], "last": "Sutton", "suffix": "" }, { "first": "", "middle": [], "last": "Barto", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction.
MIT press.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Policy gradient methods for reinforcement learning with function approximation", "authors": [ { "first": "S", "middle": [], "last": "Richard", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Sutton", "suffix": "" }, { "first": "", "middle": [], "last": "Mcallester", "suffix": "" }, { "first": "P", "middle": [], "last": "Satinder", "suffix": "" }, { "first": "Yishay", "middle": [], "last": "Singh", "suffix": "" }, { "first": "", "middle": [], "last": "Mansour", "suffix": "" } ], "year": 2000, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "1057--1063", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057-1063.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Question answering and question generation as dual tasks", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Zhao", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.02027" ] }, "num": null, "urls": [], "raw_text": "Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and Ming Zhou. 2017. Question answering and question generation as dual tasks.
arXiv preprint arXiv:1706.02027.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Learning to collaborate for question answering and asking", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Zhao", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Zhirui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yibo", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Shujie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yuanhua", "middle": [], "last": "Lv", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1564--1574", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duyu Tang, Nan Duan, Zhao Yan, Zhirui Zhang, Yibo Sun, Shujie Liu, Yuanhua Lv, and Ming Zhou. 2018. Learning to collaborate for question answering and asking. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1564-1574.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Combining active and semi-supervised learning for spoken language understanding", "authors": [ { "first": "Gokhan", "middle": [], "last": "Tur", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "Robert", "middle": [ "E" ], "last": "Schapire", "suffix": "" } ], "year": 2005, "venue": "Speech Communication", "volume": "45", "issue": "2", "pages": "171--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gokhan Tur, Dilek Hakkani-T\u00fcr, and Robert E. Schapire. 2005. Combining active and semi-supervised learning for spoken language understanding.
Speech Communication, 45(2):171-186.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Grammar as a foreign language", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2773--2781", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, \u0141ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in neural information processing systems, pages 2773-2781.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Morpho-syntactic lexical generalization for CCG semantic parsing", "authors": [ { "first": "Adrienne", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1284--1295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adrienne Wang, Tom Kwiatkowski, and Luke Zettlemoyer. 2014. Morpho-syntactic lexical generalization for CCG semantic parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1284-1295, Doha, Qatar.
Association for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Building a semantic parser overnight", "authors": [ { "first": "Yushi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1332--1342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1332-1342.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Learning synchronous grammars for semantic parsing with lambda calculus", "authors": [ { "first": "Yuk", "middle": [ "Wah" ], "last": "Wong", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuk Wah Wong and Raymond Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus.
In Proceedings of the 45th", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Annual Meeting of the Association of Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "960--967", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association of Computational Linguistics, pages 960-967.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", "authors": [ { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Le", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Gao", "suffix": "" }, { "first": "", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.08144" ] }, "num": null, "urls": [], "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation.
arXiv preprint arXiv:1609.08144.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Dual supervised learning", "authors": [ { "first": "Yingce", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Nenghai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "3789--3798", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. 2017. Dual supervised learning. In Proceedings of the 34th International Conference on Machine Learning, pages 3789-3798.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Sequence-based structured prediction for semantic parsing", "authors": [ { "first": "Chunyang", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Dymetman", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1341--1350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1341-1350.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Dualgan: Unsupervised dual learning for image-to-image translation", "authors": [ { "first": "Zili", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "2849--2857", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. 2017. Dualgan: Unsupervised dual learning for image-to-image translation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2849-2857.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "The value of semantic parse labeling for knowledge base question answering", "authors": [ { "first": "Matthew", "middle": [], "last": "Wen-Tau Yih", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Meek", "suffix": "" }, { "first": "Jina", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Suh", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "201--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wen-tau Yih, Matthew Richardson, Chris Meek, Ming-Wei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201-206.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "A syntactic neural model for general-purpose code generation", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "440--450", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440-450.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Tranx: A transition-based neural abstract syntax parser for semantic parsing and code generation", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "7--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Yin and Graham Neubig. 2018. Tranx: A transition-based neural abstract syntax parser for semantic parsing and code generation.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 7-12.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "StructVAE: Tree-structured latent variable models for semi-supervised semantic parsing", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Chunting", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "754--765", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Yin, Chunting Zhou, Junxian He, and Graham Neubig. 2018. StructVAE: Tree-structured latent variable models for semi-supervised semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 754-765, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Learning to parse database queries using inductive logic programming", "authors": [ { "first": "John", "middle": [ "M" ], "last": "Zelle", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the national conference on artificial intelligence", "volume": "", "issue": "", "pages": "1050--1055", "other_ids": {}, "num": null, "urls": [], "raw_text": "John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming.
In Proceedings of the national conference on artificial intelligence, pages 1050-1055.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Online learning of relaxed CCG grammars for parsing to logical form", "authors": [ { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", "authors": [ { "first": "Luke", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2005, "venue": "UAI", "volume": "", "issue": "", "pages": "658--666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars.
In UAI, pages 658-666.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Type-driven incremental semantic parsing with polymorphism", "authors": [ { "first": "Kai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1416--1421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Zhao and Liang Huang. 2015. Type-driven incremental semantic parsing with polymorphism. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1416-1421.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "A hierarchical decoding model for spoken language understanding from unaligned data", "authors": [ { "first": "Zijian", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Su", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2019, "venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "7305--7309", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zijian Zhao, Su Zhu, and Kai Yu. 2019. A hierarchical decoding model for spoken language understanding from unaligned data. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7305-7309.
IEEE.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Semantic parser enhancement for dialogue domain extension with little data", "authors": [ { "first": "Su", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Da", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2014, "venue": "Spoken Language Technology Workshop (SLT)", "volume": "", "issue": "", "pages": "336--341", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Zhu, Lu Chen, Kai Sun, Da Zheng, and Kai Yu. 2014. Semantic parser enhancement for dialogue domain extension with little data. In Spoken Language Technology Workshop (SLT), 2014 IEEE, pages 336-341. IEEE.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Robust spoken language understanding with unsupervised ASR-error adaptation", "authors": [ { "first": "Su", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ouyu", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "6179--6183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Zhu, Ouyu Lan, and Kai Yu. 2018. Robust spoken language understanding with unsupervised ASR-error adaptation. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2018, pages 6179-6183.
IEEE.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Concept transfer learning for adaptive language understanding", "authors": [ { "first": "Su", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "391--399", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Zhu and Kai Yu. 2018. Concept transfer learning for adaptive language understanding. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 391-399. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Synthesis of logical forms by replacement.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "Test accuracies on ATIS as the ratio of labeled data is varied, with the rest kept as unlabeled data.", "type_str": "figure", "uris": null }, "TABREF1": { "type_str": "table", "text": "", "content": "", "html": null, "num": null }, "TABREF3": { "type_str": "table", "text": "Test accuracies on OVERNIGHT compared with previous systems.", "content": "
", "html": null, "num": null }, "TABREF5": { "type_str": "table", "text": "Test accuracies on ATIS compared with previous systems.", "content": "
", "html": null, "num": null }, "TABREF8": { "type_str": "table", "text": "Dual Learning Framework for Semantic ParsingInput:1: Supervised dataset T = { x, y }; Unsupervised dataset for queries Q and logical forms LF; 2: Pre-trained models on T : Q2LF model P (y|x; \u0398 Q2LF ), LF 2Q model P (x|y; \u0398 LF 2Q ); 3: Pre-trained model on Q and queries of T : Language Model for queries LM q ; 4: Indicator performs surface and semantic check for a logical form: grammar_error_indicator(\u2022); 5: Beam search size k, hyper parameters \u03b1 and \u03b2, learning rate \u03b7 1 for Q2LF and \u03b7 2 for LF 2Q; Output: Parameters \u0398 Q2LF of Q2LF model", "content": "
A Detailed Algorithm
Algorithm 2
6: repeat
7:    Reinforcement learning process: uses unlabeled data, and also reuses labeled data
8:    Sample a query x from Q \u222a T;
9:    Q2LF model generates k logical forms y_1, y_2, ..., y_k via beam search;
10:
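The sampling-and-scoring loop of steps 6-10 can be sketched in a few lines. This is a minimal illustration only: the beam search, the LF2Q/LM_q reconstruction score, and the parenthesis-balance validity check below are hypothetical stand-ins for the actual Q2LF and LF2Q networks and the paper's grammar_error_indicator.

```python
# Toy sketch of one dual-learning step: sample a query, beam-search k candidate
# logical forms, and combine a validity reward with a (mocked) reconstruction
# reward. All model components here are placeholders, not the real networks.
import random

def grammar_error_indicator(tokens):
    # Surface-level validity check: all parentheses must be balanced.
    depth = 0
    for tok in tokens:
        if tok == '(':
            depth += 1
        elif tok == ')':
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

def beam_search_q2lf(query, k):
    # Placeholder beam search: emit k trivial candidate logical forms.
    return [['(', 'lambda', 'x', '(', query, 'x', ')', ')'] for _ in range(k)]

def dual_learning_step(query, k=5, alpha=0.5):
    candidates = beam_search_q2lf(query, k)  # step 9: k candidates via beam search
    rewards = []
    for lf in candidates:
        r_valid = 1.0 if grammar_error_indicator(lf) else 0.0
        r_recon = random.random()  # stand-in for the LF2Q + LM_q reconstruction score
        rewards.append(alpha * r_valid + (1.0 - alpha) * r_recon)
    return candidates, rewards

candidates, rewards = dual_learning_step('flight', k=3)
```

In the full algorithm the combined reward would then drive policy-gradient updates of both Q2LF and LF2Q; here the step only returns the scored candidates.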
", "html": null, "num": null }, "TABREF9": { "type_str": "table", "text": "Examples of synthesized logical forms on ATIS. SW.listValue ( call SW.getProperty ( ( lambda s ( call SW.filter ( var s ) ( string position ) ( string ! = ) en.position.point_guard ) ) ( call SW.domain ( string player ) ) ) ( string player ) ) ) new ( call SW.listValue ( call SW.getProperty ( ( lambda s ( call SW.filter ( var s ) ( string position ) ( string ! = ) en.position.forward ) ) ( call SW.domain ( string player ) ) ) ( string player ) ) ) Blo. pre ( call SW.listValue ( call SW.filter ( call SW.getProperty ( call SW.singleton en.block ) ( string ! type ) ) ( string shape ) ( string = ) en.shape.pyramid ) ) new ( call SW.listValue ( call SW.filter ( call SW.getProperty ( call SW.singleton en.block ) ( string ! type ) ) ( string shape ) ( string = ) en.shape.cube ) ) Cal. pre ( call SW.listValue ( call SW.filter ( call SW.getProperty ( call SW.singleton en.location ) ( string ! type ) ) ( call SW.reverse ( string location ) ) ( string = ) en.meeting.weekly_standup ) ) new ( call SW.listValue ( call SW.filter ( call SW.getProperty ( call SW.singleton en.location ) ( string ! type ) ) ( call SW.reverse ( string location ) ) ( string = ) en.meeting.annual_review ) ) Hou. pre ( call SW.listValue ( call SW.filter ( call SW.getProperty ( call SW.singleton en.housing_unit ) ( string ! type ) ) ( string housing_type ) ( string = ) ( call SW.concat en.housing.apartment en.housing.condo ) ) ) new ( call SW.listValue ( call SW.filter ( call SW.getProperty ( call SW.singleton en.housing_unit ) ( string ! type ) ) ( string housing_type ) ( string = ) ( call SW.concat en.housing.condo en.housing.apartment ) ) ) Pub. pre ( call SW.listValue ( call SW.filter ( call SW.getProperty ( call SW.singleton en.article ) ( string ! type ) ) ( string author ) ( string = ) en.person.efron ) ) new ( call SW.listValue ( call SW.filter ( call SW.getProperty ( call SW.singleton en.article ) ( string ! 
type ) ) ( string author ) ( string = ) en.person.lakoff ) ) Rec. pre ( call SW.listValue ( call SW.getProperty en.recipe.rice_pudding ( string cuisine ) ) ) new ( call SW.listValue ( call SW.getProperty en.recipe.quiche ( string cuisine ) ) ) Res. pre ( call SW.listValue ( call SW.getProperty en.restaurant.thai_cafe ( string neighborhood ) ) ) new ( call SW.listValue ( call SW.getProperty en.restaurant.pizzeria_juno ( string neighborhood ) ) ) Soc. pre ( call SW.listValue ( call SW.getProperty ( ( lambda s ( call SW.filter ( var s ) ( string field_of_study ) ( string ! = ) en.field.computer_science ) ) ( call SW.domain ( string student ) ) ) ( string student ) ) ) new ( call SW.listValue ( call SW.getProperty ( ( lambda s ( call SW.filter ( var s ) ( string field_of_study ) ( string !", "content": "
Domain  Logical Forms
Bas.  pre ( call
", "html": null, "num": null } } } }