{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:40:21.365574Z"
},
"title": "One Semantic Parser to Parse Them All: Sequence to Sequence Multi-Task Learning on Semantic Parsing Datasets",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Damonte",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Emilio",
"middle": [],
"last": "Monti",
"suffix": "",
"affiliation": {},
"email": "monti@amazon.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Semantic parsers map natural language utterances to meaning representations. The lack of a single standard for meaning representations led to the creation of a plethora of semantic parsing datasets. To unify different datasets and train a single model for them, we investigate the use of Multi-Task Learning (MTL) architectures. We experiment with five datasets (GEOQUERY, NLMAPS, TOP, OVERNIGHT, AMR). We find that an MTL architecture that shares the entire network across datasets yields competitive or better parsing accuracies than the single-task baselines, while reducing the total number of parameters by 68%. We further provide evidence that MTL also achieves better compositional generalization than single-task models. We also present a comparison of task sampling methods and propose a competitive alternative to widespread proportional sampling strategies.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Semantic parsers map natural language utterances to meaning representations. The lack of a single standard for meaning representations led to the creation of a plethora of semantic parsing datasets. To unify different datasets and train a single model for them, we investigate the use of Multi-Task Learning (MTL) architectures. We experiment with five datasets (GEOQUERY, NLMAPS, TOP, OVERNIGHT, AMR). We find that an MTL architecture that shares the entire network across datasets yields competitive or better parsing accuracies than the single-task baselines, while reducing the total number of parameters by 68%. We further provide evidence that MTL also achieves better compositional generalization than single-task models. We also present a comparison of task sampling methods and propose a competitive alternative to widespread proportional sampling strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic parsing is the task of converting natural language into a meaning representation language (MRL). The commercial success of personal assistants, that are required to understand language, has contributed to a growing interest in semantic parsing. A typical use case for personal assistants is Question Answering (Q&A): the output of a semantic parser is a data structure that represents the underlying meaning of a given question. This data structure can be compiled into a query to retrieve the correct answer. The lack of a single standard for meaning representations resulted in the creation of a plethora of semantic parsing datasets, which differ in size, domain, style, complexity, and in the formalism used as an MRL. These datasets are expensive to create, as they normally require expert annotators. Consequently, the datasets are often limited in size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multi-task Learning (MTL; Caruana 1997) refers to jointly learning several tasks while sharing parameters between them. In this paper, we use MTL to demonstrate that it is possible to unify these smaller datasets together to train a single model that can be used to parse sentences in any of the MRLs that appear in the data. We experiment with several Q&A semantic parsing datasets for English: GEOQUERY (Zelle and Mooney, 1996) , NLMAPS V2 (Lawrence and Riezler, 2018b) , TOP (Gupta et al., 2018) , and OVERNIGHT (Wang et al., 2015b) . In order to investigate the impact of less related tasks, we also experiment on a non-Q&A semantic parsing dataset, targeting a broader coverage meaning representation: AMR (Banarescu et al., 2013) , which contains sentences from sources such as broadcasts, newswire, and discussion forums.",
"cite_spans": [
{
"start": 26,
"end": 39,
"text": "Caruana 1997)",
"ref_id": "BIBREF11"
},
{
"start": 404,
"end": 428,
"text": "(Zelle and Mooney, 1996)",
"ref_id": "BIBREF61"
},
{
"start": 441,
"end": 470,
"text": "(Lawrence and Riezler, 2018b)",
"ref_id": "BIBREF34"
},
{
"start": 477,
"end": 497,
"text": "(Gupta et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 514,
"end": 534,
"text": "(Wang et al., 2015b)",
"ref_id": "BIBREF60"
},
{
"start": 710,
"end": 734,
"text": "(Banarescu et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our baseline parsing architecture is a reimplementation of the sequence to sequence model by Rongali et al. (2020) , which can be applied to any parsing task as long as the MRL can be expressed as a sequence. Inspired by Fan et al. (2017) , we experimented with two MTL architectures: 1-TO-N, where we share the encoder but not the decoder, and 1-TO-1, where we share the entire network. Previous work (Ruder, 2017; Collobert and Weston, 2008; Hershcovich et al., 2018) has focussed on a lesser degree of sharing more closely resembling the 1-TO-N architecture, but we found 1-TO-1 to consistently work better in our experiments.",
"cite_spans": [
{
"start": 93,
"end": 114,
"text": "Rongali et al. (2020)",
"ref_id": "BIBREF42"
},
{
"start": 221,
"end": 238,
"text": "Fan et al. (2017)",
"ref_id": "BIBREF18"
},
{
"start": 402,
"end": 415,
"text": "(Ruder, 2017;",
"ref_id": "BIBREF43"
},
{
"start": 416,
"end": 443,
"text": "Collobert and Weston, 2008;",
"ref_id": "BIBREF12"
},
{
"start": 444,
"end": 469,
"text": "Hershcovich et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we demonstrate that the 1-TO-1 architecture can be used to achieve competitive parsing accuracies for our heterogeneous set of semantic parsing datasets, while reducing the total number of parameters by 68%, overfitting less, and improving on a compositional generalization benchmark (Keysers et al., 2019) .",
"cite_spans": [
{
"start": 298,
"end": 320,
"text": "(Keysers et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We further perform an extensive analysis of alternative strategies to sample tasks during training. A number of methods to sample tasks proportionally to data sizes have been recently proposed (Wang et al., 2019b; Sanh et al., 2019; Wang et al., 2019a; Stickland and Murray, 2019) , which are often used as de facto standards for sampling strategies. These methods rely on the hypothesis that sampling proportionally to the task sizes avoids overfitting the smaller tasks. We show that this hypothesis does not hold in general, by comparing proportional methods with an inversely proportional sampling method and a method based on the per-task loss during training. Our comparison shows that no method is consistently superior to the others across architectures and datasets. We argue that the sampling method should be treated as another hyper-parameter of the model, specific to a problem and a training setup.",
"cite_spans": [
{
"start": 193,
"end": 213,
"text": "(Wang et al., 2019b;",
"ref_id": "BIBREF58"
},
{
"start": 214,
"end": 232,
"text": "Sanh et al., 2019;",
"ref_id": "BIBREF44"
},
{
"start": 233,
"end": 252,
"text": "Wang et al., 2019a;",
"ref_id": "BIBREF57"
},
{
"start": 253,
"end": 280,
"text": "Stickland and Murray, 2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We finally run experiments on dataset pairs, resulting in 40 distinct settings, to investigate which datasets are most helpful to others. Surprisingly, we observe that AMR and GEOQUERY can work well as auxiliary tasks. AMR is the only graph-structured, non-Q&A dataset, and was therefore not expected to help as much as the more related Q&A datasets. GEOQUERY is the smallest dataset we tested, showing that low-resource datasets can help high-resource ones instead of, more intuitively, the other way around.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "MTL refers to machine learning models that sample training examples from multiple tasks and share parameters amongst them. During training, a batch is sampled from one of the tasks and the parameter update only impacts the part of the network relevant to that task. The architecture for sequence to sequence semantic parsing that we use in this paper consists of an encoder, which converts the input sentence into a latent representation, and a decoder, which converts the latent representation into the output MRL (Jia and Liang, 2016; Konstas et al., 2017; Rongali et al., 2020) . While the input to each task always consists of natural language utterances, each task is in general characterized by a different meaning representation formalism. It therefore follows that the input (natural language) varies considerably less than the output (the meaning representation). Parameter sharing can therefore happen most intuitively in the encoder, where we learn parameters that encode a representation of the natural language. Nevertheless, a greater degree of sharing is possible, by also sharing parts of the decoder (Fan et al., 2017) . In this work, we experiment with two MTL architectures, as shown in Figure 1 : 1-TO-N, where we share the encoder but not the decoder, and 1-TO-1, where we share the entire network. As different datasets normally use different MRLs, in the 1-TO-1 architecture we also need a mechanism to inform the network of which MRL to generate. We therefore augment the input with a special token that identifies the task, following prior work.",
"cite_spans": [
{
"start": 515,
"end": 536,
"text": "(Jia and Liang, 2016;",
"ref_id": "BIBREF26"
},
{
"start": 537,
"end": 558,
"text": "Konstas et al., 2017;",
"ref_id": "BIBREF30"
},
{
"start": 559,
"end": 580,
"text": "Rongali et al., 2020)",
"ref_id": "BIBREF42"
},
{
"start": 1103,
"end": 1121,
"text": "(Fan et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 1192,
"end": 1200,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Sequence to Sequence Multi-Task Learning",
"sec_num": "2"
},
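The task-token mechanism for the 1-TO-1 architecture can be sketched as follows; this is an illustrative sketch, and the token strings and task names are placeholders, not the paper's actual vocabulary:

```python
# Special tokens that tell the shared 1-TO-1 decoder which MRL to emit.
# Token strings and task names are illustrative placeholders.
TASK_TOKENS = {
    "geoquery": "@GEO@",
    "nlmaps": "@NLMAPS@",
    "top": "@TOP@",
    "overnight": "@OVERNIGHT@",
    "amr": "@AMR@",
}

def prepend_task_token(tokens, task):
    """Augment the input utterance with a token identifying the target task."""
    return [TASK_TOKENS[task]] + list(tokens)
```

The shared decoder can then condition its entire output sequence on this single prefix token, with no task-specific parameters.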
{
"text": "In this section, we describe the datasets used, baseline architectures, and training details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "While we focussed on Q&A semantic parsing datasets, we further consider the AMR dataset in order to investigate the impact of MTL between considerably different datasets. Table 1 shows a training example from each dataset. The sizes of all datasets are shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 178,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 262,
"end": 269,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Geoquery Questions and queries about US geography (Zelle and Mooney, 1996) . The best results on this dataset are reported by Kwiatkowski et al. (2013) via Combinatory Categorial Grammar (Steedman, 1996, 2000) parsing.",
"cite_spans": [
{
"start": 50,
"end": 74,
"text": "(Zelle and Mooney, 1996)",
"ref_id": "BIBREF61"
},
{
"start": 126,
"end": 151,
"text": "Kwiatkowski et al. (2013)",
"ref_id": "BIBREF31"
},
{
"start": 187,
"end": 202,
"text": "(Steedman, 1996",
"ref_id": "BIBREF48"
},
{
"start": 203,
"end": 220,
"text": "(Steedman, , 2000",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "NLMaps v2 Questions about geographical facts (Lawrence and Riezler, 2018b) , retrieved from OpenStreetMap (Haklay and Weber, 2008) . To our knowledge, we are the first to train a parser on the full dataset. Previous work trained a neural parser on a small subset of the dataset and used the rest to experiment with feedback data (Lawrence and Riezler, 2018a) . We note that there exists a previous version of the dataset (Haas and Riezler, 2016) , for which state-of-the-art results have been achieved with a sequence to sequence approach (Duong et al., 2017) . We use the latest version of the dataset due to its larger size.",
"cite_spans": [
{
"start": 45,
"end": 74,
"text": "(Lawrence and Riezler, 2018b)",
"ref_id": "BIBREF34"
},
{
"start": 106,
"end": 130,
"text": "(Haklay and Weber, 2008)",
"ref_id": "BIBREF23"
},
{
"start": 329,
"end": 358,
"text": "(Lawrence and Riezler, 2018a)",
"ref_id": "BIBREF33"
},
{
"start": 421,
"end": 445,
"text": "(Haas and Riezler, 2016)",
"ref_id": "BIBREF22"
},
{
"start": 539,
"end": 559,
"text": "(Duong et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "TOP Navigation and event queries generated by crowdsourced workers (Gupta et al., 2018) . The queries are annotated with semantic frames comprising intents and slots. The best results are achieved by a sequence to sequence model (Aghajanyan et al., 2020) . [Figure 1 caption fragment: at the bottom, 1-TO-1, where we also share the decoder and add a special token at the beginning of the input sentence.] Table 2 : Details of each dataset. \"Train\", \"Dev\", and \"Test\" are the number of examples (questions paired with MRLs) in the training, development, and test splits. \"Src Vocab\" is the vocabulary size for the input (natural language) and \"Tgt Vocab\" is the vocabulary size for the output (meaning representation).",
"cite_spans": [
{
"start": 67,
"end": 87,
"text": "(Gupta et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 230,
"end": 255,
"text": "(Aghajanyan et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 379,
"end": 386,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Overnight This dataset (Wang et al., 2015b) contains Lambda DCS (Liang, 2013) annotations divided into eight domains: calendar, blocks, housing, restaurants, publications, recipes, socialnetwork, and basketball. Due to the small size of the domains, we merged them together. The current state-of-the-art results, on single domains, are reported by Su and Yan (2017) , who frame the problem as a paraphrasing task. They use denotation (answer) accuracy as a metric, while we report parsing accuracies, a stricter metric.",
"cite_spans": [
{
"start": 23,
"end": 43,
"text": "(Wang et al., 2015b)",
"ref_id": "BIBREF60"
},
{
"start": 64,
"end": 77,
"text": "(Liang, 2013)",
"ref_id": "BIBREF35"
},
{
"start": 348,
"end": 365,
"text": "Su and Yan (2017)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "AMR AMR (Banarescu et al., 2013) has been widely adopted in the semantic parsing community (Artzi et al., 2015; Flanigan et al., 2014; Wang et al., 2015a; Damonte et al., 2017; Titov and Henderson, 2007; Zhang et al., 2019) . We used the latest version of the dataset (LDC2017T10), for which the best results were reported by Bevilacqua et al. (2021) . The AMR dataset is different from the other datasets, not only in that it is not Q&A, but also in the formalism used to express the meaning representations. While for the other datasets the output logical forms can be represented as trees, in AMR each sentence is annotated as a rooted, directed graph, due to explicit representation of pronominal coreference, coordination, and control structures.",
"cite_spans": [
{
"start": 8,
"end": 32,
"text": "(Banarescu et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 91,
"end": 111,
"text": "(Artzi et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 112,
"end": 134,
"text": "Flanigan et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 135,
"end": 154,
"text": "Wang et al., 2015a;",
"ref_id": "BIBREF59"
},
{
"start": 155,
"end": 176,
"text": "Damonte et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 177,
"end": 203,
"text": "Titov and Henderson, 2007;",
"ref_id": "BIBREF54"
},
{
"start": 204,
"end": 223,
"text": "Zhang et al., 2019)",
"ref_id": "BIBREF62"
},
{
"start": 326,
"end": 350,
"text": "Bevilacqua et al. (2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "In order to use sequence to sequence architectures on AMR, a preprocessing step is required to remove variables in the annotations and linearize the graphs. In this work, we followed the linearization method by van Noord and Bos (2017). 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
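As a rough illustration of the variable-removal step in such a preprocessing pipeline, the sketch below drops the variable names from a one-line AMR string. This is a toy regex sketch under simplifying assumptions, not van Noord and Bos's actual method:

```python
import re

def strip_amr_variables(amr):
    """Remove 'variable /' prefixes from a one-line AMR string.

    Toy sketch only: each '(v / concept ...)' node becomes '(concept ...)'.
    Re-entrant variable mentions (e.g. ':ARG0 b') are left untouched, which
    is what makes the AMR a graph rather than a tree.
    """
    return re.sub(r"\b\w+\s*/\s*", "", amr)
```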
{
"text": "Our baseline parser is a reimplementation of Rongali et al. (2020): a single-task attentive sequence to sequence model (Bahdanau et al., 2015) with pointer network (Vinyals et al., 2015) . The input utterance is embedded with a pretrained ROBERTA encoder (Liu et al., 2019) , and subsequently fed into a TRANSFORMER (Vaswani et al., 2017) decoder. The encoder converts the input sequence of tokens x 1 , . . . , x n into a sequence of context-sensitive embeddings e 1 , . . . , e n . At each time step t, the decoder generates an action a t . There are two types of actions: output a symbol from the output vocabulary, or output a pointer to one of the input tokens x i . The final softmax layer provides a probability distribution, for a t , across all these possible actions. The probability with which we output a pointer to x i is determined by the attention score on x i . Finally, we use beam search to find the sequence of actions that maximizes the overall output sequence probability.",
"cite_spans": [
{
"start": 119,
"end": 142,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 164,
"end": 186,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF56"
},
{
"start": 255,
"end": 273,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF37"
},
{
"start": 316,
"end": 338,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Parser",
"sec_num": "3.2"
},
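The single softmax over generate-and-copy actions described above can be sketched numerically as follows. This is a pure-Python illustration of the action distribution only (the actual model operates on batched tensors):

```python
import math

def action_distribution(vocab_logits, attn_scores):
    """One distribution over [output-vocabulary symbols ; pointers to x_1..x_n].

    vocab_logits: scores for generating each symbol of the output vocabulary.
    attn_scores:  unnormalized attention scores on the n input tokens; the
                  probability of copying x_i comes from its attention score.
    """
    joint = list(vocab_logits) + list(attn_scores)
    m = max(joint)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in joint]
    z = sum(exps)
    return [e / z for e in exps]         # length = |vocab| + n, sums to 1
```

Because generation and copying compete inside one softmax, no separate gating parameter is needed to decide between the two action types.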
{
"text": "All models were trained with Adam (Kingma and Ba, 2014) on P3 AWS machines with one Tesla V100 GPU. To prevent overfitting, we used an early stopping policy to terminate training once the loss on the development set stops decreasing. To account for the effect of the random seed used for initialization, we train three instances of each model with different random seeds. We then report the average and standard deviation on the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
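The early stopping policy can be sketched as below; the patience value is a hypothetical choice for illustration, as the paper does not report one:

```python
def should_stop(dev_losses, patience=3):
    """Stop once the dev loss has not improved for `patience` epochs.

    dev_losses: per-epoch losses on the development set, oldest first.
    patience=3 is a hypothetical setting, not taken from the paper.
    """
    if len(dev_losses) <= patience:
        return False
    best_epoch = dev_losses.index(min(dev_losses))
    # Stop if the best epoch is more than `patience` epochs in the past.
    return best_epoch <= len(dev_losses) - 1 - patience
```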
{
"text": "We evaluate all Q&A parsing models using the exact match metric, which is computed as the percentage of input sentences that are parsed without any mistake. AMR is instead evaluated using SMATCH (Cai and Knight, 2013) , which computes the F1 score of graphs' nodes and edges. 2 We tuned hyper-parameters for each model based on exact match accuracies on their development sets. While AMR is typically evaluated on SMATCH, to simplify the tuning of our models, we use exact match also for AMR and compute the SMATCH score only for the final models. We performed manual searches (5 trials) for the following hyper-parameters: batch size (10 to 200), learning rate (0.04 to 0.08), number of layers (2 to 6) and units in the decoder (256 to 1024), number of attention heads (1 to 16), and dropout ratio (0.03 to 0.3). For the baseline, we selected the sets of hyper-parameters that maximize performance on the development set of each dataset. To tune the MTL model for each dataset would be costly: we instead selected the set of parameters that maximizes performance on the combination of all development sets. For analogous reasons, when presenting results on MTL between the 40 combinations of dataset pairs, we do not re-tune the models. Final hyper-parameters are shown in Appendix A.",
"cite_spans": [
{
"start": 195,
"end": 217,
"text": "(Cai and Knight, 2013)",
"ref_id": "BIBREF10"
},
{
"start": 276,
"end": 277,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
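The exact match metric reduces to a string comparison between predicted and gold MRLs; a minimal sketch:

```python
def exact_match(predictions, references):
    """Percentage of sentences parsed without any mistake, i.e. whose
    predicted MRL string equals the gold MRL string exactly."""
    assert len(predictions) == len(references) and predictions
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(predictions)
```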
{
"text": "In Section 4.1, we compare several sampling methods for the 1-TO-1 and 1-TO-N architectures. In Section 4.2 we then compare the MTL models with the single-task baselines. We turn to the issue of generalization in Section 4.3, where we use a recently introduced benchmark to evaluate the compositional generalization of our models. Finally, in Section 4.4 we report experiments between dataset pairs to find good auxiliary tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "As discussed in Section 2, each training batch is sampled from one of the tasks. A simple sampling strategy is to pick the task uniformly, i.e., a training batch is extracted from task t with probability p t = 1/N , where N is the number of tasks. Due to the considerable differences in the sizes of our datasets, we further investigate the impact of previously proposed sampling strategies that take dataset sizes into account:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "\u2022 PROPORTIONAL (Wang et al., 2019b; Sanh et al., 2019) , where p t is proportional to the size of the training set of task t: D t . That is:",
"cite_spans": [
{
"start": 15,
"end": 35,
"text": "(Wang et al., 2019b;",
"ref_id": "BIBREF58"
},
{
"start": 36,
"end": 54,
"text": "Sanh et al., 2019)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "p_t = D_t / \u2211_{t'} D_{t'};",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "\u2022 LOGPROPORTIONAL (Wang et al., 2019a) , where p t is proportional to log(D t );",
"cite_spans": [
{
"start": 18,
"end": 38,
"text": "(Wang et al., 2019a)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "\u2022 SQUAREROOT (Stickland and Murray, 2019) , where p t is proportional to \u221a D t ;",
"cite_spans": [
{
"start": 13,
"end": 41,
"text": "(Stickland and Murray, 2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "\u2022 POWER (Wang et al., 2019a) , where p t is proportional to D_t^0.75;",
"cite_spans": [
{
"start": 8,
"end": 28,
"text": "(Wang et al., 2019a)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "\u2022 ANNEALED (Stickland and Murray, 2019) , where p t is proportional to D_t^\u03b1, with \u03b1 decreasing at each epoch. When using proportional sampling methods, smaller tasks can be forgotten or interfered with, especially in the final epochs and when the final layers are shared (Stickland and Murray, 2019) . The annealed method can therefore be particularly useful for the 1-TO-1 architecture, where the decoder is shared.",
"cite_spans": [
{
"start": 11,
"end": 39,
"text": "(Stickland and Murray, 2019)",
"ref_id": "BIBREF50"
},
{
"start": 273,
"end": 301,
"text": "(Stickland and Murray, 2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "We further test two additional sampling strategies:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "\u2022 INVERSE, where p t is proportional to 1/D t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "The idea behind proportional sampling methods is to avoid overfitting smaller tasks and underfitting larger tasks. However, to the best of our knowledge, this intuitive hypothesis has not been explicitly tested. We test the opposite strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "\u2022 LOSS, where p t is proportional to L t , the loss on the development set for task t. This strategy therefore assigns higher sampling probabilities to harder tasks. This strategy is reminiscent of the active learning-inspired sampling method by Sharma et al. (2017) .",
"cite_spans": [
{
"start": 246,
"end": 266,
"text": "Sharma et al. (2017)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
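The strategies above amount to different weighting functions over the training-set sizes D_t (or dev losses L_t), normalized into sampling probabilities. A sketch under the definitions in this section:

```python
import math

def task_probs(sizes, strategy="proportional", alpha=1.0, losses=None):
    """Sampling probability p_t for each task, per the strategies above.

    sizes:  training-set size D_t of each task.
    alpha:  exponent for ANNEALED; the caller decreases it every epoch.
    losses: per-task dev losses L_t, used only by the LOSS strategy.
    """
    if strategy == "uniform":
        weights = [1.0 for _ in sizes]
    elif strategy == "proportional":
        weights = [float(d) for d in sizes]
    elif strategy == "logproportional":
        weights = [math.log(d) for d in sizes]
    elif strategy == "squareroot":
        weights = [math.sqrt(d) for d in sizes]
    elif strategy == "power":
        weights = [d ** 0.75 for d in sizes]
    elif strategy == "annealed":
        weights = [d ** alpha for d in sizes]
    elif strategy == "inverse":
        weights = [1.0 / d for d in sizes]
    elif strategy == "loss":
        weights = [float(l) for l in losses]
    else:
        raise ValueError("unknown strategy: " + strategy)
    total = sum(weights)
    return [w / total for w in weights]
```

At each training step, a task is drawn from this distribution and one batch is sampled from that task's training set.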
{
"text": "The results are shown in Table 3 for 1-TO-N and in Table 4 for 1-TO-1. We note that the choice of a sampling method depends on the MTL architecture and the dataset we want to optimize. The choice appears to be more critical for 1-TO-N than for 1-TO-1: for instance, in the case of NLMAPS, the difference between the best sampling method and the worst is 4.3 for 1-TO-N and only 1.3 for 1-TO-1. This suggests that sampling methods matter most when training the dedicated layers. 1-TO-1 also works well with PROPORTIONAL, which is expected to suffer from interference when sharing the final layers (Stickland and Murray, 2019) . As expected, ANNEALED, which explicitly addresses interference, works particularly well for 1-TO-1.",
"cite_spans": [
{
"start": 606,
"end": 634,
"text": "(Stickland and Murray, 2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 51,
"end": 58,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "We presented INVERSE as a way to test the intuition behind proportional strategies. Given the widespread use of proportional methods, we would expect PROPORTIONAL to largely outperform UNIFORM and INVERSE. We instead observe that in most cases it does not outperform INVERSE, and in some cases underperforms it. For 1-TO-1, it does not even match the results of UNIFORM. These results further suggest that no sampling method is generally superior; the sampling method should instead be picked as an additional hyper-parameter. They also highlight the need to further investigate sampling methods in MTL. The proposed LOSS method is faster and performs particularly well for 1-TO-N. Henceforth, we use LOSS for 1-TO-N and ANNEALED for 1-TO-1, which maximize the average accuracies across datasets. Table 5 compares the MTL results for the chosen sampling methods with the single-task baselines. We also report state-of-the-art parsing accuracies on each dataset for reference. Note that 1-TO-1 has more parameters than 1-TO-N. This is because the increased sharing of 1-TO-1 allowed us to train a larger model with 1024 hidden units instead of 512. In order to more directly compare the two MTL architectures, we also train a smaller 1-TO-1 model (1-TO-1-SMALL), which uses the same number of units as 1-TO-N. The results indicate that also sharing the decoder provides generally better results, even for the smaller model. Remarkably, compared to the single-task baseline, 1-TO-1 achieves a 68% reduction in the number of learnable parameters. Smaller models can have positive practical impacts, as they decrease memory consumption, reducing costs and carbon footprint (Schwartz et al., 2019) . We accomplish this without sacrificing parsing accuracies, which are competitive and in some cases higher than the baselines. This result is particularly promising, as we purposely included a heterogeneous set of tasks and use the same set of hyper-parameters for all of them. We can therefore train a single model with accurate parsing for a wide range of datasets, with fewer parameters. Table 5 also shows that MTL models are slower to converge. This is due to the regularization effect of training multiple tasks (Ruder, 2017) : as the loss on the development set keeps improving, the early stopping policy allows the MTL models to be trained for more epochs, resulting in longer training times. This regularization effect gives MTL better generalization (Caruana, 1997; Ruder, 2017) . In Figure 2 we compare the single-task TOP baseline against the 1-TO-1 model trained on all datasets and evaluated on TOP. We show training and development accuracies as a function of the epochs. We observe that the baseline overfits earlier (early stopping is triggered earlier) and generalizes less (the gap between dev set and training set is larger) compared to the MTL model.",
"cite_spans": [
{
"start": 1685,
"end": 1708,
"text": "(Schwartz et al., 2019)",
"ref_id": null
},
{
"start": 2232,
"end": 2245,
"text": "(Ruder, 2017)",
"ref_id": "BIBREF43"
},
{
"start": 2483,
"end": 2498,
"text": "(Caruana, 1997;",
"ref_id": "BIBREF11"
},
{
"start": 2499,
"end": 2511,
"text": "Ruder, 2017)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 794,
"end": 801,
"text": "Table 5",
"ref_id": null
},
{
"start": 2105,
"end": 2112,
"text": "Table 5",
"ref_id": null
},
{
"start": 2517,
"end": 2525,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Task Sampling",
"sec_num": "4.1"
},
{
"text": "We further evaluate our models on the CFQ dataset (Keysers et al., 2019) , designed to test compositional generalization. The idea behind datasets such as CFQ is to include test examples that contain unseen compositions of primitive elements (such as predicates, entities, and question types). To achieve this, a test set is sampled to maximize the compound divergence with the training set, hence containing unseen compositions (MCD). The dataset also contains a second test set, obtained with a random split. A parser that generalizes well is expected to achieve good results on both test sets. [Table 5 fragment; row: 1-TO-1-SMALL 76.7(\u00b11.4) 85.0(\u00b10.8) 85.9(\u00b10.2) 69.7(\u00b10.8) 64.9(\u00b11.3) 20h(\u00b15h) 169M] Table 5 : Results of multitasking between all five datasets, compared to the baseline single-task parsers and state-of-the-art results (SOTA) on these datasets. PARS indicates the total number of parameters (in millions). Results marked with * are not directly comparable, as discussed in Section 3.1. We evaluate MTL on CFQ by adding it as the sixth task. 3 We consider the relative improvements for MCD and RANDOM, as the baseline values are considerably different. We note larger improvements on MCD (+27%) than on RANDOM (+13%) when MTL is used. The results provide initial evidence that the MTL models result in better compositional generalization than the single-task baselines.",
"cite_spans": [
{
"start": 50,
"end": 72,
"text": "(Keysers et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 1028,
"end": 1029,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 697,
"end": 704,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generalization",
"sec_num": "4.3"
},
{
"text": "Finally, we trained MTL models on dataset pairs to find what datasets are good auxiliary tasks (i.e., tasks that are helpful to other tasks). Note that we do not tune the hyper-parameters of each pairwise model, as we would need to do a costly hyperparameter search over 40 models. The results are shown in Table 7 . The problem of choosing auxiliary tasks has been shown to be challenging (Alonso and Plank, 2016; Bingel and S\u00f8gaard,",
"cite_spans": [],
"ref_spans": [
{
"start": 307,
"end": 314,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Auxiliary Tasks",
"sec_num": "4.4"
},
{
"text": "Model | MCD | Random: KEYSERS 17.9 (\u00b10.9) | 98.5 (\u00b10.2); BASELINE 14.9 (\u00b11.5) | 84.9 (\u00b10.7); 1-TO-N 16.8 (\u00b10.6) | 95.9 (\u00b10.0); 1-TO-1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "18.9 (\u00b10.8) 95.6 (\u00b10.1) 2017; Hershcovich et al., 2018). Similar to task sampling methods, there is no easy recipe for choosing auxiliary tasks. However, our results yield the following surprising observations:",
"cite_spans": [
{
"start": 30,
"end": 55,
"text": "Hershcovich et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "1. AMR is the only dataset to use a graph-structured MRL, due to its explicit representation of pronominal coreference, coordination, and control structures. It is also the only non-Q&A dataset. Nevertheless, we note that AMR is a competitive auxiliary task, possibly due to its large size and scope. It is also surprising that AMR is often more helpful in the 1-TO-1 setup, where the whole network is shared and more related tasks are expected to be preferred.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "2. Transfer learning is often used to provide low-resource tasks with additional data from a higher-resource task. However, in our experiments, GEOQUERY, our smallest dataset, appears to be helpful for the larger TOP dataset. [Table 7: Experiments on dataset pairs. The rows are the auxiliary tasks and the columns are the main tasks.] (Abzianidze et al., 2017) and UCCA (Abend and Rappoport, 2013), to domain-specific datasets such as LCQUAD (Dubey et al., 2019) and KQA Pro (Shi et al., 2020). Following previous work on semantic parsing (Jia and Liang, 2016; Konstas et al., 2017; Fan et al., 2017; Hershcovich et al., 2018; Rongali et al., 2020), the baseline parser used in this work is based on the popular attentive sequence to sequence framework (Sutskever et al., 2014; Bahdanau et al., 2015). Pointer networks (Vinyals et al., 2015) have demonstrated the importance of decoupling the job of generating new output tokens from that of copying tokens from the input. To achieve this, our models use copy mechanisms, following previous work on semantic parsing (Rongali et al., 2020). We further rely on pre-trained embeddings (Liu et al., 2019).",
"cite_spans": [
{
"start": 334,
"end": 359,
"text": "(Abzianidze et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 369,
"end": 396,
"text": "(Abend and Rappoport, 2013)",
"ref_id": "BIBREF0"
},
{
"start": 442,
"end": 462,
"text": "(Dubey et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 475,
"end": 493,
"text": "(Shi et al., 2020)",
"ref_id": null
},
{
"start": 540,
"end": 561,
"text": "(Jia and Liang, 2016;",
"ref_id": "BIBREF26"
},
{
"start": 562,
"end": 583,
"text": "Konstas et al., 2017;",
"ref_id": "BIBREF30"
},
{
"start": 584,
"end": 601,
"text": "Fan et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 602,
"end": 627,
"text": "Hershcovich et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 628,
"end": 649,
"text": "Rongali et al., 2020)",
"ref_id": "BIBREF42"
},
{
"start": 755,
"end": 779,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF52"
},
{
"start": 780,
"end": 802,
"text": "Bahdanau et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 822,
"end": 844,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF56"
},
{
"start": 1069,
"end": 1091,
"text": "(Rongali et al., 2020)",
"ref_id": "BIBREF42"
},
{
"start": 1136,
"end": 1154,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 225,
"end": 232,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Compositional generalization has recently attracted attention (Neyshabur et al., 2017; Lake and Baroni, 2018; Finegan-Dollak et al., 2018; Hupkes et al., 2018; Keysers et al., 2019) . We used the CFQ dataset (Keysers et al., 2019) to assess the compositional generalization of our models.",
"cite_spans": [
{
"start": 62,
"end": 86,
"text": "(Neyshabur et al., 2017;",
"ref_id": "BIBREF39"
},
{
"start": 87,
"end": 109,
"text": "Lake and Baroni, 2018;",
"ref_id": "BIBREF32"
},
{
"start": 110,
"end": 138,
"text": "Finegan-Dollak et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 139,
"end": 159,
"text": "Hupkes et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 160,
"end": 181,
"text": "Keysers et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 208,
"end": 230,
"text": "(Keysers et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "MTL (Caruana, 1997; Ruder, 2017) based on sequence to sequence models has been used to address several NLP problems such as syntactic parsing (Luong et al., 2016) and Machine Translation (Dong et al., 2015; Luong et al., 2016) . For the task of semantic parsing, MTL has been employed as a way to transfer learning between domains (Damonte et al., 2019) and datasets (Fan et al., 2017; Lindemann et al., 2019; Hershcovich et al., 2018; Lindemann et al., 2019) . A shared task on multiframework semantic parsing with a particular focus on MTL has been recently introduced (Oepen et al., 2019) . The 1-TO-N and 1-TO-1 models have been previously experimented with by Fan et al. (2017) , with the latter being an MTL variant of the models used for multilingual parsing by . An alternative to MTL for transfer learning is based on pre-training on a task and fine-tuning on related tasks (Thrun, 1996) . It has been investigated mostly for machine translation tasks (Zoph et al., 2016; Bansal et al., 2019) but also for semantic parsing (Damonte et al., 2019) .",
"cite_spans": [
{
"start": 4,
"end": 19,
"text": "(Caruana, 1997;",
"ref_id": "BIBREF11"
},
{
"start": 20,
"end": 32,
"text": "Ruder, 2017)",
"ref_id": "BIBREF43"
},
{
"start": 142,
"end": 162,
"text": "(Luong et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 187,
"end": 206,
"text": "(Dong et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 207,
"end": 226,
"text": "Luong et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 367,
"end": 385,
"text": "(Fan et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 386,
"end": 409,
"text": "Lindemann et al., 2019;",
"ref_id": "BIBREF36"
},
{
"start": 410,
"end": 435,
"text": "Hershcovich et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 436,
"end": 459,
"text": "Lindemann et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 571,
"end": 591,
"text": "(Oepen et al., 2019)",
"ref_id": "BIBREF41"
},
{
"start": 665,
"end": 682,
"text": "Fan et al. (2017)",
"ref_id": "BIBREF18"
},
{
"start": 883,
"end": 896,
"text": "(Thrun, 1996)",
"ref_id": "BIBREF53"
},
{
"start": 961,
"end": 980,
"text": "(Zoph et al., 2016;",
"ref_id": "BIBREF63"
},
{
"start": 981,
"end": 1001,
"text": "Bansal et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 1032,
"end": 1054,
"text": "(Damonte et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We used MTL to train joint models for a wide range of semantic parsing datasets. We showed that MTL provides a large reduction in parameter count while maintaining competitive parsing accuracies, even for inherently different datasets. We further discussed how generalization is another advantage of MTL and used the CFQ dataset to suggest that MTL achieves better compositional generalization. We leave it to future work to further investigate this type of generalization in the context of MTL. We compared several sampling methods, showing that proportional sampling is not always optimal and that there is room for improvement, and introduced a loss-based sampling method as a competitive and promising alternative. We were surprised to see the positive impact that low-resource (GEOQUERY) and less-related (AMR) datasets can have as auxiliary tasks. The challenges in finding optimal sampling strategies and auxiliary tasks suggest that they should be treated as hyper-parameters to be tuned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "https://github.com/RikVN/AMR with default settings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/snowblink14/smatch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For comparison with Keysers et al. (2019), we report the mean and 95%-confidence-interval radius over 5 runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank the three anonymous reviewers for their comments and the Amazon Alexa AI team members for their feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Universal conceptual cognitive annotation (ucca)",
"authors": [
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "228--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omri Abend and Ari Rappoport. 2013. Universal con- ceptual cognitive annotation (ucca). In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 228-238.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The parallel meaning bank: Towards a multilingual corpus of translations annotated with compositional meaning representations",
"authors": [
{
"first": "Lasha",
"middle": [],
"last": "Abzianidze",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Evang",
"suffix": ""
},
{
"first": "Hessel",
"middle": [],
"last": "Haagsma",
"suffix": ""
},
{
"first": "Rik",
"middle": [],
"last": "Van Noord",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Ludmann",
"suffix": ""
},
{
"first": "Duc-Duy",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.03964"
]
},
"num": null,
"urls": [],
"raw_text": "Lasha Abzianidze, Johannes Bjerva, Kilian Evang, Hessel Haagsma, Rik Van Noord, Pierre Ludmann, Duc-Duy Nguyen, and Johan Bos. 2017. The paral- lel meaning bank: Towards a multilingual corpus of translations annotated with compositional meaning representations. arXiv preprint arXiv:1702.03964.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Conversational semantic parsing",
"authors": [
{
"first": "Armen",
"middle": [],
"last": "Aghajanyan",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Maillard",
"suffix": ""
},
{
"first": "Akshat",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Diedrick",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Haeger",
"suffix": ""
},
{
"first": "Haoran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Anuj",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.13655"
]
},
"num": null,
"urls": [],
"raw_text": "Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Mike Haeger, Haoran Li, Yashar Mehdad, Ves Stoyanov, Anuj Kumar, Mike Lewis, et al. 2020. Conversational semantic parsing. arXiv preprint arXiv:2009.13655.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "When is multitask learning effective? semantic sequence prediction under varying data conditions",
"authors": [
{
"first": "Alonso",
"middle": [],
"last": "H\u00e9ctor Mart\u00ednez",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.02251"
]
},
"num": null,
"urls": [],
"raw_text": "H\u00e9ctor Mart\u00ednez Alonso and Barbara Plank. 2016. When is multitask learning effective? semantic se- quence prediction under varying data conditions. arXiv preprint arXiv:1612.02251.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Broad-coverage CCG semantic parsing with AMR",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. Proceedings of EMNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Abstract meaning representation for sembanking. Linguistic Annotation Workshop",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Banarescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. Linguistic Annotation Workshop.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Pretraining on high-resource speech recognition improves low-resource speech-to-text translation",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "58--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2019. Pre- training on high-resource speech recognition im- proves low-resource speech-to-text translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 58-68.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "One spring to rule them both: Symmetric amr semantic parsing and generation without a complex pipeline",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Bevilacqua",
"suffix": ""
},
{
"first": "Rexhina",
"middle": [],
"last": "Blloshmi",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One spring to rule them both: Sym- metric amr semantic parsing and generation without a complex pipeline. In Proceedings of AAAI.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Identifying beneficial task relations for multi-task learning in deep neural networks",
"authors": [
{
"first": "Joachim",
"middle": [],
"last": "Bingel",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachim Bingel and Anders S\u00f8gaard. 2017. Identify- ing beneficial task relations for multi-task learning in deep neural networks. In Proceedings of EACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Smatch: an evaluation metric for semantic feature structures",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evalua- tion metric for semantic feature structures. Proceed- ings of ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multitask learning. Machine learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "28",
"issue": "",
"pages": "41--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41-75.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceed- ings of ICML.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An incremental parser for abstract meaning representation",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Damonte",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Shay",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Damonte, Shay B Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. In Proceedings of EACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Practical semantic parsing for spoken language understanding",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Damonte",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Tagyoung",
"middle": [],
"last": "Chung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Damonte, Rahul Goel, and Tagyoung Chung. 2019. Practical semantic parsing for spoken lan- guage understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 16-23.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multi-task learning for multiple language translation",
"authors": [
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for mul- tiple language translation. In Proceedings of ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Lc-quad 2.0: A large dataset for complex question answering over wikidata and dbpedia",
"authors": [
{
"first": "Mohnish",
"middle": [],
"last": "Dubey",
"suffix": ""
},
{
"first": "Debayan",
"middle": [],
"last": "Banerjee",
"suffix": ""
}
],
"year": 2019,
"venue": "International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "69--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohnish Dubey, Debayan Banerjee, Abdelrahman Ab- delkawi, and Jens Lehmann. 2019. Lc-quad 2.0: A large dataset for complex question answering over wikidata and dbpedia. In International Semantic Web Conference, pages 69-78. Springer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multilingual semantic parsing and code-switching",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Hadi",
"middle": [],
"last": "Afshar",
"suffix": ""
},
{
"first": "Dominique",
"middle": [],
"last": "Estival",
"suffix": ""
},
{
"first": "Glen",
"middle": [],
"last": "Pink",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip Cohen, and Mark Johnson. 2017. Mul- tilingual semantic parsing and code-switching. In Proceedings of CoNLL 2017.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Transfer learning for neural semantic parsing",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Emilio",
"middle": [],
"last": "Monti",
"suffix": ""
},
{
"first": "Lambert",
"middle": [],
"last": "Mathias",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Fan, Emilio Monti, Lambert Mathias, and Markus Dreyer. 2017. Transfer learning for neural seman- tic parsing. In Proceedings of the 2nd Workshop on Representation Learning for NLP.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving text-to-sql evaluation methodology",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "Finegan-Dollak",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Ramanathan",
"suffix": ""
},
{
"first": "Sesh",
"middle": [],
"last": "Sadasivam",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "351--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine Finegan-Dollak, Jonathan K Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-sql evaluation methodology. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 351-360.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A discriminative graph-based parser for the abstract meaning representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Flanigan, Sam Thomson, Jaime G Carbonell, Chris Dyer, and Noah A Smith. 2014. A discrimi- native graph-based parser for the abstract meaning representation. Proceedings of ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semantic parsing for task oriented dialog using hierarchical representations",
"authors": [
{
"first": "Sonal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Rushin",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Mrinal",
"middle": [],
"last": "Mohit",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2787--2792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Ku- mar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representa- tions. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2787-2792.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A corpus and semantic parser for multilingual natural language querying of openstreetmap",
"authors": [
{
"first": "Carolin",
"middle": [],
"last": "Haas",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carolin Haas and Stefan Riezler. 2016. A corpus and semantic parser for multilingual natural language querying of openstreetmap. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 740-750.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Openstreetmap: User-generated street maps",
"authors": [
{
"first": "Mordechai",
"middle": [],
"last": "Haklay",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2008,
"venue": "Ieee Pervas Comput",
"volume": "7",
"issue": "4",
"pages": "12--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mordechai Haklay and Patrick Weber. 2008. Open- streetmap: User-generated street maps. Ieee Pervas Comput, 7(4):12-18.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multitask parsing across semantic representations",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "373--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2018. Multitask parsing across semantic representa- tions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 373-385.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning compositionally through attentive guidance",
"authors": [
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Anand",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Kris",
"middle": [],
"last": "Korrel",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.09657"
]
},
"num": null,
"urls": [],
"raw_text": "Dieuwke Hupkes, Anand Singh, Kris Korrel, German Kruszewski, and Elia Bruni. 2018. Learning compo- sitionally through attentive guidance. arXiv preprint arXiv:1805.09657.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Data recombination for neural semantic parsing",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.03622"
]
},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2016. Data recombina- tion for neural semantic parsing. arXiv preprint arXiv:1606.03622.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Measuring compositional generalization: A comprehensive method on realistic data",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Keysers",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Sch\u00e4rli",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Scales",
"suffix": ""
},
{
"first": "Hylke",
"middle": [],
"last": "Buisman",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Furrer",
"suffix": ""
},
{
"first": "Sergii",
"middle": [],
"last": "Kashubin",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Momchev",
"suffix": ""
},
{
"first": "Danila",
"middle": [],
"last": "Sinopalnikov",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Stafiniak",
"suffix": ""
},
{
"first": "Tibor",
"middle": [],
"last": "Tihon",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.09713"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Keysers, Nathanael Sch\u00e4rli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, et al. 2019. Measuring com- positional generalization: A comprehensive method on realistic data. arXiv preprint arXiv:1912.09713.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Neural AMR: Sequence-to-sequence models for parsing and generation",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Srinivasan",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural amr: Sequence-to-sequence models for parsing and gen- eration. Proceedings of ACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Scaling semantic parsers with on-the-fly ontology matching",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1545--1556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1545-1556.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks",
"authors": [
{
"first": "Brenden",
"middle": [],
"last": "Lake",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2873--2882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brenden Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In In- ternational Conference on Machine Learning, pages 2873-2882.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Counterfactual learning from human proofreading feedback for semantic parsing",
"authors": [
{
"first": "Carolin",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.12239"
]
},
"num": null,
"urls": [],
"raw_text": "Carolin Lawrence and Stefan Riezler. 2018a. Coun- terfactual learning from human proofreading feed- back for semantic parsing. arXiv preprint arXiv:1811.12239.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Improving a neural semantic parser by counterfactual learning from human bandit feedback",
"authors": [
{
"first": "Carolin",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carolin Lawrence and Stefan Riezler. 2018b. Improv- ing a neural semantic parser by counterfactual learn- ing from human bandit feedback. Institute for Com- putational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Lambda dependency-based compositional semantics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4408"
]
},
"num": null,
"urls": [],
"raw_text": "Percy Liang. 2013. Lambda dependency-based compo- sitional semantics. arXiv preprint arXiv:1309.4408.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Compositional semantic parsing across graphbanks",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Lindemann",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Groschwitz",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4576--4585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Lindemann, Jonas Groschwitz, and Alexan- der Koller. 2019. Compositional semantic parsing across graphbanks. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 4576-4585.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Multi-task sequence to sequence learning",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task se- quence to sequence learning.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Exploring generalization in deep learning",
"authors": [
{
"first": "Behnam",
"middle": [],
"last": "Neyshabur",
"suffix": ""
},
{
"first": "Srinadh",
"middle": [],
"last": "Bhojanapalli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "McAllester",
"suffix": ""
},
{
"first": "Nati",
"middle": [],
"last": "Srebro",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. 2017. Exploring gener- alization in deep learning. In Proceedings of NIPS.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Dealing with coreference in neural semantic parsing",
"authors": [
{
"first": "Rik",
"middle": [],
"last": "Van Noord",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Semantic Deep Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rik van Noord and Johan Bos. 2017. Dealing with co- reference in neural semantic parsing. In Proceed- ings of the 2nd Workshop on Semantic Deep Learn- ing.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "MRP 2019: Cross-framework meaning representation parsing",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "O'Gorman",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Jayeol",
"middle": [],
"last": "Chun",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Zdenka",
"middle": [],
"last": "Ure\u0161ov\u00e1",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Oepen, Omri Abend, Jan Hajic, Daniel Her- shcovich, Marco Kuhlmann, Tim O'Gorman, Nian- wen Xue, Jayeol Chun, Milan Straka, and Zdenka Ure\u0161ov\u00e1. 2019. Mrp 2019: Cross-framework mean- ing representation parsing. In Proceedings of CoNLL.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Don't parse, generate! a sequence to sequence architecture for task-oriented semantic parsing",
"authors": [
{
"first": "Subendhu",
"middle": [],
"last": "Rongali",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Soldaini",
"suffix": ""
},
{
"first": "Emilio",
"middle": [],
"last": "Monti",
"suffix": ""
},
{
"first": "Wael",
"middle": [],
"last": "Hamza",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Subendhu Rongali, Luca Soldaini, Emilio Monti, and Wael Hamza. 2020. Don't parse, generate! a se- quence to sequence architecture for task-oriented se- mantic parsing. Proceedings of The Web Confer- ence.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "An overview of multi-task learning in deep neural networks",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.05098"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A hierarchical multi-task approach for learning embeddings from semantic tasks",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6949--6956",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2019. A hierarchical multi-task approach for learning em- beddings from semantic tasks. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 33, pages 6949-6956.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Learning to multi-task by active sampling",
"authors": [
{
"first": "Sahil",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Ashutosh",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "Parikshit",
"middle": [],
"last": "Hegde",
"suffix": ""
},
{
"first": "Balaraman",
"middle": [],
"last": "Ravindran",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.06053"
]
},
"num": null,
"urls": [],
"raw_text": "Sahil Sharma, Ashutosh Jha, Parikshit Hegde, and Balaraman Ravindran. 2017. Learning to multi-task by active sampling. arXiv preprint arXiv:1702.06053.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "KQA Pro: A large diagnostic dataset for complex question answering over knowledge base",
"authors": [
{
"first": "Jiaxin",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Shulin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Liangming",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Yutong",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Juanzi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hanwang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.03875"
]
},
"num": null,
"urls": [],
"raw_text": "Jiaxin Shi, Shulin Cao, Liangming Pan, Yutong Xi- ang, Lei Hou, Juanzi Li, Hanwang Zhang, and Bin He. 2020. Kqa pro: A large diagnostic dataset for complex question answering over knowledge base. arXiv preprint arXiv:2007.03875.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Surface structure and interpretation",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 1996. Surface structure and interpre- tation. The MIT Press.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "The syntactic process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000. The syntactic process. The MIT Press.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "BERT and PALs: Projected attention layers for efficient adaptation in multi-task learning",
"authors": [
{
"first": "Asa",
"middle": [
"Cooper"
],
"last": "Stickland",
"suffix": ""
},
{
"first": "Iain",
"middle": [],
"last": "Murray",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "5986--5995",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asa Cooper Stickland and Iain Murray. 2019. Bert and pals: Projected attention layers for efficient adapta- tion in multi-task learning. In International Confer- ence on Machine Learning, pages 5986-5995.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Cross-domain semantic parsing via paraphrasing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Xifeng",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Su and Xifeng Yan. 2017. Cross-domain seman- tic parsing via paraphrasing. In Proceedings of EMNLP.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Is learning the n-th thing any easier than learning the first?",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Thrun",
"suffix": ""
}
],
"year": 1996,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "640--646",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Thrun. 1996. Is learning the n-th thing any easier than learning the first? In Advances in neural information processing systems, pages 640-646.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "A latent variable model for generative dependency parsing",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of IWPT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov and James Henderson. 2007. A latent vari- able model for generative dependency parsing. In Proceedings of IWPT.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Pointer networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proceedings of NIPS.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hula",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Raghavendra",
"middle": [],
"last": "Pappagari",
"suffix": ""
},
{
"first": "R",
"middle": [
"Thomas"
],
"last": "McCoy",
"suffix": ""
},
{
"first": "Roma",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Najoung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Yinghui",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Katherin",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4465--4476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pap- pagari, R Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, et al. 2019a. Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4465-4476.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2019b. Glue: A multi-task benchmark and analysis platform for natural language understanding. In 7th Inter- national Conference on Learning Representations, ICLR 2019.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "A transition-based algorithm for AMR parsing",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015a. A transition-based algorithm for AMR pars- ing. Proceedings of NAACL.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Building a semantic parser overnight",
"authors": [
{
"first": "Yushi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yushi Wang, Jonathan Berant, and Percy Liang. 2015b. Building a semantic parser overnight. In Proceed- ings of ACL.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Learning to parse database queries using inductive logic programming",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Zelle",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the national conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M Zelle and Raymond J Mooney. 1996. Learn- ing to parse database queries using inductive logic programming. In Proceedings of the national con- ference on artificial intelligence.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Broad-coverage semantic parsing as transduction",
"authors": [
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3777--3789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. Broad-coverage semantic pars- ing as transduction. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3777-3789.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1568--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "A Hyper-parameters",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Hyper-parameters",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Table 8 reports the final hyper-parameters used for our experiments",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Table 8 reports the final hyper-parameters used for our experiments.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Two MTL architectures for two tasks (A and B): at the top 1-TO-N, where only the encoder is shared;",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Accuracies on training and dev split at each epoch for the TOP baseline and 1-TO-1 MTL parser trained on all datasets and evaluated on TOP.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"html": null,
"text": "Training examples from each of the datasets used in our experiments. The output logical forms were simplified for the sake of readability.",
"type_str": "table",
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">Train Dev</td><td colspan=\"3\">Test Src Vocab Tgt Vocab</td></tr><tr><td>GEOQUERY</td><td>540</td><td>60</td><td>280</td><td>279</td><td>103</td></tr><tr><td>NLMAPS</td><td colspan=\"3\">16172 1843 10594</td><td>8628</td><td>1012</td></tr><tr><td>TOP</td><td colspan=\"3\">28414 4032 8241</td><td>11873</td><td>116</td></tr><tr><td colspan=\"4\">OVERNIGHT 18781 2093 5224</td><td>1921</td><td>311</td></tr><tr><td>AMR</td><td colspan=\"3\">36521 1368 1371</td><td>30169</td><td>28880</td></tr></table>",
"num": null
},
"TABREF3": {
"html": null,
"text": "Comparison of sampling strategies for the 1-TO-N architecture. We report the average over three runs with different random seeds. The standard deviation is in parentheses. All values reported are exact match, except for AMR, where SMATCH is reported. We also report training times (in hours).",
"type_str": "table",
"content": "<table><tr><td>Sampling</td><td>Geoquery</td><td>NLMaps</td><td>TOP</td><td>Overnight</td><td>AMR</td><td>Time</td></tr><tr><td>UNIFORM</td><td colspan=\"6\">78.5 (\u00b11.4) 87.2 (\u00b10.2) 86.8 (\u00b10.2) 71.1 (\u00b10.2) 66.7 (\u00b10.5) 21h (\u00b14h)</td></tr><tr><td>PROP.</td><td colspan=\"6\">77.7 (\u00b11.0) 86.2 (\u00b10.2) 86.5 (\u00b10.2) 70.6 (\u00b10.2) 65.7 (\u00b10.6) 16h (\u00b11h)</td></tr><tr><td>LOGPROP.</td><td colspan=\"6\">78.8 (\u00b11.5) 87.2 (\u00b10.1) 86.6 (\u00b10.1) 71.0 (\u00b10.3) 67.3 (\u00b10.5) 23h (\u00b13h)</td></tr><tr><td colspan=\"7\">SQUAREROOT 78.9 (\u00b11.5) 86.8 (\u00b10.1) 86.7 (\u00b10.2) 70.9 (\u00b10.0) 66.4 (\u00b10.3) 17h (\u00b10h)</td></tr><tr><td>POWER</td><td colspan=\"6\">78.9 (\u00b10.6) 86.9 (\u00b10.3) 86.6 (\u00b10.1) 71.2 (\u00b10.6) 67.2 (\u00b10.5) 23h (\u00b12h)</td></tr><tr><td>ANNEALED</td><td colspan=\"6\">79.8 (\u00b10.7) 87.1 (\u00b10.1) 86.4 (\u00b10.2) 70.8 (\u00b10.4) 67.7 (\u00b10.3) 26h (\u00b11h)</td></tr><tr><td>INVERSE</td><td colspan=\"6\">75.0 (\u00b12.3) 87.3 (\u00b10.4) 86.5 (\u00b10.1) 71.2 (\u00b10.5) 66.5 (\u00b10.7) 20h (\u00b13h)</td></tr><tr><td>LOSS</td><td colspan=\"6\">76.5 (\u00b11.4) 87.5 (\u00b10.2) 86.5 (\u00b10.1) 71.1 (\u00b10.1) 64.8 (\u00b10.2) 11h (\u00b13h)</td></tr></table>",
"num": null
},
"TABREF4": {
"html": null,
"text": "",
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF5": {
"html": null,
"text": "shows the results of our MTL model when",
"type_str": "table",
"content": "<table><tr><td>Model</td><td colspan=\"2\">Geoquery NLMaps</td><td>TOP</td><td>Overnight</td><td>AMR</td><td>Time</td><td>Params</td></tr><tr><td>SOTA</td><td>89.0</td><td>64.4(\u00b10.1) *</td><td>87.1</td><td>80.6 *</td><td>84.5</td><td/></tr><tr><td>BASELINE</td><td colspan=\"7\">77.6(\u00b12.2) 87.2(\u00b10.7) 85.3(\u00b10.4) 70.2(\u00b10.9) 67.2(\u00b10.3) 7h(\u00b10h) 721M</td></tr><tr><td>1-TO-N</td><td colspan=\"7\">73.3(\u00b11.9) 85.7(\u00b10.0) 85.2(\u00b10.1) 68.9(\u00b10.2) 64.2(\u00b10.4) 15h(\u00b12h) 203M</td></tr><tr><td>1-TO-1</td><td colspan=\"5\">79.8(\u00b10.7) 87.1(\u00b10.1) 86.4(\u00b10.2) 70.8(\u00b10.4) 67.7(\u00b10.3)</td><td/></tr></table>",
"num": null
},
"TABREF6": {
"html": null,
"text": "Results on the CFQ dataset. KEYSERS refers to the results reported byKeysers et al. (2019) for the TRANSFORMER model. MCD reports the average of the three released MCD test sets.",
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}