| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:30:31.743911Z" |
| }, |
| "title": "Error-Aware Interactive Semantic Parsing of OpenStreetMap", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Staniek", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Computational Linguistics Heidelberg University", |
| "location": {} |
| }, |
| "email": "staniek@cl.uni-heidelberg.de" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Riezler", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Heidelberg University", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "riezler@cl.uni-heidelberg.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In semantic parsing of geographical queries against real-world databases such as Open-StreetMap (OSM), unique correct answers do not necessarily exist. Instead, the truth might be lying in the eye of the user, who needs to enter an interactive setup where ambiguities can be resolved and parsing mistakes can be corrected. Our work presents an approach to interactive semantic parsing where an explicit error detection is performed, and a clarification question is generated that pinpoints the suspected source of ambiguity or error and communicates it to the human user. Our experimental results show that a combination of entropy-based uncertainty detection and beam search, together with multi-source training on clarification question, initial parse, and user answer, results in improvements of 1.2% F1 score on a parser that already performs at 90.26% on the NLMaps dataset for OSM semantic parsing.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In semantic parsing of geographical queries against real-world databases such as Open-StreetMap (OSM), unique correct answers do not necessarily exist. Instead, the truth might be lying in the eye of the user, who needs to enter an interactive setup where ambiguities can be resolved and parsing mistakes can be corrected. Our work presents an approach to interactive semantic parsing where an explicit error detection is performed, and a clarification question is generated that pinpoints the suspected source of ambiguity or error and communicates it to the human user. Our experimental results show that a combination of entropy-based uncertainty detection and beam search, together with multi-source training on clarification question, initial parse, and user answer, results in improvements of 1.2% F1 score on a parser that already performs at 90.26% on the NLMaps dataset for OSM semantic parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Semantic Parsing has the goal of mapping natural language questions into formal representations that can be executed against a database. If realworld large-scale databases such as OpenStreetMap (OSM) 1 need to be accessed, the creation of gold standard parses by humans can be complicated and requires expert knowledge, and even reinforcement learning from answers might be impossible since unique correct answers to OSM queries do not necessarily exist. Instead, uncertainties can arise due to open-ended lists (e.g., of restaurants), fuzzily defined geo-positional objects (e.g., objects \"near\" or \"in walking distance\" of other objects), or by ambiguous mappings of natural language to OSM tags 2 , with the truth lying in the eye of the beholder who asked the original question. Semantic parsing against OSM thus asks for an interactive setup where an end-user inter-operates with a semantic parsing system in order to negotiate a correct answer, or to resolve parsing ambiguities and to correct parsing mistakes, in a dialogical process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Previous work on interactive semantic parsing (Labutov et al., 2018; Yao et al., 2019; Elgohary et al., 2020) has put forward the following dialogue structure: i) the user poses a natural language question to the system, ii) the system parses the user question and explains or visualizes the parse to the user, iii) the user generates natural language feedback, iv) the parser tries to utilize the user feedback to improve the parse of the original user question. In most cases, the \"explanation\" produced by the system is restricted to a rule-based reformulation of the parse in a human intelligible form, whereas the human user has to take guesses about where the parse went wrong or is ambiguous.", |
| "cite_spans": [ |
| { |
| "start": 46, |
| "end": 68, |
| "text": "(Labutov et al., 2018;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 69, |
| "end": 86, |
| "text": "Yao et al., 2019;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 87, |
| "end": 109, |
| "text": "Elgohary et al., 2020)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The goal of our paper is to add an explicit step of error detection on the parser side, resulting in an automatically produced clarification question that pinpoints the suspected source of ambiguity or error and communicates it to the human user. Our experimental results show that a combination of entropy-based uncertainty detection and beam search for differences to the top parse yield concise clarification questions. We create a dataset of 15k clarification questions that are answered by extracting information from gold standard parses, and complement this with a dataset of 960 examples where human users answer the automatically generated questions. Supervised training of a multisource neural network that adds clarification questions, initial parses, and user answers to the input results in improvements of 1.2% F1 score on map to tags bar and pub that differ in that only the latter sells food; off-license shops can have licenses to sell only wine or all kinds of alcohol. a parser that already performs at 90.26% on the NLMaps dataset for OSM semantic parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Yao et al. (2019) interpret interactive semantic parsing as a slot filling task, and present a hierarchical reinforcement learning model to learn which slots to fill in which order. They claim the automatic production of clarification questions by the agent as a main feature of their approach, however, what is actually used in their work is a set of 4 predefined templates. Elgohary et al. (2020) show an interpretation of the parse that is understandable for laypeople with a template-based approach, and present different approaches to utilize the user response to improve the parser. In their work, the explantion on the parser side is purely templatebased, whereas our work explicitly informs the clarification question by possible sources of parse ambiguities or errors.", |
| "cite_spans": [ |
| { |
| "start": 376, |
| "end": 398, |
| "text": "Elgohary et al. (2020)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Considerable effort has been invested in the creation of large datasets for parsing into SQL representations. Yu et al. (2018) created a dataset called Spider which is a complex, cross-domain semantic parsing and text-to-SQL dataset. Their annotation process was very extensive, and involved 11 computer science students who invested a total of 1,000 hours into asking natural language queries and creating the corresponding SQL query. Extensions of the Spider dataset, SParC (Yu et al., 2019b) , or Co-SQL (Yu et al., 2019a) involved even more computer science students. Our work attempts an automatic construction of concise clarification questions, allowing for faster dataset construction.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 126, |
| "text": "Yu et al. (2018)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 476, |
| "end": 494, |
| "text": "(Yu et al., 2019b)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 507, |
| "end": 525, |
| "text": "(Yu et al., 2019a)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our work employs as a semantic parser a sequenceto-sequence neural network (Sutskever et al., 2014) that is based on an recurrent encoder and decoder architecture with attention (Bahdanau et al., 2015) . Given a corpus of aligned data D = {(x n , y n )} N n=1 of user queries x and semantic parses y, standard supervised training is performed by minimizing a Cross-Entropy objec-", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 99, |
| "text": "(Sutskever et al., 2014)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 178, |
| "end": 201, |
| "text": "(Bahdanau et al., 2015)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(Multi-Source) Neural Machine Translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "tive \u2212 1 N N n=1 T t=1 log p(y n,t |y n,<t , x n ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(Multi-Source) Neural Machine Translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where the probability of the full output sequence y = y 1 , y 2 , ..., y n is calculated by the product of the probability for every timestep where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(Multi-Source) Neural Machine Translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "p(y|x) = T t=1 p(y t |y <t , x).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(Multi-Source) Neural Machine Translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "This model can be easily extended to multisource learning (Zoph and Knight, 2016) by using not only one, but multiple encoders. This means that there are actually multiple sequences of hidden states. The decoder hidden state is consequently initialized by a linear projection of the average of the last hidden states of all encoders c = 1 N N i=1 h i W l , and needs to implement a separate attention mechanism for every encoder.", |
| "cite_spans": [ |
| { |
| "start": 58, |
| "end": 81, |
| "text": "(Zoph and Knight, 2016)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(Multi-Source) Neural Machine Translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To be able to fine-tune a model with feedback from a user, the standard cross-entropy objective cannot be used because the desired target is not a gold parse, but a parse\u1ef9 predicted by the system, that has been annotated with positive and negative markings by a human user. This can be formalized as assigning a reward \u03b4 t that is either positive or negative to every token in the parse (\u03b4 t+ = 0.5 and \u03b4 t\u2212 = \u22120.5). It is then possible to maximize the likelihood of the correct parts of the parse by optimizing a weighted supervised learning objective", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(Multi-Source) Neural Machine Translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "x,\u1ef9 T t=1 \u03b4 t log p(\u1ef9 t |x, y <t ). (Petrushkov et al., 2018) 4 Neural Semantic Parsing of OSM", |
| "cite_spans": [ |
| { |
| "start": 36, |
| "end": 61, |
| "text": "(Petrushkov et al., 2018)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(Multi-Source) Neural Machine Translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our work is based on the NLmaps v2 dataset. 3 NLmaps builds on the Overpass API which allows the querying of the OSM database with natural language queries. This dataset includes templatebased expansions leading to duplicates in train and test sets. However, these expansions introduced problematic features into the data in that OSM tags were inserted which, according to the documentation in the OSM developer wiki, should not be used:", |
| "cite_spans": [ |
| { |
| "start": 44, |
| "end": 45, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Is there Recreation Grounds in Marseille \u2192 query(area(keyval('name','Marseille')), nwr(keyval('leisure','recreation ground'), qtype(least(topx(1))))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Recreation Ground in Frankfurt am Main \u2192 query(area(keyval('name','Frankfurt am Main')), nwr(keyval('landuse','recreation ground')), qtype(latlong))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "While leisure=recreation ground certainly exists as a tag 4 , its use is heavily discouraged 5 . Furthermore, several mistakes were introduced in the data by the augmentation with the help of a wordlist. For example, an automatically generated natural language question based on this wordlist asks for bars, whereas the gold parse associated to that question asks for pubs instead:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Where Bars in Bradford \u2192 query(area(keyval('name','Bradford')), nwr(keyval('amenity','pub')), qtype(latlong))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Conceptually, bars and pubs may not be that different to each other, but OSM advises a strict distinction between bars and pubs 6 . While a pub sells alcohol on premise, a pub also sells food, the athmosphere is more relaxed and the music is quieter compared to a bar.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Lastly, ambiguity was introduced because natural language words now map to multiple different OSM tags. This leads to the following data occurrences:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 shop Off Licenses in Birmingham \u2192 query(area(keyval('name','Birmingham')), nwr(keyval('shop','alcohol')), qtype(findkey('shop')))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 How many closest Off License from Wall Street in Glasgow \u2192 query(around(center(area( keyval('name','Glasgow')), nwr(keyval('name','Wall Street'))), search(nwr(keyval('shop','wine'))), maxdist(DIST INTOWN), topx(1)),qtype(count))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The previous examples show that for the same keyword \"Off License\" both shop=alcohol and shop=wine are valid interpretations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Finally, since the data was augmented first, and only afterwards split into train, development and test sets, there is a lot of overlap between the train and test data. This is problematic because a proper evaluation should also test for overfitting, which does not work if data is shared between different splits, as shown in the following examples:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 Train: cinema in Nantes location (e.g., Paris) and POI (e.g., cinema) are masked. This results in the dataset described in table 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We use the Joey NMT (Kreutzer et al., 2019) as framework to build a baseline parser. The basic Joey NMT architecture is modified to allow for a multi-source setup (see Figure 3 in the appendix) and for learning from markings. 7 As evaluation metrics we use exact match accuracy, defined as 1 N N n=1 \u03b4(predicted, gold) of a predicted parse and the gold parse. Furthermore, we report F1 score as harmonic mean of recall, defined as the percentage of fully correct answers divided by the set size, and precision, defined as the percentage of correct answers out of the set of answers with non-empty strings.", |
| "cite_spans": [ |
| { |
| "start": 20, |
| "end": 43, |
| "text": "(Kreutzer et al., 2019)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 226, |
| "end": 227, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 168, |
| "end": 176, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Parsing", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "A character-based Joey NMT semantic parser is able to improve the results reported in Lawrence and Riezler (2018) on the dataset without deduplication, as shown in Table 1 . All results presented in the following are relative improvements over our own baseline parser, reported on the deduplicated dataset for which no external baseline is available.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 164, |
| "end": 171, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Parsing", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "On of the goals of error-aware interactive semantic parsing is to alert to user about suspected sources of ambiguity and error by initiating a dialogue. The parser thus needs to detect uncertainty in its output, and generate a clarification questions on the detected source of uncertainty. We use entropy-based uncertainty measures. Firstly, entropy per timestep t is measured as \u2212 \u1ef9t p(\u1ef9 t |x, y <t ) log p(\u1ef9 t |x, y <t ). This is employed to calculate the entropy of a token as the mean of the character entropies for each of a token's characters. 8 Based on entropy information, we generate simple questions by employing a template-based method which incorporates the least certain token: \"Did you mean $token?\". Furthermore, we offer alternative answers for the user based on beam search of size 2. This heuristic is justified experimentally since always taking the first beam yields an accuracy of 92.7%, while another 5% of accuracy can be gained by choosing the second beam. This verifies the usefulness of proposing entries in the second beam as alternative in clarification questions: \"Did you mean $token or $alternative?\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generation of Clarification Questions", |
| "sec_num": "5" |
| }, |
| { |
| "text": "8 A visualization of entropy is reported in the appendix. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generation of Clarification Questions", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In a first experiment, we generated entire dialogues synthetically, that is, the clarification question from the parser and synthetic user answers. The latter were constructed by checking if either the original token or the alternative is contained in the given gold parse. Dataset statistics for train, development and test splits are given in Table 3 . Model training is performed by extending the character-based baseline model by additional encoders for the dialogue (question and answer) and the predicted parse hypothesis. Experiments show that the character-based multi-source model including hypothesis and dialogue as additional input (line 4) outperforms the baseline (line 1) by more than 1 point in accuracy and F1 score (Table 2) . This difference is statistically significant with a p-value of 0.0483 determined by approximate randomization.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 345, |
| "end": 352, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 733, |
| "end": 742, |
| "text": "(Table 2)", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments on Synthetic Dialogues", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We furthermore performed a small field study where human users interacted with the system. Parses for queries from both train and development parts of the dataset were generated and augmented with automatically created clarification questions based on the uncertainty model. Examples were then filtered to keep only those parses that contained a parse mistake or parse ambiguity. This resulted in a total of 930 annotation tasks. 9 The annotation interface shown in Figure 1 illustrates the system-user interaction: Human annotators are presented with a natural language query (\"closest Off License from Lyon\"), the parse (shown below in linearized form), and the result of the generated parse (show as the map extract on top of the figure) . In addition to the linearized form of the predicted parse, a human-intelligible list format of the key-value pairs in the parse 10 is presented, following the annotation interface of Lawrence and Riezler (2018). The task of the human users is to mark the errors in the list of keys and values, and to answer or correct the clarification question. The markings are used as feedback in the weighted finetuning objective of Petrushkov et al. (2018) . As the outputs of the model are on character-level, the token-level reward of the annotations is distributed onto them for training. The final model is trained on the weighted objective in a multi-source fashion, taking parse hypothesis, clarification question, and logged user answer as additional inputs. Line 5 in Table 2 shows that fine-tuning a multi-source model that takes hypothesis, dialogue, and logged answer as additional input increases the sequence accuracy by another 0.15%. This difference is statistically significant with a p-value of 0.0027 determined by approximate randomization. The interaction process can be seen in Figure 2 . 11", |
| "cite_spans": [ |
| { |
| "start": 430, |
| "end": 431, |
| "text": "9", |
| "ref_id": null |
| }, |
| { |
| "start": 1164, |
| "end": 1188, |
| "text": "Petrushkov et al. (2018)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 466, |
| "end": 474, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 733, |
| "end": 740, |
| "text": "figure)", |
| "ref_id": null |
| }, |
| { |
| "start": 1508, |
| "end": 1515, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 1831, |
| "end": 1839, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Human Interaction Study", |
| "sec_num": "7" |
| }, |
| { |
| "text": "11 Additional experiments using the human annotations as test data are reported in the appendix.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Human Interaction Study", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Ambiguities or errors in real-world semantic OSM parsing arise because of different tagging preferences of developers and users, an issue that can only be solved by an interactive setup where a parser is aware of its errors, and a satisfactory answer is found by the user marking parse errors and communicating alternatives. Our current work is a first step towards precise communication and offline learning in interactive semantic parsing. An interesting future direction of work is to move to online learning in interactive semantic parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Error-Aware Interactive Semantic Parsing\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Supplementary Material for \"Towards", |
| "sec_num": null |
| }, |
| { |
| "text": "A.1 Hyperparameter Settings", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Supplementary Material for \"Towards", |
| "sec_num": null |
| }, |
| { |
| "text": "In an additional experiment, we evaluated the models that were trained on the synthetically generated dataset on the data resulting from the human interaction study. The result of comparing the baseline model with the multi-source model trained on parse hypothesis and synthetic dialogue as additional inputs is shown in Table 5 . The astonishing gains of over 15% in F1 score can be explained by the fact that the data for human annotation set were filtered to include only examples for which the baseline parser did not match the gold standard parse (thus producing an accuracy score of 0). ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 321, |
| "end": 328, |
| "text": "Table 5", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A.2 Evaluation on the human annotated data", |
| "sec_num": null |
| }, |
| { |
| "text": "The entropy of the parse of the sentence \"How many Off License in Heidelberg\" can be seen in Figure 4 . The character-based model shows uncertainty with respect to the token wine. This is the desired result because the alternative for this position would be alcohol.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 93, |
| "end": 101, |
| "text": "Figure 4", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A.3 Entropy visualization", |
| "sec_num": null |
| }, |
| { |
| "text": "www.openstreetmap.org 2 For example, recreation grounds can map to tags reserved for leisure purposes or for official landuse registration; bars", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "www.cl.uni-heidelberg.de/ statnlpgroup/nlmaps/ 4 https://wiki.openstreetmap.org/wiki/ Tag:leisure%3Drecreation_ground.5 https://wiki.openstreetmap.org/wiki/ Tag:landuse%3Drecreation_ground.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Meta-parameter settings are reported in the appendix.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Both synthetic and user data will be publicly released. 10 https://wiki.openstreetmap.org/wiki/ Map_Features", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Christian Buck and Massimiliano Ciaramita for initial fruitful discussions about this work. We would like to thank Christian Buck and Massimiliano Ciaramita for initial fruitful discussions about this work. The research reported in this paper was supported by a Google Focused Research Award on \"Learning to Negotiate Answers in Multi-Pass Semantic Parsing\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Neural Machine Translation by Jointly Learning to Align and Translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 3rd International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Rep- resentations.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Speak to your Parser: Interactive Text-to-SQL with Natural Language Feedback", |
| "authors": [ |
| { |
| "first": "Ahmed", |
| "middle": [], |
| "last": "Elgohary", |
| "suffix": "" |
| }, |
| { |
| "first": "Saghar", |
| "middle": [], |
| "last": "Hosseini", |
| "suffix": "" |
| }, |
| { |
| "first": "Ahmed", |
| "middle": [ |
| "Hassan" |
| ], |
| "last": "Awadallah", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "2065--2077", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ahmed Elgohary, Saghar Hosseini, and Ahmed Has- san Awadallah. 2020. Speak to your Parser: In- teractive Text-to-SQL with Natural Language Feed- back. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2065-2077.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Joey NMT: A Minimalist NMT Toolkit for Novices", |
| "authors": [ |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Kreutzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jasmijn", |
| "middle": [], |
| "last": "Bastings", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Riezler", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "109--114", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julia Kreutzer, Jasmijn Bastings, and Stefan Riezler. 2019. Joey NMT: A Minimalist NMT Toolkit for Novices. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing: System Demonstrations, pages 109-114.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Learning to Learn Semantic Parsers from Natural Language Supervision", |
| "authors": [ |
| { |
| "first": "Igor", |
| "middle": [], |
| "last": "Labutov", |
| "suffix": "" |
| }, |
| { |
| "first": "Bishan", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1676--1690", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Igor Labutov, Bishan Yang, and Tom Mitchell. 2018. Learning to Learn Semantic Parsers from Natural Language Supervision. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 1676-1690.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Response-Based and Counterfactual Learning for Sequence-to-Sequence Tasks in NLP", |
| "authors": [ |
| { |
| "first": "Carolin", |
| "middle": [], |
| "last": "Lawrence", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carolin Lawrence. 2018. Response-Based and Counterfactual Learning for Sequence-to-Sequence Tasks in NLP. Universit\u00e4tsbibliothek Heidelberg, Heidelberg, Germany.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback", |
| "authors": [ |
| { |
| "first": "Carolin", |
| "middle": [], |
| "last": "Lawrence", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Riezler", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1820--1830", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carolin Lawrence and Stefan Riezler. 2018. Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1820-1830.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Learning from chunk-based feedback in neural machine translation", |
| "authors": [ |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Petrushkov", |
| "suffix": "" |
| }, |
| { |
| "first": "Shahram", |
| "middle": [], |
| "last": "Khadivi", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeny", |
| "middle": [], |
| "last": "Matusov", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pavel Petrushkov, Shahram Khadivi, and Evgeny Matusov. 2018. Learning from chunk-based feedback in neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Australia.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Sequence to Sequence Learning with Neural Networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "27", |
| "issue": "", |
| "pages": "3104--3112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Sequence to Sequence Learning with Neural Networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 27, pages 3104-3112. Curran Associates, Inc., Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Interactive Semantic Parsing for If-Then Recipes via Hierarchical Reinforcement Learning", |
| "authors": [ |
| { |
| "first": "Ziyu", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiujun", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [ |
| "M" |
| ], |
| "last": "Sadler", |
| "suffix": "" |
| }, |
| { |
| "first": "Huan", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 33rd Conference on Artificial Intelligence (AAAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "2547--2554", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ziyu Yao, Xiujun Li, Jianfeng Gao, Brian M. Sadler, and Huan Sun. 2019. Interactive Semantic Parsing for If-Then Recipes via Hierarchical Reinforcement Learning. In Proceedings of the 33rd Conference on Artificial Intelligence (AAAI), pages 2547-2554.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases", |
| "authors": [ |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "He", |
| "middle": [ |
| "Yang" |
| ], |
| "last": "Er", |
| "suffix": "" |
| }, |
| { |
| "first": "Suyi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xi", |
| "middle": [ |
| "Victoria" |
| ], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [ |
| "Chern" |
| ], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Tianze", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Zihan", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Youxuan", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michihiro", |
| "middle": [], |
| "last": "Yasunaga", |
| "suffix": "" |
| }, |
| { |
| "first": "Sungrok", |
| "middle": [], |
| "last": "Shim", |
| "suffix": "" |
| }, |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Fabbri", |
| "suffix": "" |
| }, |
| { |
| "first": "Zifan", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Luyao", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuwen", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shreya", |
| "middle": [], |
| "last": "Dixit", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Caiming", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Lasecki", |
| "suffix": "" |
| }, |
| { |
| "first": "Dragomir", |
| "middle": [], |
| "last": "Radev", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1962--1979", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tao Yu, Rui Zhang, He Yang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki, and Dragomir Radev. 2019a. CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, pages 1962-1979.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task", |
| "authors": [ |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michihiro", |
| "middle": [], |
| "last": "Yasunaga", |
| "suffix": "" |
| }, |
| { |
| "first": "Dongxu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zifan", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Irene", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Qingning", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Shanelle", |
| "middle": [], |
| "last": "Roman", |
| "suffix": "" |
| }, |
| { |
| "first": "Zilin", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Dragomir", |
| "middle": [], |
| "last": "Radev", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "3911--3921", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "SParC: Cross-Domain Semantic Parsing in Context", |
| "authors": [ |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michihiro", |
| "middle": [], |
| "last": "Yasunaga", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Victoria", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Suyi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Heyang", |
| "middle": [], |
| "last": "Er", |
| "suffix": "" |
| }, |
| { |
| "first": "Irene", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Shreya", |
| "middle": [], |
| "last": "Dixit", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Proctor", |
| "suffix": "" |
| }, |
| { |
| "first": "Sungrok", |
| "middle": [], |
| "last": "Shim", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Kraft", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Caiming", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Dragomir", |
| "middle": [], |
| "last": "Radev", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "4511--4523", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Tan, Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019b. SParC: Cross-Domain Semantic Parsing in Context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4511-4523.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Multi-Source Neural Translation", |
| "authors": [ |
| { |
| "first": "Barret", |
| "middle": [], |
| "last": "Zoph", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "30--34", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barret Zoph and Kevin Knight. 2016. Multi-Source Neural Translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 30-34.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "Annotation setup for human interaction study.", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "text": "Workflow for the interaction process.", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "text": "Multi-source semantic parsing.", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "text": "Character entropy for parse of query \"How many Off License in Heidelberg\".", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "text": "Results of the multi-source models compared to the single-source model taking only the source into account on the modified test data.", |
| "content": "<table/>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "text": "Statistics of dialogue-enriched data.", |
| "content": "<table/>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "text": "Parameter overview compared to Lawrence and Riezler (2018).", |
| "content": "<table/>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF8": { |
| "type_str": "table", |
| "text": "Test results on human-annotated data.", |
| "content": "<table/>", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |