| { |
| "File Number": "1003", |
| "Title": "Covering Uncommon Ground: Gap-Focused Question Generation for Answer Assessment", |
| "3 A1. Did you describe the limitations of your work?": "Limitations section (after Section 6: Conclusions)", |
| "abstractText": "Human communication often involves information gaps between the interlocutors. For example, in an educational dialogue, a student often provides an answer that is incomplete, and there is a gap between this answer and the perfect one expected by the teacher. Successful dialogue then hinges on the teacher asking about this gap in an effective manner, thus creating a rich and interactive educational experience. We focus on the problem of generating such gap-focused questions (GFQs) automatically. We define the task, highlight key desired aspects of a good GFQ, and propose a model that satisfies these. Finally, we provide an evaluation by human annotators of our generated questions compared against human generated ones, demonstrating competitive performance.", |
| "1 Introduction": "Natural language dialogues are often driven by information gaps. Formally, these are gaps between the epistemic states of the interlocutors. Namely, one knows something that the other does not, and the conversation revolves around reducing this gap. An important example is the education setting where teachers ask students questions, and receive answers that may be incomplete. With the expectation of what a complete answer should contain, the teacher then engages in a gap-focused dialogue to help the student to arrive at a complete answer. There are multiple other application settings of information gaps, including support-line bots, longform Q&A, and automated fact checking.\nThe core challenge in this setting is how to generate effective questions about the information gap. In terms of formal semantics and pragmatics, this gap can be viewed as the complementary of the common-ground (Stalnaker, 2002) held by the interlocutors. Somewhat surprisingly, despite much work on dialogue learning (Ni et al., 2022; Zhang et al., 2020) and question generation (Michael et al.,\n2018; Pyatkin et al., 2020, 2021; Ko et al., 2020), little attention has been given to generating questions that focus on such information gaps.\nThe formal traditional approach to representing the dialogic information gap is via the set of propositions that are known to one side but not the other (Stalnaker, 2002). However, this set can be quite large, and it is also unclear how to turn these propositions into dialogue utterances. We propose an arguably more natural representation: a generated set of natural language questions whose answers represent the information that the teacher needs to ask about to reduce the gap. We call these gapfocused questions (GFQs). 
A key advantage of this representation is that the generated questions can be used directly in the teacher-student dialogue.\nGiven a complete teacher answer and a partial student answer, there are many questions that could be asked, but some are more natural than others. For example, consider the complete answer “A man is wearing a blue hat and a red shirt and is playing a guitar”, and a student response “There is a man playing the guitar”. Two candidate questions could be “What color hat is the man wearing?” and “What is the man wearing?”. The second question is arguably more natural as it does not reveal information that is not in the teacher-student common ground, namely that a hat is being worn.\nThe above demonstrates some of the complexity of generating effective GFQs, and the need to rely on certain discourse desiderata. In this work we define the GFQ challenge, a novel question generation task, and we detail the desired properties of the generated questions. Subsequently, we provide a model for GFQ generation that aims to satisfy these desiderata, and demonstrate its competitiveness via a task of generating questions to fill the gap between premises and hypotheses in a standard natural language inference (NLI) setup.\nIn designing desired properties for GFQs, we take inspiration from theories of collaborative communication, and in particular Grice’s maxims (Grice, 1975). For example, the maxim of quantity states that speakers are economic and do not communicate what is already known. Thus, the teacher should not ask about what is already in the common ground with the student. In the above example, this means not asking “What is the man playing?”. We describe additional desiderata in §3.\nTo tackle the GFQ challenge, we show how general-purpose NLP models (question generation, question answering, and constituency parsing) can be used to generate GFQs that satisfy the discourse desiderata. See Figure 1 for an outline of the process. 
To assess our model, we consider pairs of texts that contain information gaps, and evaluate our ability to capture these gaps using GFQs. Such texts are readily available in NLI datasets that contain pairs of a premise and an entailed hypothesis with less information. We consider the SNLI dataset (Bowman et al., 2015), and use human annotators to evaluate the merit of our approach relative to GFQs generated by humans.\nOur contribution is three-fold. First, we propose the novel setup of gap-focused questions, a key element of a student-teacher discourse as well as other settings such as automated fact checking. Second, we identify desiderata inspired by conversational maxims, and provide a model for generating questions that satisfy them. Third, we demonstrate the merit of our model on an NLI dataset.", |
| "2 Related work": "Natural dialogue is a key goal of modern NLP and, despite substantial progress, there is still a considerable difference between humans and models. In this work we focus on dialogues where the bot (teacher) knows more than the user (student), and the goal is to gradually decrease this knowledge gap via gap-focused follow-up questions.\nSeveral works have focused on the problem of follow-up question generation in dialogues. However, to the best of our knowledge, none of these focus on information gaps as we do. Ko et al. (2020) introduce the problem of inquisitive question generation, where the goal is to generate questions about facts that are not in the text. This is not done in reference to a complete text, and is thus principally different from our goal. In fact, in our settings, an inquisitive question would typically be a bad GFQ, since it refers to information that is outside the knowledge of both teacher and student. Prior works considered a related task referred to as answer-agnostic question generation (Scialom et al., 2019), but with a focus on factual questions, whereas the inquistive setting is broader.\nAnother class of follow-up questions are clarification ones (Rao and Daumé III, 2018), which can also be viewed as a special case of inquistive questions. Again, there is no reference to a complete text that defines the information gap. Finally, there are works on follow-up questions guided by rules as in the SHARC dataset (Saeidi et al., 2018).\nOur GFQ setting is also related to the challenge of explainable NLI (Kalouli et al., 2020), namely the task of explaining why a certain sentence entails another. The GFQ output can be viewed as a novel explanation mechanism of why the student text is entailed by the source text, as it explicitly refers to the gap between these texts.\nOur work is inspired by novel uses of question generation models, particularly in the context of evaluating model consistency (Honovich et al., 2021). 
In these, question generation is used to find “LLM hallucinations” where the generated text is not grounded in a given reference text. Our task can be viewed as the inverse of the knowledge grounding task, and our particular focus is on the questions generated rather than just pointing to information gaps. An additional line of work in this vein is QA-based semantics, where text semantics are represented via a set of questions rather than a formal graph (e.g., see Michael et al., 2018).", |
| "3 Criteria for Gap-Focused Questions": "Given a complete source text TC and a student text TS , our goal is to construct a model that takes TS and TC as input and produces a set of one or more questions Q that ask about the information gap between TC and TS . If one takes the term “information gap” literally, there are many such possible questions (e.g., which word appears in TC but not in TS). In a natural language setting we are obviously interested in questions that are natural, that is, would likely be asked by a human who knows TC and has heard the student description TS . When defining the desiderata for the generated questions, we consider what knowledge is held by the teacher and the student and what information is inside and outside their common ground (see Figure 2). We next identify desired properties for the generated questions, followed by a description of our model for generating gap-focused questions that satisfy these desiderata.\nThe following desired properties of an effective GFQ are loosely based on collaborative communication concepts (Grice, 1975):\n• P1: Answerability: Only ask questions that can be answered based on the complete text TC (areas A ∪ B in Figure 2). This follows from Grice’s maxim of relevance; speakers say things that are pertinent to the discussion.\n• P2: Answers should not be in the common ground: If the student has already demonstrated knowing a fact in TS , there is no reason to ask about it again. Namely, in Figure 2, we don’t want to ask about information in B. This pertains to Grice’s maxim of quantity; speakers are economic, they do not utter information beyond the bare minimum that is necessary to ask the question, and they will refrain from repeating already-known information.\n• P3: Questions should only use information known to the user: The question itself should rely only on information in TS and not in TC . 
For example, if TC is “A woman is wearing a blue hat” and TS is “A woman is wearing something”, it is preferable not to ask “What color is the hat?” as it refers to information that did not appear in TS (i.e., that the woman is wearing a hat). This is loosely related to the Grice maxim of manner, where one tries to be clear, brief, and orderly. If we were to ask questions using information unknown to the user (in area A in Figure 2), we may introduce unnecessary details and obscurity into the discussion. [Footnote: In some cases this may only be partially possible, and a “hint” must be provided in order to phrase a grammatically correct and semantically sensible question.]", |
| "4 The GFQ Generation Approach": "We next describe our modeling approach for the GFQ generation problem, with the goal of capturing the properties described above. Before describing our GFQ generation approach, we briefly outline the NLP components we rely on in the question generation process:\nA question generation model G that, given an input text T and a span X ⊂ T, generates questions about T whose answer is X.\nA question answering model A that takes as input a text T and a question Q about the text, and returns the answer or an indication that the question is unanswerable from the text.\nA constituency parser P that takes a text X, breaks it down into sub-phrases (constituents), and returns a parse tree.\nAdditional details about these components can be found in Appendix C.\nWe are now ready to describe our approach for generating GFQs. The model generates an ordered set of possible follow-up questions QG via the following steps, which roughly correspond to the desired criteria described in §3:\nStep 1: Generate answerable questions (P1). Using the constituency parser P, we extract the spans of all the constituents in the source text TC, except for those spanning the entire sentence, and single-word spans containing functional elements (e.g., prepositions). For each span X ⊂ TC, we use the question generation model G to generate a set of questions whose answer should be X, thus creating a set of questions that satisfy the answerability property. We denote this set QT and assign QG = QT.\nStep 2: Filter questions whose answers are in the common ground (P2). We next wish to remove questions that are answerable by the student text TS. 
To that end, we use the question answering model A: for each q ∈ QG, if A(TS, q) ≠ “UNANSWERABLE”, we set QG = QG \\ {q}.2\nStep 3: Prefer questions which only use information known to the user (P3). We prefer questions that do not reveal information beyond what is known to the user. This is not always strictly possible and thus, instead of filtering, we rank questions according to the (possibly zero) amount of additional information they reveal. To do so, let R be the set of all answers to the questions in QG. By construction, R contains spans from TC that the student did not mention, i.e., these are spans that we would prefer not to appear in the generated questions. For each q ∈ QG, we count the number of items in R included in q. We sort QG in ascending order by this number and return the first element. We thus return a question that uses the fewest facts unknown to the student.", |
| "5 Experiments": "We next describe an evaluation of our GFQ model.\nData: We use the SNLI Dataset (Bowman et al., 2015) where a Natural language inference (NLI) pair contains two sentences denoting a premise and a hypothesis, and the relation between them can be entailment, contradiction and neutral. We focus on pairs labeled as entailment, and filter out those with bi-directional entailment, so that there is a gap between hypothesis and premise. We do not use any data for training, and apply our model to the test partition of the SNLI dataset.\nEvaluation Benchmark: In order to compare the quality of our automatically generated ques-\n2Note that Step 2 will also filter out questions that the student answered incorrectly. This would be an area for improvement in future models.\ntions to manually generated ones, we asked human annotators to generate questions for 200 instances of the SNLI test set (see Appendix A for the annotator instructions). We emphasize that these questions were only used for evaluation, as explained below, and not for training the model. They were collected after model design was completed. We release this evaluation dataset to the public, it is available here. See additional details about this dataset in appendix E.\nAnnotator Evaluation of Generated Questions: As with other generative settings, offline evaluation is challenging. In fact, even if we had human generated questions for all SNLI, using those for evaluation would need to assume that they are exhaustive (otherwise the model can generate a good question but be penalized because it is not in the set generated by humans). Instead, as is commonly done (Ko et al., 2020), we rely on human evaluation. We present annotators with TC , TS and a candidate GFQ q and ask them to provide a 1− 5 score of how well q functions as a follow-up question (see Appendix A for annotators instructions). 
We use 3 annotators per question.\nCompared Models: We compare four generation approaches: Human: Questions generated by human annotators; Step 1: This model selects a random question out of those generated by the question generation model (i.e., Step 1 in §4). We note that this is already a strong baseline because its questions are based on the source text. Step 2: The outcome of Step 2 in §4, where only questions not answerable by the student text are kept. Step 3: The outcome of Step 3, where we additionally aim for questions which use information known to the user.\nResults: Table 1 provides the average scores for each of the considered models and the human-generated questions. It can be seen that each step contributes to the score, and human-generated questions are somewhat better than our final model (Step 3). Using the Wilcoxon signed-rank test for paired differences, we found that all differences were significant at p-value ≤ 0.05.\nExamples: Figure 3 shows an example of the three stages, and a human-generated question. Appendix F provides more examples.\nError Analysis: We analyze cases where our final model (Step 3) received low scores from the annotators (an average score of 3 or lower). In our analysis we observed three main loss patterns (sometimes appearing together): (1) Poor question phrasing — these are questions whose structure or choice of words is less natural than if a person were to ask the same question. See the example in the first row of Table 2. (2) Questions which include information outside of the teacher-student common ground. These are cases where the minimum criterion defined in Step 3 still results in a question with some information unknown to the user. See examples in the first two rows of Table 2. (3) Questions including information outside the complete source text. 
In rare cases, we found that the question generation model generates questions that include “hallucinations” or point to issues in the semantic understanding of the complete source text. See the third example in Table 2.", |
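The significance test reported in the Results paragraph (Wilcoxon signed-rank on paired score differences) can be illustrated with a minimal pure-Python computation of the statistic; in practice one would use a library routine such as scipy.stats.wilcoxon, which also provides the p-value.

```python
def wilcoxon_statistic(scores_a, scores_b):
    """Paired Wilcoxon signed-rank statistic (illustration only, no p-value):
    zero differences are dropped, tied absolute differences receive their
    average rank, and the statistic is the smaller of the positive and
    negative rank sums."""
    diffs = [a - b for a, b in zip(scores_a, scores_b) if a != b]
    # Indices of the differences, ordered by absolute value.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        # Find the run of equal absolute differences starting at position i.
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + 1 + j) / 2  # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg_rank
        i = j
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```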
| "6 Conclusion": "We consider the task of question generation in a novel setting where there is an information gap between speakers, and the gap-focused questions (GFQs) aim to reduce this gap. Building on advances in question generation and question answering, we show how to generate useful GFQs that meet several natural criteria inspired by theories cooperative conversation. It is natural to ask whether one can employ a fully generative approach for GFQs using LLMs. This is a natural direction for future study, and we believe that the criteria and design choices we studied here will be significant in defining and evaluating such future work.\nLimitations\nWe present the first study of generating questions for filling in information gaps. Our method is limited in several ways. First, it focuses on information that is explicitly missing, and does not discuss information that is inaccurate or incomplete in other ways. Second, it only asks one follow-up question and does not address multi-turn dialogue about a student answer, or multiple student answers. Finally, our approach makes somewhat restricted use of the student answer, and it will be better to generate questions that directly uptake information from the student text (Demszky et al., 2021). We leave the deep investigation of these for future work.", |
| "Acknowledgments": "We thank Avi Caciularu for constructive feedback on this work.\nEthics and Impact\nRegarding risks, as with any NLP model, care must be taken in application, so that it generates truthful information, and does not introduce biases. However, we think this is not a major concern in our case as our modeling will generate text directly related to the source and student texts. In terms of impact, our approach can be used to improve a wide array of applications, including educational dialogue (e.g., reading comprehension), supportline bots, and automated fact checking.", |
| "A Annotating Guidelines": "Here we provide all the guidelines to annotators, for both human question generation and human rating of questions generated by the model.\nGuidelines for the human annotator task of writing follow-up questions: We depict the guidelines and the examples for the writing followup questions task in Figure 4, and the task design in Figure 5.\nGuidelines for the human annotator task of rating follow-up questions: We depict the guidelines of the task of rating the follow-up questions in Figure 6, the examples in Figure 7, and the task design in Figure 8.", |
| "B Annotator Related Information": "Annotators were paid by the hour, and recruited as contractors for a variety of annotating projects by our team and related teams. The annotators are all native English speakers (from Canada and the US). They are also aware of the way in which the information will be used. There are no special ethical sensitivities in the collection process and thus it was exempt from an ethics review board.\nC Implementation Details\nQuestion Generation Model: As our question generation model G, we use the T5-xxl model (Raffel et al., 2020) fine-tuned on SQuAD1.1 (Rajpurkar et al., 2016). We also use beam search and question filtering, similarly to Honovich et al. (2021, Section 2), see this work for further details.\nQuestion Answering Model: For our question answering model A, we use the T5-xxl model (Raffel et al., 2020) fine-tuned on SQuAD2.0 (Rajpurkar et al., 2018).\nConstituency Parser: We use the Berkeley Neural Parser (Kitaev and Klein, 2018), implemented in the spaCy package.3\nSNLI Filtering: We consider the subset of SNLI with an “entailed” label. Since we are not interested in the case of equivalent hypothesis and premise, we filter out bi-directional entailments using an NLI model (similar to (Honovich et al., 2022)). In the resulting set of one-directional entailments, the information in the premise (TC) is strictly greater\n3We used spaCy3.0 – https://spacy.io/.\nthan the information in the hypothesis (TS), which is our case of interest.", |
| "D Computational Resources Details": "In terms of computational resources, the project is lightweight, as it required no training at all, and just running inference steps of pre-trained models (question answering, question generation and parsing), all of which run in several minutes on standard GPUs.", |
| "E GFQ test released dataset": "We release a benchmarking dataset of 200 examples from SNLI test with a human generated gapfocused question. The data is available here.\nDetails about the dataset We asked 3 annotators to write questions for each SNLI pair (see guidelines in appendix A) and used a heuristic to select a single GFQ. When selecting this single question our goal is to prefer GFQs where multiple annotators chose to write a question about the same topic. We therefore apply the following heuristic: for each human written question q we used our question answering model A and define a as the answer to this question given Tc: a = A(Tc, q). We then count n: the number of annotators which produced questions leading to the same answer a, we look at the questions for which n is maximal and choose a random question from there.\nLicense This data as well as the underlying SNLI data are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License 4.", |
| "F Examples of Generated Questions": "Here we provide examples of questions generated by humans and by the different models we consider. Table 3 reports questions generated by Step 1, Step 2, Step 3 and Human.", |
| "G Data Related Information": "The data collected from annotators contains the manually generated questions and the scoring of generated questions. There are no issues of offensive content or privacy in this data, as it based closely on the SNLI dataset.\n4http://creativecommons.org/licenses/by-sa/4.0/\nACL 2023 Responsible NLP Checklist", |
| "3 A2. Did you discuss any potential risks of your work?": "Ethics and Impact section (after Section 6: Conclusions)", |
| "3 A3. Do the abstract and introduction summarize the paper’s main claims?": "Section 1\n7 A4. Have you used AI writing assistants when working on this paper? Left blank.\nB 3 Did you use or create scientific artifacts? Section 5.", |
| "3 B1. Did you cite the creators of artifacts you used?": "Provided a citation to the SNLI dataset and SQUAD.", |
| "3 B2. Did you discuss the license or terms for use and / or distribution of any artifacts?": "Appendix E.\nB3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Appendix E.", |
| "3 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any": "information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Appendix G.\nB5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Appendix E.\n3 B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5.\nC 3 Did you run computational experiments? Section 5.", |
| "3 C1. Did you report the number of parameters in the models used, the total computational budget": "(e.g., GPU hours), and computing infrastructure used? Appendix C & D\nThe Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.\nC2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Not applicable. Left blank.", |
| "3 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary": "statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5", |
| "3 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did": "you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix C\nD 3 Did you use human annotators (e.g., crowdworkers) or research with human participants? Section 5.", |
| "3 D1. Did you report the full text of instructions given to participants, including e.g., screenshots,": "disclaimers of any risks to participants or annotators, etc.? Appendix A.", |
| "3 D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)": "and paid participants, and discuss if such payment is adequate given the participants’ demographic (e.g., country of residence)? Appendix B.", |
| "3 D3. Did you discuss whether and how consent was obtained from people whose data you’re using/curating? For example, if you collected data via crowdsourcing, did your instructions to": "crowdworkers explain how the data would be used? Appendix B.", |
| "3 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?": "Appendix B.", |
| "3 D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?": "Appendix B." |
| } |