Method
We focus on creating high-quality, non-trivial questions that allow the model to learn to extract the correct answer from a context-question pair.
Sentence Retrieval: A standard cloze question can be obtained by taking the sentence of the context in which the answer appears and masking the answer with a chosen token. However, a model trained on such data learns only text matching and fill-in-the-blank behavior, with little generalizability. For this reason, we use a retrieval-based approach to obtain a sentence similar to the one containing the answer, from which we then create the question. For our experiments, we focus on answers that are named entities, a prior assumption which has proven useful for downstream QA performance [@lewis-etal-2019-unsupervised] and which our initial experiments confirmed. First, we index all sentences from a Wikipedia dump using the ElasticSearch search engine. We also extract named entities for each sentence, both in the Wikipedia corpus and in the sentences used as queries; we assume access to a named-entity recognition system and in this work use the spaCy[^4] NER pipeline. Then, for a given context-answer pair, we query the index with the original context sentence and retrieve a sentence that (1) contains the answer, (2) does not come from the context, and (3) has a token-level F1 overlap with the query sentence below 95%, discarding highly similar or plagiarized sentences. Besides ensuring that the retrieved sentence and the query sentence share the answer entity, we require that at least one additional entity appears in both the query sentence and the full context; we present ablation studies on the effect of this matching below. The retrieved sentences are then fed into our question-generation module.
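The filtering stage of this retrieval step can be sketched as follows. This is a minimal illustration, not the actual implementation: it assumes the search engine has already returned a list of candidate sentences, and the candidate fields (`text`, `from_context`, `entities`) are a hypothetical schema chosen for clarity. The F1 computation follows the standard SQuAD-style token-overlap measure.

```python
from collections import Counter

def token_f1(a, b):
    """SQuAD-style token-level F1 overlap between two sentences."""
    ta, tb = a.lower().split(), b.lower().split()
    common = Counter(ta) & Counter(tb)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(ta)
    recall = overlap / len(tb)
    return 2 * precision * recall / (precision + recall)

def filter_candidates(query_sent, answer, context_entities, candidates, max_f1=0.95):
    """Apply the three retrieval constraints plus the entity-matching
    requirement. Each candidate is a dict with hypothetical keys
    'text', 'from_context', and 'entities'."""
    kept = []
    for c in candidates:
        if answer not in c["text"]:        # (1) must contain the answer
            continue
        if c["from_context"]:              # (2) must not come from the context
            continue
        if token_f1(query_sent, c["text"]) >= max_f1:
            continue                       # (3) discard near-duplicates
        # require at least one additional entity shared with the context
        extra = (set(c["entities"]) - {answer}) & context_entities
        if not extra:
            continue
        kept.append(c["text"])
    return kept
```

In practice the constraints would be pushed into the ElasticSearch query itself where possible; the post-hoc filter above only shows the acceptance logic.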
Template-based Question Generation: We consider several question styles: (1) generic cloze-style questions, where the answer is replaced by the token "[MASK]", and (2) templated questions of the form "Wh+B+A+?", along with variations on the ordering of this template, as shown in Figure 2{reference-type="ref" reference="fig:question-example"}. Given a retrieved sentence of the form [``Fragment A``]`` ``[``Answer``]`` ``[``Fragment B``], the "Wh+B+A+?" template replaces the answer with a wh-component (e.g., what, who, where), chosen according to the entity type of the answer, and places the wh-component at the beginning of the question, followed by Fragment B and then Fragment A. To choose the wh-component, we sample a bi-gram based on the prior probability of that bi-gram being associated with the named-entity type of the answer; these priors are computed from named-entity types and question bi-gram starters in the SQuAD dataset. This information does not use full context-question-answer triples and can be viewed as prior information that does not disturb the integrity of our unsupervised approach. Additionally, the choice of wh-component does not significantly affect results. We also experimented with clause-based templates but found no significant differences in performance.
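The two question styles above can be sketched in a few lines. This is an illustrative simplification: the entity-type-to-wh mapping below is a hypothetical fixed table, whereas the method actually samples a wh bi-gram from SQuAD-derived priors over entity types.

```python
# Hypothetical fixed mapping from spaCy entity types to wh-words;
# the paper instead samples a wh bi-gram from SQuAD-derived priors.
WH_FOR_ENT = {"PERSON": "who", "GPE": "where", "DATE": "when", "ORG": "what"}

def make_questions(fragment_a, answer, fragment_b, ent_type):
    """Build the cloze question and the 'Wh+B+A+?' templated question
    from a retrieved sentence [Fragment A][Answer][Fragment B]."""
    cloze = f"{fragment_a} [MASK] {fragment_b}".strip()
    wh = WH_FOR_ENT.get(ent_type, "what")
    templated = f"{wh} {fragment_b} {fragment_a}?".strip()
    return cloze, templated
```

For example, the retrieved sentence "The treaty was signed in Paris in 1919" with answer "Paris" yields the cloze "The treaty was signed in [MASK] in 1919" and the templated question "where in 1919 The treaty was signed in?", whose unnatural word order is exactly what the ordering variations of the template explore.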