{
"paper_id": "R19-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:01:41.608285Z"
},
"title": "Automatic Question Answering for Medical MCQs: Can It Go Further than Information Retrieval?",
"authors": [
{
"first": "Le",
"middle": [
"An"
],
"last": "Ha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Wolverhampton",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Victoria",
"middle": [],
"last": "Yaneva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Wolverhampton",
"location": {
"country": "UK"
}
},
"email": "v.yaneva@wlv.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a novel approach to automatic question answering that does not depend on the performance of an information retrieval (IR) system and does not require training data. We evaluate the system performance on a challenging set of university-level medical science multiple-choice questions. Best performance is achieved when combining a neural approach with an IR approach, both of which work independently. Unlike previous approaches, the system achieves statistically significant improvement over the random guess baseline even for questions that are labeled as challenging based on the performance of baseline solvers.",
"pdf_parse": {
"paper_id": "R19-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a novel approach to automatic question answering that does not depend on the performance of an information retrieval (IR) system and does not require training data. We evaluate the system performance on a challenging set of university-level medical science multiple-choice questions. Best performance is achieved when combining a neural approach with an IR approach, both of which work independently. Unlike previous approaches, the system achieves statistically significant improvement over the random guess baseline even for questions that are labeled as challenging based on the performance of baseline solvers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic question answering has seen a renewed interest in recent years as a challenge problem for evaluating machine intelligence. This has driven the development of large-scale question-answering data sets such as SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , WikiMovies benchmark (Chen et al., 2017) , TriviaQA (Joshi et al., 2017) (to name a few), as well as the organisation of workshops such as the Machine Reading for Question Answering 2018 workshop 1 . In spite of the optimistic advances over crowd-sourced questions and online queries, automatic question answering for real exam questions is still a very challenging and under-explored area. For example, the Allen AI Science Challenge 2 invited researchers worldwide to develop systems that could solve standardized eighth-grade science questions. The best system out of all 780 participating teams achieved a score of 59.31% correct answers using a combination of 15 gradient-boosting models (random baseline of 25%), while the authors report that using Information Retrieval (IR) alone results in a score of 55%.",
"cite_spans": [
{
"start": 222,
"end": 246,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 256,
"end": 280,
"text": "(Trischler et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 304,
"end": 323,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 336,
"end": 355,
"text": "(Joshi et al., 2017",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The difficulties related to answering exam questions are partly due to the complexity of the reasoning involved and partly to the lack of large training data. Another significant reason is the fact that the existing approaches to question answering are dependent on the performance of IR systems and can rarely go far beyond the performance of such systems. While IR is a powerful method for answering questions where the correct answer is a string contained within a document, such systems fail when the sentences within the question do not individually hold a clue to what the correct answer might be. This is one of the characteristics of Multiple Choice Questions (MCQs) from the science domain that makes them so challenging for both machines and humans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we aim to address these shortcomings by developing an approach that: i) does not require that the training data (often unavailable) be in the form of multiple-choice questions and ii) does not depend on matching strings of text with one another. We use a challenging set of medical exam questions developed for the United States Medical Licensing Examination (USMLE\u00ae), a standardized medical exam that university students need to pass in order to obtain the right to practice medicine in the US. As such, the USMLE represents a very difficult set, requiring a high level of specialized professional knowledge and reasoning over facts. Furthermore, the USMLE contains a wide variety of question types, such as selecting the most appropriate diagnosis, treatment, or specific further examination needed, all of which require the application of clinical knowledge over facts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions We introduce and compare two approaches for automatic question answering that do not require training data in the form of MCQs, using Information Retrieval (IR) techniques and standard neural network models. Unlike previous work, our neural approach is independent of the performance of the IR system, as it does not build upon it. Thus, it is possible to achieve improvements over both systems by combining them, as each system makes an individual contribution towards solving the problem. The best combination results in an 18% improvement over a random guess baseline. The neural models achieve a statistically significant improvement over the random baseline on the challenging sets. The code used in this study, as well as the public data 3 , are made available at: https://bit.ly/2jNW2ym.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the recent work in the field focuses on answering reading comprehension questions from benchmark datasets such as SQuAD (Rajpurkar et al., 2016) , the release of which ignited rapid progress in the field. For example, Wang et al. (2017) use gated self-matching networks and report accuracy as high as 75.9% over a random guess baseline of around 4% and a logistic regression baseline of around 51%. Among the most successful approaches in other studies are ones that use neural models such as match-LSTM to build question-aware passage representations (Wang and Jiang, 2015) , bi-directional attention flow networks to model question-passage pairs (Seo et al., 2016) , or dynamic co-attention networks (Xiong et al., 2016) .",
"cite_spans": [
{
"start": 128,
"end": 152,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 228,
"end": 246,
"text": "Wang et al. (2017)",
"ref_id": "BIBREF15"
},
{
"start": 561,
"end": 583,
"text": "(Wang and Jiang, 2015)",
"ref_id": "BIBREF14"
},
{
"start": 657,
"end": 675,
"text": "(Seo et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 711,
"end": 731,
"text": "(Xiong et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As mentioned in the previous section, automatic question answering for science exams is a lot more challenging than for crowd-sourced reading comprehension questions. When applied to science questions, IR techniques: i) still perform somewhat close to the state-of-the-art and ii) fail on tasks where the correct answer is not specifically contained in relevant sentences. Clark et al. (2018) implement five of the best models from the studies on the reading comprehension data sets (TableILP (Khashabi et al., 2016) , TupleInference (Khot et al., 2017) , neural entailment models (DecompAttn, DGEM, and DGEM-OpenIE) (Parikh et al., 2016) , and BiDAF (Seo et al., 2016) ), as well as IR models, and test them on a total of 7787 science questions. The questions are divided into two sets, challenging and easy, and are targeted at students between the ages of 8 and 13. It is important to note that the authors define a question as challenging or easy not on the basis of human performance or the age of the students it is targeted at, but based on whether it has been answered incorrectly by at least two of the baseline solvers. The results indicated that none of the algorithms performed significantly higher than the random guess baseline of 25% on the challenging set, while the performance on the easy set was in the range of 36% to 62%. According to the authors, a possible explanation for the low accuracy is that nearly all models use some form of information retrieval to obtain relevant sentences, and the retrieval bias in these systems is towards sentences that are very similar to the question, as opposed to sentences that individually differ but together explain the correct answer. Notably, the neural solvers performed poorly on the easy set, while the best result was achieved by an IR-only system.",
"cite_spans": [
{
"start": 473,
"end": 496,
"text": "(Khashabi et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 514,
"end": 533,
"text": "(Khot et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 599,
"end": 620,
"text": "(Parikh et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 633,
"end": 651,
"text": "(Seo et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the USMLE data each test item is a single-best-answer MCQ consisting of a stem (question) followed by several response options (distractors), one of which is the correct answer (key). An example of such an item is provided in Table 1. We divide our data into two sets: private and public (Table 2). The private data set consists of a total of 2,720 MCQs, which are not available to the public due to test security reasons. The public data set consists of 454 items from USMLE 2015",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 293,
"end": 300,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Step 1, USMLE 2016 Step 1, USMLE 2014",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Step 2, and USMLE 2017 Step 2 sample leaflets. These are available at the USMLE website 4 and in our repository. For the purpose of this study, we have selected only those items that fulfill the following criteria: i) the correct answer contains at least one heading from the Medical Subject Headings (MeSH 5 ) database that is at most three words long, and ii) the item has exactly 5 options, each with at least one MeSH heading that is at most three words long. (Table 1 example item: A 56-year-old man comes to the emergency department because of a 4-day history of colicky right flank pain that radiates to the groin and hematuria. Ultrasound examination of the kidneys shows right-sided hydronephrosis and a dilated ureter. Which of the following is most likely to be found on urinalysis? (A) Erythrocyte casts (B) Glucose (C) Leukocyte casts (D) Oval fat bodies (E)* Uric acid crystals.) The latter criterion keeps the random guess baseline constant across all items (20%). As a result, the final data set contains 164 items for the public set and 922 for the private one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We develop and compare two methods for answering the USMLE questions, neither of which requires training data in the form of MCQs. The details of each method are described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
{
"text": "We use a standard IR approach. First, we index the 2012 MEDLINE abstracts using Lucene 6 with its default options. Then, for each item we build five queries, where each query contains the stem and one of the options. We use three settings for the queries:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Based Method",
"sec_num": "4.1"
},
{
"text": "\u2022 All words (IR-All) (baseline)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Based Method",
"sec_num": "4.1"
},
{
"text": "\u2022 Nouns only (IR-Nouns)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Based Method",
"sec_num": "4.1"
},
{
"text": "\u2022 Nouns, Verbs, or Adjectives only (IR-NVA)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Based Method",
"sec_num": "4.1"
},
{
"text": "We then take the top 5 documents returned by Lucene and calculate the sum of their retrieval scores. The selected answer is the option that yields the highest score when combined with the stem to form the query. This method is similar to the IR baseline described in Clark et al. (2018) , and variations of it have been previously applied to medical MCQs for the purposes of distractor generation (Ha and Yaneva, 2018) and predicting item difficulty (Ha et al., 2019) .",
"cite_spans": [
{
"start": 364,
"end": 385,
"text": "(Ha and Yaneva, 2018)",
"ref_id": "BIBREF3"
},
{
"start": 417,
"end": 434,
"text": "(Ha et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Based Method",
"sec_num": "4.1"
},
{
"text": "For this approach we train neural networks to predict the MeSH headings for each abstract. The premise of this approach is that answering a USMLE item can be considered similar to identifying the topics of a snippet of text: in the case of MEDLINE indexing, indexers read the abstract and then choose the topics that are most relevant to it; in the case of taking the USMLE exam, test takers read the stem and then choose the option that is most relevant to it. Approaching the problem this way, we can benefit from the availability of the MEDLINE data, in which each abstract has been manually (or semi-manually) assigned the most relevant subject headings. We focus only on headings that appear in the options of the set of items (see above). For our set, there are around 1000 headings. Our neural networks 7 were trained using Keras 8 . We use two main structures:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Method",
"sec_num": "4.2"
},
{
"text": "\u2022 Bidirectional LSTM (LSTM). Specifications: an input layer, followed by an embedding layer and a bidirectional layer, each of size 250. The final two layers are a flattening layer and a dense layer. The classes are weighted inversely to their frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Method",
"sec_num": "4.2"
},
{
"text": "\u2022 1D convolutional network with attention (Conv1d). Specifications: an input layer, followed by an embedding layer, three convolutional layers, and a concatenating layer, each of size 250. 7 Preprocessing includes tokenization (using the keras.preprocessing.text package in Python), no lower-case normalization, no number normalization, and recording words with a minimum frequency of 5. The neural network models then further restrict the vocabulary to the 200,000 most frequent words. The out-of-vocabulary rate was 1%. The Nadam optimizer was used with its default options (learning rate = 0.002, beta 1 = 0.9, beta 2 = 0.999, epsilon = None, schedule decay = 0.004). Batch size = 128; the activation function used in the last layer was Softmax.",
"cite_spans": [
{
"start": 178,
"end": 179,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Method",
"sec_num": "4.2"
},
{
"text": "8 https://keras.io/ Table 3 : Accuracy of the different systems. The values marked with * signify statistically significant difference over the random guess baseline and ** signifies statistically significant improvement over both baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Network Method",
"sec_num": "4.2"
},
{
"text": "These are followed by an attention layer and a densely connected layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Method",
"sec_num": "4.2"
},
{
"text": "We train the models on 10,000,000 MEDLINE abstracts (the same set used in the IR approach), going through them twice. We experiment with pre-trained GloVe840b (Pennington et al., 2014) and word2vec 9 , but the results are inferior to training the embedding layers from scratch. We then use the trained models to predict the probability of a MeSH heading in an option given the stem, averaging the probabilities if the option contains more than one heading.",
"cite_spans": [
{
"start": 159,
"end": 184,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Method",
"sec_num": "4.2"
},
{
"text": "We use two methods to combine the IR and neural model scores. The first method simply adds the log values of the two scores together (log(IR Noun)+log(Conv1d)). The second method uses the neural model scores as a tie breaker ('Neural as tie breaker'): if the IR method returns a single option, we take the result from the IR; if the IR method returns more than one option, we take the result from the neural model instead.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined Method",
"sec_num": "4.3"
},
{
"text": "We compare our results to two baselines: the probability that a random guess picks the correct answer, and the IR-All model described above. 9 https://code.google.com/archive/p/word2vec/",
"cite_spans": [
{
"start": 140,
"end": 141,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.4"
},
{
"text": "The results from our study are presented in Table 3 . Best performance is achieved by using neural model scores as a tie breaker. This result significantly outperforms both the random guess baseline and the IR-All approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 52,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "It is interesting to note that while the neural approaches alone present a significant improvement only over the random guess baseline, using the neural approaches to solve ties leads to an overall increase in performance for the best combined models. The independent nature of the neural approach is best illustrated when testing its performance over items that were incorrectly solved by the best IR approaches. This is the case for 110 items from the public data set and 587 items from the private data set, which, if we follow the definition of Clark et al. (2018) , can be regarded as \"challenging\", since the best IR solver could not answer them correctly. In the case of Clark et al. (2018) , none of the tested solvers achieved a significant improvement over the random guess baseline when evaluated on the challenging questions. In our case, the neural approaches achieve 29% accuracy (32 items) for the public data set and 27.6% accuracy (162 items) for the private one, both of which are statistically significant improvements over random guess. This independence, resulting from the use of manually produced subject headings, indicates that these headings do provide additional information with regard to the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "A drawback of the neural approach proposed in this paper is that it relies on the availability of a manually indexed database such as MEDLINE. This limits the applicability of the approach to other domains; however, this may change as more resources become available in the future. It is important to note that in this restricted setting the method solves a very difficult problem better than any other approach so far. In the future, instead of using the ad-hoc neural network architectures presented in this paper, we plan to utilise state-of-the-art architectures such as ELMo (Peters et al., 2018) or BERT (Devlin et al., 2018) , while using the prediction of MeSH headings as an additional learning objective.",
"cite_spans": [
{
"start": 581,
"end": 602,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 611,
"end": 632,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "We presented an approach to automatic question answering that does not rely on training data in the form of MCQs and can perform independently of IR. We first train neural networks to predict the MeSH headings for a set of MEDLINE abstracts and then use the trained networks to predict the correct answers to medical MCQs. Best performance was achieved when combining this approach with an information retrieval approach, and the combined model significantly outperformed both a random guess baseline and one based on a common IR approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://mrqa2018.github.io/ 2 https://www.kaggle.com/c/the-allen-ai-science-challenge",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See Section 3. The Public data set used in this study consists of questions released as training materials by the USMLE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The items can be accessed at the USMLE web site at http://www.usmle.org/, for example: http://www.usmle.org/pdfs/step-1/samples_step1.pdf 5 https://www.nlm.nih.gov/mesh/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://lucene.apache.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Reading wikipedia to answer open-domain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.00051"
]
},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and An- toine Bordes. 2017. Reading wikipedia to an- swer open-domain questions. arXiv preprint arXiv:1704.00051.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Think you have solved question answering? try arc, the ai2 reasoning challenge",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Cowhey",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Carissa",
"middle": [],
"last": "Schoenick",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.05457"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question an- swering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic distractor suggestion for multiple-choice tests using concept embeddings and information retrieval",
"authors": [
{
"first": "Le",
"middle": [
"An"
],
"last": "Ha",
"suffix": ""
},
{
"first": "Victoria",
"middle": [],
"last": "Yaneva",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "389--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le An Ha and Victoria Yaneva. 2018. Automatic distractor suggestion for multiple-choice tests using concept embeddings and information retrieval. In Proceedings of the Thirteenth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions, pages 389-398.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Predicting the difficulty of multiple choice questions in a high-stakes medical exam",
"authors": [
{
"first": "Le An",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Victoria",
"middle": [],
"last": "Yaneva",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Mee",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le An Ha, Victoria Yaneva, Peter Baldwin, and Janet Mee. 2019. Predicting the difficulty of multiple choice questions in a high-stakes medical exam. In Proceedings of the Fourteenth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.03551"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. arXiv preprint arXiv:1705.03551.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Question answering via integer programming over semi-structured knowledge",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1604.06076"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Khashabi, Tushar Khot, Ashish Sabhar- wal, Peter Clark, Oren Etzioni, and Dan Roth. 2016. Question answering via integer programming over semi-structured knowledge. arXiv preprint arXiv:1604.06076.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Answering complex questions using open information extraction",
"authors": [
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.05572"
]
},
"num": null,
"urls": [],
"raw_text": "Tushar Khot, Ashish Sabharwal, and Peter Clark. 2017. Answering complex questions using open informa- tion extraction. arXiv preprint arXiv:1704.05572.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "Ankur",
"middle": [
"P"
],
"last": "Parikh",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01933"
]
},
"num": null,
"urls": [],
"raw_text": "Ankur P Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.05250"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bidirectional attention flow for machine comprehension",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01603"
]
},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "NewsQA: A machine comprehension dataset",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.09830"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2016. Newsqa: A machine compre- hension dataset. arXiv preprint arXiv:1611.09830.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning natural language inference with LSTM",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.08849"
]
},
"num": null,
"urls": [],
"raw_text": "Shuohang Wang and Jing Jiang. 2015. Learning nat- ural language inference with lstm. arXiv preprint arXiv:1512.08849.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Gated self-matching networks for reading comprehension and question answering",
"authors": [
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "189--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching net- works for reading comprehension and question an- swering. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189-198.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Dynamic coattention networks for question answering",
"authors": [
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01604"
]
},
"num": null,
"urls": [],
"raw_text": "Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">Public Private</td></tr><tr><td>Number of Items</td><td>164</td><td>921</td></tr><tr><td colspan=\"2\">Average words per item 116</td><td>87</td></tr></table>",
"text": "Characteristics of the two sets"
},
"TABREF1": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "An example of an item from the USMLE exam (question 128, USMLE 2015 step 1 sample test questions)"
}
}
}
}