{
"paper_id": "R19-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:01:15.917855Z"
},
"title": "Entropy as a Proxy for Gap Complexity in Open Cloze Tests",
"authors": [
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ALTA Institute, Computer Laboratory, University of Cambridge, Cambridge",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Paula",
"middle": [],
"last": "Buttery",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ALTA Institute, Computer Laboratory, University of Cambridge, Cambridge",
"location": {
"country": "UK"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a pilot study of entropy as a measure of gap complexity in open cloze tests aimed at learners of English. Entropy is used to quantify the information content in each gap, which can be used to estimate complexity. Our study shows that average gap entropy correlates positively with proficiency levels while individual gap entropy can capture contextual complexity. To the best of our knowledge, this is the first unsupervised information-theoretical approach to evaluating the quality of cloze tests.",
"pdf_parse": {
"paper_id": "R19-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a pilot study of entropy as a measure of gap complexity in open cloze tests aimed at learners of English. Entropy is used to quantify the information content in each gap, which can be used to estimate complexity. Our study shows that average gap entropy correlates positively with proficiency levels while individual gap entropy can capture contextual complexity. To the best of our knowledge, this is the first unsupervised information-theoretical approach to evaluating the quality of cloze tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Fill-in-the-gap or cloze test exercises are common means of assessing grammar and vocabulary in the realm of English as a Foreign Language (EFL). The most common example is the multiple choice question, which presents the student with a gapped sentence and a set of possible answers from which the right one is to be selected. These are referred to as closed cloze questions, since the answer is limited to the alternatives given. On the contrary, open cloze questions do not provide predefined options, so the student must produce an answer from scratch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Generating these exercises is a laborious process, since they must be carefully designed to ensure they test the desired learning objective and do not confuse or present trivial questions to the student. For this reason, choosing the optimal locations in a sentence to insert the gaps and defining a suitable set of answer options becomes crucial, especially when exercises are generated automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on open cloze tests and show how entropy can be used to assess the complexity of each gap in the text. Entropy is shown to provide insights into the expected difficulty of the question and correlate directly with the target proficiency level of the exercises. Exploiting this information should thus facilitate the automatic generation of more reliable open cloze exercises.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Work on automated cloze test generation has mostly focused on multiple choice questions and distractor selection (Mitkov and Ha, 2003; Sumita et al., 2005; Brown et al., 2005; Lee and Seneff, 2007; Lin et al., 2007; Smith et al., 2010; Sakaguchi et al., 2013) . Conversely, there has been little work on open cloze tests. Pino et al. (2008) describe a strategy to generate open cloze questions using example sentences from a learners' dictionary. Sentences are chosen based on four linguistic criteria: (grammatical) complexity, well-defined context (collocations), grammaticality and length. Further work improved on this method by providing hints for the gapped words (Pino and Eskenazi, 2009) . Malafeev (2014) developed an open source system to emulate open cloze tests in Cambridge English exams based on the most frequent gapped words. Expert EFL instructors found the generated gaps to be useful in most cases and had difficulty differentiating automated exercises from authentic exams. More recently, Marrese-Taylor et al. (2018) trained sequence labelling and classification models to decide where to insert gaps in open cloze exercises. The models achieved around 90% accuracy/F1 when evaluated on manually created exercises.",
"cite_spans": [
{
"start": 113,
"end": 134,
"text": "(Mitkov and Ha, 2003;",
"ref_id": "BIBREF10"
},
{
"start": 135,
"end": 155,
"text": "Sumita et al., 2005;",
"ref_id": "BIBREF18"
},
{
"start": 156,
"end": 175,
"text": "Brown et al., 2005;",
"ref_id": "BIBREF3"
},
{
"start": 176,
"end": 197,
"text": "Lee and Seneff, 2007;",
"ref_id": "BIBREF6"
},
{
"start": 198,
"end": 215,
"text": "Lin et al., 2007;",
"ref_id": "BIBREF7"
},
{
"start": 216,
"end": 235,
"text": "Smith et al., 2010;",
"ref_id": "BIBREF17"
},
{
"start": 236,
"end": 259,
"text": "Sakaguchi et al., 2013)",
"ref_id": "BIBREF14"
},
{
"start": 322,
"end": 340,
"text": "Pino et al. (2008)",
"ref_id": "BIBREF13"
},
{
"start": 670,
"end": 695,
"text": "(Pino and Eskenazi, 2009)",
"ref_id": "BIBREF12"
},
{
"start": 698,
"end": 713,
"text": "Malafeev (2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While the quality of the generated gaps has traditionally been judged by human experts (Pino et al., 2008; Malafeev, 2014) or estimated from student responses (Sumita et al., 2005; Brown et al., 2005; Skory and Eskenazi, 2010; Beinborn et al., 2014; Susanti et al., 2016) , systems should ideally predict the quality of the gaps during the generation process. In this regard, Skory and Eskenazi (2010) observe that Shannon's information theory (Shannon, 1948) could be used to estimate the reading difficulty of answers to a gap based on their probability of occurrence. Thus, for the sentence \"She drives a nice ___\", the word \"car\" would be the most likely answer (lowest readability level) while words such as \"taxi\", \"tank\" and \"ambulance\" would be at increasingly higher levels.",
"cite_spans": [
{
"start": 87,
"end": 106,
"text": "(Pino et al., 2008;",
"ref_id": "BIBREF13"
},
{
"start": 107,
"end": 122,
"text": "Malafeev, 2014)",
"ref_id": "BIBREF8"
},
{
"start": 159,
"end": 180,
"text": "(Sumita et al., 2005;",
"ref_id": "BIBREF18"
},
{
"start": 181,
"end": 200,
"text": "Brown et al., 2005;",
"ref_id": "BIBREF3"
},
{
"start": 201,
"end": 226,
"text": "Skory and Eskenazi, 2010;",
"ref_id": "BIBREF16"
},
{
"start": 227,
"end": 249,
"text": "Beinborn et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 250,
"end": 271,
"text": "Susanti et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 376,
"end": 401,
"text": "Skory and Eskenazi (2010)",
"ref_id": "BIBREF16"
},
{
"start": 444,
"end": 459,
"text": "(Shannon, 1948)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Research on predicting the difficulty of cloze tests is also directly relevant to this work. Beinborn et al. (2014) built models to predict the difficulty of C-tests (i.e. gaps with half of the required word removed) at the gap and test level and later extended their approach to cover closed cloze tests (Beinborn et al., 2015; Beinborn, 2016). More recently, Pandarova et al. (2019) presented a difficulty prediction model for cued gap-fill exercises aimed at practising English verb tenses, while Lee et al. (2019) investigated how difficulty predictions could be manipulated to adapt tests to a target proficiency level. Unlike our work, however, all these approaches are supervised and not applied to open cloze tests.",
"cite_spans": [
{
"start": 305,
"end": 328,
"text": "(Beinborn et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 329,
"end": 344,
"text": "Beinborn, 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper, we build on the assumption that the complexity of a gap is correlated to the number of possible answers determined by the surrounding context and the likelihood of each answer. As noted by Pino et al. (2008), high-quality open cloze questions should sufficiently narrow the context of each gap in order to avoid multiple valid answers, which would make the exercise too broad in scope and therefore ineffective. We thus assume that gaps with more restricted context eliciting very specific answers should be more useful than broad gaps with very general answers, so the less \"branching\" that a gap allows, the better. This property can be modelled by entropy, which quantifies the amount of information conveyed by an event. Intuitively, entropy can be considered a measure of disorder, uncertainty or surprise. If the probability of an event is very high, entropy will be low (i.e. there is less surprise about what will happen) while events with low probabilities will lead to higher entropy. Shannon's entropy, a common formulation to measure the number of bits needed to encode information, is shown in Equation 1, where P(x_i) stands for the probability of event x_i, i.e. the probability that each word in the vocabulary occurs in the evaluated context.",
"cite_spans": [
{
"start": 204,
"end": 222,
"text": "Pino et al. (2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H(X) = -\\sum_{i=1}^{n} P(x_i) \\log_2 P(x_i)",
"eq_num": "(1)"
}
],
"section": "Entropy",
"sec_num": "3"
},
{
"text": "In this work, we use entropy to assign a score to each gap based on the number of valid words that could fill in the slot given the surrounding context. As a result, gaps with many possible answers will yield higher entropy than those with fewer answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy",
"sec_num": "3"
},
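The gap scoring described above (Equation 1 applied to the distribution of candidate answers for a gap) can be sketched in a few lines of Python. This is an illustrative example, not the authors' code; the candidate words and their probabilities are invented:

```python
import math

def gap_entropy(probs):
    """Shannon entropy (Equation 1) of a gap's candidate-answer distribution.

    `probs` maps candidate answers to probabilities in the gap's context;
    values are renormalised so they sum to 1 before scoring.
    """
    total = sum(probs.values())
    h = 0.0
    for p in probs.values():
        p /= total
        if p > 0:
            h -= p * math.log2(p)
    return h

# A narrow gap ("She drives a nice ___") dominated by one likely answer
# yields low entropy; a broad gap that spreads probability scores higher.
narrow = {"car": 0.9, "taxi": 0.05, "tank": 0.03, "ambulance": 0.02}
broad = {"car": 0.25, "taxi": 0.25, "tank": 0.25, "ambulance": 0.25}
print(gap_entropy(narrow) < gap_entropy(broad))  # True
```

Under this scoring, lower entropy corresponds to a more constrained, and therefore more reliable, gap.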
{
"text": "We followed Malafeev's (2014) approach and used open cloze tests from Cambridge English examinations as our gold standard data, since they are manually created by experts in the field of EFL testing. We collected the sample open cloze tests for KET, FCE, CAE and CPE exams that are featured in their respective online handbooks 1 (one per exam together with their answers). These exams correspond respectively to levels A2, B2, C1 and C2 in the Common European Framework of Reference for Languages (CEFR). The PET (B1) exam does not include an open cloze test, which is why it is absent from our experiments.",
"cite_spans": [
{
"start": 12,
"end": 29,
"text": "Malafeev's (2014)",
"ref_id": "BIBREF8"
},
{
"start": 498,
"end": 504,
"text": "(CEFR)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For each exam, we restored the original text by using the answers provided (taking the first alternative when several were given) and created 10 different variations of the open cloze tests by inserting gaps randomly throughout the text. We created the same number of gaps as in the original tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For each original and automatically generated test, we compute entropy per gap using a 5-gram language model trained on the 1 Billion Word WMT 2011 News Crawl corpus 2 using KenLM (Heafield, 2011). We use the language model bidirectionally, taking 3 words to the left and right of each gap to predict the probability of the next and previous words respectively. Since we obtain a probability for all the words in our vocabulary (> 82,200 words) given the left and right context individually, we multiply the probabilities for each word to get a unified \"bidirectional\" probability (see Figure 1). Given that this can lead to infinitesimal probabilities that can affect computation, we use only the top 100 most probable words when computing entropy for each gap. Table 1 shows information about our gold standard tests, including CEFR levels, number of gaps and average gap entropy. The average gap entropy correlates positively with CEFR levels, suggesting that entropy increases with proficiency levels. We then computed the average gap entropy for each of the 10 automatically generated tests per exam and compared them to the gold standard. Results are shown in Table 2.",
"cite_spans": [
{
"start": 180,
"end": 196,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 586,
"end": 594,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 763,
"end": 770,
"text": "Table 1",
"ref_id": null
},
{
"start": 1166,
"end": 1173,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
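The bidirectional combination step described above can be sketched as follows. This is a toy illustration with hand-made distributions instead of actual KenLM queries; the function names and probability values are hypothetical:

```python
import math

def combine_bidirectional(p_left, p_right, top_k=100):
    """Multiply forward and backward LM probabilities for a gap.

    `p_left[w]` approximates P(w | 3 words of left context) and
    `p_right[w]` approximates P(w | 3 words of right context); their
    product is the unified "bidirectional" probability. Only the
    `top_k` most probable words are kept, mirroring the paper's use
    of the top 100 words to avoid infinitesimal products.
    """
    joint = {w: p_left[w] * p_right[w] for w in p_left.keys() & p_right.keys()}
    top = sorted(joint.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    z = sum(p for _, p in top)
    return {w: p / z for w, p in top}

def entropy(probs):
    """Shannon entropy of the combined distribution."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

# Toy forward/backward distributions (illustrative values only).
p_left = {"car": 0.6, "taxi": 0.2, "house": 0.2}
p_right = {"car": 0.5, "taxi": 0.4, "boat": 0.1}
dist = combine_bidirectional(p_left, p_right)
print(round(entropy(dist), 3))
```

Words supported by only one direction drop out of the product, which is what lets the two contexts jointly narrow the set of plausible answers.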
{
"text": "Unlike the handcrafted gold standard, the automatically generated tests were produced randomly by a machine with no knowledge of test design, so we would expect automatic gaps often to be inserted in unsuitable locations within the text, yielding lower-quality tests. This hypothesis is verified by looking at the average gap entropy for the automatic tests, which is much higher than for the gold standard in the majority of cases (77.5%). This supports our intuition that entropy can be used to discriminate between good and bad gaps and, consequently, between good and bad tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},
{
"text": "We noticed that automatically generated tests for CPE tend to have lower entropy than the gold standard, contradicting our assumption in principle. However, we do not believe that these lower values indicate better tests but rather that they deviate from the expected difficulty for this proficiency level. In fact, we would expect high-quality tests to have average gap entropy around that of the gold standard tests, not too far below or over this reference value. Based on this premise, better automated tests can be constructed by controlling the entropy of gaps in the text, in line with previous work by Lee et al. (2019) . ",
"cite_spans": [
{
"start": 610,
"end": 627,
"text": "Lee et al. (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},
{
"text": "We looked at the gaps with the lowest and highest entropy to analyse how these values relate to the surrounding contexts. Table 3 shows the gaps in our gold standard tests with the lowest and highest entropy. First, we found that gaps with the lowest entropy correspond mostly to exams at low CEFR levels while those with the highest entropy correspond to the highest CEFR level. This confirms our initial finding that entropy correlates directly with proficiency levels.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.2"
},
{
"text": "Second, we observed that gaps with low entropy are very restricted in context and built around very simple grammatical structures or vocabulary, making it easy to figure out the answers. On the other hand, gaps with high entropy are part of more complex grammatical structures and require longer context or deeper understanding in order to be solved. This explains why our language model is unable to estimate the right answers for complex gaps, leading to higher entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.2"
},
{
"text": "Finally, we investigated the correlation between entropy and the number of valid answers per gap. Pearson correlation for gaps in our gold standard tests is reported in Table 4. Contrary to our intuition, there is no consistent relationship between entropy and the number of valid answers per gap in our gold standard: KET shows a negative correlation while CPE shows a moderate positive one. We hypothesise that this is due to a limitation of the language model used in this preliminary study, which is unable to estimate the right word probabilities for gaps in complex contexts for the reasons described above. Using a more sophisticated language model should ameliorate this problem.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.2"
},
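The correlation reported above is the standard sample Pearson coefficient; a self-contained sketch, using hypothetical per-gap values rather than the paper's data, is:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-gap values: entropy vs. number of valid answers.
entropies = [0.5, 1.2, 2.1, 3.0]
n_answers = [1, 1, 2, 3]
print(round(pearson(entropies, n_answers), 3))
```

A value near +1 would support the intuition that broader gaps (more valid answers) have higher entropy; the paper finds no such consistent relationship on its gold standard.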
{
"text": "In any case, the values of entropy computed with our current model seem to capture the complexity of the gaps in context, which serves as a measure of difficulty. This, combined with the positive correlation with CEFR levels, makes entropy a suitable proxy for gap complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.2"
},
{
"text": "This work investigated the use of entropy as an evaluation measure for gaps in open cloze EFL tests. Our study revealed that the average gap entropy of a test correlates positively with proficiency levels, so easier tests will contain gaps with lower entropy. A comparison between randomly generated tests and the handcrafted gold standard tests showed that the former had much higher entropy in general, confirming our intuition that generating random gaps is not optimal and that entropy can be used to discriminate between good and bad tests. We also investigated the correlation between entropy and the number of valid answers per gap but results showed no consistent relationship, most likely due to the limitations of the n-gram language model used in this preliminary work. However, entropy was found to be a suitable proxy for gap complexity, which can be used to control the automatic generation of open cloze tests. Future work will address the limitations in this pilot study and investigate entropy on a larger sample.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "1 https://www.cambridgeenglish.org/exams-and-tests/\n2 https://www.statmt.org/lm-benchmark/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Predicting the difficulty of language proficiency tests",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Beinborn",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "517--529",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Beinborn, Torsten Zesch, and Iryna Gurevych. 2014. Predicting the difficulty of language profi- ciency tests. Transactions of the Association for Computational Linguistics, 2:517-529.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Candidate evaluation strategies for improved difficulty prediction of language tests",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Beinborn",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Beinborn, Torsten Zesch, and Iryna Gurevych. 2015. Candidate evaluation strategies for improved difficulty prediction of language tests. In Proceed- ings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 1-11, Denver, Colorado. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Predicting and Manipulating the Difficulty of Text-Completion Exercises for Language Learning",
"authors": [
{
"first": "Lisa",
"middle": [
"Marina"
],
"last": "Beinborn",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Marina Beinborn. 2016. Predicting and Manip- ulating the Difficulty of Text-Completion Exercises for Language Learning. Ph.D. thesis, Fachbereich Informatik, Technische Universität Darmstadt, Darm- stadt, Germany.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic question generation for vocabulary assessment",
"authors": [
{
"first": "Jonathan",
"middle": [
"C"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Gwen",
"middle": [
"A"
],
"last": "Frishkoff",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05",
"volume": "",
"issue": "",
"pages": "819--826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan C. Brown, Gwen A. Frishkoff, and Maxine Eskenazi. 2005. Automatic question generation for vocabulary assessment. In Proceedings of the Con- ference on Human Language Technology and Em- pirical Methods in Natural Language Processing, HLT '05, pages 819-826, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "KenLM: Faster and smaller language model queries",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197, Edinburgh, Scotland. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Manipulating the difficulty of c-tests",
"authors": [
{
"first": "Ji-Ung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Schwan",
"suffix": ""
},
{
"first": "Christian",
"middle": [
"M"
],
"last": "Meyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji-Ung Lee, Erik Schwan, and Christian M. Meyer. 2019. Manipulating the difficulty of c-tests. CoRR, abs/1906.06905.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic generation of cloze items for prepositions",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 2007,
"venue": "Eighth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "2173--2176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lee and Stephanie Seneff. 2007. Automatic generation of cloze items for prepositions. In Eighth Annual Conference of the International Speech Communication Association, pages 2173- 2176, Antwerp, Belgium.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An automatic multiple-choice question generation scheme for English adjective understanding",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2007,
"venue": "Workshop on Modeling, Management and Generation of Problems/Questions in eLearning",
"volume": "",
"issue": "",
"pages": "137--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Lin, L. Sung, and M. Chen. 2007. An auto- matic multiple-choice question generation scheme for english adjective understanding. In Workshop on Modeling, Management and Generation of Prob- lems/Questions in eLearning, pages 137-142. 15th International Conference on Computers in Educa- tion (ICCE 2007).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Language exercise generation: Emulating Cambridge open cloze",
"authors": [
{
"first": "Alexey",
"middle": [],
"last": "Malafeev",
"suffix": ""
}
],
"year": 2014,
"venue": "Int. J. Concept. Struct. Smart Appl",
"volume": "2",
"issue": "2",
"pages": "20--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexey Malafeev. 2014. Language exercise generation: Emulating cambridge open cloze. Int. J. Concept. Struct. Smart Appl., 2(2):20-35.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning to automatically generate fill-in-the-blank quizzes",
"authors": [
{
"first": "Edison",
"middle": [],
"last": "Marrese-Taylor",
"suffix": ""
},
{
"first": "Ai",
"middle": [],
"last": "Nakajima",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Matsuo",
"suffix": ""
},
{
"first": "Ono",
"middle": [],
"last": "Yuichi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "152--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edison Marrese-Taylor, Ai Nakajima, Yutaka Matsuo, and Ono Yuichi. 2018. Learning to automatically generate fill-in-the-blank quizzes. In Proceedings of the 5th Workshop on Natural Language Pro- cessing Techniques for Educational Applications, pages 152-156, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Computer-aided generation of multiple-choice tests",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
},
{
"first": "Le",
"middle": [
"An"
],
"last": "Ha",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the HLT-NAACL 03 Workshop on Building Educational Applications Using Natural Language Processing",
"volume": "",
"issue": "",
"pages": "17--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruslan Mitkov and Le An Ha. 2003. Computer-aided generation of multiple-choice tests. In Proceedings of the HLT-NAACL 03 Workshop on Building Edu- cational Applications Using Natural Language Pro- cessing, pages 17-22.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Predicting the difficulty of exercise items for dynamic difficulty adaptation in adaptive language tutoring",
"authors": [
{
"first": "Irina",
"middle": [],
"last": "Pandarova",
"suffix": ""
},
{
"first": "Torben",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Hartig",
"suffix": ""
},
{
"first": "Ahc\u00e8ne",
"middle": [],
"last": "Boubekki",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"Dale"
],
"last": "Jones",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Brefeld",
"suffix": ""
}
],
"year": 2019,
"venue": "International Journal of Artificial Intelligence in Education",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irina Pandarova, Torben Schmidt, Johannes Hartig, Ahc\u00e8ne Boubekki, Roger Dale Jones, and Ulf Brefeld. 2019. Predicting the difficulty of exercise items for dynamic difficulty adaptation in adaptive language tutoring. International Journal of Artifi- cial Intelligence in Education.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Measuring hint level in open cloze questions",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2009,
"venue": "Twenty-Second International FLAIRS Conference",
"volume": "",
"issue": "",
"pages": "460--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Pino and Maxine Eskenazi. 2009. Measuring hint level in open cloze questions. In Twenty-Second In- ternational FLAIRS Conference, pages 460-465.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A selection strategy to improve cloze question quality. Intelligent Tutoring Systems for Ill-Defined Domains: Assessment and Feedback in Ill-Defined Domains",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Pino, Michael Heilman, and Maxine Eskenazi. 2008. A selection strategy to improve cloze question quality. Intelligent Tutoring Systems for Ill-Defined Domains: Assessment and Feedback in Ill-Defined Domains., page 22.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Discriminative approach to fill-in-theblank quiz generation for language learners",
"authors": [
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Yuki",
"middle": [],
"last": "Arase",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "238--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keisuke Sakaguchi, Yuki Arase, and Mamoru Ko- machi. 2013. Discriminative approach to fill-in-the- blank quiz generation for language learners. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 238-242, Sofia, Bulgaria. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A mathematical theory of communication",
"authors": [
{
"first": "Claude",
"middle": [
"Elwood"
],
"last": "Shannon",
"suffix": ""
}
],
"year": 1948,
"venue": "The Bell System Technical Journal",
"volume": "27",
"issue": "3",
"pages": "379--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claude Elwood Shannon. 1948. A mathematical the- ory of communication. The Bell System Technical Journal, 27(3):379-423.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Predicting cloze task quality for vocabulary training",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Skory",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Skory and Maxine Eskenazi. 2010. Predicting cloze task quality for vocabulary training. In Pro- ceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications, pages 49-56, Los Angeles, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Gap-fill tests for language learners: Corpusdriven item generation",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "P",
"middle": [
"V S"
],
"last": "Avinesh",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ICON-2010: 8th International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Smith, P. V. S. Avinesh, and Adam Kilgarriff. 2010. Gap-fill tests for language learners: Corpus-driven item generation. In Proceedings of ICON-2010: 8th International Conference on Natural Language Processing, pages 1-6. Macmillan Publishers.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Measuring non-native speakers' proficiency of English by using a test with automatically-generated fill-in-the-blank questions",
"authors": [
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Fumiaki",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "Seiichi",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Second Workshop on Building Educational Applications Using NLP, EdAppsNLP 05",
"volume": "",
"issue": "",
"pages": "61--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eiichiro Sumita, Fumiaki Sugaya, and Seiichi Yamamoto. 2005. Measuring non-native speakers' proficiency of English by using a test with automatically-generated fill-in-the-blank questions. In Proceedings of the Second Workshop on Building Educational Applications Using NLP, EdAppsNLP 05, pages 61-68, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Item difficulty analysis of English vocabulary questions",
"authors": [
{
"first": "Yuni",
"middle": [],
"last": "Susanti",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Nishikawa",
"suffix": ""
},
{
"first": "Takenobu",
"middle": [],
"last": "Tokunaga",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Obari",
"suffix": ""
}
],
"year": 2016,
"venue": "CSEDU 2016 - Proceedings of the 8th International Conference on Computer Supported Education",
"volume": "1",
"issue": "",
"pages": "267--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuni Susanti, Hitoshi Nishikawa, Takenobu Tokunaga, and Hiroyuki Obari. 2016. Item difficulty analysis of English vocabulary questions. In CSEDU 2016 - Proceedings of the 8th International Conference on Computer Supported Education, volume 1, pages 267-274.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "An example calculation of candidate answers for a gap using the left and right context (in red). Candidate words are ranked from the most to the least probable.",
"num": null
},
"TABREF2": {
"type_str": "table",
"text": ".40 3.47 3.63 3.73 4.22 4.18 4.60 4.33 4.74 4.34 FCE 4.53 7.13 6.12 4.30 2.23 4.01 2.45 3.93 4.18 5.67 CAE 4.70 4.66 3.57 2.68 3.26 4.38 2.79 5.08 5.07 4.82 CPE 6.58 3.91 2.72 4.43 5.02 4.18 5.83 5.46 4.02 3.26",
"html": null,
"num": null,
"content": "<table><tr><td>Exam</td><td>1</td><td>2</td><td>3</td><td>Average gap entropy per test 4 5 6 7</td><td>8</td><td>9</td><td>10</td></tr><tr><td>KET 5</td><td/><td/><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF3": {
"type_str": "table",
"text": "Average gap entropy for the automatically generated tests. Values lower than the gold standard are marked in bold.",
"html": null,
"num": null,
"content": "<table><tr><td>Exam Gap in context</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"text": "Example gaps with the lowest and highest entropy.",
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"2\">Exam Pearson's \u03c1</td></tr><tr><td>KET</td><td>-0.1518</td></tr><tr><td>FCE</td><td>0.2333</td></tr><tr><td>CAE</td><td>0.0908</td></tr><tr><td>CPE</td><td>0.5149</td></tr></table>"
},
"TABREF5": {
"type_str": "table",
"text": "Correlation between entropy and the number of valid answers per gap.",
"html": null,
"num": null,
"content": "<table/>"
}
}
}
}