{
"paper_id": "Q14-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:11:15.619140Z"
},
"title": "Predicting the Difficulty of Language Proficiency Tests",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Beinborn",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab",
"institution": "University of Duisburg-Essen",
"location": {}
},
"email": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab",
"institution": "University of Duisburg-Essen",
"location": {}
},
"email": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab",
"institution": "University of Duisburg-Essen",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Language proficiency tests are used to evaluate and compare the progress of language learners. We present an approach for automatic difficulty prediction of C-tests that performs on par with human experts. On the basis of detailed analysis of newly collected data, we develop a model for C-test difficulty introducing four dimensions: solution difficulty, candidate ambiguity, inter-gap dependency, and paragraph difficulty. We show that cues from all four dimensions contribute to C-test difficulty.",
"pdf_parse": {
"paper_id": "Q14-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "Language proficiency tests are used to evaluate and compare the progress of language learners. We present an approach for automatic difficulty prediction of C-tests that performs on par with human experts. On the basis of detailed analysis of newly collected data, we develop a model for C-test difficulty introducing four dimensions: solution difficulty, candidate ambiguity, inter-gap dependency, and paragraph difficulty. We show that cues from all four dimensions contribute to C-test difficulty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In a labor market that is increasingly globalized, knowledge of at least one foreign language is more relevant than ever before. Due to increased mobility, multilingual skills are also required for private communication as friendships stretch across geographical and linguistic borders. In order to provide adequate language learning support, it is important to frequently evaluate learner progress on the basis of language proficiency tests that enable a fair comparison between learners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The test difficulty needs to match the intended target group as the test should be challenging for the learner but not lead to frustration. According to Vygotsky's zone of proximal development (Vygotsky, 1978) , the range of suitable material is very small. Thus, creating a test that fits this narrow target zone is a tedious and timeconsuming task. Teachers predict the difficulty of a test based on their teaching experience. However, as they already know the solutions, they cannot always anticipate the confusion a test might cause for learners. This results in a subjective difficulty estimation that often lacks the consistency required for comparing learners over different tests.",
"cite_spans": [
{
"start": 193,
"end": 209,
"text": "(Vygotsky, 1978)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The underlying principle of most language proficiency tests is the concept of reduced redundancy testing (Spolsky, 1969) . It is based on the idea that \"natural language is redundant\" and that more advanced learners can be dis-tinguished from beginners by their ability to deal with reduced redundancy. For language testing, redundancy can be reduced by eliminating words from a text and asking the learner to fill in the gap, also known as the cloze test. The C-test is a variant of the cloze test which contains more gaps but provides part of the solution as a hint and has been found to be a good estimate for language proficiency (Eckes and Grotjahn, 2006) .",
"cite_spans": [
{
"start": 105,
"end": 120,
"text": "(Spolsky, 1969)",
"ref_id": "BIBREF37"
},
{
"start": 634,
"end": 660,
"text": "(Eckes and Grotjahn, 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present an approach for determining the difficulty of C-tests that overcomes the mentioned drawbacks of subjective evaluation by teachers. Our approach is based on objective measurable properties and thus produces consistent results. We show that our approach performs on par with human experts and analyze to which extent Ctest difficulty is determined by individual gap properties (micro-level processing) and higher level dependencies (macro-level processing). On the theoretical level, our model provides new insights into the factors that affect difficulty in reduced redundancy testing. On the practical level, our results may help teachers to precisely evaluate the difficulty of a test and to foresee challenging parts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The C-test is a form of reduced redundancy testing and has been established as a standard entrance exam for many language centers. It usually consists of five coherent paragraphs or short texts. The example below consists of a single paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The C-Test",
"sec_num": "2"
},
{
"text": "The roots of humanity can be traced back to millions of years ago. T primary evid comes fr fossils -skulls, skel and bo fragments. Scien have ma tools th allow th to ext subtle infor from anc bones a their enviro settings. Mod forensic wo in t field a in labora can n provide a rich understanding of how our ancestors lived. 1 After an unaltered introductory sentence, every second word is transformed into a gap. When the intended number of gaps is reached (usually 20) , the rest of the text is left intact. For each gap, the smaller half of the word is provided and the missing part has to be completed by the learner. Since its introduction, the C-test has been researched from many angles and has been adapted for over 20 languages (see Grotjahn et al. (2002) for an overview).",
"cite_spans": [
{
"start": 325,
"end": 326,
"text": "1",
"ref_id": null
},
{
"start": 458,
"end": 470,
"text": "(usually 20)",
"ref_id": null
},
{
"start": 742,
"end": 764,
"text": "Grotjahn et al. (2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The C-Test",
"sec_num": "2"
},
{
"text": "The C in C-test stands for its origin in the cloze test. In cloze tests, full words are transformed into gaps according to a fixed deletion pattern (e.g. every 7th word).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C-Tests vs Cloze Tests",
"sec_num": "2.1"
},
{
"text": "The main problem with cloze tests is the ambiguity of the solution. Unless function words are deleted, the gap allows many alternative solutions such as synonyms and hypernyms, but also entirely different words that change the meaning of the text but also fit the context. Language teachers have proposed two ways of dealing with this ambiguity: the application of relaxed scoring schemes and the use of distractors. In relaxed scoring, teachers accept all tolerable candidates for a gap and not only the intended solution as in exact scoring. Unfortunately, this scoring method turned out to be quite subjective and time-consuming as it is not possible to anticipate all tolerable solutions . The use of distractors circumvents this open solution space by providing a closed set of candidates from which the solution needs to be picked. Several approaches have been proposed for automatic distractor selection (Sakaguchi et al., 2013; Zesch and Melamud, 2014) to make sure that the distractors are not too hard nor too easy and are not a valid solution themselves. However, the presence of the correct solution in the distractor set enables the option of random guessing leading to biased results.",
"cite_spans": [
{
"start": 911,
"end": 935,
"text": "(Sakaguchi et al., 2013;",
"ref_id": "BIBREF32"
},
{
"start": 936,
"end": 960,
"text": "Zesch and Melamud, 2014)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C-Tests vs Cloze Tests",
"sec_num": "2.1"
},
{
"text": "In order to overcome this and other weaknesses of the cloze test, Klein-Braley and Raatz (1984) propose the Ctest as a more stable alternative. Thorough analyses following the principles of test theory indicate advantages of the C-test over the cloze test regarding empirical validity, reliability, and correlation with other language tests (Babaii and Ansary, 2001; Klein-Braley, 1997; Jafarpur, 1995) . For automatic approaches, the following properties of the C-tests are beneficial: The given prefix restricts the solution space to a single solution (in almost all cases) which enables automatic scoring without providing a guessing option. In addition, the prefix hint allows for a narrower deletion pattern (every second gap) providing more empirical evidence for the students' abilities on less text.",
"cite_spans": [
{
"start": 66,
"end": 95,
"text": "Klein-Braley and Raatz (1984)",
"ref_id": "BIBREF22"
},
{
"start": 341,
"end": 366,
"text": "(Babaii and Ansary, 2001;",
"ref_id": "BIBREF1"
},
{
"start": 367,
"end": 386,
"text": "Klein-Braley, 1997;",
"ref_id": "BIBREF26"
},
{
"start": 387,
"end": 402,
"text": "Jafarpur, 1995)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C-Tests vs Cloze Tests",
"sec_num": "2.1"
},
{
"text": "As the given prefixes reduce the extent to which productive skills are required, Cohen (1984) considers the Ctest to be a test of reading ability examining only recognition. However, Jakschik et al. (2010) transform the C-test into a true recognition test by providing multiple choice options and find that this variant is significantly easier than open C-test gaps. This indicates that C-test solving requires both, receptive and productive skills, and we reflect this in our feature choice.",
"cite_spans": [
{
"start": 81,
"end": 93,
"text": "Cohen (1984)",
"ref_id": "BIBREF9"
},
{
"start": 183,
"end": 205,
"text": "Jakschik et al. (2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C-Tests vs Cloze Tests",
"sec_num": "2.1"
},
{
"text": "Previous works in the field of educational natural language processing approach language proficiency tests from a generation perspective. The focus is on generating closed formats such as multiple choice cloze tests (Mostow and Jang, 2012; Agarwal and Mannem, 2011; Mitkov et al., 2006) , vocabulary exercises (Skory and Eskenazi, 2010; Heilman et al., 2007; Brown et al., 2005) and grammar exercises (Perez-Beltrachini et al., 2012) . The difficulty of these exercises is usually determined by the choice of distractors as students have to discriminate the correct answer from a provided set of candidates.",
"cite_spans": [
{
"start": 216,
"end": 239,
"text": "(Mostow and Jang, 2012;",
"ref_id": "BIBREF28"
},
{
"start": 240,
"end": 265,
"text": "Agarwal and Mannem, 2011;",
"ref_id": "BIBREF0"
},
{
"start": 266,
"end": 286,
"text": "Mitkov et al., 2006)",
"ref_id": "BIBREF27"
},
{
"start": 310,
"end": 336,
"text": "(Skory and Eskenazi, 2010;",
"ref_id": "BIBREF36"
},
{
"start": 337,
"end": 358,
"text": "Heilman et al., 2007;",
"ref_id": "BIBREF18"
},
{
"start": 359,
"end": 378,
"text": "Brown et al., 2005)",
"ref_id": "BIBREF6"
},
{
"start": 401,
"end": 433,
"text": "(Perez-Beltrachini et al., 2012)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Difficulty",
"sec_num": "2.2"
},
{
"text": "C-tests follow a fixed construction pattern and are therefore easy to generate. As opposed to closed formats, the candidate space is only limited by the provided prefix and the length constraint. It is thus harder to determine the difficulty of a C-test because it is influenced by a combination of many text-and word-specific factors. The search for the factors that determine the difficulty of C-tests is tightly connected to the question of construct validity: \"Which skills does the C-test measure?\" While advocates of the C-test argue that it measures general language proficiency involving all levels of language (Eckes and Grotjahn, 2006; Sigott, 1995; Klein-Braley, 1985) others reduce it to a grammar test (Babaii and Ansary, 2001) or rather a vocabulary test (Chapelle, 1994; Singleton and Little, 1991) . 2 In our model, we aim at combining features touching all levels of language. The earliest analyses of C-test difficulty focused on the paragraph instead of the gap level. Klein-Braley (1984) performs a linear regression analysis with only two difficulty indicatorsaverage sentence length and type-token ratio -obtaining good results for her target group. Eckes (2011) intend to calibrate C-test difficulty using a Rasch model in order to compare different C-tests and build a test pool. 3 Kamimoto (1993) was the first to perform classical item analysis on the gap level. He created a tailored C-test that only contains selected gaps in order to better discriminate between the students. However, the gap selection is based on previous test results instead of specific gap features and thus cannot be applied on new tests.",
"cite_spans": [
{
"start": 619,
"end": 645,
"text": "(Eckes and Grotjahn, 2006;",
"ref_id": "BIBREF12"
},
{
"start": 646,
"end": 659,
"text": "Sigott, 1995;",
"ref_id": "BIBREF33"
},
{
"start": 660,
"end": 679,
"text": "Klein-Braley, 1985)",
"ref_id": "BIBREF24"
},
{
"start": 715,
"end": 740,
"text": "(Babaii and Ansary, 2001)",
"ref_id": "BIBREF1"
},
{
"start": 769,
"end": 785,
"text": "(Chapelle, 1994;",
"ref_id": "BIBREF8"
},
{
"start": 786,
"end": 813,
"text": "Singleton and Little, 1991)",
"ref_id": "BIBREF35"
},
{
"start": 988,
"end": 1007,
"text": "Klein-Braley (1984)",
"ref_id": "BIBREF23"
},
{
"start": 1172,
"end": 1184,
"text": "Eckes (2011)",
"ref_id": "BIBREF13"
},
{
"start": 1304,
"end": 1305,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Difficulty",
"sec_num": "2.2"
},
{
"text": "Previous work on gap difficulty is based on correlation analyses. Brown (1989) identifies the word class, the local word frequency, and readability measures as factors correlating with cloze gap difficulty. Sigott (1995) ines word frequency, word class, and constituent type of the gap for the C-test and finds high correlation only for the word frequency. Klein-Braley (1996) identifies additional error patterns related to production problems (right word stem in wrong form) and early closure, i.e. the solution works locally but not in the larger context. The cited works focus on the correlation between gap features and C-test difficulty but did not attempt to actually predict difficulty. In the following section, we present the results of our data analysis targeted towards building up a model for C-test difficulty.",
"cite_spans": [
{
"start": 207,
"end": 220,
"text": "Sigott (1995)",
"ref_id": "BIBREF33"
},
{
"start": 357,
"end": 376,
"text": "Klein-Braley (1996)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Difficulty",
"sec_num": "2.2"
},
{
"text": "For a better understanding of C-test difficulty, we need to perform data analysis. As suitable data was not available in digital form, we conducted a data collection study. In cooperation with the language center at Technische Universit\u00e4t Darmstadt, we gathered data from 3 test sessions. The C-tests are conducted in order to assign students to courses matching their language proficiency. One test consists of 5 paragraphs with 20 gaps each. We created a web interface in which the test had to be filled. Most students finished before the time limit of 20 minutes was reached. Weaker students left some gaps unfilled but did not ask for more time. In the first session, 357 participants filled in the same C-test (T1). In the second session, three different test instances (T2, T3, T4) were assigned randomly to 463 new participants. In the third session, the tests were composed by randomly choosing paragraphs from 5 groups, each consisisting of 5 paragraphs. A random combination of 5 paragraphs (one from each group) was then assigned to 1050 new participants. All participants are students enrolled at the university. Our analysis is based on the first two sessions and we use the data from the third session as test data. As six paragraphs of the third session had already been administered before, we remove these from the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Analysis",
"sec_num": "3"
},
{
"text": "As C-tests are designed mainly for the goal of comparing students, the difficulty of different tests should be balanced. The difficulty of a C-test is usually measured by the mean error rate over all gaps. The error rate of a single gap is the ratio of false answers to all answers. A higher mean error rate thus indicates higher test difficulty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-level Analysis",
"sec_num": "3.1"
},
{
"text": "As we see in Table 1 , the mean error rate varies between the different tests, although they had been carefully (but manually) designed to be equally difficult. The first session was generally easier than the second session, and T2 stood out as particularly difficult. Within this test setting, it is thus not fair to compare students by their overall score, if they completed different tests. Automatic difficulty prediction prior to the test session could improve the comparability of test results. Figure 1 additionally shows the results for each paragraph. The teachers arrange the five paragraphs of a test with assumed ascending difficulty. We see that this works as a general tendency (paragraph 5 is more difficult than paragraph 1), but a true ordering has not been achieved for any test. In general, the high standard deviations indicate that the mean error rate is not a very informative measure, because each test contains very easy and very difficult gaps. In the extreme case, half of the gaps can be solved by all learners and the other half by almost no one. The test is then assigned a medium difficulty, but the results are not useful for discrimination between learners. We therefore now analyze the difficulty on the gap-level.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 1",
"ref_id": null
},
{
"start": 501,
"end": 509,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Text-level Analysis",
"sec_num": "3.1"
},
{
"text": "Before we can further analyze single gaps, we need to examine whether the number of participants in our study was sufficient to obtain reliable error rates on the gap level. We calculate the standard error for each gap with Figure 2 shows the results for the first session (the results for the other three tests are similar). We see that already with 50 participants, the standard error is reduced to an acceptable level of 0.05. As we obtained data from more than 140 participants for each test, the obtained gap-level error rates are very reliable.",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 232,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Gap-level Analysis",
"sec_num": "3.2"
},
{
"text": "Range of error rates In our data, the error rates range from 0.01 to 0.99 and are almost continuously distributed. Figure 3 shows an example for the high variance of the gap difficulty within a single paragraph. The error rates in the example are indicated by the size of the circles.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Gap-level Analysis",
"sec_num": "3.2"
},
{
"text": "Answer variety Even for the difficult gaps, the students always tried to provide a solution 5 because false answers did not have a negative effect on the result. This behavior leads to a high answer variety (19 different answers per gap on average). The number of provided answers correlates with the error rate (Pearson correlation of 0.57). This indicates that harder gaps trigger more alternatives and do not provoke the same mistake by everyone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-level Analysis",
"sec_num": "3.2"
},
{
"text": "Spelling errors Many of the false answers are variants of the correct solution. The students recognize the solution word but fail to produce it correctly. Unfortunately, the line between a spelling error and a wrong solution cannot be clearly drawn. If a plural s is missing we cannot distinguish between a typo and lack of grammatical understanding. Spelling errors often also form new words e.g. of vs. off or then vs. than and we cannot decide whether it is a spelling error or a wrong word choice. As the generous time limit allows the students to revise their solutions for typos, we consider them as normal errors in line with . Hence, the potential problems for foreign language learners are manifold and hard to anticipate. We took a closer look at the false answers in order to gain deeper understanding of the dimensions that lead to wrong answers and therefore to higher difficulty. We find that the difficulty of C-tests is determined by a combination of many factors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-level Analysis",
"sec_num": "3.2"
},
{
"text": "In order to establish a shared terminology, learner strategies for C-test solving have been categorized as micro-level and macro-level processing strategies (Babaii and Ansary, 2001) . Psycholinguistic analyses (Sigott, 2006; Grotjahn and Stemmer, 2002) discuss in detail that both strategies are required for successful C-test solving. Therefore, we developed a model for C-test difficulty that incorporates features from both processing levels (see Figure 4 ).",
"cite_spans": [
{
"start": 157,
"end": 182,
"text": "(Babaii and Ansary, 2001)",
"ref_id": "BIBREF1"
},
{
"start": 211,
"end": 225,
"text": "(Sigott, 2006;",
"ref_id": "BIBREF34"
},
{
"start": 226,
"end": 253,
"text": "Grotjahn and Stemmer, 2002)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 451,
"end": 459,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Gap-level Analysis",
"sec_num": "3.2"
},
{
"text": "Micro-level processing only deals with the solution of the gap and its surrounding micro context. The micro context consists of the word preceding the solution, the solution, and the following word. Both, the preceding and the following word are intact (i.e. not mutilated as gap) and can be used as solution hints by every learner, independent of the performance on the other gaps. In order to determine the difficulty of a gap based on micro-level cues, we estimate two dimensions: the solution difficulty and the candidate ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-level Analysis",
"sec_num": "3.2"
},
{
"text": "Macro-level processing takes the wider context into account and evaluates the gap in relation to other elements in the sentence and in the whole paragraph. The difficulty of a gap on the macro-level is determined by two dimensions: the inter-gap dependency and the paragraph difficulty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-level Analysis",
"sec_num": "3.2"
},
{
"text": "In the remainder of this section, we elaborate on the individual dimensions. We provide examples that illustrate the described phenomena and introduce the features that operationalize them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gap-level Analysis",
"sec_num": "3.2"
},
{
"text": "This micro-level dimension comprises features that approximate whether a learner knows the solution and can correctly produce it in the context. We identified four important phenomena that contribute to the solution difficulty: word familiarity, cognateness, inflection, and phonetic complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
{
"text": "Word familiarity If we compare the solutions of the easiest (example 1) and the most difficult gap (example 2), it is obvious that you is easier because it is more familiar to the participants than plentiful. 6 1. If y are looking for new experiences, ... [you] 2. ..., people may try self-employment because the opportunities seem plen and financing is easy to get. [plentiful] The probability that a learner knows a word is usually estimated by the word frequency; more frequent words are more likely to be known. We therefore calculate the frequency of the solution and also its length as more frequent words tend to be shorter in English. In previous work, Brown (1989) calculates the frequency of the target word on the basis of the current test text. This is clearly a biased estimate of the frequency, but it is still identified as a good indicator for cloze gap difficulty. Sigott (1995) calculates the frequency of the solution word using counts from the SUSANNE corpus. 7 For our calculations, we use the larger Web1T corpus (Brants and Franz, 2006) and extract normalized probabilities instead of absolute frequencies for better comparison.",
"cite_spans": [
{
"start": 209,
"end": 210,
"text": "6",
"ref_id": null
},
{
"start": 367,
"end": 378,
"text": "[plentiful]",
"ref_id": null
},
{
"start": 667,
"end": 673,
"text": "(1989)",
"ref_id": null
},
{
"start": 882,
"end": 895,
"text": "Sigott (1995)",
"ref_id": "BIBREF33"
},
{
"start": 1035,
"end": 1059,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
{
"text": "Furthermore, a gap is easier to solve, if the solution occurs in a very typical context, e.g. in the micro context States o America, the candidate of is clearly favored, while in the context write o paper, the candidates on, our and off are more probable. In order to account for typical phrases, we calculate the normalized trigram probability of the micro context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
{
"text": "Even if a word seems familiar to a learner, it might be problematic when used in a compound (e.g. coastline) because the prefix only provides information about the first part of the word. In our approach, compounds are detected using a word splitting algorithm with an English dictionary. 8 Another issue are polysemous words, as learners might know one sense of a word but not be aware of the existence of a second sense. Polysemy interferes with frequency, e.g. the word well has a high frequency, but it occurs only rarely in its sense fountain. In order to account for polysemy, we count the number of represented word senses for the solution in the lexical-semantic resource UBY (Gurevych et al., 2012).",
"cite_spans": [
{
"start": 289,
"end": 290,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
{
"text": "The two senses of well also differ in their word class. The word class has been studied as a difficulty indicator by several researchers but with mixed results. Brown (1989) finds that function words are easier to solve, while Klein-Braley (1996) claims that prepositions are often harder for learners. Sigott (1995) could not confirm any effect of the word class on C-test difficulty.",
"cite_spans": [
{
"start": 167,
"end": 173,
"text": "(1989)",
"ref_id": null
},
{
"start": 227,
"end": 246,
"text": "Klein-Braley (1996)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
{
"text": "The word class is determined by identifying the part-ofspeech (POS) tag. As additional feature, we calculate the probability of the POS sequence of the micro context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
{
"text": "Cognateness Frequency is not the only indicator for word familiarity and can sometimes even be misleading (Beinborn et al., 2014) . Many solution words are cognates, i.e. they are very similar to words in other languages like information or laboratory. In reading comprehension, cognates are known as facilitators because their meaning can be deducted from the form similarity to a word in the mother tongue. We therefore assumed that cognate gaps are easier to solve. However, we observe that they are more likely to trigger production problems. In the 20 gaps with the highest answer variety (33 or more different answers), all solutions have a Latin stem. 9 The 20 gaps with the lowest answer variety (5 or less different answers) are very basic vocabulary. 10 The production problems are related to the different character combinations and the lower frequency of words with Latin stem. In addition, these words might not be part of the students active vocabulary and are only guessed because they occur as cognates in the students L1. This is supported by the fact that many of the cognate answers resemble orthographic principles from other languages, e.g. for skeletons we find *skellets, *skelleton(s), *skelets, *skelletts, *skeletton(s), *skeltons, *skeletes, and *skelette(s). 11 In order to account for this phenomenon, we estimate the cognateness of words by gathering data from four different lists. We retrieve cognates from UBY using string similarity and from a cognate production algorithm (Beinborn et al., 2013) . In addition, we consult the COCA list of academic words 12 and a list of words with latin roots. 13 Inflection Many errors are caused by wrong morphological inflection as in this example:",
"cite_spans": [
{
"start": 106,
"end": 129,
"text": "(Beinborn et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 659,
"end": 660,
"text": "9",
"ref_id": null
},
{
"start": 761,
"end": 763,
"text": "10",
"ref_id": null
},
{
"start": 1287,
"end": 1289,
"text": "11",
"ref_id": null
},
{
"start": 1507,
"end": 1530,
"text": "(Beinborn et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 1630,
"end": 1632,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
{
"text": "And in har times like these, ... [harder] The base form hard (72) is provided more often than the correct comparative harder (48), although it is too short. Other inflection errors are caused by singular/plural and adjective/adverb confusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
{
"text": "In order to account for this phenomenon, we test whether the solution is in lemma form or carries any inflection markers using a lemmatizer. We also check whether the word occurs elsewhere in the text in full form (i.e. not as a gap) because it facilitates the correct production for the student. This feature is comparable to the semantic cache used by Brown (1989) .",
"cite_spans": [
{
"start": 354,
"end": 366,
"text": "Brown (1989)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
{
"text": "Phonetic complexity Wrong answer variants for C-test gaps are often rooted in phonetic problems. The spelling of a word is more difficult, if it contains a rare sequence of characters. The word appropriate, for example, triggers 69 different answers, 40 of them were provided only once. In addition, a spelling error is more likely to occur, in words with rare grapheme-phoneme mapping as in Wednesday. We build a character-based language model that indicates the probability of a character sequence using BerkeleyLM (Pauls and Klein, 2011) . In addition, we build a phonetic model using phonetisaurus, a statistical alignment algorithm that maps characters onto phonemes. 14 Both models are trained only on words from the Basic English list in order to reflect the knowledge of a language learner. 15 Based on this scarce data, the phonetic model only learns the most frequent character-tophoneme mappings and assigns higher phonetic scores to less general letter sequences. We use this score as a feature and additionally calculate the string similarity between the output and the correct pronunciation in the CMU dictionary. 16 Another source for phonetic problems occurs, if the prefix boundary splits the word in a way that leads to another pronunciation pattern compared to the solution word as in this example.",
"cite_spans": [
{
"start": 517,
"end": 540,
"text": "(Pauls and Klein, 2011)",
"ref_id": "BIBREF29"
},
{
"start": 1128,
"end": 1130,
"text": "16",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
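The character-sequence probability feature can be sketched as a tiny add-one-smoothed character bigram model. This is an assumption-laden illustration: the paper trains BerkeleyLM and a phonetisaurus model on the Basic English list, whereas the toy vocabulary, bigram order, and smoothing here are our own choices.

```python
# Minimal character-bigram language model with add-one smoothing, illustrating
# the "rare character sequence" feature. Toy vocabulary only; not the paper's
# BerkeleyLM setup.
from collections import Counter
from math import log

def train_char_bigrams(words):
    counts, context = Counter(), Counter()
    for w in words:
        chars = "^" + w + "$"                      # word-boundary markers
        for a, b in zip(chars, chars[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    return counts, context

def char_logprob(word, counts, context, vocab_size=28):
    chars = "^" + word + "$"
    return sum(log((counts[(a, b)] + 1) / (context[a] + vocab_size))
               for a, b in zip(chars, chars[1:]))

words = ["hard", "harder", "hand", "have", "day", "way"]
counts, context = train_char_bigrams(words)
# Length-normalised score lets us compare words of different lengths:
score = lambda w: char_logprob(w, counts, context) / (len(w) + 1)
assert score("hard") > score("wdnsd")   # familiar sequence beats a rare one
```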
{
"text": "It is not easy to design and build a mac that is both, efficient and durable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
{
"text": "[machine] Due to the syllable split, the prefix provokes answers with the pronunciation [mac] such as macanics, mac(h)anism, macanical, macbook, macphone, and macro instead of the original pattern [maS]. A similar issue occurs when the prefix splits a compound such as greenhouse. We check if the prefix boundary occurs within a compound or a syllable using a hyphenation dictionary. 17",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Difficulty",
"sec_num": "4.1"
},
{
"text": "This micro-level dimension examines whether a competing candidate is more accessible for the learner in the given context. Even if the learner is familiar with the solution word, she might still not be able to produce it, because a competing candidate is stronger. For example, in 42 gaps in our data, an alternative answer is provided more frequently than the intended solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ambiguity",
"sec_num": "4.2"
},
{
"text": "Some of these gaps actually have more than one possible solution as the following example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ambiguity",
"sec_num": "4.2"
},
{
"text": "Scientists have ma tools that allow them to extract subtle information from ancient bones and their environmental settings. [many] Instead of the correct solution many (89), most students provided made (238) which can also be considered correct here. These cases had not been anticipated by the language teachers, they only encoded one solution in the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ambiguity",
"sec_num": "4.2"
},
{
"text": "In other cases, alternative answers seem very probable to the students but are nevertheless false.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ambiguity",
"sec_num": "4.2"
},
{
"text": "A natural blanket of greenhouse gases in the atmosphere keeps the planet warm enough for life as we know it at a comfortable 15C today. Human-caused emissions of greenhouse gases have made the blanket thi , trapping heat and leading to a global warming.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ambiguity",
"sec_num": "4.2"
},
{
"text": "[thicker] Instead of the correct solution thicker (12), the students provided many alternative solutions more often: thinner (31), thin (19), thick (18), this 14, thing (14). Thinner fits syntactically but completely changes the semantics of the sentences as it is the antonym of the correct solution. The learner needs to apply world knowledge to understand that a thinner blanket would not trap heat. In our model, we want to account for both cases, as it would be very helpful if ambiguous gaps could be automatically detected. This aspect has been neglected in previous work on C-test difficulty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ambiguity",
"sec_num": "4.2"
},
{
"text": "In order to account for competing candidates, we first determine the candidate space and then describe our features approximating the probability that a competing candidate confuses the student.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ambiguity",
"sec_num": "4.2"
},
{
"text": "Candidate space Prior to the tests, the students are informed about the quite restrictive length constraint. The given prefix of C-test gaps consists always of the smaller half of the solution: if 3 characters are provided as prefix, the correct word can only consist of 6 or 7 characters. This can be a useful indicator for the solution, but the data reveals that in approx. 40% of the false answers, the length constraint is not respected. The absolute number of false length answers is higher for weaker students. However, the proportion of false length answers relative to all false answers is higher for stronger (0.45) than for weaker (0.32) students.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ambiguity",
"sec_num": "4.2"
},
{
"text": "Length violations can be caused by candidates that seem viable for the context and are more accessible than the solution or by wrong inflection of the word ending. In other cases, it is obvious that the student does not find a proper solution and provides just anything that remotely fits. It would be interesting to repeat the test with the constraint that false answers have a negative influence on the overall score in order to find out whether the students are aware of the length violation. Bresnihan and Ray (1992) show that students perform better on the C-test, if the length of the solution is graphically indicated by dashes or dots which supports the assumption that length violations are often not noticed in the standard C-test. As we want to account for this phenomenon, we decide to relax the length constraint. We only allow a length tolerance of 1, i.e. for a prefix of length 3, we consider candidates with 5 to 8 (instead of 6 to 7) characters, as the candidate space would be too large otherwise. We noticed that even candidates with wrongly spelled prefix can be competitors, e.g. some students provided the answer *demage for the prefix dem instead of the correct solution demands. The word damage actually fits semantically into the gap, but as the prefix is different, we currently do not add such cases to the candidate space.",
"cite_spans": [
{
"start": 496,
"end": 520,
"text": "Bresnihan and Ray (1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ambiguity",
"sec_num": "4.2"
},
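The relaxed length constraint can be written down directly. The helper names below are hypothetical; only the arithmetic (the prefix is the smaller half of the word, plus a tolerance of 1) comes from the text.

```python
# Candidate-space length filter: a prefix of length p admits solutions of
# 2p or 2p+1 characters; a tolerance of 1 widens this to 2p-1 .. 2p+2
# (e.g. prefix length 3 -> 5 to 8 characters, as in the paper's example).
def allowed_lengths(prefix_len: int, tolerance: int = 1) -> range:
    return range(2 * prefix_len - tolerance, 2 * prefix_len + 2 + tolerance)

def candidates(prefix: str, vocabulary, tolerance: int = 1):
    lengths = allowed_lengths(len(prefix), tolerance)
    return [w for w in vocabulary
            if w.startswith(prefix) and len(w) in lengths]

vocab = ["many", "made", "mass", "manage", "mac", "machine"]
assert list(allowed_lengths(3)) == [5, 6, 7, 8]   # prefix "mac" -> 5..8 chars
assert candidates("mac", vocab) == ["machine"]
```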
{
"text": "In order to account for candidate ambiguity, we rank all candidates according to three criteria: the unigram frequency, the trigram frequency of the micro context, and the parse score. Statistical parsers usually provide parse scores in order to determine the best variant. This score cannot be used as an absolute value because it depends on the sentence length but it helps to distinguish between candidates. A candidate that produces another parse tree than the solution is less likely to be correct. For each ranking, we determine the rank of the solution and the number of candidates above a fixed threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ambiguity",
"sec_num": "4.2"
},
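The two ranking-based features can be illustrated as follows. The scores are invented stand-ins for the unigram/trigram frequencies and parse scores named above; the function name `rank_features` is our own.

```python
# Derive two features from a ranked candidate space: the rank of the intended
# solution and the number of candidates above a fixed score threshold.
# Scores here are invented illustrations, not real corpus frequencies.
def rank_features(candidate_scores: dict, solution: str, threshold: float):
    ranked = sorted(candidate_scores, key=candidate_scores.get, reverse=True)
    solution_rank = ranked.index(solution) + 1      # 1 = solution ranked first
    n_above = sum(score > threshold for score in candidate_scores.values())
    return solution_rank, n_above

# Hypothetical unigram scores for the gap "thi___" from the example above:
scores = {"thicker": 0.02, "thinner": 0.05, "thin": 0.08, "thing": 0.10}
rank, n_above = rank_features(scores, "thicker", threshold=0.04)
assert rank == 4        # the solution is outranked by three competitors
assert n_above == 3
```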
{
"text": "In addition, we take the intersection of the best candidates from the above rankings, combine them into a set of top candidates that are likely to compete with the solution and determine its size. Moreover, we calculate the maximum string similarity of the candidates with the solution in order to capture very close variants (e.g. base and basis).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Ambiguity",
"sec_num": "4.2"
},
{
"text": "This macro-level dimension assesses the dependency of the current gap on previous gaps: can it be solved, even if the previous gap has not been solved? In previous work, Harsch and Hartig (2010) examine dependencies between individual gaps using a Rasch testlet model and find that some gaps strongly depend on each other, while others can be solved independently.",
"cite_spans": [
{
"start": 170,
"end": 194,
"text": "Harsch and Hartig (2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-gap dependency",
"sec_num": "4.3"
},
{
"text": "At the same time, fertility is set to fall as women leave childbirth la and la .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-gap dependency",
"sec_num": "4.3"
},
{
"text": "[later] In these gaps, later is repeated which makes it easy to fill in the second gap, if the first one is solved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-gap dependency",
"sec_num": "4.3"
},
{
"text": "The dependency of a gap is related to its position and the difficulty of the preceding word. If a gap is preceded by a very difficult gap, the available context is damaged which can have an effect on the difficulty of the following gaps. A gap occurring towards the end of a sentence, is also more likely to be influenced by limited context. We thus calculate the position of the gap and the number of previous gaps in the sentence and in the paragraph. We check if the same solution also occurs in another gap to account for repetition. In order to estimate the difficulty of the previous gap, we calculate its unigram and trigram probability. If we already had a good difficulty prediction algorithm, we could perform incremental prediction and use the difficulty label of the previous gap as a feature for the current gap, but this is left to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-gap dependency",
"sec_num": "4.3"
},
{
"text": "In addition, we check for gaps with the prefix th because they enable many reference words such as this, that, there, then, these, those, they, and their. The student needs to perform co-reference resolution in order to select the correct word. These referential gaps usually cannot be solved on the basis of the micro context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-gap dependency",
"sec_num": "4.3"
},
{
"text": "This macro-level dimension determines whether the learner is generally able to understand the text. The overall difficulty of a paragraph contributes to the difficulty of the individual gaps because more complex texts are harder to parse for language learners, especially when every second word is a gap. Thus, the available context for each gap is assumed to be lower in more difficult paragraphs. As we have seen in Section 3.1, the difficulty of the gaps within one paragraph varies strongly. We therefore assume that the paragraph difficulty only adds a constant effect to the overall gap difficulty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paragraph difficulty",
"sec_num": "4.4"
},
{
"text": "The difficulty of a paragraph is inversely related to its readability. We calculate the following readability features for the whole paragraph and for the sentence containing the gap. Average word and sentence length are the underlying basis of traditional readability measures such as Flesch-Kincaid and Fry which correlate with cloze test difficulty according to Brown (1989) . We calculate both, but do not find much variety as the paragraphs in our data are all of comparable length (64-99 words, 3-7 sentences, 4.85 characters per word).",
"cite_spans": [
{
"start": 365,
"end": 377,
"text": "Brown (1989)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paragraph difficulty",
"sec_num": "4.4"
},
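The Flesch-Kincaid grade level mentioned above can be computed as in this sketch. The vowel-group syllable counter is a rough approximation we introduce for illustration; it is not the counter used in the paper's pipeline.

```python
# Flesch-Kincaid grade level = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59,
# with a crude vowel-group heuristic standing in for a real syllable counter.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

simple = "The cat sat. The dog ran."
complex_ = "Anthropogenic emissions substantially exacerbate atmospheric concentrations."
assert flesch_kincaid_grade(simple) < flesch_kincaid_grade(complex_)
```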
{
"text": "The type-token ratio, the verb variation, and the pronoun ratio are used as indicators for lexical diversity and referentiality. Klein-Braley (1984) already determined the type-token ratio as useful cue for paragraph difficulty prediction. We also use syntactic readability features such as the number of entity mentions, the number of certain POS types (e.g. noun, determiner, adjective) and the number of certain phrase patterns (e.g. verbal phrase, noun phrase, subordinate phrase).",
"cite_spans": [
{
"start": 129,
"end": 148,
"text": "Klein-Braley (1984)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paragraph difficulty",
"sec_num": "4.4"
},
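The lexical-diversity cues can be sketched as follows; this is a toy version without POS tagging (the paper computes them with DKPro Core), and the pronoun list is our own illustration.

```python
# Type-token ratio and pronoun ratio as simple lexical-diversity and
# referentiality cues, computed on a whitespace-tokenised paragraph.
def type_token_ratio(tokens):
    return len(set(tokens)) / len(tokens)

# Illustrative closed-class list; a real pipeline would use POS tags.
PRONOUNS = {"he", "she", "it", "they", "this", "that", "these", "those",
            "i", "we", "you"}

def pronoun_ratio(tokens):
    return sum(t in PRONOUNS for t in tokens) / len(tokens)

tokens = "it is warm and it keeps the planet warm".split()
assert type_token_ratio(tokens) == 7 / 9   # 7 distinct types over 9 tokens
assert pronoun_ratio(tokens) == 2 / 9      # two occurrences of 'it'
```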
{
"text": "Having introduced all four dimensions of C-test difficulty, we now report on the results of the actual difficulty prediction. Difficulty prediction of C-tests has up to now only been performed on the paragraph level (Klein-Braley, 1984; Traxel and Dresemann, 2010) . In this article, we go beyond paragraphs and predict the difficulty of gaps. We first determine the human performance on the task and use it as a reference for the performance of the machine learning approach based on our difficulty model.",
"cite_spans": [
{
"start": 216,
"end": 236,
"text": "(Klein-Braley, 1984;",
"ref_id": "BIBREF23"
},
{
"start": 237,
"end": 264,
"text": "Traxel and Dresemann, 2010)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paragraph difficulty",
"sec_num": "4.4"
},
{
"text": "A3 Median A1-A3 Correct Prediction 200 209 192 213 Overestimation 90 99 83 101 Underestimation 107 89 118 84 NA 2 2 6 1 Accuracy 0.50 0.52 0.48 0.53 Table 2 : Results of the human annotations",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 182,
"text": "Correct Prediction 200 209 192 213 Overestimation 90 99 83 101 Underestimation 107 89 118 84 NA 2 2 6 1 Accuracy 0.50 0.52 0.48 0.53 Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "A2",
"sec_num": null
},
{
"text": "Due to the high number of participants, we already have precise gap-level error rates (cf. Figure 2 ) for our tests. We now want to determine to what extent human annotators are able to predict these error rates. For this purpose, we asked three English language teachers to assign a difficulty category to each gap according to the following scheme:",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 99,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Human Difficulty Prediction",
"sec_num": "5"
},
{
"text": "1: Very easy gap (error rate \u2264 0.25) 2: Easy gap (0.25 < error rate \u2264 0.5) 3: Medium gap (0.5 < error rate \u2264 0.75) 4: Difficult gap (error rate > 0.75)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Difficulty Prediction",
"sec_num": "5"
},
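The binning scheme maps directly to a small function; the boundaries below are transcribed from the annotation guidelines quoted above.

```python
# Map a gap's measured error rate to the 1-4 difficulty class used by the
# annotators (boundaries taken from the scheme above).
def difficulty_class(error_rate: float) -> int:
    if error_rate <= 0.25:
        return 1   # very easy
    if error_rate <= 0.5:
        return 2   # easy
    if error_rate <= 0.75:
        return 3   # medium
    return 4       # difficult

assert [difficulty_class(r) for r in (0.1, 0.25, 0.3, 0.75, 0.9)] == [1, 1, 2, 3, 4]
```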
{
"text": "The annotation was performed on the same 20 texts as described in Section 3.1. The teachers were already familiar with these texts, as they had chosen them for the testing period. We consider a gap to be correctly annotated, if the human-assigned class matches the actual error rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Difficulty Prediction",
"sec_num": "5"
},
{
"text": "Given the highly experienced annotators, the prediction accuracy is lower than expected. The three annotators obtain comparable accuracy, each of them correctly predicts approximately 50% of the gaps (see Table 2 ). There is no obvious bias in the annotations, difficulty is both under-and overestimated. If we combine the human prediction by taking the median of the three annotators, 53.4% are annotated correctly. These results show that even experienced teachers are not able to foresee all factors that influence the difficulty of a gap.",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Difficulty Prediction",
"sec_num": "5"
},
{
"text": "Somewhat surprisingly, the agreement between the annotators is also low. The Fleiss' Kappa for the three annotators is 0.36, the pairwise comparison ranges from 0.31 to 0.39. Only in 38.6% of the gaps, all three annotators agreed with each other. For only 25.3%, all three annotators agreed with each other and with the actual measured error rate. This shows that human difficulty prediction is quite subjective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Difficulty Prediction",
"sec_num": "5"
},
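Fleiss' kappa for this three-annotator, four-class setting can be computed with the standard formula. The toy ratings below are invented for illustration; they are not the paper's annotation data.

```python
# Standard Fleiss' kappa: chance-corrected agreement for a fixed number of
# raters assigning items to categorical classes.
def fleiss_kappa(ratings, n_categories):
    """ratings: per-item category counts, each row summing to the rater count."""
    n_items, n_raters = len(ratings), sum(ratings[0])
    # marginal probability of each category
    p_j = [sum(item[j] for item in ratings) / (n_items * n_raters)
           for j in range(n_categories)]
    # per-item observed agreement
    p_i = [(sum(c * c for c in item) - n_raters) / (n_raters * (n_raters - 1))
           for item in ratings]
    p_bar = sum(p_i) / n_items
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Three raters, four classes; each row counts how many raters chose each class.
perfect = [[3, 0, 0, 0], [0, 3, 0, 0], [0, 0, 0, 3]]
assert abs(fleiss_kappa(perfect, 4) - 1.0) < 1e-9
mixed = [[2, 1, 0, 0], [0, 2, 1, 0], [1, 1, 1, 0]]
assert fleiss_kappa(mixed, 4) < 1.0
```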
{
"text": "The mediocre human performance on the task reveals the complexity of predicting the elements of language that cause problems for foreign language learners. However, this strengthens the need for reliable prediction methods like the one described in this paper. Note that the automatic prediction is compared with the actual error rates, not the human predicated ones. Thus, it is possible to outperform human performance with automatic methods and provide a very helpful tool. Table 3 : Results for leave-one-out crossvalidation on the training set for regression and classification prediction (both trained on support vector machines). Classification results are the weighted average of precision (P), recall (R) and F1-measure over all four classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 477,
"end": 484,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Difficulty Prediction",
"sec_num": "5"
},
{
"text": "Our difficulty prediction approach is based on the model described in the previous section. We extract the features using tools for natural language processing provided by DKPro Core (de Castilho and Gurevych, 2014). We then perform experiments with different datasets and classifiers using Weka (Hall et al., 2009) through the DKPro TC framework (Daxenberger et al., 2014) . 18",
"cite_spans": [
{
"start": 296,
"end": 315,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 347,
"end": 373,
"text": "(Daxenberger et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Difficulty Prediction",
"sec_num": "6"
},
{
"text": "For the human annotation, we used a classification scheme because assigning difficulty scores on a finegrained numerical scale would be too challenging even for experienced teachers. However, as the actual error rates are continuously distributed, gaps that are close to the class boundaries are more likely to be mislabeled. Therefore we also test regression prediction using the actual error rates instead of the artificially determined classes. We perform leave-one-out testing on the training set in order to determine the best approach. We compare our model against the human performance and two baselines: A naive one that predicts the majority class for classification and the mean value for regression and one that only uses the features proposed by Sigott (1995) (solution probability, word class of solution, and constituent type of gap).",
"cite_spans": [
{
"start": 758,
"end": 771,
"text": "Sigott (1995)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification vs Regression",
"sec_num": "6.1"
},
{
"text": "In Table 3 , we report weighted precision, recall and F 1 -measure over all classes for classification and Pearson correlation and root mean squared error for regression. It can be seen that our approach clearly outperforms the baselines in both cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classification vs Regression",
"sec_num": "6.1"
},
{
"text": "For classification, the human median annotation is better than our approach. In order to also compare our regression results to the human annotations, we map the numerical predictions back into classes according to the scheme explained in the previous subsection. The quadratic weighted kappa considers the classes on an ordinal scale and thus gives a better impression of the usefulness of the prediction. The results in Table 4 Table 4 : Quadratic weighted \u03ba of difficulty class predictions the performance of the regression approach is almost on the same level as the median of the human prediction. Therefore, we will focus on regression prediction for the remainder of the paper. For a better understanding of the behaviour of human and automatic predictions, the plot in Figure 5 combines the two results. The position in the plot indicates the relation between the true error rate and the prediction and the symbols show the corresponding human annotation. The plot reveals that the regression equation predicts the right tendency but tends to slightly underestimate difficult gaps and overestimate easy gaps. The human prediction performs well for the easiest gaps (class 1, green circle) while the other three classes are confused quite often.",
"cite_spans": [],
"ref_spans": [
{
"start": 422,
"end": 429,
"text": "Table 4",
"ref_id": null
},
{
"start": 430,
"end": 437,
"text": "Table 4",
"ref_id": null
},
{
"start": 777,
"end": 785,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Classification vs Regression",
"sec_num": "6.1"
},
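Quadratic weighted kappa itself can be sketched generically. The toy labels below only illustrate the metric on the 1-4 difficulty classes; they are not the paper's predictions.

```python
# Quadratic weighted kappa: agreement on an ordinal scale, penalising
# disagreements by the squared distance between classes.
def quadratic_weighted_kappa(a, b, n_classes=4):
    n = len(a)
    # observed confusion matrix
    obs = [[0] * n_classes for _ in range(n_classes)]
    for x, y in zip(a, b):
        obs[x - 1][y - 1] += 1
    # marginal histograms for the expected (chance) matrix
    hist_a = [a.count(c + 1) for c in range(n_classes)]
    hist_b = [b.count(c + 1) for c in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * obs[i][j]
            den += w * hist_a[i] * hist_b[j] / n
    return 1 - num / den

gold = [1, 2, 3, 4, 2, 3]
assert quadratic_weighted_kappa(gold, gold) == 1.0          # perfect agreement
assert quadratic_weighted_kappa(gold, [2, 2, 3, 3, 2, 3]) < 1.0
```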
{
"text": "For a deeper analysis of our difficulty model, we now compare different feature groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "6.2"
},
{
"text": "The results in the first two rows show that the gap difficulty is mainly determined by the features representing micro-level processing. This is not surprising, as these features are calculated for each gap, while most of the macro-level features are constant for all gaps in the paragraph. The predictive power on the micro- level of our approach is a strong improvement over previous prediction approaches that only attempted to predict paragraph difficulty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing Levels",
"sec_num": null
},
{
"text": "Dimensions The middle part of Table 5 shows that the prediction results decrease significantly, if we exclude features from one dimension. The effect is particularly strong, if we exclude the features estimating the difficulty of the solution, while the effect of the inter-item dependency features is quite small. This supports previous theoretical research claiming that the solution word itself and its micro context are most relevant for the solving processes. The dimensions candidate ambiguity and inter-item dependency have been newly introduced, while many of the features for solution and paragraph difficulty are well established. We therefore assume that future work on improved feature development for these dimensions could lead to even better prediction results.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Processing Levels",
"sec_num": null
},
{
"text": "As the results for the individual dimensions might be related to the number of features, we additionally perform feature selection and reduce the set to 21 features. 19 The selection shows that the probability of the word, the phrase and the character sequence play a major role for prediction. However, it might be the case that the continuous values of these features are simply more suitable for regression approaches than boolean features such as the word class of the solution. In addition, the number of available candidates plays an important role but priming effects also need to be considered (whether the solution occurs previously in the text or mutilated as another gap). For the paragraph difficulty, the number of verbs and embedded sentences seems to be a good indicator of difficulty. 20 Interestingly, features from all four dimensions are included in the selection as can be seen in Table 7 . This indicates that the dimensions in our model represent the factors that have an influence on the C-test difficulty quite well. However, the solution difficulty dimension is by far the most important one, while the other three dimensions contribute fewer features. We include the prediction results for the selected features in Table 8 which is discussed in the following section.",
"cite_spans": [
{
"start": 166,
"end": 168,
"text": "19",
"ref_id": null
},
{
"start": 801,
"end": 803,
"text": "20",
"ref_id": null
}
],
"ref_spans": [
{
"start": 901,
"end": 908,
"text": "Table 7",
"ref_id": "TABREF8"
},
{
"start": 1241,
"end": 1248,
"text": "Table 8",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Selected Features",
"sec_num": null
},
{
"text": "In order to evaluate our model on unseen data, we test it on a set of 375 additional gaps. The results on the test set are substantially worse than in the leave-one-out (LOOCV Train) setting. If we merge the two sets and perform leave-one-out testing on the whole data (LOOCV All), the results get close to our training set again. This indicates that the test set contains characteristics, that have not been observed during training. It is also interesting that using only the selected features yields better results on the smaller training set, while the full model is better on larger data. In order to support the assumption that our model performs better with more data, we plot a learning curve (see Figure 6 ). We calculated the Pearson correlation for increasing sample sizes of randomly selected instances and average the results over 100 runs.",
"cite_spans": [],
"ref_spans": [
{
"start": 706,
"end": 714,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test results",
"sec_num": "6.3"
},
{
"text": "The anomaly for smaller sample sizes can be explained by very high standard deviations. Starting from a sample size of about 70 instances, the learning curve proceeds as Figure 6 : Learning curve for 10-fold cross-validation with increasing size of training data, results are averaged over 100 runs Figure 7 shows that our prediction approach produces a few strong outliers for the test data. In particular, it strongly underestimates the error rate for some very easy gaps. We perform an error analysis on the 9 outliers.",
"cite_spans": [],
"ref_spans": [
{
"start": 170,
"end": 178,
"text": "Figure 6",
"ref_id": null
},
{
"start": 299,
"end": 307,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test results",
"sec_num": "6.3"
},
{
"text": "Underestimation In two underestimated gaps, the solution requires an apostrophe (Earth's, world's). This has not been seen in the training data, and therefore we cannot predict that the students have difficulties here. It is debatable whether punctuation should be included into the solution but the language teachers insisted on the importance of these gaps. In two other cases, the students systematically favour a wrong solution-one is due to Figure 7 : The prediction for the train-test setting produces more outliers. Instances with an absolute difference of predicted and actual error rate \u2265 0.5 are coloured red. spelling (of instead of off ) and the other due to referentiality (the instead of this)-which our approach did not anticipate correctly. The last two outliers occur in a phrase that is very frequent for native speakers but nevertheless unknown to the participants (cause untold damage, the continental United States 21 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 446,
"end": 454,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.4"
},
{
"text": "Overestimation One of the items for which the error rate is strongly underestimated is the compound carbonfree. It can be seen, that the teachers deviated from the original length constraint here and applied it only on the second part of the component. As these kind of compounds have not been seen in the training set, our approach estimates the difficulty for providing carbon-free, while it should rather consider only free. The second overestimation is due to the named entity Deutsche Bahn, which is unlikely to occur in English text but very common for students living in Germany. The third overestimated outlier is simply due to an unfortunate combination of a long word (dangerous) at the end of a difficult sentence that is nevertheless easy for the students.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.4"
},
{
"text": "The errors due to apostrophes and hyphenated compounds can be minimized by adapting the processing. In order to also anticipate the other outlier phenomena, we need more training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.4"
},
{
"text": "We introduce the first model for the automatic prediction of gap-level difficulty of C-tests. We collected data from real learners and find that the gap-level error rates are quite stable. The prediction results of our approach are on the same level as the performance of human experts. The learning curve indicates that even better results are possible with more training data. A higher number of instances makes it easier to learn the nuances for the prediction and this can help to improve the features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Our work also sheds light on the question what C-tests measure. The difficulty of a C-test gap is determined by a combination of many factors. Our experiments have shown that both, micro-and macro-level cues, contribute to the gap difficulty: i) problems related to the solution such as spelling, phonetic difficulties and morphological derivation, ii) problems caused by competing candidates, iii) problems caused by dependencies between gaps, and iv) readability problems caused by text complexity. Even the reduced set of selected features comprises features from all introduced dimensions which shows that our conclusions drawn from the data analysis led to a very suitable model. However, the features measuring the difficulty of the solution and the probability of the micro context seem most relevant. As a next step, we need to improve the feature extraction for compound nouns and named entities. 21 The students provided only continent instead.",
"cite_spans": [
{
"start": 906,
"end": 908,
"text": "21",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Our approach has already raised interest in language teachers who see strong practical benefits. The automatic difficulty prediction facilitates test selection, as teachers can run our approach on a corpus and only inspect tests with adequate difficulty. The system could also be tuned towards the prediction of potentially ambiguous gaps so that teachers become aware of the alternative solutions. In addition, our approach can also be used productively for the automatic test generation in platforms for selfdirected language learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Our model has been developed for the difficulty prediction of English C-tests. However, it can also be generalized to other languages and to test variants of reduced redundancy testing. In future work, we aim at adapting the difficulty of a given text by varying the gap placement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Solutions: The, evidence, from, skeletons, bone, Scientists, made, that, them, extract, information, ancient, and, environmental, Modern, work, the, and, laboratories, now",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Transactions of the Association for Computational Linguistics, vol. 2, pp. 517-529, 2014. Action Editor: Chris Callison-Burch. Submission batch: 4/2014; Revision batch 8/2014; Published 11/2014. c 2014 Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It should be noted, that their definition of \"vocabulary\" is very wide.3 http://www.ondaf.de",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "C-Test Difficulty ModelNatural languages are complex and constantly developing constructs that include many exceptions to the rules.4 For each size, we calculate the error rate based on three randomly selected samples of participants and report the average result.5 Except for the weakest students who were not able to understand the texts and left entire paragraphs empty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In all examples, we only highlight a single gap to illustrate a certain phenomenon.7 http://www.grsampson.net/RSue.html 8 http://www.danielnaber.de/jwordsplitter/index en.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "appropriate, skeletons, tempting, extract, ancient, private, design, concentrations, state-of-the-art, scientists, modern, examined, constant, essential, stable, entering, basis, synthetic, cost, demands 10 longer, coffee, coffee, in, water, very, give, you, for, people, living, other, number, water, water, from, over, you, over 11 DE: Skelett, FR: squelette, ES: esqueleto, NL: skelet 12 http://www.academicvocabulary.info/ 13 http://en.wikipedia.org/wiki/List of Latin words with English derivatives",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://code.google.com/p/phonetisaurus/ 15 http://ogden.basic-english.org 16 http://www.speech.cs.cmu.edu/cgi-bin/cmudict 17 http://hindson.com.au/info/free/free-english-languagehyphenation-dictionary/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "More information on data and resources can be found at http://www.ukp.tu-darmstadt.de/data/c-tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the WrapperSubsetEval evaluator with SMOreg and BestFirst search as implemented in Weka. 20 The term CoverSentence in Table 6 refers to the sentence containing the gap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I/82806, and by the Klaus Tschira Foundation under project No. 00.133.2008. We thank the anonymous reviewers for their very helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic Gap-fill Question Generation from Text Books",
"authors": [
{
"first": "Manish",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Prashanth",
"middle": [],
"last": "Mannem",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "56--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manish Agarwal and Prashanth Mannem. 2011. Automatic Gap-fill Question Generation from Text Books. Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications, pages 56-64.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The C-test: a valid operationalization of reduced redundancy principle? System",
"authors": [
{
"first": "Esmat",
"middle": [],
"last": "Babaii",
"suffix": ""
},
{
"first": "Hasan",
"middle": [],
"last": "Ansary",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "29",
"issue": "",
"pages": "209--219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Esmat Babaii and Hasan Ansary. 2001. The C-test: a valid operationalization of reduced redundancy principle? System, 29(2):209-219.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cognate Production using Character-based Machine Translation",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Beinborn",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "883--891",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Beinborn, Torsten Zesch, and Iryna Gurevych. 2013. Cognate Production using Character-based Machine Translation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 883-891. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Readability for foreign language learning: The importance of cognates",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Beinborn",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2014,
"venue": "International Journal of Applied Linguistics",
"volume": "",
"issue": "",
"pages": "136--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Beinborn, Torsten Zesch, and Iryna Gurevych. 2014. Readability for foreign language learning: The importance of cognates. International Journal of Applied Linguistics, pages 136-162.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Web 1T 5-gram corpus version 1.1. Linguistic Data Consortium",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram corpus version 1.1. Linguistic Data Consortium.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "C-tests and the usefulness of non-linguistic instructions",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Bresnihan",
"suffix": ""
},
{
"first": "Stratton",
"middle": [],
"last": "Ray",
"suffix": ""
}
],
"year": 1992,
"venue": "Theoretische Grundlagen und praktische Anwendungen",
"volume": "1",
"issue": "",
"pages": "193--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Bresnihan and Stratton Ray. 1992. C-tests and the usefulness of non-linguistic instructions. In R\u00fcdiger Grotjahn, editor, Der C-Test. Theoretische Grundlagen und praktische Anwendungen 1, pages 193-216. Brockmeyer, Bochum.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic Question Generation for Vocabulary Assessment",
"authors": [
{
"first": "Jonathan",
"middle": [
"C"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Gwen",
"middle": [
"A"
],
"last": "Frishkoff",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2005,
"venue": "HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "819--826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan C Brown, Gwen A Frishkoff, and Maxine Eskenazi. 2005. Automatic Question Generation for Vocabulary Assessment. In HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 819-826, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Cloze item difficulty",
"authors": [
{
"first": "James",
"middle": [
"Dean"
],
"last": "Brown",
"suffix": ""
}
],
"year": 1989,
"venue": "JALT journal",
"volume": "11",
"issue": "",
"pages": "46--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Dean Brown. 1989. Cloze item difficulty. JALT journal, 11:46-67.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Are C-tests valid measures for L2 vocabulary research? Second Language Research",
"authors": [
{
"first": "C",
"middle": [
"A"
],
"last": "Chapelle",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "10",
"issue": "",
"pages": "157--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. A. Chapelle. 1994. Are C-tests valid measures for L2 vocabulary research? Second Language Research, 10(2):157-187, June.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The C-Test in Hebrew. Language Testing",
"authors": [
{
"first": "Andrew",
"middle": [
"D"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "1",
"issue": "",
"pages": "221--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew D. Cohen. 1984. The C-Test in Hebrew. Language Testing, 1(2):221-225.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "DKPro TC: A Java-based Framework for Supervised Learning Experiments on Textual Data",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Ferschke",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "61--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Daxenberger, Oliver Ferschke, Iryna Gurevych, and Torsten Zesch. 2014. DKPro TC: A Java-based Framework for Supervised Learning Experiments on Textual Data. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 61-66. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A broad-coverage collection of portable NLP components for building shareable analysis pipelines",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Eckart De Castilho",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Workshop on Open Infrastructures and Analysis Frameworks for HLT (OIAF4HLT) at COLING 2014",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Eckart de Castilho and Iryna Gurevych. 2014. A broad-coverage collection of portable NLP components for building shareable analysis pipelines. In Proceedings of the Workshop on Open Infrastructures and Analysis Frameworks for HLT (OIAF4HLT) at COLING 2014, pages 1-11, August.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A closer look at the construct validity of C-tests",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Eckes",
"suffix": ""
},
{
"first": "R\u00fcdiger",
"middle": [],
"last": "Grotjahn",
"suffix": ""
}
],
"year": 2006,
"venue": "Language Testing",
"volume": "23",
"issue": "3",
"pages": "290--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Eckes and R\u00fcdiger Grotjahn. 2006. A closer look at the construct validity of C-tests. Language Testing, 23(3):290-325, July.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Item banking for C-tests: A polytomous Rasch modeling approach. Psychological Test and Assessment Modeling",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Eckes",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "53",
"issue": "",
"pages": "414--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Eckes. 2011. Item banking for C-tests: A polytomous Rasch modeling approach. Psychological Test and Assessment Modeling, 53(4):414-439.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "C-Tests and language processing",
"authors": [
{
"first": "R\u00fcdiger",
"middle": [],
"last": "Grotjahn",
"suffix": ""
},
{
"first": "Brigitte",
"middle": [],
"last": "Stemmer",
"suffix": ""
}
],
"year": 2002,
"venue": "University language testing and the C-Test",
"volume": "",
"issue": "",
"pages": "115--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R\u00fcdiger Grotjahn and Brigitte Stemmer. 2002. C-Tests and language processing. In James A. Coleman, R\u00fcdiger Grotjahn, and Ulrich Raatz, editors, University language testing and the C-Test, pages 115-130. AKS-Verlag, Bochum.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Large-Scale Unified Lexical-Semantic Resource Based on LMF",
"authors": [
{
"first": "R\u00fcdiger",
"middle": [],
"last": "Grotjahn",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Klein-Braley",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Raatz",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "580--590",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R\u00fcdiger Grotjahn, Christine Klein-Braley, and Ulrich Raatz. 2002. C-Tests: an overview. In James A. Coleman, R\u00fcdiger Grotjahn, and Ulrich Raatz, editors, University language testing and the C-Test, pages 93-114. AKS-Verlag, Bochum. Iryna Gurevych, Judith Eckle-Kohler, Silvana Hartmann, Michael Matuschek, Christian M Meyer, and Christian Wirth. 2012. A Large-Scale Unified Lexical-Semantic Resource Based on LMF. Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 580-590.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The WEKA data mining software: an update",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: an update. 11(1).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Empirische und inhaltliche Analyse lokaler Abh\u00e4ngigkeiten im C-Test",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Harsch",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Hartig",
"suffix": ""
}
],
"year": 2010,
"venue": "Beitr\u00e4ge aus der aktuellen Forschung",
"volume": "",
"issue": "",
"pages": "193--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Harsch and Johannes Hartig. 2010. Empirische und inhaltliche Analyse lokaler Abh\u00e4ngigkeiten im C-Test. In R\u00fcdiger Grotjahn, editor, Der C-Test: Beitr\u00e4ge aus der aktuellen Forschung, pages 193-204. Peter Lang.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Combining lexical and grammatical features to improve readability measures for first and second language texts",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Heilman",
"suffix": ""
},
{
"first": "Kevyn",
"middle": [],
"last": "Collins-Thompson",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Callan",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "460--467",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J. Heilman, Kevyn Collins-Thompson, Jamie Callan, and Maxine Eskenazi. 2007. Combining lexical and grammatical features to improve readability measures for first and second language texts. In Proceedings of NAACL-HLT, pages 460-467.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Is C-testing superior to cloze? Language Testing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Jafarpur",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "12",
"issue": "",
"pages": "194--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Jafarpur. 1995. Is C-testing superior to cloze? Language Testing, 12(2):194-216, July.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Computergest\u00fctzter Multiple Choice C-Test in der Bundesagentur f\u00fcr Arbeit: Bundesweite Erprobung und Einf\u00fchrung",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "Jakschik",
"suffix": ""
},
{
"first": "Hella",
"middle": [],
"last": "Klemmert",
"suffix": ""
},
{
"first": "Dorothea",
"middle": [],
"last": "Klinck",
"suffix": ""
}
],
"year": 2010,
"venue": "Der C-Test: Beitr\u00e4ge aus der aktuellen Forschung The C-Test: Contributions from Current Research",
"volume": "",
"issue": "",
"pages": "231--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard Jakschik, Hella Klemmert, and Dorothea Klinck. 2010. Computergest\u00fctzter Multiple Choice C-Test in der Bundesagentur f\u00fcr Arbeit: Bundesweite Erprobung und Einf\u00fchrung. In R\u00fcdiger Grotjahn, editor, Der C-Test: Beitr\u00e4ge aus der aktuellen Forschung The C-Test: Contributions from Current Research, pages 231-264. Peter Lang International Academic Publishers.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Tailoring the Test to Fit the Students: Improvement of the C-Test through Classical Item Analysis. Language Laboratory",
"authors": [
{
"first": "Tadamitsu",
"middle": [],
"last": "Kamimoto",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "30",
"issue": "",
"pages": "47--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tadamitsu Kamimoto. 1993. Tailoring the Test to Fit the Students: Improvement of the C-Test through Classical Item Analysis. Language Laboratory, 30:47-61, November.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A survey of research on the C-Test",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Klein-Braley",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Raatz",
"suffix": ""
}
],
"year": 1984,
"venue": "Language Testing",
"volume": "1",
"issue": "2",
"pages": "134--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Klein-Braley and Ulrich Raatz. 1984. A survey of research on the C-Test. Language Testing, 1(2):134-146, December.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Advance Prediction of Difficulty with C-Tests",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Klein-Braley",
"suffix": ""
}
],
"year": 1984,
"venue": "Practice and problems in language testing",
"volume": "7",
"issue": "",
"pages": "97--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Klein-Braley. 1984. Advance Prediction of Difficulty with C-Tests. In Terry Culhane, Christine Klein-Braley, and Douglas K. Stevenson, editors, Practice and problems in language testing, volume 7, pages 97-112.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A cloze-up on the C-Test: a study in the construct validation of authentic tests",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Klein-Braley",
"suffix": ""
}
],
"year": 1985,
"venue": "Language Testing",
"volume": "2",
"issue": "1",
"pages": "76--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Klein-Braley. 1985. A cloze-up on the C-Test: a study in the construct validation of authentic tests. Language Testing, 2(1):76-104.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Towards a theory of C-Test processing",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Klein-Braley",
"suffix": ""
}
],
"year": 1996,
"venue": "Theoretische Grundlagen und praktische Anwendungen",
"volume": "3",
"issue": "",
"pages": "23--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Klein-Braley. 1996. Towards a theory of C-Test processing. In R\u00fcdiger Grotjahn, editor, Der C-Test. Theoretische Grundlagen und praktische Anwendungen 3, pages 23-94. Brockmeyer, Bochum.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "C-Tests in the context of reduced redundancy testing: an appraisal. Language Testing",
"authors": [
{
"first": "Christine",
"middle": [],
"last": "Klein-Braley",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "14",
"issue": "",
"pages": "47--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Klein-Braley. 1997. C-Tests in the context of reduced redundancy testing: an appraisal. Language Testing, 14(1):47-84, March.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A computer-aided environment for generating multiple-choice test items",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
},
{
"first": "Le",
"middle": [
"An"
],
"last": "Ha",
"suffix": ""
},
{
"first": "Nikiforos",
"middle": [],
"last": "Karamanis",
"suffix": ""
}
],
"year": 2006,
"venue": "Natural Language Engineering",
"volume": "12",
"issue": "2",
"pages": "177--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruslan Mitkov, Le An Ha, and Nikiforos Karamanis. 2006. A computer-aided environment for generating multiple-choice test items. Natural Language Engineering, 12(2):177-194, May.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Generating Diagnostic Multiple Choice Comprehension Cloze Questions",
"authors": [
{
"first": "Jack",
"middle": [],
"last": "Mostow",
"suffix": ""
},
{
"first": "Hyeju",
"middle": [],
"last": "Jang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Building Educational Applications Using NLP",
"volume": "",
"issue": "",
"pages": "136--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jack Mostow and Hyeju Jang. 2012. Generating Diagnostic Multiple Choice Comprehension Cloze Questions. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 136-146. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Faster and Smaller N-Gram Language Models",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Pauls",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the ACL: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "258--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Pauls and Dan Klein. 2011. Faster and Smaller N-Gram Language Models. In Proceedings of the 49th Annual Meeting of the ACL: Human Language Technologies, volume 1, pages 258-267. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Generating Grammar Exercises",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "147--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Perez-Beltrachini, Claire Gardent, and German Kruszewski. 2012. Generating Grammar Exercises. pages 147-156.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Introduction to language testing and to C-Tests. University language testing and the C-test",
"authors": [
{
"first": "Ulrich",
"middle": [],
"last": "Raatz",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Klein-Braley",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "75--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrich Raatz and Christine Klein-Braley. 2002. Introduction to language testing and to C-Tests. University language testing and the C-test, pages 75-91.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Discriminative Approach to Fill-in-the-Blank Quiz Generation for Language Learners",
"authors": [
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Yuki",
"middle": [],
"last": "Arase",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keisuke Sakaguchi, Yuki Arase, and Mamoru Komachi. 2013. Discriminative Approach to Fill-in-the-Blank Quiz Generation for Language Learners.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The C-test: some factors of difficulty",
"authors": [
{
"first": "G\u00fcnther",
"middle": [],
"last": "Sigott",
"suffix": ""
}
],
"year": 1995,
"venue": "AAA. Arbeiten aus Anglistik und Amerikanistik",
"volume": "20",
"issue": "1",
"pages": "43--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00fcnther Sigott. 1995. The C-test: some factors of difficulty. AAA. Arbeiten aus Anglistik und Amerikanistik, 20(1):43-54.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "How fluid is the C-Test construct",
"authors": [
{
"first": "G\u00fcnther",
"middle": [],
"last": "Sigott",
"suffix": ""
}
],
"year": 2006,
"venue": "Der C-Test: Theorie, Empirie, Anwendungen The C-Test: Theory, Empirical Research, Applications",
"volume": "",
"issue": "",
"pages": "139--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00fcnther Sigott. 2006. How fluid is the C-Test construct. In R\u00fcdiger Grotjahn and G\u00fcnther Sigott, editors, Der C-Test: Theorie, Empirie, Anwendungen The C-Test: Theory, Empirical Research, Applications, pages 139-146. Peter Lang.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The second language lexicon: some evidence from university-level learners of French and German",
"authors": [
{
"first": "David",
"middle": [],
"last": "Singleton",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Little",
"suffix": ""
}
],
"year": 1991,
"venue": "Second Language Research",
"volume": "7",
"issue": "",
"pages": "61--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Singleton and David Little. 1991. The second language lexicon: some evidence from university-level learners of French and German. Second Language Research, 7:61-81.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Predicting Cloze Task Quality for Vocabulary Training",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Skory",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2010,
"venue": "The 5th Workshop on Innovative Use of NLP for Building Educational Applications (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Skory and Maxine Eskenazi. 2010. Predicting Cloze Task Quality for Vocabulary Training. In The 5th Workshop on Innovative Use of NLP for Building Educational Applications (NAACL-HLT).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Reduced Redundancy as a Language Testing Tool",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Spolsky",
"suffix": ""
}
],
"year": 1969,
"venue": "Applications of linguistics",
"volume": "",
"issue": "",
"pages": "383--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Spolsky. 1969. Reduced Redundancy as a Language Testing Tool. In G.E. Perren and J.L.M. Trim, editors, Applications of linguistics, pages 383-390. Cambridge University Press, Cambridge, August.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Collect, callibrate, compare: A practical approach to estimating the difficulty of C-Test items",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Traxel",
"suffix": ""
},
{
"first": "Bettina",
"middle": [],
"last": "Dresemann",
"suffix": ""
}
],
"year": 2010,
"venue": "Der C-Test: Beitr\u00e4ge aus der aktuellen Forschung The C-Test: Contributions from Current Research",
"volume": "",
"issue": "",
"pages": "57--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Traxel and Bettina Dresemann. 2010. Collect, callibrate, compare: A practical approach to estimating the difficulty of C-Test items. In R\u00fcdiger Grotjahn, editor, Der C-Test: Beitr\u00e4ge aus der aktuellen Forschung The C-Test: Contributions from Current Research, pages 57-69. Peter Lang International Academic Publishers, Frankfurt a.M.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Mind in society: The development of higher psychological processes",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Vygotsky",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Vygotsky. 1978. Mind in society: The development of higher psychological processes. Harvard University Press.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Automatic generation of challenging distractors using context-sensitive inference rules",
"authors": [
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "143--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Torsten Zesch and Oren Melamud. 2014. Automatic generation of challenging distractors using context-sensitive inference rules. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications, pages 143-148. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Standard error averaged over all gaps for increasing numbers of participants"
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Visualisation of error rates for each gap for increasing sample sizes."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "C-Test Difficulty Model"
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Regression results for leave-one-out testing on the training data. The symbols indicate the difficulty class that was annotated by the human experts."
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"2\">and bo</td><td colspan=\"2\">fragments. Scien</td><td>have ma</td><td>tools th allow</td></tr><tr><td>th</td><td>to ext</td><td colspan=\"3\">subtle infor from anc</td><td>bones a</td><td>their</td></tr><tr><td colspan=\"2\">enviro</td><td colspan=\"2\">settings. Mod</td><td>forensic wo</td><td>in t field a</td></tr><tr><td colspan=\"2\">in labora</td><td>can n</td><td colspan=\"3\">provide a rich understanding of how</td></tr><tr><td colspan=\"3\">our ancestors lived.</td><td/><td/></tr></table>",
"text": "The roots of humanity can be traced back to millions of years ago. T primary evid comes fr fossils -skulls, skel"
},
"TABREF5": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>: Regression results for different feature groups. Signifi-</td></tr><tr><td>cant differences to the result with all features are indicated with</td></tr><tr><td>*(p&lt;0.05) and **(p&lt;0.01).</td></tr></table>",
"text": ""
},
"TABREF7": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td>All Features</td><td>Selected Features</td></tr><tr><td>Solution Difficulty</td><td>37 (43%)</td><td>8 (38%)</td></tr><tr><td>Candidate Ambiguity</td><td>13 (15%)</td><td>4 (19%)</td></tr><tr><td>Inter-Gap Dependency</td><td>8 ( 9%)</td><td>4 (19%)</td></tr><tr><td>Paragraph Difficulty</td><td>28 (33%)</td><td>5 (24%)</td></tr><tr><td>Sum</td><td>87</td><td>21</td></tr></table>",
"text": "Selected Features"
},
"TABREF8": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Proportion of dimensions in selected features and all features"
},
"TABREF10": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Results on the train and the test set."
}
}
}
}