{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:47:41.447083Z"
},
"title": "LXPER Index 2.0: Improving Text Readability Assessment for L2 English Learners in South Korea",
"authors": [
{
"first": "Bruce",
"middle": [
"W"
],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania PA",
"location": {
"country": "USA"
}
},
"email": "brucelws@seas.upenn.edu"
},
{
"first": "Jason Hyung-Jong",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Developing a text readability assessment model specifically for texts in a foreign English Language Training (ELT) curriculum has never had much attention in the field of Natural Language Processing. Hence, most developed models show extremely low accuracy for L2 English texts, up to the point where not many even serve as a fair comparison. In this paper, we investigate a text readability assessment model for L2 English learners in Korea. In accordance, we improve and expand the Text Corpus of the Korean ELT curriculum (CoKEC-text). Each text is labeled with its target grade level. We train our model with CoKEC-text and significantly improve the accuracy of readability assessment for texts in the Korean ELT curriculum.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Developing a text readability assessment model specifically for texts in a foreign English Language Training (ELT) curriculum has never had much attention in the field of Natural Language Processing. Hence, most developed models show extremely low accuracy for L2 English texts, up to the point where not many even serve as a fair comparison. In this paper, we investigate a text readability assessment model for L2 English learners in Korea. In accordance, we improve and expand the Text Corpus of the Korean ELT curriculum (CoKEC-text). Each text is labeled with its target grade level. We train our model with CoKEC-text and significantly improve the accuracy of readability assessment for texts in the Korean ELT curriculum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text readability assessment has been an important field of research since the 1940s. However, most research focused on the native audience in English speaking countries (Benjamin, 2012) . In China, Japan, and Korea, many high and middle school students attend English language schools, in addition to their regular school classes. English subject plays an important role in the educational systems of the three countries (Mckay, 2002) .",
"cite_spans": [
{
"start": 169,
"end": 185,
"text": "(Benjamin, 2012)",
"ref_id": "BIBREF0"
},
{
"start": 421,
"end": 434,
"text": "(Mckay, 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the importance put in English education, the previous text readability assessment models have not been in active use in the three countries. This is due to the poor performance of traditional readability assessment models on L2 texts. We believe there is an immediate need for the development of an improved text readability assessment method for use in L2 education around the world. In this research, we put a specific focus on L2 English learners in South Korea. But our methodology is applicable to other ELT (English Language Training) curricula.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many traditional readability assessment models are linear regression models with a small number of linguistic features, consisting of the generic features of a text like total words, total sentences, and total syllables (Kincaid et al., 1975) . Such features are effective predictors of a text's readability, but more curriculum-specific features are required for L2 text readability assessments. The key distinction between native readability assessment and L2 readability assessment is that L2 students rigorously follow the specific national ELT curriculum. Unlike native students who learn English from a variety of sources, most L2 students have limited exposure to English. In this research, we reduce the average assessment error by implementing some curriculum-specific features.",
"cite_spans": [
{
"start": 220,
"end": 242,
"text": "(Kincaid et al., 1975)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this paper are: (1) we utilize and expand CoKEC-text, one of the few graded corpora with texts from an actual L2 curriculum;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) we investigate novel linguistic features that were rarely tested on an L2 corpus; (3) we evaluate our model against other readability models, show significantly improved accuracy, and prove that \"grades\" are better modeled using logistic regression, not linear regression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research efforts in developing automated text readability assessment models for L2 students only emerged in the 2000s (Xia et al., 2016) . Heilman et al. (2007) showed that grammatical features and lexical features play particularly important roles in L2 text readability prediction. Meanwhile, Vajjala and Meurers (2014) showed that the additional use of lexical features could significantly improve L2 readability assessment. Feng et al. (2010) also reported the importance of lexical features in general (for L1 speakers of English) text readability assessment.",
"cite_spans": [
{
"start": 118,
"end": 136,
"text": "(Xia et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 139,
"end": 160,
"text": "Heilman et al. (2007)",
"ref_id": "BIBREF5"
},
{
"start": 295,
"end": 321,
"text": "Vajjala and Meurers (2014)",
"ref_id": "BIBREF12"
},
{
"start": 428,
"end": 446,
"text": "Feng et al. (2010)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, the common limitation of the previous research in L2 readability assessment was the training corpus annotated with the grade levels for L1 readers of English. Our results, which we obtain from training our model using CoKEC-text, introduces the possibility that lexical features are not as important as the previous researchers reported. In addition, we also show that a considerably accurate text readability model can built even with a small data set if the model is optimized and the corpus is well-labeled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Since our goal is to improve the accuracy in text readability assessment of L2 texts, our ideal corpus has to fully consist of L2 texts from a non-native ELT curriculum. The base corpus that we use is CoKEC-text (Lee and Lee, 2020), which is a collection of 2760 unique grade-labeled texts that are officially administered by the Korean Ministry of Education (MOE). Similar texts are also used in the National Assessment of Educational Achievement, College Scholastic Ability Test, and MOEapproved middle school textbooks in Korea.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3"
},
{
"text": "However, as shown in Table 1 , the number of texts in the original CoKEC-text is heavily skewed to higher grades (K10 \u223c K12.5) than in lower grades (K7 \u223c K9). Such a disparity can affect the accuracy of our regression results and can become troublesome in predicting the lower grade texts' readability. Thus, we decided to collect about 900 more texts from Korean MOE-approved middle school textbooks and use them to create an expanded version of CoKEC-text.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3"
},
{
"text": "In addition, we found some K7 \u223c K9 texts that are only partially English. They contained ASCII Korean Characters (often explanations of difficult English words by the author). These count as a token (or possibly, even a sentence from case to case) in the NLTK parsing process but provide no meaningful linguistic properties. This can produce miscalculations of the \"average of x\" (e.g., the average number of words per sentence) features that we discuss in Section 4. We manually went through every text to make sure that clean data is used for model training. Our final training corpus consists of 3700 original L2 texts from K7 \u223c K12.5. K12.5 grade texts are from CSAT, which is a college entrance exam for Korean universities. In general, the Korean grades K7 to K12 are for middle and high school students of ages 13 to 19.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "3"
},
{
"text": "12 .5 691 691 12 590 601 11 596 602 10 571 580 9 80 313 8 215 302 7 17 305 ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 85,
"text": ".5 691 691 12 590 601 11 596 602 10 571 580 9 80 313 8 215 302 7",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Difficulty Labels Original Expanded",
"sec_num": null
},
{
"text": "We now describe the 35 linguistic features we studied. Table 2 contains a list of the features with a shortcode name used throughout this paper. The list is divided into five parts: traditional features, POS-based features, entity density features, lexical chain features, and word difficulty features.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Selecting features",
"sec_num": "4"
},
{
"text": "We first implemented some traditional features from the popular Flesch-Kincaid model: aWPS (average number of Words per Sentence), aSPW (average number of Syllables per Word), and P3T (words with more than 3 syllables per Text). These are one of the earliest linguistic features studied in text readability prediction, but they prove to be still useful in recent studies (Feng et al., 2010) .",
"cite_spans": [
{
"start": 371,
"end": 390,
"text": "(Feng et al., 2010)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional Features",
"sec_num": "4.1"
},
{
"text": "A number of researchers commonly reported that POS-based features are effective in text readability prediction. In particular, Peterson and Ostendorf (2009) investigated the following features: aNP, aNN, aVP, aAdj, aSBr, aPP, nNP, nNN, nVP, nAdj, aSBr, nPP. Lee and Lee (2020) proved that these features are highly correlated with the difficulty of L2 texts. However, their dataset mostly consisted of K10, K11, K12 texts, and their evaluation was conducted only on K9 \u223c K12. Thus, there exists a possibility that the result was heavily influenced by higher grade L2 texts. We evaluate these features again with our expanded version of CoKEC-text.",
"cite_spans": [
{
"start": 127,
"end": 156,
"text": "Peterson and Ostendorf (2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "POS-based Features",
"sec_num": "4.2"
},
{
"text": "We implement entity density features in an attempt to account for the difficulty in comprehending conceptual information in texts. Such information is often introduced by entities, or more specifically, general nouns and named entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Density Features",
"sec_num": "4.3"
},
{
"text": "The density of entities introduced in a text relates to the working memory burden, which has an increasing trend in a positive correlation with the age of the reader. Our main task is to develop a model that would be particularly useful to L2 student groups, and accurately classify the given texts to the respective student grade level. Hence, we believe that these entity density features are great predictors. Some of these features were never tested on an L2 corpus, and the results we obtain are novel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Density Features",
"sec_num": "4.3"
},
{
"text": "We believe that the accuracy of L2 text readability can be improved by incorporating lexical chain features as well. Since L2 readers have limited exposure to English compared to native readers, we hypothesize that L2 readers work harder in connecting several entities and recognizing the semantic relationship. Entities that form these semantic relations are connected throughout the text in the form of lexical chains. However, in Table 3 we observe that lexical chain features are weakly correlated to target grade levels of L2 texts in Korea.",
"cite_spans": [],
"ref_spans": [
{
"start": 433,
"end": 440,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Lexical Chain Features",
"sec_num": "4.4"
},
{
"text": "Native English readers learn vocabulary from a variety of sources. On the other hand, most L2 students learn new English words step by step, following the respective national ELT curriculum. Hence, implementing curriculum specific features related to vocabularies can be particularly useful in predicting the text difficulty for L2 students.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Difficulty Features",
"sec_num": "4.5"
},
{
"text": "We use CoKEC-word to identify the difficulty of words (Lee and Lee, 2020). The word corpus is a classification of 30608 words in 6 levels. It only consists of the words that previously appeared in the Korean ELT curriculum. We focused on the vocabularies in levels B, C, D, E, and F. This covers vocabularies from K5 to college level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Difficulty Features",
"sec_num": "4.5"
},
{
"text": "We used a combination of spaCy (popular opensource library for NLP) (Honnibal and Montani, 2017) , NLTK (NLP toolkit for Python) (Bird et al., 2009) , Gensim (famous for topic modeling and FastText model) (\u0158eh\u016f\u0159ek and Sojka, 2010), and the Berkeley Neural Parser (constituency parser) (Kitaev and Klein, 2018) to parse and count the features described in this section. To keep operation simple, only NLTK and Berkeley Neural Parser were used in the previous version of LXPER Index. However, our further investigation show that certain tasks are performed at much higher accuracy by complementary libraries. For example, spaCy showed the highest accuracy at recognizing a sentence, and Gensim improved the lexical chaining process.",
"cite_spans": [
{
"start": 68,
"end": 96,
"text": "(Honnibal and Montani, 2017)",
"ref_id": "BIBREF6"
},
{
"start": 129,
"end": 148,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 285,
"end": 309,
"text": "(Kitaev and Klein, 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing and Counting Modules",
"sec_num": "4.6"
},
{
"text": "We computed the Pearson correlation value of each feature and checked if it was significant enough (correlation > 0.07) in predicting the target grade level of a text. The \"Cor\" column in Table 3 lists the correlation value of each feature. We ordered the list in decreasing correlation values. Next, we removed the features that are highly correlated (\"Paired?\" column). The \"Include?\" column in Table 3 We collected the texts from two sources to test how our readability assessment model performs on different types of texts. Ideally, our results should show a continuous increase from K7 to K12 texts. Our target average assessment error is below 0.5 grade level. Table 4 summarizes our results. We compare our LXPER Index 2.0 (LX 2.0) to traditionally popular models like Flesch-Kincaid (F-K) (Kincaid et al., 1975) , Dale-Chall (D-C) (Dale and Chall, 1949) , and the previous LXPER Index 1.0 (LX 1.0) (Lee and Lee, 2020).",
"cite_spans": [
{
"start": 797,
"end": 819,
"text": "(Kincaid et al., 1975)",
"ref_id": "BIBREF7"
},
{
"start": 822,
"end": 838,
"text": "Dale-Chall (D-C)",
"ref_id": null
},
{
"start": 839,
"end": 861,
"text": "(Dale and Chall, 1949)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 397,
"end": 404,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 667,
"end": 674,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Selecting Features",
"sec_num": "4.7"
},
{
"text": "We also wanted to compare our model to the more recently developed models, but we could not find any suitable L2 readability index. We attempted comparison with Lexile Score and Coh-Metrix L2 Readability Score (Crossley et al., 2008) . However, the models had a completely different grading scale and did not show a consistently increasing trend with grades. This was also reported in our previous research (Lee and Lee, 2020).",
"cite_spans": [
{
"start": 210,
"end": 233,
"text": "(Crossley et al., 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Features",
"sec_num": "4.7"
},
{
"text": "In this research, we introduced LXPER Index 2.0, a readability assessment tool that incorporates traditional, POS, entity density, lexical chain, and word difficulty features. Then, we trained the model on our own expanded version of CoKEC-text. We obtained a continuously increasing output for L2 texts from K7 to K12. In addition, we achieved our initial target average accuracy error of less than 0.5 grade levels, which is more accurate than any L2 text readability prediction model we are aware of.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The improvements we report in this paper are largely due to two changes: 1. the CoKEC expansion and 2. the use of a logistic regression model. The contribution from the corpus is quite obvious in that our model could now learn more about the lower grades (K7 \u223c K9). However, the contribution from the change of the regression model is something that we should put more thought into. But it seems evident that the \"grades\" classification task is better modeled with a logistic regression model. A possible explanation could be that the difficulty of a text does not linearly correlate with the target grades.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Even though we wanted to test our model on other East Asia L2 ELT curricula, like Japan and China, we could not implement due to the lack of openly-available corpus in the countries. The novelty of the LXPER Index model is that it focuses on in-curriculum text readability analysis, possibly even with a small data set of less than 4000 texts. Thus, applying the model will fail to give meaningful outcomes without a pre-processed and labeled corpus like CoKEC. Thus, the application of a similar model on those countries would first require foundation research on constructing corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Reconstructing readability: Recent developments and recommendations in the analysis of text difficulty",
"authors": [
{
"first": "Rebekah",
"middle": [
"G"
],
"last": "Benjamin",
"suffix": ""
}
],
"year": 2012,
"venue": "Educational Psychology Review",
"volume": "",
"issue": "21",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/S10648-011-9181-8"
]
},
"num": null,
"urls": [],
"raw_text": "Rebekah G. Benjamin. 2012. Reconstructing readabil- ity: Recent developments and recommendations in the analysis of text difficulty. Educational Psychol- ogy Review, 24(21).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Edward Loper, and Ewan Klein. 2009. Natural Language Processing with Python. O'Reilly Media Inc.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Assessing text readability using cognitively based indices",
"authors": [
{
"first": "Scott",
"middle": [
"A"
],
"last": "Crossley",
"suffix": ""
},
{
"first": "Jerry",
"middle": [],
"last": "Greenfield",
"suffix": ""
},
{
"first": "Danielle",
"middle": [
"S"
],
"last": "McNamara",
"suffix": ""
}
],
"year": 2008,
"venue": "TESOL Quarterly",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1002/j.1545-7249.2008.tb00142.x"
]
},
"num": null,
"urls": [],
"raw_text": "Scott A. Crossley, Jerry Greenfield, and Danielle S. Mc- Namara. 2008. Assessing text readability using cog- nitively based indices. TESOL Quarterly, 42(3).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The concept of readability",
"authors": [
{
"first": "Edgar",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "Jeanne",
"middle": [
"S"
],
"last": "Chall",
"suffix": ""
}
],
"year": 1949,
"venue": "Elementary English",
"volume": "",
"issue": "23",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edgar Dale and Jeanne S. Chall. 1949. The concept of readability. Elementary English, 26(23).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A comparison of features for automatic readability assessment",
"authors": [
{
"first": "Lijun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jansche",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Huenerfauth",
"suffix": ""
},
{
"first": "Noemie",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "276--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lijun Feng, Martin Jansche, Matt Huenerfauth, and Noemie Elhadad. 2010. A comparison of features for automatic readability assessment. In Proceed- ings of the 23rd International Conference on Com- putational Linguistics, pages 276-284.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Combining lexical and grammatical features to improve readability measures for first and second language text",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Kevyn",
"middle": [],
"last": "Collins-Thompson",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Callan",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "460--467",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman, Kevyn Collins-Thompson, Jamie Callan, and Maxine Eskenazi. 2007. Combining lex- ical and grammatical features to improve readabil- ity measures for first and second language text. In Proceedings of North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 460-467.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing. Version2.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Derivation of new readability formulas (automated readability index, fog count, and flesch reading ease formula) for navy enlisted personnel",
"authors": [
{
"first": "J",
"middle": [
"Peter"
],
"last": "Kincaid",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"P"
],
"last": "Fishburne",
"suffix": "Jr."
},
{
"first": "Richard",
"middle": [
"L"
],
"last": "Rogers",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"S"
],
"last": "Chissom",
"suffix": ""
}
],
"year": 1975,
"venue": "Research Branch Report",
"volume": "",
"issue": "",
"pages": "8--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Peter Kincaid, Robert P. Fishburne Jr., Richard L. Rogers, and Brad S. Chissom. 1975. Derivation of new readability formulas (automated readability in- dex, fog count, and flesch reading ease formula) for navy enlisted personnel. Research Branch Report, pages 8-75.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Constituency parsing with a self-attentive encoder",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2676--2686",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1249"
]
},
"num": null,
"urls": [],
"raw_text": "Nikita Kitaev and Dan Klein. 2018. Constituency pars- ing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics, pages 2676--2686.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Lxper index: A curriculum-specific text readability assessment model for efl students in korea",
"authors": [
{
"first": "Bruce",
"middle": [
"W"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "International Journal of Advanced Computer Science and Applications",
"volume": "11",
"issue": "8",
"pages": "",
"other_ids": {
"DOI": [
"10.14569/IJACSA.2020.0110801"
]
},
"num": null,
"urls": [],
"raw_text": "Bruce W. Lee and Jason H. Lee. 2020. Lxper in- dex: A curriculum-specific text readability assess- ment model for efl students in korea. International Journal of Advanced Computer Science and Appli- cations, 11(8).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Teaching English as an International Language: Rethinking Goals and Perspectives",
"authors": [
{
"first": "Sandra",
"middle": [
"L"
],
"last": "Mckay",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra L. Mckay. 2002. Teaching English as an Inter- national Language: Rethinking Goals and Perspec- tives. OUP Oxford.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A machine learning approach to reading level assessment",
"authors": [
{
"first": "Sarah",
"middle": [
"E"
],
"last": "Peterson",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2009,
"venue": "Computer Speech and Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.csl.2008.04.003"
]
},
"num": null,
"urls": [],
"raw_text": "Sarah E. Peterson and Mari Ostendorf. 2009. A ma- chine learning approach to reading level assessment. Computer Speech and Language, 23.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Readability assessment for text simplification: From analyzing documents to identifying sentential simplification",
"authors": [
{
"first": "Sowmya",
"middle": [],
"last": "Vajjala",
"suffix": ""
},
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
}
],
"year": 2014,
"venue": "International Journal of Applied Linguistics",
"volume": "165",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.1075/itl.165.2.04vaj"
]
},
"num": null,
"urls": [],
"raw_text": "Sowmya Vajjala and Detmar Meurers. 2014. Readabil- ity assessment for text simplification: From analyz- ing documents to identifying sentential simplifica- tion. International Journal of Applied Linguistics, 165(2).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Text readability assessment for second language learners",
"authors": [
{
"first": "Menglin",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Kochmar",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "12--22",
"other_ids": {
"DOI": [
"10.18653/v1/W16-0502"
]
},
"num": null,
"urls": [],
"raw_text": "Menglin Xia, Ekaterina Kochmar, and Ted Briscoe. 2016. Text readability assessment for second lan- guage learners. In Proceedings of the 11th Work- shop on Innovative Use of NLP for Building Educa- tional Applications, pages 12-22.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Software framework for topic modelling with large corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "\u0158eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of LREC Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software frame- work for topic modelling with large corpora. In Pro- ceedings of LREC Workshop on New Challenges for NLP Frameworks, pages 45-50.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table/>",
"text": "Number of texts in two corpus versions",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table/>",
"text": "Number of texts in two corpus versions",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td>Code</td><td>Cor</td><td colspan=\"3\">Sig? Paired? Include?</td></tr><tr><td>nDw</td><td>0.532</td><td>Y</td><td>Y</td><td>Y</td></tr><tr><td>aWS</td><td>0.512</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aSPW</td><td>0.499</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aDw</td><td>0.487</td><td>Y</td><td>Y</td><td>N</td></tr><tr><td>nBw</td><td>0.454</td><td>Y</td><td>Y</td><td>Y</td></tr><tr><td>aNP</td><td>0.446</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>P3T</td><td>0.444</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aNN</td><td>0.434</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aPP</td><td>0.423</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>nPP</td><td>0.417</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>nCw</td><td>0.402</td><td>Y</td><td>Y</td><td>Y</td></tr><tr><td>nEw</td><td>0.399</td><td>Y</td><td>Y</td><td>Y</td></tr><tr><td>nAdj</td><td>0.394</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aAdj</td><td>0.378</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>nNN</td><td>0.376</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aVP</td><td>0.323</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>nWD</td><td>0.321</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>nNP</td><td>0.308</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aSBr</td><td>0.298</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aCw</td><td>0.289</td><td>Y</td><td>Y</td><td>N</td></tr><tr><td>aBw</td><td>0.274</td><td>Y</td><td>Y</td><td>N</td></tr><tr><td>nSBr</td><td>0.221</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aEw</td><td>0.221</td><td>Y</td><td>Y</td><td>Y</td></tr><tr><td>nLC</td><td>0.212</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>PND</td><td>0.201</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>nEw</td><td>0.195</td><td>Y</td><td>Y</td><td>Y</td></tr><tr><td>PNS</td><td>0.174</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aLCW</td><td>0.154</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>nVP</td><td>0.126</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td 
colspan=\"2\">aLCN 0.0995</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aFw</td><td>0.0976</td><td>Y</td><td>Y</td><td>N</td></tr><tr><td>aLCS</td><td>0.0913</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aUE</td><td>0.0884</td><td>Y</td><td>N</td><td>Y</td></tr><tr><td>aEM</td><td>0.0792</td><td>N</td><td>N</td><td>N</td></tr><tr><td>nUE</td><td>0.00833</td><td>N</td><td>N</td><td>N</td></tr></table>",
"text": "summarizes the final features.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>: Selecting features</td></tr><tr><td>5 Readability Assessment</td></tr><tr><td>We built a logistic regression model and trained it</td></tr><tr><td>with the new expanded version of CoKEC-text to</td></tr><tr><td>complete our assessment tool; our model is pro-</td></tr><tr><td>grammed in Python. To evaluate the new model's</td></tr><tr><td>effectiveness for L2 students in Korea, we prepared</td></tr></table>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td>: Final results</td></tr><tr><td>a separate test corpus. The first part (K10 \u223c K12)</td></tr><tr><td>of our test corpus is from the official mock tests</td></tr><tr><td>that were used by KICE (Korea Institute of Cur-</td></tr><tr><td>riculum &amp; Evaluation) to assess the educational</td></tr><tr><td>achievement of high school students from 2017 to</td></tr><tr><td>2020. There are 270 texts in the first part of our</td></tr><tr><td>test corpus (K10: 90 texts, K11: 90 texts, K12:</td></tr><tr><td>90 texts). The second part of our corpus is from</td></tr><tr><td>the government-approved middle school textbooks</td></tr><tr><td>(K7: 90 texts, K8: 90 texts, K9: 90 texts).</td></tr></table>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}