| { |
| "paper_id": "W18-0505", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T05:22:42.426723Z" |
| }, |
| "title": "Estimating Linguistic Complexity for Science Texts", |
| "authors": [ |
| { |
| "first": "Farah", |
| "middle": [], |
| "last": "Nadeem", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Washington", |
| "location": {} |
| }, |
| "email": "farahn@uw.edu" |
| }, |
| { |
| "first": "Mari", |
| "middle": [], |
| "last": "Ostendorf", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Washington", |
| "location": {} |
| }, |
| "email": "ostendor@uw.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Evaluation of text difficulty is important both for downstream tasks like text simplification, and for supporting educators in classrooms. Existing work on automated text complexity analysis uses linear models with engineered knowledge-driven features as inputs. While this offers interpretability, these models have lower accuracy for shorter texts. Traditional readability metrics have the additional drawback of not generalizing to informational texts such as science. We propose a neural approach, training on science and other informational texts, to mitigate both problems. Our results show that neural methods outperform knowledge-based linear models for short texts, and have the capacity to generalize to genres not present in the training data.", |
| "pdf_parse": { |
| "paper_id": "W18-0505", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Evaluation of text difficulty is important both for downstream tasks like text simplification, and for supporting educators in classrooms. Existing work on automated text complexity analysis uses linear models with engineered knowledge-driven features as inputs. While this offers interpretability, these models have lower accuracy for shorter texts. Traditional readability metrics have the additional drawback of not generalizing to informational texts such as science. We propose a neural approach, training on science and other informational texts, to mitigate both problems. Our results show that neural methods outperform knowledge-based linear models for short texts, and have the capacity to generalize to genres not present in the training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "A typical classroom presents a diverse set of students in terms of their reading comprehension skills, particularly in the case of English language learners (ELLs). Supporting these students often requires educators to estimate accessibility of instructional texts. To address this need, several automated systems have been developed to estimate text difficulty, including readability metrics like Lexile (Stenner et al., 1988) , the end-toend system TextEvaluator (Sheehan et al., 2013) , and linear models (Vajjala and Meurers, 2014; Petersen and Ostendorf, 2009; Schwarm and Ostendorf, 2005) . These systems leverage knowledgebased features to train regression or classification models. Most systems are trained on literary and generic texts, since analysis of text difficulty is usually tied to language teaching. Existing approaches for automated text complexity analysis pose two issues: 1) systems using knowledge based features typically work better for longer texts (Vajjala and Meurers, 2014) , and 2) complex-ity estimates are less accurate for informational texts such as science (Sheehan et al., 2013) . In the context of science, technology and engineering (STEM) education, both problems are significant. Teachers in these areas have less expertise in identifying appropriate reading material for students as opposed to language teachers, and shorter texts become important when dealing with assessment questions and identifying the most difficult parts of instructional texts to modify for supporting students who are ELLs.", |
| "cite_spans": [ |
| { |
| "start": 398, |
| "end": 427, |
| "text": "Lexile (Stenner et al., 1988)", |
| "ref_id": null |
| }, |
| { |
| "start": 451, |
| "end": 487, |
| "text": "TextEvaluator (Sheehan et al., 2013)", |
| "ref_id": null |
| }, |
| { |
| "start": 508, |
| "end": 535, |
| "text": "(Vajjala and Meurers, 2014;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 536, |
| "end": 565, |
| "text": "Petersen and Ostendorf, 2009;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 566, |
| "end": 594, |
| "text": "Schwarm and Ostendorf, 2005)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 975, |
| "end": 1002, |
| "text": "(Vajjala and Meurers, 2014)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 1092, |
| "end": 1114, |
| "text": "(Sheehan et al., 2013)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our work specifically looks at ways to address these two problems. First, we propose recurrent neural network (RNN) architectures for estimating linguistic complexity, using text as input without feature engineering. Second, we specifically train on science and other informational texts, using the grade level of text as a proxy for linguistic complexity and dividing grades k-12 into 6 groups. We explore four different RNN architectures in order to identify aspects of text which contribute more to complexity, with a novel structure introduced to account for cross-sentence context. Experimental results show that when specifically trained for informational texts, RNNs can accurately predict text difficulty for shorter science texts. The models also generalize to other types of texts, but perform slightly worse than feature-based regression models on a mix of genres for texts longer than 100 words. We use attention with all models, both to improve accuracy, and as a tool to visualize important elements of text contributing to linguistic complexity. The key contributions of the work include new neural network architectures for characterizing documents and experimental results demonstrating good performance for predicting reading level of short science texts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of the paper is organized as follows: section 2 looks at existing work on automated readability analysis and introduces RNN architec-tures we build on for this work. Section 3 lays out the data sources, section 4 covers proposed models, and section 5 presents results. Discussion and concluding remarks follow in sections 6 and 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Studies have shown that language difficulty of instructional materials and assessment questions impacts student performance, particularly for language learners (Hickendorff, 2013; Abedi and Lord, 2001; Abedi, 2006) . This has lead to extensive work on readability analysis, some of which is explored here. The second part of this section looks at work that leverages RNNs in automatic text classification tasks and the use of attention with RNNs.", |
| "cite_spans": [ |
| { |
| "start": 160, |
| "end": 179, |
| "text": "(Hickendorff, 2013;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 180, |
| "end": 201, |
| "text": "Abedi and Lord, 2001;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 202, |
| "end": 214, |
| "text": "Abedi, 2006)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Traditional reading metrics including Flesch-Kincaid (Kincaid et al., 1975) and Coleman-Liau index (Coleman and Liau, 1975) are often used to assess a text for difficulty. These metrics utilize surface features such as average length of sentences and words, or word lists (Chall and Dale, 1995) . The development of automated text analysis systems has made it possible to leverage additional linguistic features, as well as conventional reading metrics, to estimate text complexity quantified as reading level. NLP tools can be used to extract a variety of lexical, syntactic and discourse features from text, which can then be used with traditional features as input to models for predicting reading level. Some of the models include statistical language models (Collins-Thompson and Callan, 2004) , support vector machine classifiers (Schwarm and Ostendorf, 2005; Petersen and Ostendorf, 2009) , and logistic regression (Feng et al., 2010) . Text coherence has also been explored as a predictor of difficulty level in (Graesser et al., 2004) , with an extended feature set that includes syntactic complexity and discourse in addition to coherence (Graesser et al., 2011) .", |
| "cite_spans": [ |
| { |
| "start": 38, |
| "end": 75, |
| "text": "Flesch-Kincaid (Kincaid et al., 1975)", |
| "ref_id": null |
| }, |
| { |
| "start": 80, |
| "end": 123, |
| "text": "Coleman-Liau index (Coleman and Liau, 1975)", |
| "ref_id": null |
| }, |
| { |
| "start": 272, |
| "end": 294, |
| "text": "(Chall and Dale, 1995)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 763, |
| "end": 798, |
| "text": "(Collins-Thompson and Callan, 2004)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 836, |
| "end": 865, |
| "text": "(Schwarm and Ostendorf, 2005;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 866, |
| "end": 895, |
| "text": "Petersen and Ostendorf, 2009)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 922, |
| "end": 941, |
| "text": "(Feng et al., 2010)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1020, |
| "end": 1043, |
| "text": "(Graesser et al., 2004)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1149, |
| "end": 1172, |
| "text": "(Graesser et al., 2011)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automated Readability Analysis", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "A study conducted in (Nelson et al., 2012) indicates that metrics that incorporate a large set of linguistic features perform better at predicting text difficulty level; the metrics were specifically tested on the Common Core Standards (CCS) texts. 1 Features from second language acquisition complexity measures were used in (Vajjala and Meurers, 2012) to improve readability assessment. This feature set was further extended to include morphological, semantic and psycholinguistic features to build a readability analyzer for shorter texts (Vajjala and Meurers, 2014) . A tool specifically built for text complexity analysis for teaching and assessing is the TextEvaluator TM . While knowledgebased features offer interpretability, a drawback is that if the text being analyzed is short, the feature vector is sparse, and prediction accuracy drops (Vajjala and Meurers, 2014) . This is particularly true for assessment questions, which are shorter than the samples most models are trained on.", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 42, |
| "text": "(Nelson et al., 2012)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 249, |
| "end": 250, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 326, |
| "end": 353, |
| "text": "(Vajjala and Meurers, 2012)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 542, |
| "end": 569, |
| "text": "(Vajjala and Meurers, 2014)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 850, |
| "end": 877, |
| "text": "(Vajjala and Meurers, 2014)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automated Readability Analysis", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Generally, for any text classification task, the type of text used for training the model is important in terms of how well it performs; training on more representative text tends to improve performance. The work in (Sheehan et al., 2013) shows that traditional readability measures underestimate the reading level of literary texts, and overestimate that of informational texts, such as history, science and mathematics articles. This is due, in part, to the vocabulary specific to the genre. Science texts have longer words, though they may be easier to infer from context. Literary texts, on the other hand, might have simpler words, but more complicated sentence structure. The work demonstrated that more accurate grade level estimates can be obtained by two stage classification: i) classify the text as either literary, informational, or mixed, and then ii) use a genre-dependent analyzer to estimate the level. In an analysis on how well a model trained on news and informational articles generalizes to the categories in CCS, the work in (Vajjala and Meurers, 2014) shows better performance on informational genre than literary texts. Training on more representative text, however, requires genre-specific annotated data.", |
| "cite_spans": [ |
| { |
| "start": 216, |
| "end": 238, |
| "text": "(Sheehan et al., 2013)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 1047, |
| "end": 1074, |
| "text": "(Vajjala and Meurers, 2014)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automated Readability Analysis", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Recurrent neural networks (RNNs) are adept at learning text representations, as demonstrated by language modeling (Mikolov et al., 2010 ) and text classification tasks (Yogatama et al., 2017) . Additional RNN structures have been proposed for improved representation, including tree LSTMs (Tai et al., 2015 ) and a hierarchical RNN (Yang et al., 2016) . In addition, hierarchical models have been proposed to better represent document structure (Yang et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 114, |
| "end": 135, |
| "text": "(Mikolov et al., 2010", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 168, |
| "end": 191, |
| "text": "(Yogatama et al., 2017)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 289, |
| "end": 306, |
| "text": "(Tai et al., 2015", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 332, |
| "end": 351, |
| "text": "(Yang et al., 2016)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 445, |
| "end": 464, |
| "text": "(Yang et al., 2016)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Classification with RNNs", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Attention mechanisms were introduced to improve neural machine translation tasks , and have also been shown to im-prove the performance of text classification (Yang et al., 2016) . In machine translation, attention is computed over the source sequence when predicting the words in the target sequence. This \"context\" attention is based on a score computed between the target hidden state h t and a subset of the source hidden states h s . The score can be computed in several ways, of which a general form is (Luong et al., 2015) . Attention has also been used for a variety of other language processing tasks. In particular, for text classification, attention weights are learned that target the final classification decision. This approach is referred to as \"self attention\" in (Lin et al., 2017 ), but will be referred to here as \"task attention.\" The hierarchical RNN in (Yang et al., 2016) uses task attention mechanisms at both word and sentence levels. Since our work builds on this model, it is described in further detail in section 4. In addition, we propose extensions of the hierarchical RNN that leverage attention in different ways, including combining the concept of context attention from machine translation with task attention to capture interdependence of adjoining sentences in a document.", |
| "cite_spans": [ |
| { |
| "start": 159, |
| "end": 178, |
| "text": "(Yang et al., 2016)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 509, |
| "end": 529, |
| "text": "(Luong et al., 2015)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 780, |
| "end": 797, |
| "text": "(Lin et al., 2017", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 875, |
| "end": 894, |
| "text": "(Yang et al., 2016)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Classification with RNNs", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "score(h t , h s ) = h T t W \u03b1 h T s", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Classification with RNNs", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "For our work we consider grade level as a proxy for linguistic complexity. Within a grade level, there is variability across different genres, which students are expected to learn. Since there is no publicly available data set for estimating grade level and text difficulty aimed at informational texts, we created a corpus using online science, history and social studies textbooks. The textbooks are written for either specific grades, or for a grade range, e.g. grades 6-8. There are a total of 44 science textbooks and 11 history and social studies textbooks, distributed evenly across grades K-12. Given the distribution of textbooks for each grade level, we decide to classify into one of six grade bands: K-1, 2-3, 4-5, 6-8, 9-10 and 11-12. Because of our interest in working with short texts, we split the books into paragraphs, using end line as the delimiter. 2 In addition to the textbooks, we also used the WeeBit corpus (Vajjala and Meurers, 2012) for training, again split into paragraphs.", |
| "cite_spans": [ |
| { |
| "start": 933, |
| "end": 960, |
| "text": "(Vajjala and Meurers, 2012)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Test set chapters K-1 25 -2-3 22 2 4-5 53 9 6-8 165 12 9-10 48 5 11-12 28 3 We have three different sources of test data: i) the CCS appendix B texts, ii) a subset of the online texts that we collected, 3 and iii) a collection of science assessment items.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 18, |
| "end": 84, |
| "text": "K-1 25 -2-3 22 2 4-5 53 9 6-8 165 12 9-10 48 5 11-12", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Grade Level All chapters", |
| "sec_num": null |
| }, |
| { |
| "text": "The CCS appendix B data is of interest because it has been extensively used for evaluating linguistic complexity models, e.g. in (Sheehan et al., 2013; Vajjala and Meurers, 2014) . It includes both informational and literary texts. We use document-level samples from the CCS data for comparison to prior work, and paragraph-level samples to provide a more direct comparison to the information test data we created.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 151, |
| "text": "(Sheehan et al., 2013;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 152, |
| "end": 178, |
| "text": "Vajjala and Meurers, 2014)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grade Level All chapters", |
| "sec_num": null |
| }, |
| { |
| "text": "For the informational texts, we selected chapters from multiple open source texts. Since we had so few texts at the K-1 level, the test data only included texts from higher grade levels, as shown in table 1. The paragraphs in these chapters were randomly assigned to test and validation sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grade Level All chapters", |
| "sec_num": null |
| }, |
| { |
| "text": "To assess the models on stand alone texts, we assembled a corpora of science assessment questions from (Khot et al., 2015; Clark et al., 2018) , AI2 Science Questions Mercury, 4 and AI2 Science Questions v2.1 (October 2017). 5 This test set includes 5470 questions for grades 6-8 from sources including standardized state and national tests. The average length of a question is 49 words.", |
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 122, |
| "text": "(Khot et al., 2015;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 123, |
| "end": 142, |
| "text": "Clark et al., 2018)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 225, |
| "end": 226, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grade Level All chapters", |
| "sec_num": null |
| }, |
| { |
| "text": "For training, two data configurations were used. When testing on the CCS data and the science assessment questions, there is no concern about overlap between training and test data, so all text can be used for training. We held out 10% of this data for analysis, and the remaining text is used for the D 1 training configuration. Data statistics are given in For the elementary grade levels, we have much less data than for middle school, and for high school, we have substantial training data with coarser labels (grades 9-12). To work around both issues, we first used all training samples to train the RNN to predict one of four labels (grades K-3, 4-5, 6-8 and 9-12). We then used the training data with fine labels to train to predict one of six labels. This approach was more effective than alternating the training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grade Level All chapters", |
| "sec_num": null |
| }, |
| { |
| "text": "This section introduces the four RNN structures for linguistic complexity estimation, including: a sequential RNN with task attention, a hierarchical attention network, and two proposed extensions of the hierarchical model using multi-head attention and attention over bidirectional context. In all cases, the resulting document vector is used in a final stage of ordinal regression to predict linguistic complexity. All systems are trained in an end-toend fashion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models for Estimating Linguistic Complexity", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The basic RNN model we consider is a sequential RNN with task attention, where the entire text in a paragraph or document is taken as a sequence. For a document t i with words K words w ik k \u2208 {1, 2, ..., K}, a bidirectional GRU is used to learn representation for each word h ik , using a forward run from w i1 to w iK , and a backward run from", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential RNN", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "w iK to w i1 . \u2212 \u2192 h ik = \u2212 \u2212\u2212 \u2192 GRU (w ik ) (1) \u2190 \u2212 h ik = \u2190 \u2212\u2212 \u2212 GRU (w ik ) (2) h ik = [ \u2212 \u2192 h ik , \u2190 \u2212 h ik ]", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Sequential RNN", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Attention is computed over the entire sequence \u03b1 ik , and used to compute the document representation v seq i :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential RNN", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "u ik = tanh(W s h ik + b s )", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Sequential RNN", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03b1 ik = exp(u T ik us) ik exp(u T ik us)", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Sequential RNN", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "v seq i = k \u03b1 ik h ik (6)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential RNN", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The document vector is used to predict reading level. Since the grade levels are ordered categorical labels, we implement ordinal regression using the proportional odds model (McCullagh, 1980) . For the reading level labels j \u2208 {1, 2, ..., J}, the cumulative probability is modeled as", |
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 192, |
| "text": "(McCullagh, 1980)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential RNN", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (y \u2264 j|v seq i ) = \u03c3(\u03b2 j \u2212 w T ord v seq i ),", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Sequential RNN", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where \u03c3(.) is the sigmoid function, and \u03b2 j and w ord are estimated during training by minimizing the negative log-likelihood", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential RNN", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "L ord = \u2212 i log(\u03c3(\u03b2 j(i) \u2212 w T ord v seq i ) \u2212 (8) \u03c3(\u03b2 j(i)\u22121 \u2212 w T ord v seq i )).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential RNN", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "While a sequential RNN has the capacity to capture discourse across sentences, it does not capture document structure. Therefore, we also explored the hierarchical attention network for text classification from (Yang et al., 2016) . The model builds a vector representation", |
| "cite_spans": [ |
| { |
| "start": 211, |
| "end": 230, |
| "text": "(Yang et al., 2016)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical RNN", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "v i for each document t i with L sentences s l , l \u2208 {1, 2, .., L}, each with T l words w lt , t \u2208 {1, 2, ..., T l }.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical RNN", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The first level of the hierarchy takes words as input and learns a representation for each word h lt using a bidirectional GRU. Task attention at the word level \u03b1 lt highlights words important for the classification task, and is computed using the word level context vector u w . The word representations are then averaged using attention weights to form a sentence representation s l", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical RNN", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03b1 lt = exp(u T lt uw) t exp(u T lt uw)", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Hierarchical RNN", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s l = t \u03b1 lt h lt ,", |
| "eq_num": "(10)" |
| } |
| ], |
| "section": "Hierarchical RNN", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where u lt = tanh(W w h lt + b w ) is a projection of the target hidden state for learning word-level attention. The second level of the hierarchy takes the sentence vectors as input, learns representation h l for them using a bidirectional GRU. Using a method similar to the word-level attention, a document representation v i is created using sentencelevel task attention \u03b1 l which is computed using the sentence level context vector u s", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical RNN", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03b1 l = exp(u T l us) l exp(u T l us)", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Hierarchical RNN", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "v i = l \u03b1 l h l ,", |
| "eq_num": "(12)" |
| } |
| ], |
| "section": "Hierarchical RNN", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where u l = tanh(W s h l +b s ) is analogous to u lt at the sentence level. The word-and sentence-level context vectors, u w and u s , as well as W w , W s , b w and b s , are learned during training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical RNN", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Work has shown that having multiple attention heads improves neural machine translation tasks (Vaswani et al., 2017) . To capture multiple aspects contributing to text complexity, we learn two sets of word level task attention over the word level GRU output. These two sets of sentence vectors feed into separate sentence-level GRUs to give us two document vectors by averaging using task attention weights at the sentence level. The document vectors are then concatenated to form the document representation. The multi-head attention RNN is shown in figure 1.", |
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 116, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Head Attention", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The hierarchical model is designed for representing document structure, however, the sentences within a document are encoded independently. To capture information across sentences, we extend the concept of context attention used in machine translation, using it to learn context vectors for adjoining sentences. We extend the hierarchical RNN by introducing bi-directional context with attention. Using the word level GRU output, a \"look-back\" context vector c l\u22121 (w lt ) is calculated using context attention over the preceding sentence, and a \"look-ahead\" context vector c l+1 (w lt ) using context attention over the following sentence for each word in the current sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical RNN with Bidirectional Context", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\u03b1 (l\u22121)t (w lt ) = exp(score(h lt ,h (l\u22121)t ))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical RNN with Bidirectional Context", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "where score(h lt , h kt ) = h lt W \u03b1 h T kt and a single W \u03b1 is used for computing the score in both directions. The context vectors are concatenated with the hidden state to form the new hidden state h lt .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical RNN with Bidirectional Context", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h lt = [c l\u22121 (w lt ), h lt , c l+1 (w lt )]", |
| "eq_num": "(17)" |
| } |
| ], |
| "section": "Hierarchical RNN with Bidirectional Context", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "The rest of the structure is the same as a hierarchical RNN, using equations 9-12 with h\u0303_lt instead of h_lt. Figure 2 shows the structure for calculating \"look-back\" context.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 110, |
| "end": 118, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Hierarchical RNN with Bidirectional Context", |
| "sec_num": "4.4" |
| }, |
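The look-back/look-ahead attention of equations 13-17 can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' TensorFlow implementation; the toy dimensions, random inputs, and function names are assumptions made for the example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def context_vector(h_cur, H_adj, W_alpha):
    """Attention-weighted context over an adjoining sentence.

    h_cur:   (d,)   hidden state of one word in the current sentence
    H_adj:   (T, d) hidden states of the preceding or following sentence
    W_alpha: (d, d) shared bilinear scoring matrix (a single W_alpha is
             used for both the look-back and look-ahead directions)
    """
    scores = H_adj @ (h_cur @ W_alpha)   # score(h_lt, h_kt) = h_lt W h_kt^T
    alpha = softmax(scores)              # eqs. (13)/(15)
    return alpha @ H_adj                 # eqs. (14)/(16)

# Toy dimensions: d = 4; a 3-word sentence flanked by 2- and 5-word sentences.
rng = np.random.default_rng(0)
d = 4
W_alpha = rng.normal(size=(d, d))
H_prev, H_cur, H_next = (rng.normal(size=(t, d)) for t in (2, 3, 5))

# eq. (17): concatenate look-back context, hidden state, look-ahead context.
h_tilde = [np.concatenate([context_vector(h, H_prev, W_alpha), h,
                           context_vector(h, H_next, W_alpha)])
           for h in H_cur]
```

Each word's new hidden state is three times the original width, which is why the sentence-level GRU in the BCA model consumes larger inputs than in the plain hierarchical model.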
| { |
| "text": "The implementation uses the TensorFlow library (Abadi et al., 2016). 6 All RNNs use GRUs with layer normalization (Ba et al., 2016), trained using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001. Regularization was done via dropout. The validation set was used for hyper-parameter tuning, with a grid search over dropout rate, number of epochs, and hidden dimension of the GRU cells. Good results for all four architectures are obtained with a batch size of 10, a dropout rate of 0.5-0.7, a cell size of 75-250 for the word-level GRU, and a cell size of 40-75 for the sentence-level GRU. For the RNN, we also trained a version with a larger word-level hidden layer cell size of 600. Pre-trained GloVe embeddings 7 are used for all models (Pennington et al., 2014), using a vocabulary size of 65000-75000. 8 The out-of-vocabulary (OOV) percentage was 3% on the CCS test set and 0.5% on the informational test set. All OOV words were mapped to an 'UNK' token. The text was lower-cased and, for the hierarchical models, split into sentences using the natural language toolkit (NLTK) (Loper and Bird, 2002).", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 73, |
| "text": "(Abadi et al., 2016", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 122, |
| "end": 139, |
| "text": "(Ba et al., 2016)", |
| "ref_id": null |
| }, |
| { |
| "start": 765, |
| "end": 790, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 833, |
| "end": 834, |
| "text": "8", |
| "ref_id": null |
| }, |
| { |
| "start": 1112, |
| "end": 1134, |
| "text": "(Loper and Bird, 2002)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation Details", |
| "sec_num": "4.5" |
| }, |
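The vocabulary truncation and OOV-to-'UNK' mapping described above can be sketched as follows. This is a minimal illustration; the function names (`build_vocab`, `map_oov`) and the toy documents are hypothetical, not taken from the released code.

```python
from collections import Counter

def build_vocab(tokenized_docs, max_size=65000):
    """Keep the max_size most frequent tokens across all documents."""
    counts = Counter(w for doc in tokenized_docs for w in doc)
    return {w for w, _ in counts.most_common(max_size)}

def map_oov(tokens, vocab, unk="UNK"):
    """Replace any token outside the vocabulary with the UNK symbol."""
    return [w if w in vocab else unk for w in tokens]

docs = [["the", "cell", "membrane"], ["the", "nucleus"]]
vocab = build_vocab(docs, max_size=3)
print(map_oov(["the", "mitochondria"], vocab))  # → ['the', 'UNK']
```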
| { |
| "text": "We test our models on the two science test sets, as well as on the CCS appendix B document-level texts and a paragraph-level version of these texts. We also evaluate the best-performing model on the middle school science questions data set. Since both the true and predicted reading levels are ordered variables, we use Spearman's rank correlation as the evaluation metric to capture the monotonic relation between the predictions and the true levels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "5" |
| }, |
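Spearman's rank correlation is the Pearson correlation of the (tie-averaged) ranks. The sketch below is a self-contained NumPy implementation for illustration; in practice one would simply call `scipy.stats.spearmanr`.

```python
import numpy as np

def rankdata(x):
    """Assign 1-based ranks, averaging ranks over tied values."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    sx = x[order]
    ranks = np.empty(len(x))
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1                           # extend the tie group
        ranks[order[i:j + 1]] = (i + j) / 2 + 1   # average rank for the group
        i = j + 1
    return ranks

def spearman(a, b):
    """Pearson correlation of the rank vectors."""
    ra, rb = rankdata(a), rankdata(b)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical true vs. predicted reading levels for six texts.
rho = spearman([1, 1, 2, 3, 4, 5], [1, 2, 2, 3, 3, 5])
print(round(rho, 3))  # → 0.94
```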
| { |
| "text": "As a baseline, we use the WeeBit linear regression system (Vajjala and Meurers, 2014). The WeeBit system uses knowledge-based features as input to a linear regression model to predict reading level as a number between 1 and 5.5, which maps to text appropriate for readers 7-16 years of age. The feature set includes parts-of-speech (e.g., density of different parts-of-speech), lexical (e.g., measures of lexical variation), syntactic (e.g., number of verb phrases), morphological (e.g., ratio of transitive verbs to total words), and psycholinguistic (e.g., age of acquisition) features. There are no features related to discourse, so features can be computed for sentence-level texts. The system was trained on only a subset of the data that our system was trained on, which puts it at a disadvantage; we were not able to retrain it.", |
| "cite_spans": [ |
| { |
| "start": 58, |
| "end": 84, |
| "text": "(Vajjala and Meurers, 2014", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Results for the different models:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results by Genre", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 sequential RNN with self attention (RNN),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results by Genre", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 large sequential RNN with self attention (RNN 600),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results by Genre", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 hierarchical RNN with attention at the word and sentence level (HAN),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results by Genre", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 hierarchical RNN with bidirectional context and attention (BCA), and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results by Genre", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 multi-head attention (MHA)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results by Genre", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "are shown in table 3, together with the results for the WeeBit system, which has state-of-the-art results on the CCS documents. For the CCS data, both D 1 and D 2 training configurations are used for the neural models; only D 2 is used for the informational test set. For all of these models, the hidden layer dimension at the word level was between 125 and 250. We also trained a sequential RNN with a larger hidden layer dimension of 600. The HAN does better than a sequential RNN for document-level samples; the converse is true for paragraph-level texts. The RNN with the larger hidden layer dimension performs better for longer texts, while the performance of the smaller-dimension RNN deteriorates with increasing text length. The BCA model seems to generalize to longer documents and new genres better than the other neural networks. Figure 3 shows the error distribution for BCA(D 1 ), in terms of the distance of the predicted level from the true level, broken down by genre on the 168 CCS documents. The level of informational texts is often over-predicted, which we hypothesize is largely due to articles related to United States history and the constitution: the only training data for our models on that subject is in the grades 6-8 and 9-12 categories. The performance for literary and mixed texts, on the other hand, is roughly unbiased; this shows that the model generalizes to non-informational texts, even though there are no literary text samples in the training data.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 835, |
| "end": 843, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results by Genre", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Figures 4 and 5 show the performance of our models and the WeeBit model as a function of document length, on both the informational paragraphs test set and the CCS paragraph-level test set. The results indicate that for shorter texts, particularly under 100 words, neural models tend to do better. Even for a mixture of genres, the model with bidirectional context performs better than the feature-based regression model, as shown in figure 5. The WeeBit results on shorter texts would likely improve if the system were trained on the same training set used for the neural models. However, we hypothesize that the feature-based approach is less well suited to shorter documents because the feature vector is sparser. Comparing the CCS document- and paragraph-level test sets, the average percentage of zero-valued features is 28% for document-level texts and 44% for paragraph-level texts; the sparsest vectors are 40% and 81% zeros for document- and paragraph-level texts, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results by Length", |
| "sec_num": "5.2" |
| }, |
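The feature-sparsity comparison amounts to measuring, per document, the fraction of engineered features that are zero. The feature matrix below is hypothetical; only the zero-fraction statistics mirror the analysis above.

```python
import numpy as np

def sparsity_stats(X):
    """Mean and worst-case fraction of zero-valued features,
    given an (n_docs, n_features) feature matrix."""
    zero_frac = (X == 0).mean(axis=1)
    return zero_frac.mean(), zero_frac.max()

# Hypothetical engineered-feature matrix: 3 documents, 5 features each.
X = np.array([[0.2, 0.0, 1.0, 0.0, 3.0],
              [0.0, 0.0, 0.0, 0.5, 0.0],
              [1.0, 2.0, 0.0, 0.1, 0.4]])
avg, worst = sparsity_stats(X)
print(avg, worst)  # mean zero-fraction ≈ 0.47; sparsest document: 0.8
```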
| { |
| "text": "Finally, we apply both the baseline WeeBit system and our best model (BCA trained on D 1 ) to the set of 5470 grade 6-8 science questions. The results are shown in figures 6 and 7, where the grade 6-8 category (ages 11-14) corresponds to predicted level 3 for BCA and predicted level 4 for WeeBit. The results indicate that BCA predictions are better aligned with human rankings than the baseline's. As expected, grade 6 questions are more likely than grade 8 questions to be predicted as less difficult.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results for Science Assessment Questions", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Attention can help provide insight into what the model is learning. In the analyses here, all attention values are normalized by dividing by the highest attention value in the sentence/document, to account for different sequence lengths. Figure 8 shows the word-level attention of the BCA and HAN for a sample text from the science assessment questions test set. (Attention weights in the figure are smoothed to reflect the fact that a word vector from a bidirectional RNN reflects the word's context.) The results show that attention weights are sparser for HAN than for BCA. At the sentence level (not shown here), the BCA sentence weights tend to be more uniformly distributed, whereas the HAN weights are again more selective. Another aspect of the attention is that a word does not have the same attention level for all of its occurrences in a document. We look at the maximum and minimum attention values as a function of word frequency for each grade band, shown in figure 9 for grade 6-8 science assessment questions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 237, |
| "end": 245, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Attention Visualization", |
| "sec_num": "5.4" |
| }, |
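The length normalization described above, dividing each attention weight by the largest weight in the sentence or document, can be sketched as:

```python
import numpy as np

def normalize_attention(weights):
    """Scale attention weights so the largest weight in a
    sentence/document is 1, making sequences of different
    lengths comparable."""
    w = np.asarray(weights, dtype=float)
    return w / w.max()

print(normalize_attention([0.1, 0.3, 0.6]))  # largest weight maps to 1.0
```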
| { |
| "text": "The pattern is similar for each grade band in the validation and test sets. The minimum attention value assigned to a word drops with increasing word frequency, while the maximum value increases. This suggests that the attention weights are more confident for more frequent words, such as of. Words like fusion and m/s get high maximum attention values, despite not being as frequent as words like of and the; this may indicate that they are likely to contribute to linguistic complexity. The fact that transformation has a high minimum is also likely an indicator of its importance. For HAN without bidirectional context, a similar visualization shows a similar trend, but the attention weights tend to be lower, for both minimum and maximum values. (Figure 6: BCA predicted levels for middle school science assessment questions.)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 333, |
| "end": 341, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Attention Visualization", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We find that sentence-end tokens (period, exclamation mark, and question mark) have high average attention weights, ranging from 0.54 to 0.81, while sentence-internal punctuation (comma, colon, and semicolon) gets slightly lower weights, ranging from 0.20 to 0.47. The trend is similar across all grades. These high attention values might be due to punctuation serving as a proxy for sentence structure. It is interesting to note that the question mark gets a higher minimum attention value than the period, despite being high frequency. It may be that questions carry information that is particularly relevant to informational text difficulty.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention Visualization", |
| "sec_num": "5.4" |
| }, |
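The per-token aggregation behind the punctuation and word-frequency analyses, collecting minimum, maximum, and mean normalized attention for each token across documents, can be sketched as follows; the token lists and weights here are made up for illustration.

```python
from collections import defaultdict

def attention_by_token(docs):
    """Aggregate (min, max, mean) normalized attention per token.
    `docs` is a list of (tokens, normalized_weights) pairs."""
    values = defaultdict(list)
    for tokens, weights in docs:
        for tok, w in zip(tokens, weights):
            values[tok].append(w)
    return {t: (min(v), max(v), sum(v) / len(v)) for t, v in values.items()}

docs = [(["energy", "?", "."], [0.4, 0.9, 0.7]),
        (["energy", "."], [0.6, 0.5])]
stats = attention_by_token(docs)
print(stats["."])  # min = 0.5, max = 0.7, mean ≈ 0.6
```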
| { |
| "text": "Our work differs from existing models for estimating text difficulty in that we do not use engineered features. There are advantages and disadvantages to both approaches, which we briefly discuss here. Models using engineered features grounded in language acquisition research offer interpretability and insight into which specific linguistic features contribute to text difficulty. An additional advantage of using engineered features in a regression or classification model is that less training data is required.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "However, given both the evolving theories in language acquisition and the large number of variables that impact second language acquisition, the methodologies used in language acquisition research have certain limitations. For example, the number of variables that can be considered in a study is practically limited, the sample population is often small, and the choice of qualitative vs. quantitative methodology can influence outcomes (more details in (Larsen-Freeman and Long, 2014; Mitchell et al., 2013) ). These limitations can carry over into the feature engineering process. Using a model with raw text as input ensures that these constraints are not built into the model; the performance of the system is not limited by the features provided. Of course, performance is limited by the training data, both in terms of the cost of collection and any biases inherent in the data. In addition, with advances in neural architectures such as attention modeling, there may be opportunities for identifying specific aspects of texts that are particularly difficult, though research in this direction is still in its early stages.", |
| "cite_spans": [ |
| { |
| "start": 464, |
| "end": 495, |
| "text": "(Larsen-Freeman and Long, 2014;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 496, |
| "end": 518, |
| "text": "Mitchell et al., 2013)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In summary, this work explored different neural architectures for linguistic complexity analysis, to mitigate the accuracy limitations of systems based on engineered features. Experimental results show that it is possible to achieve high accuracy on texts shorter than 100 words using RNNs with attention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Using hierarchical structure improves results, particularly with attention models that leverage bidirectional sentence context. Testing on a mix of genres shows that the best neural model can generalize to subjects beyond those it is trained on, though it performs slightly worse than a feature-based regression model on texts longer than 100 words. More training data from other genres would likely reduce the performance gap. Analysis of attention weights can provide insights into which phrases/sentences are important, both at the aggregate and the sample level. Developing new methods for analyzing attention may be useful both for improving model performance and for providing more interpretable results for educators. Two aspects not considered in this work are explicit representations of syntax and discourse structure. Syntax could be incorporated by concatenating word and dependency embeddings at the token level. Our BCA model was designed to capture cross-sentence coherence and coordination, but it may be useful to extend the hierarchy for longer documents and/or introduce explicit models of the types of discourse features used in Coh-Metrix (Graesser et al., 2004).", |
| "cite_spans": [ |
| { |
| "start": 1153, |
| "end": 1176, |
| "text": "(Graesser et al., 2004)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "http://www.corestandards.org/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In splitting the text into paragraphs, we are implicitly assuming that all paragraphs have the same linguistic complexity as the textbook, which is probably not the case. Thus, there will be noise in both the training and test data, so some variation in the predicted levels is to be expected.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Available at https://tinyurl.com/yc59hlgj. 4 http://data.allenai.org/ai2-science-questions-mercury/ 5 http://data.allenai.org/ai2-science-questions/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03b1_(l\u22121)t(w_lt) = exp(score(h_lt, h_(l\u22121)t)) / \u2211_t exp(score(h_lt, h_(l\u22121)t)) (13); c_(l\u22121)(w_lt) = \u2211_t \u03b1_(l\u22121)t(w_lt) h_(l\u22121)t (14); \u03b1_(l+1)t(w_lt) = exp(score(h_lt, h_(l+1)t)) / \u2211_t exp(score(h_lt, h_(l+1)t)) (15); c_(l+1)(w_lt) = \u2211_t \u03b1_(l+1)t(w_lt) h_(l+1)t (16)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The code and trained models are available at https://github.com/Farahn/Liguistic-Complexity. 7 http://nlp.stanford.edu/data/glove.840B.300d.zip 8 In-vocabulary words not present in GloVe had randomly initialized word embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank Dr. Meurers, Professor at the University of T\u00fcbingen, and Dr. Vajjala-Balakrishna, Assistant Professor at Iowa State University, for sharing the WeeBit training corpus, their trained readability assessment model, and the Common Core test corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems", |
| "authors": [ |
| { |
| "first": "Mart\u00edn", |
| "middle": [], |
| "last": "Abadi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Agarwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Barham", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Brevdo", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhifeng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Craig", |
| "middle": [], |
| "last": "Citro", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Andy", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthieu", |
| "middle": [], |
| "last": "Devin", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1603.04467" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2016. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Psychometric issues in the ELL assessment and special education eligibility", |
| "authors": [ |
| { |
| "first": "Jamal", |
| "middle": [], |
| "last": "Abedi", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Teachers College Record", |
| "volume": "108", |
| "issue": "11", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jamal Abedi. 2006. Psychometric issues in the ELL as- sessment and special education eligibility. Teachers College Record, 108(11):2282.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "The language factor in mathematics tests", |
| "authors": [ |
| { |
| "first": "Jamal", |
| "middle": [], |
| "last": "Abedi", |
| "suffix": "" |
| }, |
| { |
| "first": "Carol", |
| "middle": [], |
| "last": "Lord", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Applied Measurement in Education", |
| "volume": "14", |
| "issue": "3", |
| "pages": "219--234", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jamal Abedi and Carol Lord. 2001. The language fac- tor in mathematics tests. Applied Measurement in Education, 14(3):219-234.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1409.0473" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Readability revisited: The new Dale-Chall readability formula", |
| "authors": [ |
| { |
| "first": "Jeanne", |
| "middle": [], |
| "last": "Sternlicht", |
| "suffix": "" |
| }, |
| { |
| "first": "Chall", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Edgar", |
| "middle": [], |
| "last": "Dale", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeanne Sternlicht Chall and Edgar Dale. 1995. Read- ability revisited: The new Dale-Chall readability formula. Brookline Books.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Van Merri\u00ebnboer", |
| "suffix": "" |
| }, |
| { |
| "first": "Caglar", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Fethi", |
| "middle": [], |
| "last": "Bougares", |
| "suffix": "" |
| }, |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1406.1078" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Think you have solved question answering? try arc, the ai2 reasoning challenge", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Isaac", |
| "middle": [], |
| "last": "Cowhey", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| }, |
| { |
| "first": "Tushar", |
| "middle": [], |
| "last": "Khot", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Sabharwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Carissa", |
| "middle": [], |
| "last": "Schoenick", |
| "suffix": "" |
| }, |
| { |
| "first": "Oyvind", |
| "middle": [], |
| "last": "Tafjord", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1803.05457" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question an- swering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A computer readability formula designed for machine scoring", |
| "authors": [ |
| { |
| "first": "Meri", |
| "middle": [], |
| "last": "Coleman", |
| "suffix": "" |
| }, |
| { |
| "first": "Ta", |
| "middle": [ |
| "Lin" |
| ], |
| "last": "Liau", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "Journal of Applied Psychology", |
| "volume": "60", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Meri Coleman and Ta Lin Liau. 1975. A computer readability formula designed for machine scoring. Journal of Applied Psychology, 60(2):283.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A language modeling approach to predicting reading difficulty", |
| "authors": [ |
| { |
| "first": "Kevyn", |
| "middle": [], |
| "last": "Collins-Thompson", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "P" |
| ], |
| "last": "Callan", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "193--200", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevyn Collins-Thompson and James P. Callan. 2004. A language modeling approach to predicting read- ing difficulty. In Human Language Technology Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics, HLT-NAACL 2004, Boston, Massachusetts, USA, May 2-7, 2004, pages 193-200.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A comparison of features for automatic readability assessment", |
| "authors": [ |
| { |
| "first": "Lijun", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Jansche", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Huenerfauth", |
| "suffix": "" |
| }, |
| { |
| "first": "No\u00e9mie", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters", |
| "volume": "", |
| "issue": "", |
| "pages": "276--284", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lijun Feng, Martin Jansche, Matt Huenerfauth, and No\u00e9mie Elhadad. 2010. A comparison of features for automatic readability assessment. In Proceed- ings of the 23rd International Conference on Com- putational Linguistics: Posters, pages 276-284. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Coh-metrix. Educational Researcher", |
| "authors": [ |
| { |
| "first": "Arthur", |
| "middle": [ |
| "C" |
| ], |
| "last": "Graesser", |
| "suffix": "" |
| }, |
| { |
| "first": "Danielle", |
| "middle": [ |
| "S" |
| ], |
| "last": "Mcnamara", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonna", |
| "middle": [ |
| "M" |
| ], |
| "last": "Kulikowich", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "40", |
| "issue": "", |
| "pages": "223--234", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arthur C. Graesser, Danielle S. McNamara, and Jonna M. Kulikowich. 2011. Coh-metrix. Educa- tional Researcher, 40(5):223-234.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Coh-metrix: Analysis of text on cohesion and language", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Arthur", |
| "suffix": "" |
| }, |
| { |
| "first": "Danielle", |
| "middle": [ |
| "S" |
| ], |
| "last": "Graesser", |
| "suffix": "" |
| }, |
| { |
| "first": "Max", |
| "middle": [ |
| "M" |
| ], |
| "last": "Mcnamara", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiqiang", |
| "middle": [], |
| "last": "Louwerse", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Behavior Research Methods", |
| "volume": "36", |
| "issue": "2", |
| "pages": "193--202", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arthur C Graesser, Danielle S McNamara, Max M Louwerse, and Zhiqiang Cai. 2004. Coh-metrix: Analysis of text on cohesion and language. Behav- ior Research Methods, 36(2):193-202.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The language factor in elementary mathematics assessments: Computational skills and applied problem solving in a multidimensional irt framework", |
| "authors": [ |
| { |
| "first": "Marian", |
| "middle": [], |
| "last": "Hickendorff", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Applied Measurement in Education", |
| "volume": "26", |
| "issue": "4", |
| "pages": "253--278", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marian Hickendorff. 2013. The language factor in el- ementary mathematics assessments: Computational skills and applied problem solving in a multidimen- sional irt framework. Applied Measurement in Edu- cation, 26(4):253-278.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Markov logic networks for natural language question answering", |
| "authors": [ |
| { |
| "first": "Tushar", |
| "middle": [], |
| "last": "Khot", |
| "suffix": "" |
| }, |
| { |
| "first": "Niranjan", |
| "middle": [], |
| "last": "Balasubramanian", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Gribkoff", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Sabharwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1507.03045" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tushar Khot, Niranjan Balasubramanian, Eric Gribkoff, Ashish Sabharwal, Peter Clark, and Oren Etzioni. 2015. Markov logic networks for natural language question answering. arXiv preprint arXiv:1507.03045.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "Peter" |
| ], |
| "last": "Kincaid", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "P" |
| ], |
| "last": "Fishburne", |
| "suffix": "Jr" |
| }, |
| { |
| "first": "Richard", |
| "middle": [ |
| "L" |
| ], |
| "last": "Rogers", |
| "suffix": "" |
| }, |
| { |
| "first": "Brad", |
| "middle": [ |
| "S" |
| ], |
| "last": "Chissom", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability in- dex, fog count and flesch reading ease formula) for navy enlisted personnel. Technical report, Naval Technical Training Command Millington TN Re- search Branch.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.6980" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "An introduction to second language acquisition research", |
| "authors": [ |
| { |
| "first": "Diane", |
| "middle": [], |
| "last": "Larsen-Freeman", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael H", |
| "middle": [], |
| "last": "Long", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diane Larsen-Freeman and Michael H Long. 2014. An introduction to second language acquisition re- search. Routledge.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A structured self-attentive sentence embedding", |
| "authors": [ |
| { |
| "first": "Zhouhan", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Minwei", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Cicero", |
| "middle": [ |
| "Nogueira" |
| ], |
| "last": "dos Santos", |
| "suffix": "" |
| }, |
| { |
| "first": "Mo", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Xiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Bowen", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proc. ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proc. ICLR.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Nltk: The natural language toolkit", |
| "authors": [ |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Loper", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bird", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the ACL-02 Workshop on Effective tools and methodologies for teaching natural language processing and computational linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "63--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. In Proceedings of the ACL-02 Workshop on Effective tools and methodologies for teaching natural language processing and computa- tional linguistics-Volume 1, pages 63-70. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Effective approaches to attentionbased neural machine translation", |
| "authors": [ |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1508.04025" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Regression models for ordinal data", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Mccullagh", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "Journal of the royal statistical society. Series B (Methodological)", |
| "volume": "", |
| "issue": "", |
| "pages": "109--142", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter McCullagh. 1980. Regression models for ordinal data. Journal of the royal statistical society. Series B (Methodological), pages 109-142.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Recurrent neural network based language model", |
| "authors": [ |
| { |
| "first": "Tom\u00e1\u0161", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Karafi\u00e1t", |
| "suffix": "" |
| }, |
| { |
| "first": "Luk\u00e1\u0161", |
| "middle": [], |
| "last": "Burget", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "\u010cernock\u00fd", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Khudanpur", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Eleventh Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Jan \u010cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Measures of text difficulty: Testing their predictive value for grade levels and student performance", |
| "authors": [ |
| { |
| "first": "Jessica", |
| "middle": [], |
| "last": "Nelson", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Perfetti", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Liben", |
| "suffix": "" |
| }, |
| { |
| "first": "Meredith", |
| "middle": [], |
| "last": "Liben", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jessica Nelson, Charles Perfetti, David Liben, and Meredith Liben. 2012. Measures of text difficulty: Testing their predictive value for grade levels and student performance. Council of Chief State School Officers, Washington, DC.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "A machine learning approach to reading level assessment. Computer speech & language", |
| "authors": [ |
| { |
| "first": "Sarah", |
| "middle": [ |
| "E" |
| ], |
| "last": "Petersen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mari", |
| "middle": [], |
| "last": "Ostendorf", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "23", |
| "issue": "", |
| "pages": "89--106", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sarah E Petersen and Mari Ostendorf. 2009. A ma- chine learning approach to reading level assessment. Computer speech & language, 23(1):89-106.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Reading level assessment using support vector machines and statistical language models", |
| "authors": [ |
| { |
| "first": "Sarah", |
| "middle": [ |
| "E" |
| ], |
| "last": "Schwarm", |
| "suffix": "" |
| }, |
| { |
| "first": "Mari", |
| "middle": [], |
| "last": "Ostendorf", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "523--530", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sarah E Schwarm and Mari Ostendorf. 2005. Reading level assessment using support vector machines and statistical language models. In Proceedings of the 43rd Annual Meeting on Association for Computa- tional Linguistics, pages 523-530. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "A two-stage approach for generating unbiased estimates of text complexity", |
| "authors": [ |
| { |
| "first": "Kathleen", |
| "middle": [ |
| "M" |
| ], |
| "last": "Sheehan", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Flor", |
| "suffix": "" |
| }, |
| { |
| "first": "Diane", |
| "middle": [], |
| "last": "Napolitano", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Workshop on Natural Language Processing for Improving Textual Accessibility", |
| "volume": "", |
| "issue": "", |
| "pages": "49--58", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kathleen M Sheehan, Michael Flor, and Diane Napoli- tano. 2013. A two-stage approach for generating un- biased estimates of text complexity. In Proceedings of the Workshop on Natural Language Processing for Improving Textual Accessibility, pages 49-58.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "The lexile framework", |
| "authors": [ |
| { |
| "first": "AJ", |
| "middle": [], |
| "last": "Stenner", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Horabin", |
| "suffix": "" |
| }, |
| { |
| "first": "Dean", |
| "middle": [ |
| "R" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Malbert", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "AJ Stenner, Ivan Horabin, Dean R Smith, and Malbert Smith. 1988. The lexile framework. Durham, NC: MetaMetrics.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Improved semantic representations from tree-structured long short-term memory networks", |
| "authors": [ |
| { |
| "first": "Kai Sheng", |
| "middle": [], |
| "last": "Tai", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1503.00075" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. arXiv preprint arXiv:1503.00075.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "On improving the accuracy of readability classification using insights from second language acquisition", |
| "authors": [ |
| { |
| "first": "Sowmya", |
| "middle": [], |
| "last": "Vajjala", |
| "suffix": "" |
| }, |
| { |
| "first": "Detmar", |
| "middle": [], |
| "last": "Meurers", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Seventh Workshop on Building Educational Applications Using NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "163--173", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sowmya Vajjala and Detmar Meurers. 2012. On im- proving the accuracy of readability classification us- ing insights from second language acquisition. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 163- 173. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Readability assessment for text simplification: From analysing documents to identifying sentential simplifications", |
| "authors": [ |
| { |
| "first": "Sowmya", |
| "middle": [], |
| "last": "Vajjala", |
| "suffix": "" |
| }, |
| { |
| "first": "Detmar", |
| "middle": [], |
| "last": "Meurers", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ITL-International Journal of Applied Linguistics", |
| "volume": "165", |
| "issue": "2", |
| "pages": "194--222", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sowmya Vajjala and Detmar Meurers. 2014. Read- ability assessment for text simplification: From analysing documents to identifying sentential sim- plifications. ITL-International Journal of Applied Linguistics, 165(2):194-222.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "6000--6010", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 6000-6010.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Hierarchical attention networks for document classification", |
| "authors": [ |
| { |
| "first": "Zichao", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Diyi", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "J" |
| ], |
| "last": "Smola", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [ |
| "H" |
| ], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1480--1489", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J Smola, and Eduard H Hovy. 2016. Hi- erarchical attention networks for document classifi- cation. In HLT-NAACL, pages 1480-1489.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Generative and discriminative text classification with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Dani", |
| "middle": [], |
| "last": "Yogatama", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Wang", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1703.01898" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dani Yogatama, Chris Dyer, Wang Ling, and Phil Blun- som. 2017. Generative and discriminative text clas- sification with recurrent neural networks. arXiv preprint arXiv:1703.01898.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Figure 1: RNN with Multi-Head Attention; Figure 2: RNN with Bidirectional Context and Attention", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Performance vs. text length for informational paragraphs BCA(D 2 )", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "text": "Performance vs. maximum text length for CCS paragraphs BCA(D 1 )", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "text": "WeeBit predicted levels for middle school science assessment questions", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "text": "Word-level attention visualization for BCA (top) and HAN (bottom) for a middle school science assessment question; maximum and minimum values of attention as a function of word count for BCA", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "text": "Chapter-based test data split", |
| "html": null, |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "text": "About 20% of the training sam-", |
| "html": null, |
| "content": "<table><tr><td>Grade Level</td><td>Train Samples</td><td>Mean Length</td></tr><tr><td>K-1</td><td>739</td><td>24.42</td></tr><tr><td>2-3</td><td>723</td><td>62.05</td></tr><tr><td>4-5</td><td>4570</td><td>63.82</td></tr><tr><td>6-8</td><td>15940</td><td>74.79</td></tr><tr><td>9-10</td><td>3051</td><td>68.24</td></tr><tr><td>11-12</td><td>2301</td><td>75.28</td></tr></table>", |
| "num": null |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "text": "Training data (D 1 ) with mean length of text in words ples (5152) are from WeeBit, spread across grades 2-12. For testing on all three sets, we defined a training configuration D 2 that did not include any text from chapters overlapping with the test data, so the training set is somewhat smaller than for D 1 , except for grades K-1. The same WeeBit training data was included in both cases.", |
| "html": null, |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "text": "", |
| "html": null, |
| "content": "<table><tr><td>: Results (Spearman Rank Correlation)</td></tr></table>", |
| "num": null |
| } |
| } |
| } |
| } |