| { |
| "paper_id": "E14-1031", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T10:40:31.363439Z" |
| }, |
| "title": "Assessing the relative reading level of sentence pairs for text simplification", |
| "authors": [ |
| { |
| "first": "Sowmya", |
| "middle": [], |
| "last": "Vajjala", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Sprachwissenschaft Universit\u00e4t T\u00fcbingen", |
| "location": {} |
| }, |
| "email": "sowmya@sfs.uni-tuebingen.de" |
| }, |
| { |
| "first": "Detmar", |
| "middle": [], |
| "last": "Meurers", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Sprachwissenschaft Universit\u00e4t T\u00fcbingen", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "While the automatic analysis of the readability of texts has a long history, the use of readability assessment for text simplification has received only little attention so far. In this paper, we explore readability models for identifying differences in the reading levels of simplified and unsimplified versions of sentences. Our experiments show that a relative ranking is preferable to an absolute binary one and that the accuracy of identifying relative simplification depends on the initial reading level of the unsimplified version. The approach is particularly successful in classifying the relative reading level of harder sentences. In terms of practical relevance, the approach promises to be useful for identifying particularly relevant targets for simplification and to evaluate simplifications given specific readability constraints.", |
| "pdf_parse": { |
| "paper_id": "E14-1031", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "While the automatic analysis of the readability of texts has a long history, the use of readability assessment for text simplification has received only little attention so far. In this paper, we explore readability models for identifying differences in the reading levels of simplified and unsimplified versions of sentences. Our experiments show that a relative ranking is preferable to an absolute binary one and that the accuracy of identifying relative simplification depends on the initial reading level of the unsimplified version. The approach is particularly successful in classifying the relative reading level of harder sentences. In terms of practical relevance, the approach promises to be useful for identifying particularly relevant targets for simplification and to evaluate simplifications given specific readability constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Text simplification essentially is the process of rewriting a given text to make it easier to process for a given audience. The target audience can either be human users trying to understand a text or machine applications, such as a parser analyzing text. Text simplification has been used in a variety of application scenarios, from providing simplified newspaper texts for aphasic readers to supporting the extraction of protein-protein interactions in the biomedical domain (Jonnalagadda and Gonzalez, 2009) .", |
| "cite_spans": [ |
| { |
| "start": 477, |
| "end": 510, |
| "text": "(Jonnalagadda and Gonzalez, 2009)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A related field of research is automatic readability assessment, which can be useful for evaluating text simplification. It can also be relevant for intermediate simplification steps, such as the identification of target sentences for simplification. Yet, so far there has only been little research connecting the two subfields, possibly because readability research typically analyzes documents, whereas simplification approaches generally targeted lexical and syntactic aspects at the sentence level. In this paper, we attempt to bridge this gap between readability and simplification by studying readability at a sentence level and exploring how well can a readability model identify the differences between unsimplified and simplified sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our main research questions in this paper are: 1. Can the readability features that worked at the document level successfully be used at the sentence level? 2. How accurately can we identify the differences in the sentential reading level before and after simplification? To pursue these questions, we started with constructing a documentlevel readability model. We then applied it to normal and simplified versions of sentences drawn from Wikipedia and Simple Wikipedia.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As context of our work, we first discuss relevant related research. Section 2 then describes the corpora and the features we used to construct our readability model. Section 3 discusses the performance of our readability model in comparison with other existing systems. Sections 4 and 5 present our experiments with sentence level readability analysis and the results. In Section 6 we present our conclusions and plans for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Research into automatic text simplification essentially started with the idea of splitting long sentences into multiple shorter sentences to improve parsing efficiency . This was followed by rule-based approaches targeting human and machine uses (Carroll et al., 1999; Siddharthan, 2002 Siddharthan, , 2004 .", |
| "cite_spans": [ |
| { |
| "start": 246, |
| "end": 268, |
| "text": "(Carroll et al., 1999;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 269, |
| "end": 286, |
| "text": "Siddharthan, 2002", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 287, |
| "end": 306, |
| "text": "Siddharthan, , 2004", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "With the availability of a sentence-aligned corpus based on Wikipedia and SimpleWikipedia texts, data-driven approaches, partly inspired by statistical machine translation, appeared (Specia, 2010; Zhu et al., 2010; Bach et al., 2011; Coster and Kauchak, 2011; Woodsend and Lapata, 2011) .", |
| "cite_spans": [ |
| { |
| "start": 182, |
| "end": 196, |
| "text": "(Specia, 2010;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 197, |
| "end": 214, |
| "text": "Zhu et al., 2010;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 215, |
| "end": 233, |
| "text": "Bach et al., 2011;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 234, |
| "end": 259, |
| "text": "Coster and Kauchak, 2011;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 260, |
| "end": 286, |
| "text": "Woodsend and Lapata, 2011)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "While simplification methods have evolved, understanding which parts of a text need to be simplified and methods for evaluating the simplified text so far received only little attention. The use of readability assessment for simplification has mostly been restricted to using traditional readability formulae for evaluating or generating simplified text (Zhu et al., 2010; Wubben et al., 2012; Klerke and S\u00f8gaard, 2013; Stymne et al., 2013) . Some recent work briefly addresses issues such as classifying sentences by their reading level (Napoles and Dredze, 2010) and identifying sentential transformations needed for text simplification using text complexity features (Medero and Ostendorf, 2011) . Some simplification approaches for non-English languages (Aluisio et al., 2010; Gasperin et al., 2009; \u0160tajner et al., 2013 ) also touch on the use of readability assessment.", |
| "cite_spans": [ |
| { |
| "start": 354, |
| "end": 372, |
| "text": "(Zhu et al., 2010;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 373, |
| "end": 393, |
| "text": "Wubben et al., 2012;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 394, |
| "end": 419, |
| "text": "Klerke and S\u00f8gaard, 2013;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 420, |
| "end": 440, |
| "text": "Stymne et al., 2013)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 538, |
| "end": 564, |
| "text": "(Napoles and Dredze, 2010)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 670, |
| "end": 698, |
| "text": "(Medero and Ostendorf, 2011)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 758, |
| "end": 780, |
| "text": "(Aluisio et al., 2010;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 781, |
| "end": 803, |
| "text": "Gasperin et al., 2009;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 804, |
| "end": 824, |
| "text": "\u0160tajner et al., 2013", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "In the present paper, we focus on the neglected connection between readability analysis and simplification. We show through a cross-corpus evaluation that a document level, regression-based readability model successfully identifies the differences between simplified vs. unsimplified sentences. This approach can be useful in various stages of simplification ranging from identifying simplification targets to the evaluation of simplification outcomes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "We built and tested our document and sentence level readability models using three publicly available text corpora with reading level annotations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "WeeBit Corpus: The WeeBit corpus consists of 3,125 articles belonging to five reading levels, with 625 articles per reading level. The texts compiled from the WeeklyReader and BBC Bitesize target English language learners from 7 to 16 years of age. We used this corpus to build our primary readability model by mapping the five reading levels in the corpus to a scale of 1-5 and considered readability assessment as a regression problem. (CCSSO, 2010) . They are annotated by experts with grade bands that cover the grades 1 to 12. These texts serve as exemplars for the level of reading ability at a given grade level. This corpus was introduced as an evaluation corpus for readability models in the recent past (Sheehan et al., 2010; Nelson et al., 2012; Flor et al., 2013) , so we used it to compare our model with other systems.", |
| "cite_spans": [ |
| { |
| "start": 438, |
| "end": 451, |
| "text": "(CCSSO, 2010)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 713, |
| "end": 735, |
| "text": "(Sheehan et al., 2010;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 736, |
| "end": 756, |
| "text": "Nelson et al., 2012;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 757, |
| "end": 775, |
| "text": "Flor et al., 2013)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "This corpus was created by Zhu et al. (2010) and consists of \u223c100k aligned sentence pairs drawn from Wikipedia and Simple English Wikipedia. We removed all pairs of identical sentences, i.e., where the Wiki and the SimpleWiki versions are the same. We used this corpus to study reading level assessment at the sentence level.", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 44, |
| "text": "Zhu et al. (2010)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Wiki-SimpleWiki Sentence Aligned Corpus:", |
| "sec_num": null |
| }, |
| { |
| "text": "We started with the feature set described in and added new features focusing on the morphological and psycholinguistic properties of words. The features can be broadly classified into four groups.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We adapted the lexical features from . This includes measures of lexical richness from Second Language Acquisition (SLA) research and measures of lexical variation (noun, verb, adjective, adverb and modifier variation). In addition, this feature set also includes part-of-speech densities (e.g., the average # of nouns per sentence). The information needed to calculate these features was extracted using the Stanford Tagger (Toutanova et al., 2003) . None of the lexical richness and POS features we used refer to specific words or lemmas. Syntactic Complexity features: Parse tree based features and some syntactic complexity measures derived from SLA research proved useful for readability classification in the past, so we made use of all the syntactic features from : mean lengths of various production units (sentence, clause, t-unit), measures of coordination and subordination (e.g., # of coordinate clauses per clause), the presence of particular syntactic structures (e.g., VPs per t-unit), the number of phrases of various categories (e.g., NP, VP, PP), the average lengths of phrases, the parse tree height, and the number of constituents per subtree. None of the syntactic features refer to specific words or lemmas. We used the BerkeleyParser (Petrov and Klein, 2007) for generating the parse trees and the Tregex tool (Levy and Andrew, 2006) to count the occurrences of the syntactic patterns.", |
| "cite_spans": [ |
| { |
| "start": 425, |
| "end": 449, |
| "text": "(Toutanova et al., 2003)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1257, |
| "end": 1281, |
| "text": "(Petrov and Klein, 2007)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 1333, |
| "end": 1356, |
| "text": "(Levy and Andrew, 2006)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical richness and POS features:", |
| "sec_num": null |
| }, |
| { |
| "text": "While the first two feature sets are based on our previous work, as far as we know the next two are used in readability assessment for the first time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical richness and POS features:", |
| "sec_num": null |
| }, |
| { |
| "text": "Features from the Celex Lexical Database: The Celex Lexical Database (Baayen et al., 1995) is a database consisting of information about morphological, syntactic, orthographic and phonological properties of words along with word frequencies in various corpora. Celex for English contains this information for more than 50,000 lemmas. An overview of the fields in the Celex database is provided online 1 and the Celex user manual 2 .", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 90, |
| "text": "(Baayen et al., 1995)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical richness and POS features:", |
| "sec_num": null |
| }, |
| { |
| "text": "We used the morphological and syntactic properties of lemmas as features. We excluded word frequency statistics and properties which consisted of word strings. In all, we used 35 morphological and 49 syntactic properties that were expressed using either character or numeric codes in this database as features for our task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical richness and POS features:", |
| "sec_num": null |
| }, |
| { |
| "text": "The morphological properties in Celex include information about the derivational, inflectional and compositional features of the words, their morphological origins and complexity. The syntactic properties of the words in Celex describe the attributes of a word depending on its parts of speech. For the morphological and syntactic properties from this database, we used the proportion of occurrences per text as features. For example, the ratio of transitive verbs, complex morphological words, and vocative nouns to number of words. Lemmas from the text that do not have entries in the Celex database were ignored.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical richness and POS features:", |
| "sec_num": null |
| }, |
| { |
| "text": "Word frequency statistics from Celex have been used before to analyze text difficulty in the past (Crossley et al., 2007) . However, to our knowledge, this is the first time morphological and syntactic information from the Celex database is used for readability assessment.", |
| "cite_spans": [ |
| { |
| "start": 98, |
| "end": 121, |
| "text": "(Crossley et al., 2007)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical richness and POS features:", |
| "sec_num": null |
| }, |
| { |
| "text": "Psycholinguistic features: The MRC Psycholinguistic Database (Wilson, 1988 ) is a freely available, machine readable dictionary annotated with 26 linguistic and psychological attributes of about 1.5 million words. 3 We used the measures of word familiarity, concreteness, imageability, meaningfulness, and age of acquisition from this database as our features, by encoding their average values per text. Kuperman et al. (2012) compiled a freely available database that includes Age of Acquisition (AoA) ratings for over 50,000 English words. 4 This database was created through crowd sourcing and was compared with several other AoA norms, which are also included in the database. For each of the five AoA norms, we computed the average AoA of words per text.", |
| "cite_spans": [ |
| { |
| "start": 61, |
| "end": 74, |
| "text": "(Wilson, 1988", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 214, |
| "end": 215, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 404, |
| "end": 426, |
| "text": "Kuperman et al. (2012)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 542, |
| "end": 543, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical richness and POS features:", |
| "sec_num": null |
| }, |
| { |
| "text": "Turning to the final resource used, we included the average number of senses per word as calculated using the MIT Java WordNet Interface as a feature. 5 We excluded auxiliary verbs for this calculation as they tend to have multiple senses that do not necessarily contribute to reading difficulty.", |
| "cite_spans": [ |
| { |
| "start": 151, |
| "end": 152, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical richness and POS features:", |
| "sec_num": null |
| }, |
| { |
| "text": "Combining the four feature groups, we encode 151 features for each text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical richness and POS features:", |
| "sec_num": null |
| }, |
| { |
| "text": "In our first experiment, we tested the documentlevel readability model based on the 151 features using the WeeBit corpus. Under a regression perspective on readability, we evaluated the approach using Pearson Correlation and Root Mean Square Error (RMSE) in a 10-fold cross-validation setting. We used the SMO Regression implementation from WEKA (Hall et al., 2009) and achieved a Pearson correlation of 0.92 and an RMSE of 0.53.", |
| "cite_spans": [ |
| { |
| "start": 346, |
| "end": 365, |
| "text": "(Hall et al., 2009)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document-Level Readability Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The document-level performance of our 151 feature model is virtually identical to that of the regression model we presented in Vajjala and Meurers (2013) . But compared to our previous work, the Celex and psycholinguistic features we included here provide more lexical information that is meaningful to compute even for the sentencelevel analysis we turn to in the next section.", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 153, |
| "text": "Vajjala and Meurers (2013)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document-Level Readability Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To be able to compare our document-level results with other contemporary readability approaches, we need a common test corpus. Nelson et al. (2012) compared several state of the art readability assessment systems using five test sets and showed that the systems that went beyond traditional formulae and wordlists performed better on these real-life test sets. We tested our model on one of the publicly accessible test corpora from this study, the Common Core Standards Corpus. Flor et al. (2013) used the same test set to study a measure of lexical tightness, providing a further performance reference. Table 1 compares the performance of our model to that reported for several commercial (indicated in italics) and research systems on this test set. Nelson et al. 2012 As the table shows, our model is the best noncommercial system and overall second (tied with the Reading Maturity system) to SourceRater as the best performing commercial system on this test set. These results on an independent test set confirm the validity of our document-level readability model. With this baseline, we turned to a sentence-level readability analysis.", |
| "cite_spans": [ |
| { |
| "start": 479, |
| "end": 497, |
| "text": "Flor et al. (2013)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 605, |
| "end": 612, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Document-Level Readability Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For each of the pairs in the Wiki-SimpleWiki Sentence Aligned Corpus introduced above, we labeled the sentence from Wikipedia as hard and that from Simple English Wikipedia as simple. The corpus thus consisted of single sentences, each labeled either simple or hard. On this basis, we constructed a binary classification model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-Level Binary Classification", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our document-level readability model does not include discourse features, so all 151 features can also be computed for individual sentences. We built a binary sentence-level classification model using WEKA's Sequential Minimal Optimization (SMO) for training an SVM in WEKA on the Wiki-SimpleWiki sentence aligned corpus. The choice of algorithm was primarily motivated by the fact that it was shown to be efficient in previous work on readability classification (Feng, 2010; Hancke et al., 2012; Falkenjack et al., 2013) .", |
| "cite_spans": [ |
| { |
| "start": 463, |
| "end": 475, |
| "text": "(Feng, 2010;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 476, |
| "end": 496, |
| "text": "Hancke et al., 2012;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 497, |
| "end": 521, |
| "text": "Falkenjack et al., 2013)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-Level Binary Classification", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The accuracy of the resulting classifier determining whether a given sentence is simple or hard was disappointing, reaching only 66% accuracy in a 10-fold cross-validation setting. Experiments with different classification algorithms did not yield any more promising results. To study how the classification performance is impacted by the size of the training data, we experimented with different sizes, using SMO as the classification algorithm. Figure 1 shows the classification accuracy with different training set sizes. The graph shows that beyond 10% of the training data, more training data did not result in significant differences in classification accuracy. Even at 10%, the training set contains around 10k instances per category, so the variability of any of the patterns distinguished by our features is sufficiently represented.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 447, |
| "end": 455, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sentence-Level Binary Classification", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We also explored whether feature selection could be useful. A subset of features chosen by removing correlated features using the CfsSubsetEval method in WEKA did not improve the results, yielding an accuracy of 65.8%. A simple baseline based on the sentence length as single feature results in an accuracy of 60.5%, underscoring the limited value of the rich feature set in this binary classification setup.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-Level Binary Classification", |
| "sec_num": "4" |
| }, |
| { |
| "text": "For the sake of a direct comparison with the document-level model, we also explored modeling the task as a regression on a 1-2 scale. In comparison to the document-level model, which as discussed in section 3 had a correlation of 0.92, the sentence-level model achieves only a correlation of 0.4. A direct comparison is also possible when we train the document-level model as a five-class classifier with SMO. This model achieved a classification accuracy of \u223c90% on the documents, compared to the 66% accuracy of the sentencelevel model classifying sentences. So under each of these perspectives, the sentence-level models on the sentence task are much less successful than the document-level models on the document task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-Level Binary Classification", |
| "sec_num": "4" |
| }, |
| { |
| "text": "But does this indicate that it is not possible to accurately identify the reading level distinctions between simplified and unsimplified versions at the sentence level? Is there not enough information available when considering a single sentence?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-Level Binary Classification", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We hypothesized that the drop in the classification accuracy instead results from the relative nature of simplification. For each pair of the Wiki-SimpleWiki sentence aligned corpus we used, the Wiki sentence was harder than the Sim-pleWikipedia sentence. But this does not necessarily mean that each of the Wikipedia sentences is harder than each of the SimpleWikipedia sentences. The low accuracy of the binary classifier may thus simply result from the inappropriate assumption of an absolute, binary classification viewing each of the sentences originating from SimpleWikipedia as simple and each from the regular Wiki as hard.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-Level Binary Classification", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The confusion matrices of the binary classification suggests some support for this hypothesis, as more simple sentences were classified as hard compared to the other way around. This can result when a simple sentence is simpler than its hard version, but could actually be simplified furtherand as such may still be harder than another unsimplified sentence. The hypothesis thus amounts to saying that the two-class classification model mistakenly turned the relative difference between the sentence pairs into a global classification of individual sentences, independent of the pairs they occur in.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-Level Binary Classification", |
| "sec_num": "4" |
| }, |
| { |
| "text": "How can we verify this hypothesis? The sentence corpus only provides the relative ranking of the pairs, but we can try to identify more finegrained readability levels for sentences by applying the five class readability model for documents that was introduced in section 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-Level Binary Classification", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We applied the document-level readability model to the individual sentences from the Wiki-SimpleWiki corpus to study which reading levels are identified by our model. As we are using a regression model, the values sometimes go beyond the training corpus' scale of 1-5. For ease of comparison, we rounded off the reading levels to the five level scale, i.e., 1 means 1 or below, and 5 means 5 or above. Figure 2 shows the distribution of Wikipedia and SimpleWikipedia sentences according to the predictions of our document-level readability model trained on the WeeBit corpus. The model determines that a high percentage of the SimpleWiki sentences belong to lower reading levels, with over 45% at the lowest reading level; yet there also are some SimpleWikipedia sentences which are aligned even to the highest readability level. In contrast, the regular Wikipedia sentences are evenly distributed across all reading levels.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 402, |
| "end": 410, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Relative Reading Levels of Sentences", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The distributions identified by the model support our hypothesis that some Wiki sentences are simpler than some SimpleWikipedia sentences. Note that this is fully compatible with the fact that for each pair of (SimpleWiki,Wiki) sentences included in the corpus, the former is higher in reading level than the latter; e.g., just consider two sentence pairs with the levels (1, 2) and (3, 5).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relative Reading Levels of Sentences", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Zooming in on the relative reading levels of the paired unsimplified and simplified sentences, we wanted to determine for how many sentence pairs the sentence reading levels determined by our model are compatible with the pair's ranking. In other words, we calculated the percentage of pairs (S, N ) in which the reading level of a simplified sentence (S) is identified as less than, equal to, or greater than the unsimplified (normal) version of the sentence (N ), i.e., S < N , S = N , and S > N . Where simplification split a sentence into multiple sentences, we computed S as the average reading level of the split sentences. Given the regression model setup, we can consider how big the difference between two reading levels determined by the model should be in order for us to interpret it as a categorical difference in reading level. Let us call this discriminating reading-level difference the d-level. For example, with d = 0.3, a sentence pair determined to be at levels (3.4, 3.2) would be considered a case of S = N , whereas (3.4, 3.7) would be an instance of S < N . The d-value can be understood as a measure of how fine-grained the model is in identifying reading-level differences between sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "On the discriminating power of the model", |
| "sec_num": "5.1" |
| }, |
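The three-way comparison and the role of the d-level described above can be sketched as follows. This is a minimal illustration of the comparison scheme, not the authors' implementation; the function names and the averaging example are our own:

```python
def classify_pair(s_level, n_level, d):
    """Compare the predicted reading level of a simplified sentence (S)
    with that of its unsimplified version (N): differences smaller than
    the discriminating level d count as no categorical difference."""
    if abs(s_level - n_level) < d:
        return "S = N"
    return "S < N" if s_level < n_level else "S > N"

def simplified_level(split_levels):
    """When simplification splits a sentence into several, S is taken
    to be the average predicted reading level of the split sentences."""
    return sum(split_levels) / len(split_levels)

# The worked example from the text, with d = 0.3:
print(classify_pair(3.4, 3.2, d=0.3))  # S = N
print(classify_pair(3.4, 3.7, d=0.3))  # S < N
```

With d = 0.3 the pair (3.4, 3.2) falls inside the indistinguishability band, while (3.4, 3.7) crosses it and is ranked S < N, matching the example in the text.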
| { |
| "text": "If we consider the percentage of samples identified as S <= N as an accuracy measure, Figure 3 shows the accuracy for different d-values. We can observe that the percentage of instances that the model correctly identifies as S <= N steadily increases from 70% to 90% as d increases. While d can in principle take any value, values beyond 1 are uninteresting in the context of this study. At d = 1, most of the sentence pairs already belong to S = N , so increasing d further would defeat the purpose of identifying reading-level differences. The higher the d-value, the more of the simplified and unsimplified pairs are lumped together as indistinguishable.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 86, |
| "end": 94, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "On the discriminating power of the model", |
| "sec_num": "5.1" |
| }, |
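Treating S <= N as correct, the accuracy at a given d reduces to counting the pairs that are not misordered (the only error case is S > N, i.e. s - n >= d). A sketch with invented predicted levels:

```python
def accuracy_s_le_n(pairs, d):
    """Percentage of (S, N) level pairs identified as S < N or S = N,
    i.e. everything except the misclassified S > N cases (s - n >= d)."""
    ok = sum(1 for s, n in pairs if s - n < d)
    return 100.0 * ok / len(pairs)

# Invented predicted levels for illustration; as d grows, more pairs
# are absorbed into S = N, so accuracy by this measure can only increase.
pairs = [(2.1, 3.0), (3.3, 3.1), (1.0, 4.2), (2.9, 2.5)]
print(accuracy_s_le_n(pairs, 0.1))  # 50.0
print(accuracy_s_le_n(pairs, 0.5))  # 100.0
```

This also makes the caveat in the text concrete: the measure is monotone in d, so a high accuracy at large d mostly reflects pairs being equated, not distinguished.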
| { |
| "text": "Spelling out the different cases from Figure 3, the number of pairs identified correctly, equated, and misclassified as a function of the d-value is shown in Figure 4.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 38, |
| "end": 46, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 159, |
| "end": 167, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "On the discriminating power of the model", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We saw in Figure 2 that the Wikipedia sentences are uniformly distributed across the reading levels, and for each of these sentences, a human-simplified version is included in the corpus. Thus even sentences that our readability model identified as belonging to the lower reading levels were simplified further. This leads us to investigate whether the reading level of the unsimplified sentence influences the ability of our model to correctly identify the simplification relationship.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 10, |
| "end": 18, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Influence of reading-level on accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "To investigate this, we separately analyzed pairs where the unsimplified sentences had a higher reading level and those where they had a lower reading level, taking the middle of the scale (2.5) as the cut-off point. Figure 5 shows the accuracies obtained for these two groups of unsimplified sentences. For the pairs where the reading level of the unsimplified version is high, the accuracy of the readability model is high (80-95%). In the other case, the accuracy drops to 65-75% (for 0.3 <= d <= 0.6). Presumably the complex sentences for which the model performs best offer more syntactic and lexical material informing the features used. When we split the graph into the three cases again (S < N, S = N, S > N), the pairs with a high-level unsimplified sentence in Figure 6 follow the overall picture of Figure 4. On the other hand, the results in Figure 7 for the pairs with an unsimplified sentence at a low readability level show that the model is essentially incapable of identifying readability differences. The correctly identified S < N and the incorrectly identified S > N cases mostly overlap, indicating chance-level performance. Increasing the d-level only increases the number of equated pairs, without much impact on the number of correctly distinguished pairs.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 215, |
| "end": 223, |
| "text": "Figure 5", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 794, |
| "end": 802, |
| "text": "Figure 6", |
| "ref_id": "FIGREF7" |
| }, |
| { |
| "start": 833, |
| "end": 841, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 878, |
| "end": 886, |
| "text": "Figure 7", |
| "ref_id": "FIGREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Influence of reading-level on accuracy", |
| "sec_num": "5.2" |
| }, |
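The split by the initial reading level of the unsimplified sentence amounts to partitioning the pairs at the middle of the scale and repeating the three-way classification per half. A sketch with invented level pairs (function names are our own):

```python
def classify_pair(s, n, d):
    """Three-way comparison of predicted levels at discriminating level d."""
    if abs(s - n) < d:
        return "S = N"
    return "S < N" if s < n else "S > N"

def split_by_unsimplified_level(pairs, cutoff=2.5):
    """Partition (S, N) pairs by whether the unsimplified sentence N
    falls above or below the middle of the 1-5 scale."""
    high = [p for p in pairs if p[1] > cutoff]
    low = [p for p in pairs if p[1] <= cutoff]
    return high, low

# Invented predicted levels; the analysis in the text reports the
# three-way classification separately for each half.
pairs = [(2.0, 4.5), (3.1, 3.8), (1.2, 1.5), (2.2, 2.0)]
high, low = split_by_unsimplified_level(pairs)
print([classify_pair(s, n, 0.3) for s, n in high])
print([classify_pair(s, n, 0.3) for s, n in low])
```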
| { |
| "text": "In real-world terms, this means that it is difficult to identify simplifications of an already simple sentence. While some of this difficulty may stem from the fact that simple sentences are likely to be shorter and thus offer less linguistic material on which an analysis can be based, it also points to a need for more research on features that can reliably distinguish lower levels of readability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Influence of reading-level on accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Summing up, the experiments discussed in this section show that a document-level readability model trained on the WeeBit corpus can provide insightful perspectives on the nature of simplification at the sentence level. The results emphasize the relative nature of readability and the need for more features capable of identifying characteristics distinguishing sentences at lower levels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Influence of reading-level on accuracy", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We started by constructing a document-level readability model and compared its performance with that of other readability systems on a standard test set. Having established the state-of-the-art performance of our document-level model, we moved on to investigate the use of the features and the model at the sentence level.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In the sentence-level research, we first used the same feature set to construct a two-class readability model on the sentences from the Wikipedia-SimpleWikipedia sentence-aligned corpus. The model only achieved a classification accuracy of 66%. Exploring the causes of this low performance, we studied the sentences in the aligned pairs through the lens of our document-level readability model, the regression model based on the five-level data of the WeeBit corpus. Our experiment identifies most of the Simple Wikipedia sentences as belonging to the lower levels, with some sentences also showing up at higher levels. The sentences from the normal Wikipedia, on the other hand, display a uniform distribution across all reading levels. A simplified sentence (S) can thus be at a lower reading level than its paired unsimplified sentence (N) while also being at a higher reading level than another unsimplified sentence. Given this distribution of reading levels, the low performance of the binary classifier is expected. Instead of an absolute, binary difference in reading levels that counts each Wikipedia sentence from the corpus as hard and each Simple Wikipedia sentence as simple, a relative ranking of reading levels seems to better suit the data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Inspecting the relative difference in the reading levels of the aligned unsimplified-simplified sentence pairs, we characterized the accuracy of correctly predicting the relative reading-level ranking in a pair as a function of the reading-level difference d required to identify a categorical difference. While the experiments were performed to verify the hypothesis that simplification is relative, they also confirm that the document-level readability model trained on the WeeBit corpus generalizes well to Wikipedia-SimpleWikipedia as a different, sentence-level corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The analysis revealed that the accuracy depends on the initial reading level of the unsimplified sentence. The model performs very well when the reading level of the unsimplified sentence is higher, but the features seem limited in their ability to pick up on the differences between sentences at the lowest levels. In future work, we thus intend to add more features identifying differences between lower levels of readability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Taking the focus on the relative ranking of the readability of sentences one step further, we are currently studying whether modeling the readability problem as preference learning or ordinal regression improves the accuracy in predicting the relation between simplified and unsimplified sentence versions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
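A common reduction of such preference learning to binary classification (in the spirit of ranking SVMs; a sketch under our own naming, not the authors' implementation) trains on feature-difference vectors of the aligned pairs:

```python
def pairwise_instances(aligned_pairs):
    """Turn aligned (simplified_features, unsimplified_features) pairs
    into difference-vector instances for a binary classifier: label +1
    means the first operand of the difference is the simpler sentence.
    Emitting both orders keeps the training set balanced."""
    X, y = [], []
    for simple, normal in aligned_pairs:
        diff = [a - b for a, b in zip(simple, normal)]
        X.append(diff)                 # simple - normal -> +1
        y.append(1)
        X.append([-v for v in diff])   # normal - simple -> -1
        y.append(-1)
    return X, y

# Toy two-feature example (invented numbers):
X, y = pairwise_instances([([1.0, 2.0], [3.0, 1.0])])
print(X)  # [[-2.0, 1.0], [2.0, -1.0]]
print(y)  # [1, -1]
```

Any linear binary classifier trained on such instances yields a relative ranking: its decision value on a difference vector orders the two sentences by predicted readability.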
| { |
| "text": "Overall, the paper contributes to the state of the art by providing a methodology to quantitatively evaluate the degree of simplification performed by an automatic system. The results can also be potentially useful in providing assistive feedback for human writers preparing simplified texts given specific target user constraints. We plan to explore the idea of generating simplified text with readability constraints as suggested in Stymne et al. (2013) for Machine Translation.", |
| "cite_spans": [ |
| { |
| "start": 435, |
| "end": 455, |
| "text": "Stymne et al. (2013)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "http://celex.mpi.nl/help/elemmas.html 2 http://catalog.ldc.upenn.edu/docs/LDC96L14", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.psych.rl.ac.uk 4 http://crr.ugent.be/archives/806 5 http://projects.csail.mit.edu/jwi", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://reap.cs.cmu.edu 7 http://renlearn.com/atos 8 http://questarai.com/Products/DRPProgram 9 http://lexile.com 10 http://readingmaturity.com 11 http://naeptba.ets.org/SourceRater3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the anonymous reviewers for their detailed comments. Our research was funded by the LEAD Graduate School (GSC 1028, http: //purl.org/lead), a project of the Excellence Initiative of the German federal and state governments, and the European Commission's 7th Framework Program under grant agreement number 238405 (CLARA).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Readability assessment for text simplification", |
| "authors": [ |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "Aluisio", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| }, |
| { |
| "first": "Caroline", |
| "middle": [], |
| "last": "Gasperin", |
| "suffix": "" |
| }, |
| { |
| "first": "Carolina", |
| "middle": [], |
| "last": "Scarton", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "1--9", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sandra Aluisio, Lucia Specia, Caroline Gasperin, and Carolina Scarton. 2010. Readability assessment for text simplification. In Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications, pages 1-9.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The CELEX lexical databases. CDROM", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [ |
| "H" |
| ], |
| "last": "Baayen", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Piepenbrock", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Gulikers", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. H. Baayen, R. Piepenbrock, and L. Gulikers. 1995. The CELEX lexical databases. CDROM, http://www.ldc.upenn.edu/Catalog/ readme_files/celex.readme.html.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Tris: A statistical sentence simplifier with log-linear models and margin-based discriminative training", |
| "authors": [ |
| { |
| "first": "Nguyen", |
| "middle": [], |
| "last": "Bach", |
| "suffix": "" |
| }, |
| { |
| "first": "Qin", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Waibel", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "474--482", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nguyen Bach, Qin Gao, Stephan Vogel, and Alex Waibel. 2011. Tris: A statistical sentence simplifier with log-linear models and margin-based discrimi- native training. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 474-482. Asian Federation of Natural Lan- guage Processing.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Syntactic simplification of newspaper text for aphasic readers", |
| "authors": [ |
| { |
| "first": "Yvonne", |
| "middle": [], |
| "last": "Canning", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Tait", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of SIGIR-99 Workshop on Customised Information Delivery", |
| "volume": "", |
| "issue": "", |
| "pages": "6--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yvonne Canning and John Tait. 1999. Syntactic sim- plification of newspaper text for aphasic readers. In Proceedings of SIGIR-99 Workshop on Customised Information Delivery, pages 6-11.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Simplifying text for language-impaired readers", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Carroll", |
| "suffix": "" |
| }, |
| { |
| "first": "Guido", |
| "middle": [], |
| "last": "Minnen", |
| "suffix": "" |
| }, |
| { |
| "first": "Darren", |
| "middle": [], |
| "last": "Pearce", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvonne", |
| "middle": [], |
| "last": "Canning", |
| "suffix": "" |
| }, |
| { |
| "first": "Siobhan", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Tait", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 9th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "269--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Carroll, Guido Minnen, Darren Pearce, Yvonne Canning, Siobhan Devlin, and John Tait. 1999. Simplifying text for language-impaired readers. In Proceedings of the 9th Conference of the European Chapter of the Association for Computational Lin- guistics (EACL), pages 269-270.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Common core state standards for English language arts & literacy in history/social studies, science, and technical subjects. appendix B: Text exemplars and sample performance tasks", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ccsso", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "National Governors Association Center for Best Practices", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "CCSSO. 2010. Common core state standards for en- glish language arts & literacy in history/social stud- ies, science, and technical subjects. appendix B: Text exemplars and sample performance tasks. Technical report, National Governors Association Center for Best Practices, Council of Chief State School Of- ficers. http://www.corestandards.org/ assets/Appendix_B.pdf.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Automatic induction of rules for text simplification", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Chandrasekar", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Srinivas", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Chandrasekar and B. Srinivas. 1996. Automatic in- duction of rules for text simplification. Technical Report IRCS Report 96-30, Upenn, NSF Science and Technology Center for Research in Cognitive Science.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Motivations and methods for text simplification", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Chandrasekar", |
| "suffix": "" |
| }, |
| { |
| "first": "Christine", |
| "middle": [], |
| "last": "Doran", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Srinivas", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 16th International Conference on Computational Linguistics (COLING)", |
| "volume": "", |
| "issue": "", |
| "pages": "1041--1044", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Chandrasekar, Christine Doran, and B. Srinivas. 1996. Motivations and methods for text simplifica- tion. In Proceedings of the 16th International Con- ference on Computational Linguistics (COLING), pages 1041-1044.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Simple English Wikipedia: A new text simplification task", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Coster", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Kauchak", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "665--669", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Coster and David Kauchak. 2011. Simple en- glish wikipedia: A new text simplification task. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 665-669, Portland, Ore- gon, USA, June. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Toward a new readability: A mixed model approach", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Scott", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "F" |
| ], |
| "last": "Crossley", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [ |
| "M" |
| ], |
| "last": "Dufty", |
| "suffix": "" |
| }, |
| { |
| "first": "Danielle", |
| "middle": [ |
| "S" |
| ], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mcnamara", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 29th annual conference of the Cognitive Science Society", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scott A. Crossley, David F. Dufty, Philip M. McCarthy, and Danielle S. McNamara. 2007. Toward a new readability: A mixed model approach. In Danielle S. McNamara and Greg Trafton, editors, Proceedings of the 29th annual conference of the Cognitive Sci- ence Society. Cognitive Science Society.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Features indicating readability in Swedish text", |
| "authors": [ |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Falkenjack", |
| "suffix": "" |
| }, |
| { |
| "first": "Arne", |
| "middle": [], |
| "last": "Katarina Heimann M\u00fchlenbock", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "J\u00f6nsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 19th Nordic Conference of Computational Linguistics (NODAL-IDA)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johan Falkenjack, Katarina Heimann M\u00fchlenbock, and Arne J\u00f6nsson. 2013. Features indicating readability in swedish text. In Proceedings of the 19th Nordic Conference of Computational Linguistics (NODAL- IDA).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Automatic Readability Assessment", |
| "authors": [ |
| { |
| "first": "Lijun", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lijun Feng. 2010. Automatic Readability Assessment. Ph.D. thesis, City University of New York (CUNY).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Lexical tightness and text complexity", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Flor", |
| "suffix": "" |
| }, |
| { |
| "first": "Beata", |
| "middle": [ |
| "Beigman" |
| ], |
| "last": "Klebanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kathleen", |
| "middle": [ |
| "M" |
| ], |
| "last": "Sheehan", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Second Workshop on Natural Language Processing for Improving Textual Accessibility", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Flor, Beata Beigman Klebanov, and Kath- leen M. Sheehan. 2013. Lexical tightness and text complexity. In Proceedings of the Second Workshop on Natural Language Processing for Improving Tex- tual Accessibility.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Learning when to simplify sentences for natural text simplification", |
| "authors": [ |
| { |
| "first": "Caroline", |
| "middle": [], |
| "last": "Gasperin", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Tiago", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [ |
| "M" |
| ], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Aluisio", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Encontro Nacional de Intelig\u00eancia Artificial", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Caroline Gasperin, Lucia Specia, Tiago F. Pereira, and Sandra M. Aluisio. 2009. Learning when to sim- plify sentences for natural text simplification. In Encontro Nacional de Intelig\u00eancia Artificial (ENIA- 2009).", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The Weka data mining software: An update", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Eibe", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Holmes", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernhard", |
| "middle": [], |
| "last": "Pfahringer", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Reutemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [ |
| "H" |
| ], |
| "last": "Witten", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "The SIGKDD Explorations", |
| "volume": "11", |
| "issue": "", |
| "pages": "10--18", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: An update. In The SIGKDD Explorations, volume 11, pages 10- 18.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Readability classification for german using lexical, syntactic, and morphological features", |
| "authors": [ |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hancke", |
| "suffix": "" |
| }, |
| { |
| "first": "Detmar", |
| "middle": [], |
| "last": "Meurers", |
| "suffix": "" |
| }, |
| { |
| "first": "Sowmya", |
| "middle": [], |
| "last": "Vajjala", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING)", |
| "volume": "", |
| "issue": "", |
| "pages": "1063--1080", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julia Hancke, Detmar Meurers, and Sowmya Vajjala. 2012. Readability classification for german using lexical, syntactic, and morphological features. In Proceedings of the 24th International Conference on Computational Linguistics (COLING), pages 1063- 1080, Mumbay, India.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Sentence simplification aids protein-protein interaction extraction", |
| "authors": [ |
| { |
| "first": "Siddhartha", |
| "middle": [], |
| "last": "Jonnalagadda", |
| "suffix": "" |
| }, |
| { |
| "first": "Graciela", |
| "middle": [], |
| "last": "Gonzalez", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of The 3rd International Symposium on Languages in Biology and Medicine", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Siddhartha Jonnalagadda and Graciela Gonzalez. 2009. Sentence simplification aids protein-protein interaction extraction. In Proceedings of The 3rd International Symposium on Languages in Biology and Medicine, Jeju Island, South Korea, November 8-10, 2009.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Simple, readable sub-sentences", |
| "authors": [ |
| { |
| "first": "Sigrid", |
| "middle": [], |
| "last": "Klerke", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the ACL Student Research Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sigrid Klerke and Anders S\u00f8gaard. 2013. Simple, readable sub-sentences. In Proceedings of the ACL Student Research Workshop.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Age-of-acquisition ratings for 30,000 English words", |
| "authors": [ |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Kuperman", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans", |
| "middle": [], |
| "last": "Stadthagen-Gonzalez", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Brysbaert", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "44", |
| "issue": "", |
| "pages": "978--990", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Victor Kuperman, Hans Stadthagen-Gonzalez, and Marc Brysbaert. 2012. Age-of-acquisition ratings for 30,000 english words. Behavior Research Meth- ods, 44(4):978-990.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Tregex and tsurgeon: tools for querying and manipulating tree data structures", |
| "authors": [ |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Galen", |
| "middle": [], |
| "last": "Andrew", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "5th International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roger Levy and Galen Andrew. 2006. Tregex and tsur- geon: tools for querying and manipulating tree data structures. In 5th International Conference on Lan- guage Resources and Evaluation, Genoa, Italy.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Identifying targets for syntactic simplification", |
| "authors": [ |
| { |
| "first": "Julie", |
| "middle": [], |
| "last": "Medero", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie", |
| "middle": [], |
| "last": "Ostendorf", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "ISCA International Workshop on Speech and Language Technology in Education", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julie Medero and Marie Ostendorf. 2011. Identifying targets for syntactic simplification. In ISCA Interna- tional Workshop on Speech and Language Technol- ogy in Education (SLaTE 2011).", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Learning Simple Wikipedia: a cogitation in ascertaining abecedarian language", |
| "authors": [ |
| { |
| "first": "Courtney", |
| "middle": [], |
| "last": "Napoles", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Linguistics and Writing: Writing Processes and Authoring Aids, CL&W '10", |
| "volume": "", |
| "issue": "", |
| "pages": "42--50", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Courtney Napoles and Mark Dredze. 2010. Learn- ing simple wikipedia: a cogitation in ascertaining abecedarian language. In Proceedings of the NAACL HLT 2010 Workshop on Computational Linguistics and Writing: Writing Processes and Authoring Aids, CL&W '10, pages 42-50, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Measures of text difficulty: Testing their predictive value for grade levels and student performance", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nelson", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Perfetti", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Liben", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Liben", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Nelson, C. Perfetti, D. Liben, and M. Liben. 2012. Measures of text difficulty: Testing their predic- tive value for grade levels and student performance. Technical report, The Council of Chief State School Officers.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Improved inference for unlexicalized parsing", |
| "authors": [ |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "404--411", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 404-411, Rochester, New York, April.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Generating automated text complexity classifications that are aligned with targeted text complexity standards", |
| "authors": [ |
| { |
| "first": "Kathleen", |
| "middle": [ |
| "M" |
| ], |
| "last": "Sheehan", |
| "suffix": "" |
| }, |
| { |
| "first": "Irene", |
| "middle": [], |
| "last": "Kostin", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoko", |
| "middle": [], |
| "last": "Futagi", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "Flor" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kathleen M. Sheehan, Irene Kostin, Yoko Futagi, and Michael Flor. 2010. Generating automated text complexity classifications that are aligned with targeted text complexity standards. Technical Report RR-10-28, ETS, December.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "An architecture for a text simplification system", |
| "authors": [ |
| { |
| "first": "Advaith", |
| "middle": [], |
| "last": "Siddharthan", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the Language Engineering Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Advaith Siddharthan. 2002. An architecture for a text simplification system. In Proceedings of the Language Engineering Conference 2002 (LEC 2002).", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Syntactic simplification and text cohesion", |
| "authors": [ |
| { |
| "first": "Advaith", |
| "middle": [], |
| "last": "Siddharthan", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Advaith Siddharthan. 2004. Syntactic simplification and text cohesion. Technical Report UCAM-CL- TR-597, University of Cambridge Computer Labo- ratory.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Translating from complex to simplified sentences", |
| "authors": [ |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 9th international conference on Computational Processing of the Portuguese Language (PROPOR'10)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lucia Specia. 2010. Translating from complex to simplified sentences. In Proceedings of the 9th International Conference on Computational Processing of the Portuguese Language (PROPOR'10).", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Statistical machine translation with readability constraints", |
| "authors": [ |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Stymne", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00f6rg", |
| "middle": [], |
| "last": "Tiedemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Hardmeier", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 19th Nordic Conference of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sara Stymne, J\u00f6rg Tiedemann, Christian Hardmeier, and Joakim Nivre. 2013. Statistical machine translation with readability constraints. In Proceedings of the 19th Nordic Conference of Computational Linguistics (NODALIDA 2013).", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Feature-rich part-of-speech tagging with a cyclic dependency network", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "252--259", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Toutanova, D. Klein, C. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In HLT-NAACL, pages 252-259, Edmonton, Canada.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "On improving the accuracy of readability classification using insights from second language acquisition", |
| "authors": [ |
| { |
| "first": "Sowmya", |
| "middle": [], |
| "last": "Vajjala", |
| "suffix": "" |
| }, |
| { |
| "first": "Detmar", |
| "middle": [], |
| "last": "Meurers", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 7th Workshop on Innovative Use of NLP for Building Educational Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "163--173", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sowmya Vajjala and Detmar Meurers. 2012. On improving the accuracy of readability classification using insights from second language acquisition. In Proceedings of the 7th Workshop on Innovative Use of NLP for Building Educational Applications, pages 163-173.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "On the applicability of readability models to web texts", |
| "authors": [ |
| { |
| "first": "Sowmya", |
| "middle": [], |
| "last": "Vajjala", |
| "suffix": "" |
| }, |
| { |
| "first": "Detmar", |
| "middle": [], |
| "last": "Meurers", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Second Workshop on Predicting and Improving Text Readability for Target Reader Populations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sowmya Vajjala and Detmar Meurers. 2013. On the applicability of readability models to web texts. In Proceedings of the Second Workshop on Predicting and Improving Text Readability for Target Reader Populations.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "The MRC psycholinguistic database: Machine readable dictionary, version 2", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "D" |
| ], |
| "last": "Wilson", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Behavioural Research Methods, Instruments and Computers", |
| "volume": "20", |
| "issue": "1", |
| "pages": "6--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M.D. Wilson. 1988. The MRC psycholinguistic database: Machine readable dictionary, version 2. Behavioural Research Methods, Instruments and Computers, 20(1):6-11.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Learning to simplify sentences with quasi-synchronous grammar and integer programming", |
| "authors": [ |
| { |
| "first": "Kristian", |
| "middle": [], |
| "last": "Woodsend", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Sentence simplification by monolingual machine translation", |
| "authors": [ |
| { |
| "first": "Sander", |
| "middle": [], |
| "last": "Wubben", |
| "suffix": "" |
| }, |
| { |
| "first": "Antal", |
| "middle": [ |
| "van", |
| "den" |
| ], |
| "last": "Bosch", |
| "suffix": "" |
| }, |
| { |
| "first": "Emiel", |
| "middle": [], |
| "last": "Krahmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of ACL 2012", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of ACL 2012.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "A monolingual tree-based translation model for sentence simplification", |
| "authors": [ |
| { |
| "first": "Zhemin", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Delphine", |
| "middle": [], |
| "last": "Bernhard", |
| "suffix": "" |
| }, |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of The 23rd International Conference on Computational Linguistics (COLING)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING), Beijing, China, August 2010.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Corpus-based sentence deletion and split decisions for Spanish text simplification", |
| "authors": [ |
| { |
| "first": "Sanja", |
| "middle": [], |
| "last": "\u0160tajner", |
| "suffix": "" |
| }, |
| { |
| "first": "Biljana", |
| "middle": [], |
| "last": "Drndarevic", |
| "suffix": "" |
| }, |
| { |
| "first": "Horacio", |
| "middle": [], |
| "last": "Saggion", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "CICLing 2013: The 14th International Conference on Intelligent Text Processing and Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sanja \u0160tajner, Biljana Drndarevic, and Horacio Saggion. 2013. Corpus-based sentence deletion and split decisions for Spanish text simplification. In CICLing 2013: The 14th International Conference on Intelligent Text Processing and Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Sentence Classification Accuracy and Training Data size" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Training size vs. classification accuracy" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Reading level distribution of the Wikipedia and SimpleWikipedia sentences" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Accurately identified S <= N" |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Correctly (S < N ), equated (S = N ), and incorrectly (S > N ) identified sentence pairs. At d = 0.4, around 50% of the pairs are correctly classified, 20% are misclassified, and 30% are equated. At d = 0.7, the rate of pairs for which no distinction can be determined already rises above 50%. For d-values between 0.3 and 0.6, the percentage of correctly identified pairs exceeds the percentage of equated pairs, which in turn exceeds the percentage of misclassified pairs." |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Accuracy (S <= N ) for different N types" |
| }, |
| "FIGREF7": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Results for N >= 2.5" |
| }, |
| "FIGREF8": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Results for N < 2.5" |
| }, |
| "TABREF1": { |
| "content": "<table><tr><td>System</td><td colspan=\"2\">Spearman Pearson</td></tr><tr><td>Our System</td><td>0.69</td><td>0.61</td></tr><tr><td>Nelson et al. (2012):</td><td/><td/></tr><tr><td>REAP 6</td><td>0.54</td><td>-</td></tr><tr><td>ATOS 7</td><td>0.59</td><td>-</td></tr><tr><td>DRP 8</td><td>0.53</td><td>-</td></tr><tr><td>Lexile 9</td><td>0.50</td><td>-</td></tr><tr><td>Reading Maturity 10</td><td>0.69</td><td>-</td></tr><tr><td>SourceRater 11</td><td>0.75</td><td>-</td></tr><tr><td>Flor et al. (2013):</td><td/><td/></tr><tr><td>Lexical Tightness</td><td>-</td><td>-0.44</td></tr><tr><td>Flesch-Kincaid</td><td>-</td><td>0.49</td></tr><tr><td>Text length</td><td>-</td><td>0.36</td></tr><tr><td colspan=\"3\">Table 1: Performance on CommonCore data</td></tr></table>", |
| "num": null, |
| "text": "Nelson et al. (2012) used Spearman's Rank Correlation and Flor et al. (2013) used Pearson Correlation as evaluation metrics. To facilitate comparison, for our approach we provide both measures.", |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |