| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:52:21.513138Z" |
| }, |
| "title": "FrenLyS: A Tool for the Automatic Simplification of French General Language Texts", |
| "authors": [ |
| { |
| "first": "Eva", |
| "middle": [], |
| "last": "Rolin", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Quentin", |
| "middle": [], |
| "last": "Langlois", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Watrin", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Fran\u00e7ois", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Lexical simplification (LS) aims at replacing words considered complex in a sentence by simpler equivalents. In this paper, we present the first automatic LS service for French, FrenLyS, which offers different techniques to generate, select and rank substitutes. The paper describes the different methods proposed by our tool, which includes both classical approaches (e.g. generation of candidates from lexical resources, frequency filter, etc.) and more innovative approaches such as the exploitation of CamemBERT, a model for French based on the RoBERTa architecture. To evaluate the different methods, a new evaluation dataset for French is introduced.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Lexical simplification (LS) aims at replacing words considered complex in a sentence by simpler equivalents. In this paper, we present the first automatic LS service for French, FrenLyS, which offers different techniques to generate, select and rank substitutes. The paper describes the different methods proposed by our tool, which includes both classical approaches (e.g. generation of candidates from lexical resources, frequency filter, etc.) and more innovative approaches such as the exploitation of CamemBERT, a model for French based on the RoBERTa architecture. To evaluate the different methods, a new evaluation dataset for French is introduced.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "It is widely acknowledged that reading difficulties, either due to insufficient education or to mental deficiencies for example, can hinder access to information, which is likely to result in a loss of autonomy and freedom (Mutabazi and Wallenhorst, 2020) . Faced with this challenge, researchers imagined applying natural language processing (NLP) to automatically transform sentences in a text in order to make it more readable, thus facilitating access to information. This is the objective pursued in the field of Automatic Text Simplification (ATS), in which the main goal is to preserve grammaticality and meaning while carrying out effective transformations to make the text simpler.", |
| "cite_spans": [ |
| { |
| "start": 223, |
| "end": 255, |
| "text": "(Mutabazi and Wallenhorst, 2020)", |
| "ref_id": "BIBREF37" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "ATS is generally investigated at the sentence level (Alva-Manchego et al., 2020) and has therefore mostly focused on two subtasks: on the one hand, lexical simplification, described by Saggion (2017) as the task of \"replacing difficult words with easy-to-read (or understand) expressions while preserving the meaning of the original text segments\", on the other hand, syntactic simplification, that consists in simplifying syntactic structures in a sentence by carrying out various transformations (splitting, clause deletion, etc.). Both tasks have been the subject of a great deal of research, as synthesized in Shardlow (2014) ; Siddharthan (2014) ; Saggion (2017); Paetzold and Specia (2017b) ; Alva-Manchego et al. (2020) ; Al-Thanyyan and Azmi (2021), but most of it has been carried out on English or, to a lesser extent, on Spanish. Other languages are hardly represented, which is also the case for French Elguendouze, 2020) . This why we have chosen to focus on this one.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 80, |
| "text": "(Alva-Manchego et al., 2020)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 614, |
| "end": 629, |
| "text": "Shardlow (2014)", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 632, |
| "end": 650, |
| "text": "Siddharthan (2014)", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 669, |
| "end": 696, |
| "text": "Paetzold and Specia (2017b)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 699, |
| "end": 726, |
| "text": "Alva-Manchego et al. (2020)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 915, |
| "end": 933, |
| "text": "Elguendouze, 2020)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In French, ATS was addressed at first at the syntactic level (Seretan, 2012; Brouwers et al., 2014) using rule-based systems. In parallel, lexical simplification was also investigated, based on lexicons or resources Cardon, 2018; Hmida et al., 2018) . Due to the lack of training data, machine translation approaches -which are standard for English -were applied to French (Rauf et al., 2020) only very recently, with mixed results. As a result, the situation of ATS for French is clearly lagging behind that of English. The only simplification package freely available for the research community has been published recently (Wilkens and Todirascu, 2020) and it remains preliminary and focused exclusively on syntax. AMesure, a web platform designed to help writers of administrative texts to write in plain language is more encompassing. However, it is limited to detecting complex phenomena and suggesting simplifications.", |
| "cite_spans": [ |
| { |
| "start": 61, |
| "end": 76, |
| "text": "(Seretan, 2012;", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 77, |
| "end": 99, |
| "text": "Brouwers et al., 2014)", |
| "ref_id": null |
| }, |
| { |
| "start": 216, |
| "end": 229, |
| "text": "Cardon, 2018;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 230, |
| "end": 249, |
| "text": "Hmida et al., 2018)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 625, |
| "end": 654, |
| "text": "(Wilkens and Todirascu, 2020)", |
| "ref_id": "BIBREF60" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we aim to fill the gap in lexical simplification (LS) tools and resources for French by developing a tool in which several standard approaches of LS are available and by building a reference dataset to evaluate our results. This library, called FrenLyS for French Library for Simplification, follows the LS process first described by Shardlow (2014) as a sequence of four steps: identifying complex terms, generating candidates for substitution, selecting the best candidates, and ranking them according to their degree of readability. Our work draws from similar packages in other languages, such as LEXenstein (Paetzold and Specia, 2015) for English, the EASIER tool (Alarcon et al., 2019) and LexSis (Bott et al., 2012) for Spanish or the work by Qiang et al. (2021) .", |
| "cite_spans": [ |
| { |
| "start": 349, |
| "end": 364, |
| "text": "Shardlow (2014)", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 627, |
| "end": 654, |
| "text": "(Paetzold and Specia, 2015)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 684, |
| "end": 706, |
| "text": "(Alarcon et al., 2019)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 718, |
| "end": 737, |
| "text": "(Bott et al., 2012)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 765, |
| "end": 784, |
| "text": "Qiang et al. (2021)", |
| "ref_id": "BIBREF50" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The article is structured as follows. Section 2 presents the state of the art of lexical simplification. In Section 3, we describe the different approaches we implemented for each of the LS steps. Section 4 describes the methodology used to evaluate our approaches, which includes a new reference dataset for French. In section 5 we report and discuss the performance of each of these approaches on our evaluation dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The task of lexical simplification was first investigated by Carroll et al. (1998) who exploited a rather simple solution: they obtained candidates for substitution using WordNet synonyms (Miller, 1995) and ranked them according to their frequency. As a result of this work, researchers tried to improve different aspects of this process, either by collecting synonyms (De Belder and Moens, 2010) , or by ranking the candidates (Biran et al., 2011a) , etc. In his survey of the field, Shardlow (2014) provided a clear view of the different challenges within LS, identifying four steps in which recent work can be classified.", |
| "cite_spans": [ |
| { |
| "start": 61, |
| "end": 82, |
| "text": "Carroll et al. (1998)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 188, |
| "end": 202, |
| "text": "(Miller, 1995)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 373, |
| "end": 396, |
| "text": "Belder and Moens, 2010)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 428, |
| "end": 449, |
| "text": "(Biran et al., 2011a)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Complex Word Identification The first step in lexical simplification is the complex word identification (CWI). This step has been the object of several shared tasks (Paetzold and Specia, 2016c; Yimam et al., 2018) and aims at identifying in a text the words or expressions likely to be problematic for a target audience of readers and on which the LS system should be applied. As Gooding and Kochmar (2019) pointed out, early works on the complex word identification operated by simplifying all words (Thomas and Anderson, 2012; Bott et al., 2012) or were based on a threshold t over a given metric of simplicity (e.g. word frequency) that separates simple from complex words (Biran et al., 2011b) . Another approach consists in finding complex words with the help of a lexicon : if the word appears in the resource it is considered as complex, otherwise as simple. This method has been mostly used for lexical simplification of medical texts (Chen et al., 2016; Del\u00e9ger and Zweigenbaum, 2009) . Other more recent attempts either used machine learning to classify words as either complex or simple based on some features such as word length, word frequency, number of senses, etc. (Shardlow, 2013; Alarcon et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 165, |
| "end": 193, |
| "text": "(Paetzold and Specia, 2016c;", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 194, |
| "end": 213, |
| "text": "Yimam et al., 2018)", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 380, |
| "end": 406, |
| "text": "Gooding and Kochmar (2019)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 501, |
| "end": 528, |
| "text": "(Thomas and Anderson, 2012;", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 529, |
| "end": 547, |
| "text": "Bott et al., 2012)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 676, |
| "end": 697, |
| "text": "(Biran et al., 2011b)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 943, |
| "end": 962, |
| "text": "(Chen et al., 2016;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 963, |
| "end": 993, |
| "text": "Del\u00e9ger and Zweigenbaum, 2009)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1181, |
| "end": 1197, |
| "text": "(Shardlow, 2013;", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 1198, |
| "end": 1219, |
| "text": "Alarcon et al., 2019)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Substitution Generation Once complex terms have been identified, the next step is to produce candidates that can replace for the target complex word. This step, called substitution generation (SG), is most often carried out querying linguistic lexical resources, as evidenced by the work of Carroll et al. (1998) , Bott et al. (2012 ), or Hmida et al. (2018 . They generate synonyms by querying lexical databases such as WordNet or synonym resources such as ReSyf (Billami et al., 2018) . As it is not always easy to find lexical databases and as those might have limited coverage, Horn et al. (2014a) proposed to use parallel corpora -Wikipedia and Simple Wikipedia -to automatically extract lexical simplification rules. Del\u00e9ger and Zweigenbaum (2009) resorted to paraphrases to replace target complex words, a strategy that is more relevant for specialized languages. A currently popular approach was first suggested by Glava\u0161 an\u010f Stajner (2015) . It consists in obtaining synonyms in an unsupervised way relying on semantic representations such as embeddings. The complex word to be substituted is projected in the semantic space in order to generate the \u039dclosest semantic neighbors. More recently, Qiang et al. (2019) used BERT in a similar fashion.", |
| "cite_spans": [ |
| { |
| "start": 291, |
| "end": 312, |
| "text": "Carroll et al. (1998)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 315, |
| "end": 332, |
| "text": "Bott et al. (2012", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 333, |
| "end": 357, |
| "text": "), or Hmida et al. (2018", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 458, |
| "end": 486, |
| "text": "ReSyf (Billami et al., 2018)", |
| "ref_id": null |
| }, |
| { |
| "start": 582, |
| "end": 601, |
| "text": "Horn et al. (2014a)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 723, |
| "end": 753, |
| "text": "Del\u00e9ger and Zweigenbaum (2009)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 923, |
| "end": 948, |
| "text": "Glava\u0161 an\u010f Stajner (2015)", |
| "ref_id": null |
| }, |
| { |
| "start": 1203, |
| "end": 1222, |
| "text": "Qiang et al. (2019)", |
| "ref_id": "BIBREF49" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In order to obtain semantically correct sentences, each candidate has to go through a disambiguation step. The substitution selection (SS) step aims to decide which of the candidates collected at generation step best fits the context of the sentence to be simplified. De Belder and Moens (2012) proposed to carry out the task of disambiguation using a Latent Words Language Model (LWLM): they use Bayesian networks to represent words and their contextual meaning. Other studies took advantage of word sense disambiguation systems to explicitly label the senses of both the target and the candidates, in order to select a candidate having the same sense as the target (Thomas and Anderson, 2012; Nunes et al., 2013) . A third line of research leveraged semantic models to compare the semantic similar-ity of each candidate with the sentence to simplify. Bott et al. (2012) exploited a vector space model, whereas Paetzold and Specia (2015) rather used a word embedding model. It is also possible to perform this step in a simpler way, by removing all candidates who do not share the same part of speech as the word to be replaced (Paetzold and Specia, 2013) .", |
| "cite_spans": [ |
| { |
| "start": 271, |
| "end": 294, |
| "text": "Belder and Moens (2012)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 667, |
| "end": 694, |
| "text": "(Thomas and Anderson, 2012;", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 695, |
| "end": 714, |
| "text": "Nunes et al., 2013)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 853, |
| "end": 871, |
| "text": "Bott et al. (2012)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 912, |
| "end": 938, |
| "text": "Paetzold and Specia (2015)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 1129, |
| "end": 1156, |
| "text": "(Paetzold and Specia, 2013)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Selection", |
| "sec_num": null |
| }, |
| { |
| "text": "Substitution Ranking After having identified the complex words, generated synonyms, and selected the most coherent ones, the final step of LS consists in ranking the remaining candidates according to their reading ease. The first LS systems generally resorted on frequency (Carroll et al., 1998; De Belder and Moens, 2010; Specia et al., 2012) where it is considered that more frequent words are easier to understand. Specia et al. (2012) showed that this simple rule actually represents a very strong baseline, as it outperformed 9 out of the 11 ranking systems engaged in this task of Se-mEval 2012 (Specia et al., 2012) . Other studies proposed simplicity metrics that can be combined word characteristics: Biran et al. 2011aand Bott et al. (2012) combine frequency with word length. Finally, it is also possible to use statistical ranking algorithms (Horn et al., 2014a; Fran\u00e7ois et al., 2016) that we can combine with neural networks (Paetzold and Specia, 2017a) .", |
| "cite_spans": [ |
| { |
| "start": 273, |
| "end": 295, |
| "text": "(Carroll et al., 1998;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 296, |
| "end": 322, |
| "text": "De Belder and Moens, 2010;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 323, |
| "end": 343, |
| "text": "Specia et al., 2012)", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 418, |
| "end": 438, |
| "text": "Specia et al. (2012)", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 601, |
| "end": 622, |
| "text": "(Specia et al., 2012)", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 732, |
| "end": 750, |
| "text": "Bott et al. (2012)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 854, |
| "end": 874, |
| "text": "(Horn et al., 2014a;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 875, |
| "end": 897, |
| "text": "Fran\u00e7ois et al., 2016)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 939, |
| "end": 967, |
| "text": "(Paetzold and Specia, 2017a)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Selection", |
| "sec_num": null |
| }, |
| { |
| "text": "In parallel to the design of new LS methods, the development of reference datasets to train and evaluate those methods is key. Several datasets for lexical simplification are available for English, such as SemEval 2012 (Specia et al., 2012) , LSeval (De Belder and Moens, 2012), LexMTurk (Horn et al., 2014b) , NNSeval (Paetzold and Specia, 2016b) , and BenchLS (Paetzold and Specia, 2016d) . Other languages are not so well resourced: there are only 2 datasets for Japanese -SNOW E4 (Kajiwara and Yamamoto, 2015) and BCCWJ (Kodaira et al., 2016) -, but, to our knowledge, none for French. In French, the only available resource for text simplification is the ALECTOR corpus (Gala et al., 2020) . It consists in 79 parallel texts with information about complex words, but there are no validated simpler synonyms, which are required to assess LS approaches.", |
| "cite_spans": [ |
| { |
| "start": 219, |
| "end": 240, |
| "text": "(Specia et al., 2012)", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 288, |
| "end": 308, |
| "text": "(Horn et al., 2014b)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 319, |
| "end": 347, |
| "text": "(Paetzold and Specia, 2016b)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 362, |
| "end": 390, |
| "text": "(Paetzold and Specia, 2016d)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 484, |
| "end": 513, |
| "text": "(Kajiwara and Yamamoto, 2015)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 524, |
| "end": 546, |
| "text": "(Kodaira et al., 2016)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 675, |
| "end": 694, |
| "text": "(Gala et al., 2020)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Available datasets", |
| "sec_num": null |
| }, |
| { |
| "text": "Our system is the first to offer several methods for generating candidates for substitution, selecting them based on semantic similarity with the target word and ranking them by difficulty in French. Most of these methods have been previously applied to English, and we adapted them to the case of French. A few are new. All of them are described hereafter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "It should be noted that FrenLyS does not implement any complex word identification algorithm. We believe this is a very complex task, which should be addressed as a whole and actually is (Yimam et al., 2018) , especially because CWI requires to take into account the readers' characteristics. Methods based on lexical characteristics or word lists overlook the reader's characteristics and Lee and Yeung (2019) have rightly stressed that current approaches offer the same substitutions regardless of users. This tool is therefore based on the prerequisite that complex words already have been identified. For the sake of the evaluation of our tool, we relied on a manual annotation of complex words in our test set (see Section 4).", |
| "cite_spans": [ |
| { |
| "start": 187, |
| "end": 207, |
| "text": "(Yimam et al., 2018)", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 390, |
| "end": 410, |
| "text": "Lee and Yeung (2019)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The task of substitution generation aims to generate candidate synonyms to replace complex words. To carry out this step, FrenLyS proposes three methods: synonyms are directly obtained from a resource of synonyms (see ReSyf generator), or are generated by embeddings, either produced by Fast-Text (see FastText generator) or by the BERT architecture (see CamemBERT generator). We also propose a 4th approach (see CamemBERT union) that combines Camembert with Resyf or with Fasttext to take advantage of the contextual information that the model can provide.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Generation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "ReSyf generator This module generates candidates from the graduated and disambiguated synonyms resource ReSyF (Billami et al., 2018). It is built from the semantic network JeuxDeMots and contains 57,589 entries that are connected to 148,648 synonyms (both in their lemma form). Its main asset is that synonyms corresponding to different meanings of a word have been manually and automatically clustered into different synsets. Another interesting feature of ReSyF is that the synonyms gathered in the same synset are ranked according to reading ease. Based on those characteristics, our method simply consults ReSyF using the lemma of the complex word and returns the lemmas of the top three simplest synonyms in each corresponding 'synset'. At this step, we do not try to disambiguate the meaning of the word to substitute, as this is the role of the substitution selection step.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Generation", |
| "sec_num": "3.1" |
| }, |
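The lookup described above can be sketched as follows. The toy dictionary stands in for ReSyf (whose actual API and entries differ), and each synset is assumed to be pre-ranked from easiest to hardest, as the paper states:

```python
# Sketch of the ReSyf-style lookup: for a lemma, return the three
# simplest synonyms of every corresponding synset. TOY_RESYF is an
# illustrative stand-in, not real ReSyf data.
TOY_RESYF = {
    "vélocité": [
        ["vitesse", "rapidité", "allure", "célérité"],  # sense 1, easiest first
        ["promptitude", "hâte"],                        # sense 2
    ],
}

def resyf_candidates(lemma, k=3):
    """Return the k simplest synonyms from each synset of `lemma`."""
    candidates = []
    for synset in TOY_RESYF.get(lemma, []):
        candidates.extend(synset[:k])  # synsets are already ranked by ease
    return candidates
```

No disambiguation happens here: all synsets contribute candidates, matching the paper's decision to defer sense selection to the substitution selection step.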
| { |
| "text": "FastText generator FastText (Bojanowski et al., 2017 ) is a library for efficient learning of word representations. Its advantage for our task is that it proposes character n-gram embeddings: we can thus obtain a vector representation even for a word that does not exist in the training corpus. Thanks to this technique, we return, for any given complex word, its k-nearest semantic neighbors (the inflected forms) based on cosine similarity.", |
| "cite_spans": [ |
| { |
| "start": 28, |
| "end": 52, |
| "text": "(Bojanowski et al., 2017", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Generation", |
| "sec_num": "3.1" |
| }, |
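A minimal sketch of the k-nearest-neighbour retrieval, with toy 3-dimensional vectors standing in for FastText embeddings (a real system would query a trained FastText model rather than this hand-written table):

```python
import math

# Toy embedding table; the values are illustrative, not FastText output.
EMB = {
    "maison":  [0.9, 0.1, 0.0],
    "demeure": [0.8, 0.2, 0.1],
    "logis":   [0.7, 0.3, 0.0],
    "voiture": [0.0, 0.9, 0.4],
}

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def nearest(word, k=2):
    """Return the k words whose embeddings are closest to `word`."""
    target = EMB[word]
    others = [(w, cosine(target, v)) for w, v in EMB.items() if w != word]
    others.sort(key=lambda pair: pair[1], reverse=True)
    return [w for w, _ in others[:k]]
```

With a real FastText model, out-of-vocabulary words would still receive a vector through character n-grams, which is the asset highlighted above.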
| { |
| "text": "CamemBERT generator In the same way that Qiang et al. (2019) generated synonyms with BERT, we rely on a pre-trained version of Camem-BERT (Martin et al., 2020) , based on the RoBERTa architecture, on the French subcorpus of the multilingual corpus OSCAR. Our method employs the masked language model (MLM) that masks some percentage of input tokens and predicts the masked words from their right and left contexts (Qiang et al., 2019) . The idea is to mask the complex word and use the top predicted words (inflected forms) as candidate for substitution. As FastText, CamemBERT proposes a solution to deal with outof-vocabulary tokens through their decomposition in wordpieces.", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 159, |
| "text": "(Martin et al., 2020)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 414, |
| "end": 434, |
| "text": "(Qiang et al., 2019)", |
| "ref_id": "BIBREF49" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Generation", |
| "sec_num": "3.1" |
| }, |
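The masking step can be sketched as below. The score dictionary is illustrative: in practice those scores would come from CamemBERT's fill-mask head rather than being hard-coded:

```python
# Sketch of MLM-based candidate generation: replace the complex word by
# the mask token, then keep the model's top-k predictions as candidates.
def mask_complex_word(sentence, complex_word, mask_token="<mask>"):
    """Replace the first occurrence of the complex word by the mask."""
    return sentence.replace(complex_word, mask_token, 1)

def top_candidates(score_by_word, k=3):
    """Keep the k best-scored predictions as substitution candidates."""
    return sorted(score_by_word, key=score_by_word.get, reverse=True)[:k]

masked = mask_complex_word("Une grande félicité l'envahit", "félicité")
scores = {"joie": 0.41, "tristesse": 0.08, "bonheur": 0.33}  # illustrative
candidates = top_candidates(scores, k=2)
```

Note that the model never sees the complex word itself, only its context, which is exactly the limitation the CamemBERT union method addresses.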
| { |
| "text": "CamemBERT union This method is based on the 2 following observations : ReSyf and FastText generators only care about the complex word and not its context while CamemBERT only cares about the context but does not know the complex word. The solution is to combine the advantages of both approaches by computing the CamemBERT-score for each substitute generated by ReSyf or FastText. For this purpose, we predicted the n best candidates with the CamemBERT generator and the k best candidates for the other method. Then we retain only the words that are in the intersection of the two generators and sort them by their new score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Generation", |
| "sec_num": "3.1" |
| }, |
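A sketch of the combination step (despite the method's name, the paper keeps candidates in the *intersection* of the two generators). Here the CamemBERT score is represented by position in the CamemBERT-ranked list, best first; a real implementation would carry the actual probabilities:

```python
def union_generator(camembert_ranked, other_candidates):
    """Keep only candidates proposed by both generators, ordered by
    their CamemBERT rank (best first)."""
    other = set(other_candidates)
    return [w for w in camembert_ranked if w in other]

cb = ["vitesse", "force", "rapidité", "allure"]  # n best from CamemBERT
rs = ["rapidité", "vitesse", "célérité"]         # k best from ReSyf (or FastText)
combined = union_generator(cb, rs)
```

This keeps CamemBERT's contextual ordering while guaranteeing that every retained candidate is also a lexical synonym of the target.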
| { |
| "text": "This step takes the list of candidate synonyms and selects only those that are acceptable in the context of the complex word to replace.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Selection", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We have decided to implement two of the four approaches covered in section 2, as they rely on very different strategies: either by eliminating candidates that do not have the same part of speech (see POS selector), or by leveraging language models such as FastText to verify the semantic compatibility between the candidate and the context (Fast-TextWord selector, FastTextSentence selector).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Selection", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "POS selector Following Paetzold and Specia (2013), we decided to include a function within our generation methods that checks wether the generated candidates and the word to be replaced share the same parts of the speech. To do this, we used the possible tags for this word as given in the Delaf dictionary 1 . If the intersection of POS-tags for the 2 words (complex word and candidate) is empty, the candidate is rejected.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Selection", |
| "sec_num": "3.2" |
| }, |
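The filter amounts to a set intersection over possible tags. The toy dictionary below stands in for Delaf (whose entries and tagset differ):

```python
# Toy Delaf-style lexicon: each word maps to its set of possible POS tags.
TOY_DELAF = {
    "rapide": {"ADJ"},
    "vite":   {"ADV"},
    "véloce": {"ADJ"},
}

def pos_filter(complex_word, candidates, lexicon=TOY_DELAF):
    """Reject candidates sharing no possible POS tag with the target."""
    target_tags = lexicon.get(complex_word, set())
    return [c for c in candidates if lexicon.get(c, set()) & target_tags]
```

Using tag *sets* rather than a single tag matters for French, where many forms are ambiguous (e.g. noun/adjective homographs): one shared tag suffices to keep the candidate.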
| { |
| "text": "FastTextWord selector For each synonym, this selector first retrieves the FastText embeddings of the complex word and this synonym and compute the cosine similarity between both vectors. The more similar the meanings of both words are, the closer their vectors are. We therefore select candidates for which the cosine similarity with the complex word is greater than the heuristic threshold of 0.5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Selection", |
| "sec_num": "3.2" |
| }, |
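The selection rule is a simple threshold over cosine similarity; the vectors below are illustrative stand-ins for FastText embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def word_selector(target_vec, candidate_vecs, threshold=0.5):
    """Keep candidates whose cosine similarity with the target word
    exceeds the heuristic 0.5 threshold."""
    return [w for w, v in candidate_vecs.items()
            if cosine(target_vec, v) > threshold]

kept = word_selector([1.0, 0.0],
                     {"proche": [0.9, 0.1], "lointain": [0.0, 1.0]})
```

Unlike a ranker, this selector makes a hard keep/drop decision, so the threshold directly trades recall against precision.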
| { |
| "text": "FastTextSentence selector Instead of directly comparing the vectors of the complex words and their synonyms, as in the previous approach, we use the context of the complex word for vectorization: we compute the cosine distance between the vectors of the synonyms and the vector of the complex sentence. We select the candidates with a similarity score greater than the heuristic threshold of 0.35.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Selection", |
| "sec_num": "3.2" |
| }, |
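A sketch of the sentence-level variant. The paper does not say how the sentence vector is built; averaging the word vectors is an assumption here (FastText's `get_sentence_vector` behaves similarly), and all vectors are toy values:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def mean_vector(vectors):
    """Average word vectors into one sentence vector (assumed strategy)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sentence_selector(sentence_word_vecs, candidate_vecs, threshold=0.35):
    """Keep candidates whose similarity to the sentence vector exceeds 0.35."""
    sent_vec = mean_vector(sentence_word_vecs)
    return [w for w, v in candidate_vecs.items()
            if cosine(sent_vec, v) > threshold]

kept = sentence_selector([[1.0, 0.0], [0.8, 0.2]],
                         {"adapté": [0.7, 0.3], "contraire": [-1.0, 0.1]})
```

Comparing against the whole sentence rather than the target word lets the context, not just lexical similarity, veto a candidate, which motivates the lower 0.35 threshold.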
| { |
| "text": "Finally, the last part of our system classifies the synonyms according to their degree of reading ease.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Ranking", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "For this step, we referred to common ranking methods in the literature (cf. section 2) as we relied on frequency (see Lexique3 ranker) and in a slightly more original way, we provide a method that ranks words according to the number of meanings they have and frequency (see FreNetic ranker). We also propose a method that combines various linguistic prescriptors through an SVMRank algorithm (Herbrich et al., 2000) (see SVM ranker).", |
| "cite_spans": [ |
| { |
| "start": 392, |
| "end": 415, |
| "text": "(Herbrich et al., 2000)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Ranking", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Lexique3 ranker For this method, we use the commonly acknowledged fact that the more frequent a word is, the simpler it is. To obtain the frequency of the candidates for substitution, we use the French lexical database Lexique3 (New, 2006) which provides, for 140,000 words of the French language, their frequencies of occurrences in literary texts and movie subtitles. We have chosen to use the frequencies estimated on the corpus of film subtitles because it contains a more up-to-date vocabulary.", |
| "cite_spans": [ |
| { |
| "start": 228, |
| "end": 239, |
| "text": "(New, 2006)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Ranking", |
| "sec_num": "3.3" |
| }, |
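The ranker reduces to a descending sort on frequency. The table below is an illustrative stand-in for Lexique3's subtitle frequencies (occurrences per million; values are made up):

```python
# Illustrative frequency table, not real Lexique3 data.
TOY_FREQ = {"voiture": 320.0, "automobile": 25.0, "véhicule": 48.0}

def frequency_ranker(candidates, freq=TOY_FREQ):
    """Rank candidates from most to least frequent (simplest first);
    unknown words default to frequency 0 and sink to the bottom."""
    return sorted(candidates, key=lambda w: freq.get(w, 0.0), reverse=True)
```

The default of 0 for out-of-lexicon words is a design choice: an unseen candidate is treated as maximally difficult rather than discarded.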
| { |
| "text": "FreNetic ranker In the same way as Elhadad (2006) , this ranker exploits polysemy as a measure of familiarity and therefore of difficulty. Words from the general lexicon are more polysemous while technical terms tend to be monosemic. To collect the number of senses, we relied on FreNet 2 , a python API for WOLF (Sagot and Fiser, 2008) , a free French Wordnet. Synonyms are therefore ranked according to their number of meanings (more is easier). When several words get the same number of senses, we decide to further rank them based on their frequencies.", |
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 49, |
| "text": "Elhadad (2006)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 313, |
| "end": 336, |
| "text": "(Sagot and Fiser, 2008)", |
| "ref_id": "BIBREF53" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Ranking", |
| "sec_num": "3.3" |
| }, |
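The two-level criterion (sense count first, frequency as tie-breaker) maps to a tuple sort key. Sense counts and frequencies below are illustrative, not WOLF or Lexique3 data:

```python
# Illustrative sense counts and frequencies for three near-synonyms.
TOY_SENSES = {"clair": 6, "limpide": 2, "net": 6}
TOY_FREQ   = {"clair": 180.0, "limpide": 4.0, "net": 90.0}

def frenetic_ranker(candidates, senses=TOY_SENSES, freq=TOY_FREQ):
    """Rank by number of senses (more senses = more familiar = easier),
    breaking ties by frequency, both descending."""
    return sorted(candidates,
                  key=lambda w: (senses.get(w, 0), freq.get(w, 0.0)),
                  reverse=True)
```

Here "clair" and "net" tie on six senses, so frequency decides between them, exactly the fallback described above.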
| { |
| "text": "We also propose to perform the ranking task using a SVMRank algorithm described in Fran\u00e7ois et al. (2016) . It is able to rank any set of words using 21 word characteristics such as word frequency, presence of the word in a list of simple words, number of phonemes, number of letters, number of senses, number of orthographical neighbors, etc. To train it, we used the Manulex vocabulary list (L\u00e9t\u00e9 et al., 2004) that includes 19,038 lemmas annotated with their level of complexity. Based on that information, we prepared training pairs of two words, one of which is known to be more complex than the other, which were fed to the SVMRank algorithm. In their paper, Fran\u00e7ois et al. (2016) report an accuracy of 77 % with 10fold cross-validation and a mean reciprocal rank of 0.84, obtained on a reference dataset of 40 synsets including a total of 150 synonyms that were ranked by 40 human annotators.", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 105, |
| "text": "Fran\u00e7ois et al. (2016)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 393, |
| "end": 412, |
| "text": "(L\u00e9t\u00e9 et al., 2004)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 665, |
| "end": 687, |
| "text": "Fran\u00e7ois et al. (2016)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SVM ranker", |
| "sec_num": null |
| }, |
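Pairwise SVM ranking of the kind used above can be sketched with a linear SVM trained on feature differences: each training example is the difference between the feature vectors of two words, labelled positive if the first word is simpler. The three features and the tiny dataset below are invented for illustration; the real system uses 21 word characteristics trained on Manulex pairs.

```python
# Minimal pairwise-ranking sketch in the spirit of SVMRank.
# Features and word data are toy values, not the actual 21 features.
import numpy as np
from sklearn.svm import LinearSVC

# features per word: [log frequency, number of letters, number of senses]
words = {
    "use":     np.array([8.1, 3.0, 6.0]),
    "utilize": np.array([4.2, 7.0, 2.0]),
    "big":     np.array([8.5, 3.0, 5.0]),
    "sizable": np.array([3.9, 7.0, 1.0]),
}
# (simpler, harder) pairs known from annotated difficulty levels
pairs = [("use", "utilize"), ("big", "sizable")]

# Train on feature differences and their negations (labels +1 / -1).
X = np.array([words[a] - words[b] for a, b in pairs] +
             [words[b] - words[a] for a, b in pairs])
y = np.array([1] * len(pairs) + [-1] * len(pairs))

clf = LinearSVC(C=1.0).fit(X, y)

def simpler(a, b):
    """True if word a is predicted to be simpler than word b."""
    return clf.decision_function([words[a] - words[b]])[0] > 0
```

Ranking a full synonym set then amounts to sorting it with `simpler` as a pairwise comparison.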
| { |
| "text": "It is usual to create evaluation corpora with the help of human annotators but this requires time and lots of annotators, which may also overlook some valid synonyms. Therefore, we opted for a hybrid approach, i.e. we chose to use our synonym generation methods to propose an exhaustive list of synonyms and then we called upon annotators to select them in context and rank them according to their difficulty. The advantage of this approach is that it combines several methods from very different generations, including a synonym dictionary that was created from propositions submitted by humans (Lafourcade, 2007) . In this way, we still collect data made by humans but a priori. We explain the corpus creation process in the next section before discussing the evaluation measures we used.", |
| "cite_spans": [ |
| { |
| "start": 596, |
| "end": 614, |
| "text": "(Lafourcade, 2007)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Process", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We decided to fill the lack of resources evaluation in French LS by proposing a dataset of annotated sentences, collected from two sources. The first set of sentences was sampled from the French reference dataset ALECTOR (Gala et al., 2020) : it includes sentences with complex words and candidates for substitution 3 . Complex words were detected through a reading experiment with dyslexic children. The second set of sentences comes from texts sampled from from various French textbooks, ranging from grade 4 to grade 12 and covering three subjects: French literature, history, and sciences. Complex words have not been directly annotated in these sentences. However, they were read by various profiles of readers through an interface and reading times have been collected. Based on this information, we have manually identified seemingly complex words. Once the two sets of sentences were collected and the complex words were identified, we had to generate substitutes, and manually select them and classify them, as was described in the following subsections.", |
| "cite_spans": [ |
| { |
| "start": 221, |
| "end": 240, |
| "text": "(Gala et al., 2020)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Tailor-Made Evaluation Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Generate and select candidates For each complex word, we produced synonyms that we annotated, using all our generation methods. The relevance of these synonyms in the context of the original sentence was then assessed by 3 expert linguists. They had to assign a score of 1 if the word is considered correct, otherwise 0. In this process, we applied the following guidelines : a word is considered synonymous as long as its replacement does not change the meaning of the sentence. To obtain a wide range of synonyms, we decided to accept hyperonyms and hyponyms -provided they fit the context -and to accept synonyms even if their register was different from that of the original word. This task is very complicated since there is no perfect synonymity and the validity of a candidate can therefore be perceived differently from one annotators to another one. However, thanks to the annotation guidelines and a discussion session between the annotators to discuss the criteria, the inter-rater agreement (Fleiss' \u039a) between the three annotators, computed on a sample of 500 candidates, is 0.638, which corresponds to a substantial agreement (Artstein and Poesio, 2008) .", |
| "cite_spans": [ |
| { |
| "start": 1140, |
| "end": 1167, |
| "text": "(Artstein and Poesio, 2008)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Tailor-Made Evaluation Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Ranking of substitutes Finally, for this last annotation, we resorted to on 20 non expert annotators aged from 20 to 57 years, whose native language is French. They had to rank the synonyms validated at the previous step by reading ease. To that end, we used the online LimeSurvey tool 4 to deliver 2 different questionnaires of 25 items. The survey is presented as follows: each question includes a target sentence and the complex word in bold, as well as a list of synonyms in the left column. The task of the participants is to drag all the synonyms into the right-hand column to rank them, the top word being considered the most difficult.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Tailor-Made Evaluation Corpus", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Once all annotations have been collected, we proceed to average all annotators: in the same way as Specia et al. 2012, we assign each substitution a score based on the average of the scores assigned to it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Tailor-Made Evaluation Corpus", |
| "sec_num": "4.1" |
| }, |
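The aggregation step above can be sketched as follows. The helper and the word lists are hypothetical; each annotator contributes an ordered list (hardest first, as in the survey), and a substitute's final score is the mean of the positions it received.

```python
# Sketch of the score-aggregation step: each substitute's score is the
# average of the rank positions assigned by the annotators.
# The rankings below are invented for illustration.

# one ordered list per annotator, hardest word first (as in the survey)
annotator_rankings = [
    ["véhicule", "automobile", "voiture"],
    ["automobile", "véhicule", "voiture"],
]

def average_ranks(rankings):
    """Map each word to the mean of its positions across annotators."""
    positions = {}
    for ranking in rankings:
        for pos, word in enumerate(ranking, start=1):
            positions.setdefault(word, []).append(pos)
    return {w: sum(p) / len(p) for w, p in positions.items()}

print(average_ranks(annotator_rankings))
# {'véhicule': 1.5, 'automobile': 1.5, 'voiture': 3.0}
```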
| { |
| "text": "To compare our different methods of generation and selection, we used the following metrics as described in Paetzold and Specia (2016a) : potential, precision, recall and F1. For the evaluation of ranking methods, we also employ the metrics trank-at-i and recall-at-i as mentioned in Paetzold and Specia (2016a) .", |
| "cite_spans": [ |
| { |
| "start": 108, |
| "end": 135, |
| "text": "Paetzold and Specia (2016a)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 284, |
| "end": 311, |
| "text": "Paetzold and Specia (2016a)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "4.2" |
| }, |
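A sketch of the generation/selection metrics, under our reading of Paetzold and Specia (2016a): "potential" is the share of instances for which at least one generated candidate appears in the gold set, while precision and recall are micro-averaged over all candidates and gold substitutes. The function name and the toy data are illustrative, not the original evaluation code.

```python
# Hedged sketch of potential / precision / recall / F1 for substitution
# generation, micro-averaged over instances. Toy data for illustration.

def evaluate(candidates, gold):
    """candidates, gold: lists of sets, one pair per complex word."""
    # potential: at least one candidate matches the gold set
    hits = sum(1 for c, g in zip(candidates, gold) if c & g)
    potential = hits / len(gold)
    n_cand = sum(len(c) for c in candidates)
    n_gold = sum(len(g) for g in gold)
    n_correct = sum(len(c & g) for c, g in zip(candidates, gold))
    precision = n_correct / n_cand if n_cand else 0.0
    recall = n_correct / n_gold if n_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return potential, precision, recall, f1

p, pr, rc, f1 = evaluate(
    candidates=[{"easy", "simple"}, {"hard"}],
    gold=[{"simple", "plain"}, {"tough", "difficult"}],
)
print(p, pr, rc, f1)  # potential 0.5, precision ~0.33, recall 0.25, F1 ~0.29
```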
| { |
| "text": "This section presents the results for each step of simplification and compares the different methods proposed in FrenLyS. In view of the absence of comparable work for French, we put our results into perspective with those of Paetzold and Specia (2017c) for English and Qiang et al. (2021) for Chinese.", |
| "cite_spans": [ |
| { |
| "start": 226, |
| "end": 253, |
| "text": "Paetzold and Specia (2017c)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 270, |
| "end": 289, |
| "text": "Qiang et al. (2021)", |
| "ref_id": "BIBREF50" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "As we can see in Table 1 , FastText generator, a method based on non contextual embeddings, slightly outperforms Resyf generator, based on a dictionary (F1 is 0.25 vs. 0.23). Resyf has a higher potential and recall, but suffers from a lack in precision, which is due to the fact that no sense disambiguation is carried out in synonym selection. In contrast, FastText reject candidates that do not correspond to the most frequent meaning of a form (as FastText computes only one vector per form, the most frequent sense has the largest influence on it). CamemBERT is clearly the less efficient technique. It is not a complete surprise, as it can generate words that fits the context, but are not valid synonyms of the complex word. We therefore tried to combine the advantages of CamemBERT (suitability to the context) with those of Resyf and FastText (better synonym generation). Considering the union between CamemBERT and FastText seems to hurt the performance (0.17 in F1 instead of 0.25), whereas combining CamemBERT and ReSyF produces our best results (0.26 in F1). It seems that ReSyF selects valid, but not necessarily context-appropriate synonyms, which are filtered by BERT based on the context. Although not directly comparable, the F1 of our best method is in line with those of Paetzold and Specia (2017c) and Qiang et al. (2021) . At the potential level, the difference observed could be explained by a variation in the number of synonyms produced by the generators: potential is correlated with the number of generated synonyms.", |
| "cite_spans": [ |
| { |
| "start": 1290, |
| "end": 1317, |
| "text": "Paetzold and Specia (2017c)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 1322, |
| "end": 1341, |
| "text": "Qiang et al. (2021)", |
| "ref_id": "BIBREF50" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 17, |
| "end": 24, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Substitution Generation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We apply each of our selection methods on the union of all generated synonyms. The results obtained are presented in Table 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 117, |
| "end": 124, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Substitution Selection", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Results clearly reveal the importance of selecting synonyms that share the same POS as the word to substitute. This allows our system to reach a F1 of 0.31 for the generation of synonyms. It is however surprising that the POS approach outperforms both FastText methods by 0.04. Once more, our results appears to be comparable with those of Paetzold and Specia (2017c) , as our F1 is clearly higher, but we were not able to obtain such a high potential. It is interesting to notice that our selectors mostly improve precision.", |
| "cite_spans": [ |
| { |
| "start": 340, |
| "end": 367, |
| "text": "Paetzold and Specia (2017c)", |
| "ref_id": "BIBREF48" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Substitution Selection", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Finally, we tested our different ranking methods on the part of corpus that has been also annotated for the reading ease of synonyms. Results are displayed in Table 3 . Ranking candidates based on frequency remains a strong baseline, as the Lexique3 ranker has a TRank-at-1 of 0.42 and a Recall-at-3 of 0.73. This means that, in 42% of our test sentences, using only the frequency allows to correctly predict the synonym defined to as the simplest by human judges (gold). Using the number of senses per words in addition to frequency does not bring much improvement, only 2% for TRank-at-1. In contrast, a much more sophisticated ranker using 21 word features, clearly improves performance, as it is able to select the easier synonyms in 50% of the cases. In this step, our results remain lower than those of Paetzold and Specia (2017c) in terms of TRank-at-1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 159, |
| "end": 166, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Substitution Ranking", |
| "sec_num": "5.3" |
| }, |
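The two ranking metrics can be sketched as follows, under our reading of Paetzold and Specia (2016a): TRank-at-k checks whether the gold-simplest substitute appears in the system's top k, and Recall-at-k measures how many of the gold top-k substitutes the system also places in its top k. The function names and rankings below are invented for illustration.

```python
# Illustrative TRank-at-k and Recall-at-k for substitution ranking.
# All rankings are ordered easiest-first; data is invented.

def trank_at_k(system, gold, k):
    """Share of instances whose gold-simplest word is in the system top k."""
    return sum(1 for s, g in zip(system, gold) if g[0] in s[:k]) / len(gold)

def recall_at_k(system, gold, k):
    """Share of gold top-k substitutes also found in the system top k."""
    found = sum(len(set(s[:k]) & set(g[:k])) for s, g in zip(system, gold))
    total = sum(len(g[:k]) for g in gold)
    return found / total

system = [["aid", "help", "assist"], ["big", "large", "sizable"]]
gold   = [["help", "aid", "assist"], ["big", "large", "sizable"]]

print(trank_at_k(system, gold, 1))   # 0.5
print(recall_at_k(system, gold, 2))  # 1.0
```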
| { |
| "text": "To conclude, we described the first tool for French lexical simplification that carries out three of the four classic LS steps. Our tool, FrenLyS, will be made available to the scientific community via a freely accessible web service 5 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "FrenLyS includes five synonym generators, based on the two principal approaches in the field: using a resource and querying embeddings. Whereas Hmida et al. (2018) had concluded that using ReSyF as a resource was able to outperform the approach of Glava\u0161 and\u0160tajner (2015), we found that relying on FastText was more efficient. However, our best method combines a synonym database with CamemBERT as a way to filter inappropriate synonyms in context. These two sources bring information about the complex word semantic (ReSyf ) and its context (CamemBERT), which comes close to the twofold strategy of Qiang et al. (2019) . They indeed generate synonyms based on one sentence in which the complex is masked (contextual information) and the same sentence in the complex word is present in order to keep the semantic information conveyed by the complex word.", |
| "cite_spans": [ |
| { |
| "start": 144, |
| "end": 163, |
| "text": "Hmida et al. (2018)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 601, |
| "end": 620, |
| "text": "Qiang et al. (2019)", |
| "ref_id": "BIBREF49" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "FrenLyS offers three of them and the results showed that using a simple POS filter is sufficient to improve the F1 of our generators. Ranking synonyms can be done through three techniques, the best of which integrates 21 word characteristics into a SVM ranker. The results obtained for ranking seem lower than those of Paetzold and Specia (2017c) . This could be due to variations in the test data, but maybe also to the use of a neural classifier. We plan to improve our ranking algorithm using a neural ranker in the future to investigate this issue.", |
| "cite_spans": [ |
| { |
| "start": 319, |
| "end": 346, |
| "text": "Paetzold and Specia (2017c)", |
| "ref_id": "BIBREF48" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Finally, in addition to the implementation of the first complete LS tool for French, this paper also proposes the first evaluation dataset for French LS. This dataset will be distributed through the same web site as the API 6 . We hope that the availability of both resources could help boosting current LS research in French, which lacks behind similar research for other European languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The Delaf(Courtois, 2004) contains about 792,260 entries (inflected forms). For each entry, the dictionary provides the following information: lemma, pos and inflectional information (e.g. dictionaries,dictionary.N).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/hardik-vala/FreNetic", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The same sentence can be found several times with different complex words", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://www.limesurvey.org/fr/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://cental.uclouvain.be/frenlysAPI/ 6 https://cental.uclouvain.be/frenlysAPI/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This project was partially funded by the \"Direction de la language fran\u00e7aise\" from the Federation Wallonia-Brussels (AMesure project). We also want to thank Dr. R\u00e9mi Cardon for his in-depth reading and comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Automated text simplification: A survey", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Suha S Al-Thanyyan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Azmi", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "ACM Computing Surveys (CSUR)", |
| "volume": "54", |
| "issue": "2", |
| "pages": "1--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Suha S Al-Thanyyan and Aqil M Azmi. 2021. Auto- mated text simplification: A survey. ACM Comput- ing Surveys (CSUR), 54(2):1-36.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Lexical simplification approach using easy-to-read resources", |
| "authors": [ |
| { |
| "first": "Rodrigo", |
| "middle": [], |
| "last": "Alarcon", |
| "suffix": "" |
| }, |
| { |
| "first": "Lourdes", |
| "middle": [], |
| "last": "Moreno", |
| "suffix": "" |
| }, |
| { |
| "first": "Isabel", |
| "middle": [], |
| "last": "Segura-Bedmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Paloma", |
| "middle": [], |
| "last": "Martinez", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Procesamiento de Lenguaje Natural", |
| "volume": "63", |
| "issue": "", |
| "pages": "95--102", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rodrigo Alarcon, Lourdes Moreno, Isabel Segura- Bedmar, and Paloma Martinez. 2019. Lexical sim- plification approach using easy-to-read resources. Procesamiento de Lenguaje Natural, 63:95-102.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Data-driven sentence simplification: Survey and benchmark", |
| "authors": [ |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Alva-Manchego", |
| "suffix": "" |
| }, |
| { |
| "first": "Carolina", |
| "middle": [], |
| "last": "Scarton", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Computational Linguistics", |
| "volume": "46", |
| "issue": "1", |
| "pages": "135--187", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fernando Alva-Manchego, Carolina Scarton, and Lu- cia Specia. 2020. Data-driven sentence simplifica- tion: Survey and benchmark. Computational Lin- guistics, 46(1):135-187.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Survey article: Inter-coder agreement for computational linguistics", |
| "authors": [ |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Artstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "4", |
| "pages": "555--596", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/coli.07-034-R2" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ron Artstein and Massimo Poesio. 2008. Survey ar- ticle: Inter-coder agreement for computational lin- guistics. Computational Linguistics, 34(4):555- 596.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "ReSyf: a French lexicon with ranked synonyms", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mokhtar Boumedyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Billami", |
| "suffix": "" |
| }, |
| { |
| "first": "N\u00faria", |
| "middle": [], |
| "last": "Fran\u00e7ois", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Gala", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of 27th International Conference on Computational Linguistics (COLING 2018)", |
| "volume": "", |
| "issue": "", |
| "pages": "2570--2581", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mokhtar Boumedyen Billami, Thomas Fran\u00e7ois, and N\u00faria Gala. 2018. ReSyf: a French lexicon with ranked synonyms. In Proceedings of 27th Inter- national Conference on Computational Linguistics (COLING 2018), pages 2570-2581.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Putting it simply: a context-aware approach to lexical simplification", |
| "authors": [ |
| { |
| "first": "Or", |
| "middle": [], |
| "last": "Biran", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Brody", |
| "suffix": "" |
| }, |
| { |
| "first": "No\u00e9mie", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "496--501", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Or Biran, Samuel Brody, and No\u00e9mie Elhadad. 2011a. Putting it simply: a context-aware approach to lexi- cal simplification. In Proceedings of the 49th ACL, pages 496-501.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Putting it simply: a context-aware approach to lexical simplification", |
| "authors": [ |
| { |
| "first": "Or", |
| "middle": [], |
| "last": "Biran", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Brody", |
| "suffix": "" |
| }, |
| { |
| "first": "No\u00e9mie", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "496--501", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Or Biran, Samuel Brody, and No\u00e9mie Elhadad. 2011b. Putting it simply: a context-aware approach to lex- ical simplification. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 496-501, Portland, Oregon, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Enriching word vectors with subword information", |
| "authors": [ |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "5", |
| "issue": "", |
| "pages": "135--146", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Can spanish be simpler? lexsis: Lexical simplification for spanish", |
| "authors": [ |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Bott", |
| "suffix": "" |
| }, |
| { |
| "first": "Luz", |
| "middle": [], |
| "last": "Rello", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of COLING 2012", |
| "volume": "", |
| "issue": "", |
| "pages": "357--374", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stefan Bott, Luz Rello, Biljana Drndarevic, and Hora- cio Saggion. 2012. Can spanish be simpler? lexsis: Lexical simplification for spanish. In Proceedings of COLING 2012, pages 357-374.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Syntactic sentence simplification for french", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ligozat", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Fran\u00e7ois", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 3rd International Workshop on Predicting and Improving Text Readability for Target Reader Populations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ligozat, and T. Fran\u00e7ois. 2014. Syntactic sentence simplification for french. In Proceedings of the 3rd International Workshop on Predicting and Improv- ing Text Readability for Target Reader Populations (PITR 2014).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Approche lexicale de la simplification automatique de textes m\u00e9dicaux", |
| "authors": [ |
| { |
| "first": "R\u00e9mi", |
| "middle": [], |
| "last": "Cardon", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Actes de la conf\u00e9rence Traitement Automatique de la Langue Naturelle", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R\u00e9mi Cardon. 2018. Approche lexicale de la simplifi- cation automatique de textes m\u00e9dicaux. In Actes de la conf\u00e9rence Traitement Automatique de la Langue Naturelle, TALN 2018, page 159.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Practical simplification of english newspaper text to assist aphasic readers", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Carroll", |
| "suffix": "" |
| }, |
| { |
| "first": "Guido", |
| "middle": [], |
| "last": "Minnen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvonne", |
| "middle": [], |
| "last": "Canning", |
| "suffix": "" |
| }, |
| { |
| "first": "Siobhan", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Tait", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of AAAI-98 Workshop on Integrating Artificial Intelligence and Assistive Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "7--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Carroll, Guido Minnen, Yvonne Canning, Siob- han Devlin, and John Tait. 1998. Practical simpli- fication of english newspaper text to assist aphasic readers. In Proceedings of AAAI-98 Workshop on Integrating Artificial Intelligence and Assistive Tech- nology, pages 7-10.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Finding important terms for patients in their electronic health records: A learning-to-rank approach using expert annotations", |
| "authors": [ |
| { |
| "first": "Jinying", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiaping", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Hong", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "JMIR Medical Informatics", |
| "volume": "4", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.2196/medinform.6373" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jinying Chen, Jiaping Zheng, and Hong Yu. 2016. Finding important terms for patients in their elec- tronic health records: A learning-to-rank approach using expert annotations. JMIR Medical Informat- ics, 4:e40.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Syntaxe et Lexique-Grammaire. Papers in Honour of Maurice Gross", |
| "authors": [ |
| { |
| "first": "Blandine", |
| "middle": [], |
| "last": "Courtois", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Lecl\u00e8re Christian, Laporte\u00c9ric, Piot Mireille, and Silberztein Max", |
| "volume": "24", |
| "issue": "", |
| "pages": "113--123", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Blandine Courtois. 2004. Dictionnaires\u00e9lectroniques DELAF anglais et fran\u00e7ais. In Lecl\u00e8re Christian, Laporte\u00c9ric, Piot Mireille, and Silberztein Max, ed- itors, Lexique, Syntaxe et Lexique-Grammaire. Pa- pers in Honour of Maurice Gross, Lingvisticae In- vestigationes Supplementa 24, pages 113-123. Am- sterdam/Philadelphia : Benjamins. Incollection.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Text simplification for children", |
| "authors": [ |
| { |
| "first": "Jan", |
| "middle": [ |
| "De" |
| ], |
| "last": "Belder", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Francine", |
| "middle": [], |
| "last": "Moens", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the SI-GIR workshop on accessible search systems", |
| "volume": "", |
| "issue": "", |
| "pages": "19--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jan De Belder and Marie-Francine Moens. 2010. Text simplification for children. In Proceedings of the SI- GIR workshop on accessible search systems, pages 19-26. ACM; New York.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "A dataset for the evaluation of lexical simplification", |
| "authors": [ |
| { |
| "first": "Jan", |
| "middle": [ |
| "De" |
| ], |
| "last": "Belder", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Francine", |
| "middle": [], |
| "last": "Moens", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 13th CICLing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jan De Belder and Marie-Francine Moens. 2012. A dataset for the evaluation of lexical simplification. In Proceedings of the 13th CICLing.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Extracting lay paraphrases of specialized expressions from monolingual comparable medical corpora", |
| "authors": [ |
| { |
| "first": "Louise", |
| "middle": [], |
| "last": "Del\u00e9ger", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Zweigenbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2nd Workshop on Building and Using Comparable Corpora: from Parallel to Nonparallel Corpora (BUCC)", |
| "volume": "", |
| "issue": "", |
| "pages": "2--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Louise Del\u00e9ger and Pierre Zweigenbaum. 2009. Ex- tracting lay paraphrases of specialized expressions from monolingual comparable medical corpora. In Proceedings of the 2nd Workshop on Building and Using Comparable Corpora: from Parallel to Non- parallel Corpora (BUCC), pages 2-10.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Simplification de textes: un\u00e9tat de l'art", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sofiane Elguendouze", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Rencontre des\u00c9tudiants Chercheurs en Informatique pour le TAL", |
| "volume": "3", |
| "issue": "", |
| "pages": "96--109", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sofiane Elguendouze. 2020. Simplification de textes: un\u00e9tat de l'art. In Actes de TALN2020. Volume 3: Rencontre des\u00c9tudiants Chercheurs en Informa- tique pour le TAL, pages 96-109. ATALA.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Comprehending technical texts: Predicting and defining unfamiliar terms. AMIA", |
| "authors": [ |
| { |
| "first": "Noemie", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Annual Symposium proceedings / AMIA Symposium. AMIA Symposium", |
| "volume": "", |
| "issue": "", |
| "pages": "239--282", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noemie Elhadad. 2006. Comprehending technical texts: Predicting and defining unfamiliar terms. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium, 2006:239-43.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Bleu, contusion, ecchymose : tri automatique de synonymes en fonction de leur difficult\u00e9 de lecture et compr\u00e9hension", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Fran\u00e7ois", |
| "suffix": "" |
| }, |
| { |
| "first": "Mokhtar", |
| "middle": [], |
| "last": "Billami", |
| "suffix": "" |
| }, |
| { |
| "first": "N\u00faria", |
| "middle": [], |
| "last": "Gala", |
| "suffix": "" |
| }, |
| { |
| "first": "Delphine", |
| "middle": [], |
| "last": "Bernhard", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "15--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Fran\u00e7ois, Mokhtar Billami, N\u00faria Gala, and Delphine Bernhard. 2016. Bleu, contusion, ecchymose : tri automatique de synonymes en fonction de leur difficult\u00e9 de lecture et compr\u00e9hension. In JEP-TALN-RECITAL 2016, volume 2 of TALN, pages 15-28, Paris, France.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Amesure: A web platform to assist the clear writing of administrative texts", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Fran\u00e7ois", |
| "suffix": "" |
| }, |
| { |
| "first": "Adeline", |
| "middle": [], |
| "last": "M\u00fcller", |
| "suffix": "" |
| }, |
| { |
| "first": "Eva", |
| "middle": [], |
| "last": "Rolin", |
| "suffix": "" |
| }, |
| { |
| "first": "Magali", |
| "middle": [], |
| "last": "Norr\u00e9", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Fran\u00e7ois, Adeline M\u00fcller, Eva Rolin, and Magali Norr\u00e9. 2020. Amesure: A web platform to assist the clear writing of administrative texts. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations, pages 1-7.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "La simplification de textes, une aide \u00e0 l'apprentissage de la lecture", |
| "authors": [ |
| { |
| "first": "Nuria", |
| "middle": [], |
| "last": "Gala", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Fran\u00e7ois", |
| "suffix": "" |
| }, |
| { |
| "first": "Ludivine", |
| "middle": [], |
| "last": "Javourey-Drevet", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [ |
| "Christoph" |
| ], |
| "last": "Ziegler", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Langue fran\u00e7aise", |
| "volume": "3", |
| "issue": "", |
| "pages": "123--131", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nuria Gala, Thomas Fran\u00e7ois, Ludivine Javourey-Drevet, and Johannes Christoph Ziegler. 2018. La simplification de textes, une aide \u00e0 l'apprentissage de la lecture. Langue fran\u00e7aise, 3:123-131.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Alector: A parallel corpus of simplified French texts with alignments of misreadings by poor and dyslexic readers", |
| "authors": [ |
| { |
| "first": "N\u00faria", |
| "middle": [], |
| "last": "Gala", |
| "suffix": "" |
| }, |
| { |
| "first": "Ana\u00efs", |
| "middle": [], |
| "last": "Tack", |
| "suffix": "" |
| }, |
| { |
| "first": "Ludivine", |
| "middle": [], |
| "last": "Javourey-Drevet", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Fran\u00e7ois", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Ziegler", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Language Resources and Evaluation for Language Technologies (LREC)", |
| "volume": "", |
| "issue": "", |
| "pages": "1353--1361", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N\u00faria Gala, Ana\u00efs Tack, Ludivine Javourey-Drevet, Thomas Fran\u00e7ois, and Johannes Ziegler. 2020. Alector: A parallel corpus of simplified French texts with alignments of misreadings by poor and dyslexic readers. In Language Resources and Evaluation for Language Technologies (LREC), pages 1353-1361.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Simplifying lexical simplification: Do we need simplified corpora?", |
| "authors": [ |
| { |
| "first": "Goran", |
| "middle": [], |
| "last": "Glava\u0161", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanja", |
| "middle": [], |
| "last": "\u0160tajner", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ACL 2015", |
| "volume": "2", |
| "issue": "", |
| "pages": "63--68", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Goran Glava\u0161 and Sanja \u0160tajner. 2015. Simplifying lexical simplification: Do we need simplified corpora? In Proceedings of ACL 2015, Volume 2: Short Papers, pages 63-68.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Complex word identification as a sequence labelling task", |
| "authors": [ |
| { |
| "first": "Sian", |
| "middle": [], |
| "last": "Gooding", |
| "suffix": "" |
| }, |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Kochmar", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sian Gooding and Ekaterina Kochmar. 2019. Complex word identification as a sequence labelling task. In ACL.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Large margin rank boundaries for ordinal regression", |
| "authors": [ |
| { |
| "first": "Ralf", |
| "middle": [], |
| "last": "Herbrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Thore", |
| "middle": [], |
| "last": "Graepel", |
| "suffix": "" |
| }, |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Obermayer", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Advances in Large Margin Classifiers", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ralf Herbrich, Thore Graepel, and Klaus Obermayer. 2000. Large margin rank boundaries for ordinal regression. Advances in Large Margin Classifiers, 88.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Assisted Lexical Simplification for French Native Children with Reading Difficulties", |
| "authors": [ |
| { |
| "first": "Firas", |
| "middle": [], |
| "last": "Hmida", |
| "suffix": "" |
| }, |
| { |
| "first": "Mokhtar", |
| "middle": [ |
| "Boumedyen" |
| ], |
| "last": "Billami", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Fran\u00e7ois", |
| "suffix": "" |
| }, |
| { |
| "first": "Nuria", |
| "middle": [], |
| "last": "Gala", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "The Workshop of Automatic Text Adaptation, 11th International Conference on Natural Language Generation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Firas Hmida, Mokhtar Boumedyen Billami, Thomas Fran\u00e7ois, and Nuria Gala. 2018. Assisted Lexical Simplification for French Native Children with Reading Difficulties. In The Workshop of Automatic Text Adaptation, 11th International Conference on Natural Language Generation, Tilburg, Netherlands.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Learning a lexical simplifier using Wikipedia", |
| "authors": [ |
| { |
| "first": "Colby", |
| "middle": [], |
| "last": "Horn", |
| "suffix": "" |
| }, |
| { |
| "first": "Cathryn", |
| "middle": [], |
| "last": "Manduca", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Kauchak", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ACL 2014", |
| "volume": "2", |
| "issue": "", |
| "pages": "458--463", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/P14-2075" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colby Horn, Cathryn Manduca, and David Kauchak. 2014a. Learning a lexical simplifier using Wikipedia. In Proceedings of ACL 2014, volume 2, pages 458-463.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Learning a lexical simplifier using Wikipedia", |
| "authors": [ |
| { |
| "first": "Colby", |
| "middle": [], |
| "last": "Horn", |
| "suffix": "" |
| }, |
| { |
| "first": "Cathryn", |
| "middle": [], |
| "last": "Manduca", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Kauchak", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACL (2)", |
| "volume": "", |
| "issue": "", |
| "pages": "458--463", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colby Horn, Cathryn Manduca, and David Kauchak. 2014b. Learning a lexical simplifier using Wikipedia. In ACL (2), pages 458-463.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Evaluation dataset and system for Japanese lexical simplification", |
| "authors": [ |
| { |
| "first": "Tomoyuki", |
| "middle": [], |
| "last": "Kajiwara", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuhide", |
| "middle": [], |
| "last": "Yamamoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the ACL-IJCNLP 2015 Student Research Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "35--40", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/P15-3006" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomoyuki Kajiwara and Kazuhide Yamamoto. 2015. Evaluation dataset and system for Japanese lexical simplification. In Proceedings of the ACL-IJCNLP 2015 Student Research Workshop, pages 35-40, Beijing, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Controlled and balanced dataset for Japanese lexical simplification", |
| "authors": [ |
| { |
| "first": "Tomonori", |
| "middle": [], |
| "last": "Kodaira", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomoyuki", |
| "middle": [], |
| "last": "Kajiwara", |
| "suffix": "" |
| }, |
| { |
| "first": "Mamoru", |
| "middle": [], |
| "last": "Komachi", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P16-3001" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomonori Kodaira, Tomoyuki Kajiwara, and Mamoru Komachi. 2016. Controlled and balanced dataset for Japanese lexical simplification. pages 1-7.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Making people play for lexical acquisition with the JeuxDeMots prototype", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mathieu Lafourcade", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "SNLP'07: 7th international symposium on natural language processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mathieu Lafourcade. 2007. Making people play for lexical acquisition with the JeuxDeMots prototype. In SNLP'07: 7th international symposium on natural language processing, page 7.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Personalized substitution ranking for lexical simplification", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Chak Yan", |
| "middle": [], |
| "last": "Yeung", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 12th International Conference on Natural Language Generation", |
| "volume": "", |
| "issue": "", |
| "pages": "258--267", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W19-8634" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Lee and Chak Yan Yeung. 2019. Personalized substitution ranking for lexical simplification. In Proceedings of the 12th International Conference on Natural Language Generation, pages 258-267, Tokyo, Japan. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Manulex : A grade-level lexical database from French elementary-school readers", |
| "authors": [ |
| { |
| "first": "Bernard", |
| "middle": [], |
| "last": "L\u00e9t\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Liliane", |
| "middle": [], |
| "last": "Sprenger-Charolles", |
| "suffix": "" |
| }, |
| { |
| "first": "Pascale", |
| "middle": [], |
| "last": "Col\u00e9", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Behavior Research Methods, Instruments and Computers", |
| "volume": "36", |
| "issue": "", |
| "pages": "156--166", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernard L\u00e9t\u00e9, Liliane Sprenger-Charolles, and Pascale Col\u00e9. 2004. Manulex : A grade-level lexical database from French elementary-school readers. Behavior Research Methods, Instruments and Computers, 36:156-166.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "CamemBERT: a tasty French language model", |
| "authors": [ |
| { |
| "first": "Louis", |
| "middle": [], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Muller", |
| "suffix": "" |
| }, |
| { |
| "first": "Pedro Javier Ortiz", |
| "middle": [], |
| "last": "Su\u00e1rez", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoann", |
| "middle": [], |
| "last": "Dupont", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurent", |
| "middle": [], |
| "last": "Romary", |
| "suffix": "" |
| }, |
| { |
| "first": "\u00c9ric", |
| "middle": [], |
| "last": "Villemonte de la Clergerie", |
| "suffix": "" |
| }, |
| { |
| "first": "Djam\u00e9", |
| "middle": [], |
| "last": "Seddah", |
| "suffix": "" |
| }, |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Ortiz Su\u00e1rez, Yoann Dupont, Laurent Romary, \u00c9ric Villemonte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "WordNet: A lexical database for English", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "A." |
| ], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Commun. ACM", |
| "volume": "38", |
| "issue": "11", |
| "pages": "39--41", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/219717.219748" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A. Miller. 1995. WordNet: A lexical database for English. Commun. ACM, 38(11):39-41.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Une citoyennet\u00e9 de seconde classe ? N'ayons pas peur des mots !", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Mutabazi", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathana\u00ebl", |
| "middle": [], |
| "last": "Wallenhorst", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Zeitschrift f\u00fcr Bildungsforschung", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Mutabazi and Nathana\u00ebl Wallenhorst. 2020. Une citoyennet\u00e9 de seconde classe ? n'ayons pas peur des mots ! Zeitschrift f\u00fcr Bildungsforschung.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Lexique 3: Une nouvelle base de donn\u00e9es lexicales", |
| "authors": [ |
| { |
| "first": "Boris", |
| "middle": [], |
| "last": "New", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "TALN Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "892--900", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Boris New. 2006. Lexique 3: Une nouvelle base de donn\u00e9es lexicales. TALN Conference, pages 892-900.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "As simple as it gets-a sentence simplifier for different learning levels and contexts", |
| "authors": [ |
| { |
| "first": "Bernardo", |
| "middle": [], |
| "last": "Pereira Nunes", |
| "suffix": "" |
| }, |
| { |
| "first": "Ricardo", |
| "middle": [], |
| "last": "Kawase", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Siehndel", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [ |
| "A" |
| ], |
| "last": "Casanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Dietze", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "2013 IEEE 13th international conference on advanced learning technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "128--132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernardo Pereira Nunes, Ricardo Kawase, Patrick Siehndel, Marco A Casanova, and Stefan Dietze. 2013. As simple as it gets-a sentence simplifier for different learning levels and contexts. In 2013 IEEE 13th international conference on advanced learning technologies, pages 128-132. IEEE.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Text simplification as tree transduction", |
| "authors": [ |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "H" |
| ], |
| "last": "Paetzold", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "STIL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gustavo H. Paetzold and Lucia Specia. 2013. Text simplification as tree transduction. In STIL.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Lexenstein: A framework for lexical simplification", |
| "authors": [ |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "H" |
| ], |
| "last": "Paetzold", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ACL-IJCNLP 2015 System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "85--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gustavo H. Paetzold and Lucia Specia. 2015. Lexenstein: A framework for lexical simplification. In Proceedings of ACL-IJCNLP 2015 System Demonstrations, pages 85-90.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Benchmarking lexical simplification systems", |
| "authors": [ |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "H" |
| ], |
| "last": "Paetzold", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gustavo H. Paetzold and Lucia Specia. 2016a. Benchmarking lexical simplification systems. In LREC.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "NNSeval: Evaluating Lexical Simplification for Non-Natives", |
| "authors": [ |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "H" |
| ], |
| "last": "Paetzold", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.5281/zenodo.2552381" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gustavo H. Paetzold and Lucia Specia. 2016b. NNSeval: Evaluating Lexical Simplification for Non-Natives.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Semeval 2016 task 11: Complex word identification", |
| "authors": [ |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "H" |
| ], |
| "last": "Paetzold", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "560--569", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S16-1085" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gustavo H. Paetzold and Lucia Specia. 2016c. Semeval 2016 task 11: Complex word identification. pages 560-569.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Unsupervised lexical simplification for non-native speakers", |
| "authors": [ |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "H" |
| ], |
| "last": "Paetzold", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16", |
| "volume": "", |
| "issue": "", |
| "pages": "3761--3767", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gustavo H. Paetzold and Lucia Specia. 2016d. Unsupervised lexical simplification for non-native speakers. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 3761-3767. AAAI Press.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Lexical simplification with neural ranking", |
| "authors": [ |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "H" |
| ], |
| "last": "Paetzold", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of EACL 2017", |
| "volume": "2", |
| "issue": "", |
| "pages": "34--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gustavo H. Paetzold and Lucia Specia. 2017a. Lexical simplification with neural ranking. In Proceedings of EACL 2017: Volume 2, Short Papers, pages 34-40.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "A survey on lexical simplification", |
| "authors": [ |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "H" |
| ], |
| "last": "Paetzold", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gustavo H. Paetzold and Lucia Specia. 2017b. A survey on lexical simplification.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "A survey on lexical simplification", |
| "authors": [ |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "H" |
| ], |
| "last": "Paetzold", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "60", |
| "issue": "", |
| "pages": "549--593", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gustavo H. Paetzold and Lucia Specia. 2017c. A survey on lexical simplification. Journal of Artificial Intelligence Research, 60:549-593.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "A simple BERT-based approach for lexical simplification", |
| "authors": [ |
| { |
| "first": "Jipeng", |
| "middle": [], |
| "last": "Qiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yun", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yunhao", |
| "middle": [], |
| "last": "Yuan", |
| "suffix": "" |
| }, |
| { |
| "first": "Xindong", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jipeng Qiang, Yun Li, Yi Zhu, Yunhao Yuan, and Xindong Wu. 2019. A simple BERT-based approach for lexical simplification.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Chinese lexical simplification", |
| "authors": [ |
| { |
| "first": "Jipeng", |
| "middle": [], |
| "last": "Qiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xinyu", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yun", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Yun-Hao", |
| "middle": [], |
| "last": "Yuan", |
| "suffix": "" |
| }, |
| { |
| "first": "Xindong", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jipeng Qiang, Xinyu Lu, Yun Li, Yun-Hao Yuan, and Xindong Wu. 2021. Chinese lexical simplification. IEEE/ACM Transactions on Audio, Speech, and Language Processing.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Simplification automatique de texte dans un contexte de faibles ressources", |
| "authors": [ |
| { |
| "first": "Sadaf", |
| "middle": [], |
| "last": "Abdul Rauf", |
| "suffix": "" |
| }, |
| { |
| "first": "Anne-Laure", |
| "middle": [], |
| "last": "Ligozat", |
| "suffix": "" |
| }, |
| { |
| "first": "Fran\u00e7ois", |
| "middle": [], |
| "last": "Yvon", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Illouz", |
| "suffix": "" |
| }, |
| { |
| "first": "Thierry", |
| "middle": [], |
| "last": "Hamon", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Actes de TALN2020", |
| "volume": "2", |
| "issue": "", |
| "pages": "332--341", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sadaf Abdul Rauf, Anne-Laure Ligozat, Fran\u00e7ois Yvon, Gabriel Illouz, and Thierry Hamon. 2020. Simplification automatique de texte dans un contexte de faibles ressources. In Actes de TALN 2020. Volume 2, pages 332-341. ATALA.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "Automatic text simplification", |
| "authors": [ |
| { |
| "first": "Horacio", |
| "middle": [], |
| "last": "Saggion", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Synthesis Lectures on Human Language Technologies", |
| "volume": "10", |
| "issue": "1", |
| "pages": "1--137", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Horacio Saggion. 2017. Automatic text simplification. Synthesis Lectures on Human Language Technolo- gies, 10(1):1-137.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Construction d'un wordnet libre du fran\u00e7ais \u00e0 partir de ressources multilingues", |
| "authors": [ |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| }, |
| { |
| "first": "Darja", |
| "middle": [], |
| "last": "Fiser", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beno\u00eet Sagot and Darja Fiser. 2008. Construction d'un wordnet libre du fran\u00e7ais \u00e0 partir de ressources multilingues.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Acquisition of syntactic simplification rules for French", |
| "authors": [ |
| { |
| "first": "Violeta", |
| "middle": [], |
| "last": "Seretan", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the eighth international conference on language resources and evaluation (LREC'12)", |
| "volume": "", |
| "issue": "", |
| "pages": "4019--4026", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Violeta Seretan. 2012. Acquisition of syntactic simplification rules for French. In Proceedings of the eighth international conference on language resources and evaluation (LREC'12), pages 4019-4026.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "A comparison of techniques to automatically identify complex words", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Shardlow", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the ACL Student Research Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "103--109", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Shardlow. 2013. A comparison of techniques to automatically identify complex words. In Proceedings of the ACL Student Research Workshop, pages 103-109.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "A survey of automated text simplification", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Shardlow", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "International Journal of Advanced Computer Science and Applications", |
| "volume": "4", |
| "issue": "1", |
| "pages": "58--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Shardlow. 2014. A survey of automated text simplification. International Journal of Advanced Computer Science and Applications, 4(1):58-70.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "A survey of research on text simplification", |
| "authors": [ |
| { |
| "first": "Advaith", |
| "middle": [], |
| "last": "Siddharthan", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ITL-International Journal of Applied Linguistics", |
| "volume": "165", |
| "issue": "2", |
| "pages": "259--298", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Advaith Siddharthan. 2014. A survey of research on text simplification. ITL-International Journal of Applied Linguistics, 165(2):259-298.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Semeval-2012 task 1: English lexical simplification", |
| "authors": [ |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Sujay", |
| "suffix": "" |
| }, |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Jauhar", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "347--355", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lucia Specia, Sujay K. Jauhar, and Rada Mihalcea. 2012. Semeval-2012 task 1: English lexical simplification. In Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 347-355.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "WordNet-based lexical simplification of a document", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Thomas", |
| "suffix": "" |
| }, |
| { |
| "first": "Sven", |
| "middle": [], |
| "last": "Anderson", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "80--88", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca Thomas and Sven Anderson. 2012. Wordnet-based lexical simplification of a document. In KONVENS, pages 80-88.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Un corpus d'\u00e9valuation pour un syst\u00e8me de simplification discursive", |
| "authors": [ |
| { |
| "first": "Rodrigo", |
| "middle": [], |
| "last": "Wilkens", |
| "suffix": "" |
| }, |
| { |
| "first": "Amalia", |
| "middle": [], |
| "last": "Todirascu", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Actes de TALN 2020", |
| "volume": "2", |
| "issue": "", |
| "pages": "361--369", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rodrigo Wilkens and Amalia Todirascu. 2020. Un corpus d'\u00e9valuation pour un syst\u00e8me de simplification discursive. In Actes de TALN 2020. Volume 2, pages 361-369. ATALA.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "A report on the complex word identification shared task 2018", |
| "authors": [ |
| { |
| "first": "Seid", |
| "middle": [ |
| "Muhie" |
| ], |
| "last": "Yimam", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Shervin", |
| "middle": [], |
| "last": "Malmasi", |
| "suffix": "" |
| }, |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "H" |
| ], |
| "last": "Paetzold", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanja", |
| "middle": [], |
| "last": "\u0160tajner", |
| "suffix": "" |
| }, |
| { |
| "first": "Ana\u00efs", |
| "middle": [], |
| "last": "Tack", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcos", |
| "middle": [], |
| "last": "Zampieri", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1804.09132" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Seid Muhie Yimam, Chris Biemann, Shervin Malmasi, Gustavo H. Paetzold, Lucia Specia, Sanja \u0160tajner, Ana\u00efs Tack, and Marcos Zampieri. 2018. A report on the complex word identification shared task 2018. arXiv preprint arXiv:1804.09132.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "", |
| "num": null |
| }, |
| "TABREF1": { |
| "text": "", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "Benchmarking results for SS with substitutions generated by all generators", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF5": { |
| "text": "Benchmarking results for SR", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |