{
"paper_id": "P19-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:31:46.535722Z"
},
"title": "Neural Text Simplification of Clinical Letters with a Domain Specific Phrase Table",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Shardlow",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Manchester Metropolitan University",
"location": {}
},
"email": "m.shardlow@mmu.ac.uk"
},
{
"first": "Raheel",
"middle": [],
"last": "Nawaz",
"suffix": "",
"affiliation": {},
"email": "r.nawaz@mmu.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Clinical letters are infamously impenetrable for the lay patient. This work uses neural text simplification methods to automatically improve the understandability of clinical letters for patients. We take existing neural text simplification software and augment it with a new phrase table that links complex medical terminology to simpler vocabulary by mining SNOMED-CT. In an evaluation task using crowdsourcing, we show that the results of our new system are ranked easier to understand (average rank 1.93) than using the original system (2.34) without our phrase table. We also show improvement against baselines including the original text (2.79) and using the phrase table without the neural text simplification software (2.94). Our methods can easily be transferred outside of the clinical domain by using domain-appropriate resources to provide effective neural text simplification for any domain without the need for costly annotation.",
"pdf_parse": {
"paper_id": "P19-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "Clinical letters are infamously impenetrable for the lay patient. This work uses neural text simplification methods to automatically improve the understandability of clinical letters for patients. We take existing neural text simplification software and augment it with a new phrase table that links complex medical terminology to simpler vocabulary by mining SNOMED-CT. In an evaluation task using crowdsourcing, we show that the results of our new system are ranked easier to understand (average rank 1.93) than using the original system (2.34) without our phrase table. We also show improvement against baselines including the original text (2.79) and using the phrase table without the neural text simplification software (2.94). Our methods can easily be transferred outside of the clinical domain by using domain-appropriate resources to provide effective neural text simplification for any domain without the need for costly annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text Simplification is the process of automatically improving the understandability of a text for an end user. In this paper, we use text simplification methods to improve the understandability of clinical letters. Clinical letters are written by doctors and typically contain complex medical language that is beyond the scope of the lay reader. A patient may see these if they are addressed directly, or via online electronic health records. If a patient does not understand the text that they are reading, this may cause them to be confused about their diagnosis, prognosis and clinical findings. Recently, the UK Academy of Medical Royal Colleges introduced the \"Please Write to Me\" Campaign, which encouraged clinicians to write directly to patients, avoid latin-phrases and acronyms, ditch redundant words and generally write in a manner that is accessible to a non-expert (Academy of Medical Royal Colleges, 2018) . Inspired by this document, we took data from publicly available datasets of clinical letters (Section 3), used state of the art Neural Text Simplification software to improve the understandability of these documents (Section 4) analysed the results and identified errors (Section 5), built a parallel vocabulary of complex and simple terms (Section 6), integrated this into the simplification system and evaluated this with human judges, showing an overall improvement (Section 7).",
"cite_spans": [
{
"start": 878,
"end": 919,
"text": "(Academy of Medical Royal Colleges, 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The idea of simplifying texts through machine translation has been around some time (Wubben et al., 2012; Xu et al., 2016) , however with recent advances in machine translation leveraging deep learning (Wu et al., 2016 ), text simplification using neural networks (Wang et al., 2016; Nisioi et al., 2017; Sulem et al., 2018) has become a realistic prospect. The Neural Text Simplification (NTS) system (Nisioi et al., 2017) uses the freely available OpenNMT (Klein et al., 2017) software package 1 which provides sequence to sequence learning between a source and target language. In the simplification paradigm, the source language is difficult to understand language and the target language is an easier version of that language (in our case both English, although other languages can be simplified using the same architecture). The authors of the NTS system provide models trained on parallel data from English Wikipedia and Simple English Wikipedia which can be used to simplify source documents in English. NTS provides lexical simplifications at the level of both single lexemes and multiword expressions in addition to syntactic simplifications such as paraphrasing or removing redundant grammatical structures. Neural Machine Translation is not perfect and may sometimes result in errors. A recent study found that one specific area of concern was lexical cohesion (Voita et al., 2019) , which would affect the readability and hence simplicity of a resulting text.",
"cite_spans": [
{
"start": 84,
"end": 105,
"text": "(Wubben et al., 2012;",
"ref_id": "BIBREF26"
},
{
"start": 106,
"end": 122,
"text": "Xu et al., 2016)",
"ref_id": "BIBREF27"
},
{
"start": 202,
"end": 218,
"text": "(Wu et al., 2016",
"ref_id": "BIBREF25"
},
{
"start": 264,
"end": 283,
"text": "(Wang et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 284,
"end": 304,
"text": "Nisioi et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 305,
"end": 324,
"text": "Sulem et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 402,
"end": 423,
"text": "(Nisioi et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 458,
"end": 478,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 1373,
"end": 1393,
"text": "(Voita et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Phrase tables for simplification have also been applied in the context of paraphrasing systems where paraphrases are identified manually (Hoard et al., 1992) or learnt from corpora (Yatskar et al., 2010; Grabar et al., 2014; Hasan et al., 2016) and stored in a phrase table for later application to a text. A paraphrase consists of a complex phrase paired with one or more simplifications of that phrase. These are context specific and must be applied at the appropriate places to avoid semantic errors that lead to loss of meaning (Shardlow, 2014) .",
"cite_spans": [
{
"start": 137,
"end": 157,
"text": "(Hoard et al., 1992)",
"ref_id": "BIBREF9"
},
{
"start": 181,
"end": 203,
"text": "(Yatskar et al., 2010;",
"ref_id": "BIBREF28"
},
{
"start": 204,
"end": 224,
"text": "Grabar et al., 2014;",
"ref_id": "BIBREF5"
},
{
"start": 225,
"end": 244,
"text": "Hasan et al., 2016)",
"ref_id": null
},
{
"start": 532,
"end": 548,
"text": "(Shardlow, 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The clinical/medical domain recieves much attention for NLP (Shardlow et al., 2018; Yunus et al., 2019; Jahangir et al., 2017; Nawaz et al., 2012) and is well suited to the task of text simplification as there is a need for experts (i.e., clinicians) to communicate with non-experts (i.e., patients) in a language commonly understood by both. Previous efforts to address this issue via text simplification have focussed on (a) public health information (Kloehn et al., 2018) , where significant investigations have been undertaken to understand what makes language difficult for a patient and (b) the simplification of medical texts in the Swedish language (Abrahamsson et al., 2014) , which presents its own unique set of challenges for text simplification due to compound words.",
"cite_spans": [
{
"start": 60,
"end": 83,
"text": "(Shardlow et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 84,
"end": 103,
"text": "Yunus et al., 2019;",
"ref_id": "BIBREF29"
},
{
"start": 104,
"end": 126,
"text": "Jahangir et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 127,
"end": 146,
"text": "Nawaz et al., 2012)",
"ref_id": "BIBREF15"
},
{
"start": 453,
"end": 474,
"text": "(Kloehn et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 657,
"end": 683,
"text": "(Abrahamsson et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To assess the impact of simplification on patient understanding, we obtained 2 datasets representing clinical texts that may be viewed by a patient. We selected data from the i2b2 shared task, as well as data from MIMIC. A brief description of each dataset, along with the preprocessing we applied is below. We selected 149 records from i2b2 and 150 from MIMIC. Corpus statistics are given in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 393,
"end": 400,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3"
},
{
"text": "The i2b2 2006 Deidentification and Smoking Challenge (Uzuner et al., 2007) consists of 889 unannotated, de-identified discharge summaries. We selected the test-set of 220 patient records and i2b2 MIMIC Total Records 149 150 299 Words 80,273 699,798 780,071 Avg. Words 538.7 4665.3 2,608.9 Table 1 : Corpus statistics filtered these for all records containing more than 10 tokens. This gave us 149 records to work with. We concatenated all the information from each record into one file and did no further preprocessing of this data as it was already tokenised and normalised sufficiently.",
"cite_spans": [
{
"start": 53,
"end": 74,
"text": "(Uzuner et al., 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 202,
"end": 279,
"text": "Total Records 149 150 299 Words 80,273 699,798 780,071 Avg. Words 538.7",
"ref_id": "TABREF1"
},
{
"start": 295,
"end": 302,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "i2b2",
"sec_num": "3.1"
},
{
"text": "In addition to i2b2, we also downloaded data from MIMIC-III v1.4 (Johnson et al., 2016 ) (referred to herein as MIMIC). MIMIC provides over 58,000 hospital records, with detailed clinical information regarding a patient's care. One key difference between MIMIC and i2b2 was that each of MIMIC's records contained multiple discrete statements separated by time. We separated these sub-records, and selected the 150 with the largest number of tokens. This ensured that we had selected a varied sample from across the documents that were available to us. We did not use all the data available to us due to the time constraints of (a) running the software and (b) performing the analysis on the resulting documents. We preprocessed this data using the tokenisation algorithm distributed with OpenNMT.",
"cite_spans": [
{
"start": 50,
"end": 86,
"text": "MIMIC-III v1.4 (Johnson et al., 2016",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MIMIC",
"sec_num": "3.2"
},
{
"text": "We used the publicly available NTS system (Nisioi et al., 2017) . This package is freely available via GitHub 2 . We chose to use this rather than reimplementing our own system as it allows us to better compare our work to the current state of the art and makes it easier for others to reproduce our work. We have not included details of the specific algorithm that underlies the OpenNMT framework, as this is not the focus of our paper and is reported on in depth in the original paper, where we would direct readers. Briefly, their system uses an Encoder-Decoder LSTM layer with 500 hidden units, dropout and attention. Original words are substituted when an out of vocabulary word is detected, as this is appropriate in mono-lingual machine translation. The simplification model that underpins the NTS software is trained using aligned English Wikipedia and Simple English Wikipedia data. This model is distributed as part of the software. We ran the NTS software on each of our 299 records to generate a new simplified version of each original record. We used the standard parameters given with the NTS software as follows:",
"cite_spans": [
{
"start": 42,
"end": 63,
"text": "(Nisioi et al., 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Text Simplification",
"sec_num": "4"
},
{
"text": "Beam Size = 5: This parameter controls the beam search that is used to select a final sentence. A beam size of 1 would indicate greedy search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Text Simplification",
"sec_num": "4"
},
{
"text": "n-best = 4: This causes the 4 best translations to be output, although in practice, we only selected the best possible translation in each case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Text Simplification",
"sec_num": "4"
},
{
"text": "model = NTS-w2v epoch11 10.20.t7: Two models were provided with the NTS software, we chose the model with the highest BLEU score in the original NTS paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Text Simplification",
"sec_num": "4"
},
{
"text": "replace unk: This parameter forces unknown words to be replaced by the original token in the sentence (as opposed to an <UNK> marker).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Text Simplification",
"sec_num": "4"
},
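{
"text": "Taken together, the decoding parameters above correspond to an OpenNMT (Lua) translate invocation roughly like the following sketch; the file paths and the underscored model filename are illustrative placeholders, not the authors' exact command:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Text Simplification",
"sec_num": "4"
},

```shell
# Hypothetical OpenNMT (Lua) invocation matching the parameters described
# above; model and file names are placeholders, not the authors' setup.
th translate.lua \
  -model NTS-w2v_epoch11_10.20.t7 \
  -src records.tok.txt \
  -output records.simplified.txt \
  -beam_size 5 \
  -n_best 4 \
  -replace_unk
```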
{
"text": "To identify whether our system was performing some form of simplification we calculated three readability indices, 3 each of which took into account different information about the text. We have not reported formulae here as they are available in the original papers, and abundantly online.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Readability Indices",
"sec_num": "4.1"
},
{
"text": "Flesch-Kincaid: The Flesch-Kincaid reading grade calculator (Kincaid et al., 1975) takes into account the ratio of words to sentences and the ratio of syllables to words in a text. This tells us information about how long each sentence is and how many long words are used in each text. The output of Flesch-Kincaid is an approximation of the appropriate US Reading Grade for the text. Table 2 : The results of calculating 3 readability indices on the texts before and after simplification. We show a significant reduction in the metrics in each case indicating that the texts after simplification are suitable for a lower reading grade level.",
"cite_spans": [
{
"start": 60,
"end": 82,
"text": "(Kincaid et al., 1975)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 385,
"end": 392,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Readability Indices",
"sec_num": "4.1"
},
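The Flesch-Kincaid computation described above can be sketched in a few lines of Python; the regex-based sentence splitter and vowel-group syllable counter here are naive illustrative assumptions, not the implementation used in the paper:

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels (illustrative
    # assumption, not the paper's syllable counter).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Flesch-Kincaid reading grade (Kincaid et al., 1975):
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59
```

Longer sentences and longer words both push the grade up, which is why the post-simplification texts score lower.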
{
"text": "takes into account the ratio of words to sentences and the proportion of words in a text which are deemed to be complex, where a complex word is considered to be any words of more than 3 syllables, discounting suffixes. The results of each of these metrics for the i2b2 and MIMIC documents are shown in Table 2 . In each case, using the NTS software improved the readability of the document. We calculated the statistical significance of this improvement with a t-test, receiving a p-value of less than 0.001 in each case. However, readability indices say nothing about the understandability of the final text and it could be the case that the resultant text was nonsensical, but still got a better score. This concern led us to perform the error analysis in the following section.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 310,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Readability Indices",
"sec_num": "4.1"
},
{
"text": "Our previous analysis showed that the documents were easier to read according to automated indices, however the automated indices were not capable of telling us anything about the quality of the resulting text. To investigate this further, we analysed 1000 sentences (500 from i2b2 and 500 from MIMIC) that had been processed by the system and categorised each according to the following framework:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "Type 1: A change has been made with no loss or alteration of the original meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "Type 2: No change has been made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "Type 3: A significant reduction in the information has been made, which has led to critical information being missed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "Type 4: A single lexical substitution has been made, which led to loss or alteration of the original meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "Type 5: An incorrect paraphrase or rewording of the sentence has been made, which led to loss or alteration of the original meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "Type 6: A single word from the original text is repeated multiple times in the resulting text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "We developed this framework by looking at the 1000 sentences in our corpus. Although the framework does not give any information about the readability of sentences, it does tell us about the existing pitfalls of the algorithm. We were able to categorise every sentence using these six categories. Each category represents an increased level of severity in terms of the consequences for the readability of the text. A Type 1 sentence may have a positive impact on the readability of a text. 4 A Type 2 sentence will not have any impact as no modification has been made. A Type 3 sentence may improve the readability according to the automated metric and may help the reader understand one portion of the text, however some critical information from the original text has been missed. In a clinical setting, this could lead to the patient missing some useful information about their care. Types 4, 5 and 6 represent further errors of increasing severity. In these cases, the resulting sentences did not convey the original meaning of the text and would diminish the understandability of a text if shown to a reader.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "The first author of this paper went through each sentence with the categories above and assigned each sentence to an appropriate category. Where one sentence crossed multiple categories, the highest (i.e., most severe) category was chosen. However, this only occurred in a small proportion of Type i2b2 MIMIC Total 1 25 33 58 2 337 322 659 3 41 55 96 4 55 61 116 5 25 21 46 6 17 8 25 Table 3 : The results of the error analysis. 500 sentences each were annotated from i2b2 and MIMIC to give 1000 annotated sentences in the 'Total' column.",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 396,
"text": "Total 1 25 33 58 2 337 322 659 3 41 55 96 4 55 61 116 5 25 21 46 6",
"ref_id": "TABREF1"
},
{
"start": 405,
"end": 412,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "the data and would not significantly affect our results had we recorded these separately. The results of the error analysis are shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "The results show that the majority of the time the system does not make a change to the text (659/1000 = 65.9% of the time). We would not expect every single sentence to be simplified by the system, as some sentences may not require simplification to be understood by an end user. Other sentences may require simplification, but the system does not realise this, in which case the system may still choose not to simplify the text. Only in 5.8% of the cases is a valid simplification made. These generally consisted of helpful lexical substitutions, however there were also some examples of helpful rephrasing or paraphrasing. In addition to the 5.8% of valid simplifications, a further 9.6% of cases were instances where a significant chunk of a sentence had been removed. In these cases, the resulting sentence was still readable by an end user, however some important information was missing. These sentences do not necessarily constitute an error in the system's behaviour as the information that was omitted may not have been relevant to the patient and removing it may have helped the patient to better understand the text overall, despite missing some specific detail. The rate of Type 4 errors is 11.6%. These errors significantly obfuscated the text as an incorrect word was placed in the text, where the original word would have been more useful. 4.6% of errors were incorrect rewordings (Type 5) and a further 2.5% were cases of a word being repeated multiple times. In total this gives 18.7% of sentences that result in errors. The error rate clearly informs the use of the NTS software. It may be the case that in a clinical setting, NTS could be used as an aid to the doctor when writing a patient letter to suggest simplifications, however it is clear that it would not be appropriate to simplify a doctor's letter and send this directly to a patient without any human intervention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "The NTS system is trained on parallel Wikipedia and Simple Wikipedia documents. Whilst these may contain some medical texts, they are not specific to the clinical genre and we should not expect that direct simplification of medical language will occur. Indeed, when we examined the texts, it was clear that the majority of simplifications that were made concerned general language, rather than simplifying medical terminology. One way of overcoming this would be to create a large parallel corpus of simplified clinical letters. However this is difficult due to the licensing conditions of the source texts that we are using, where an annotator would be required to agree to the licence conditions of the dataset(s). In addition, we would require clinical experts who were capable of understanding and simplifying the texts. The clinical experts would have to produce vast amounts of simplified texts in order to provide sufficient training data for the OpenNMT system to learn from. Although this is possible, it would require significant time and financial resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Table Development",
"sec_num": "6"
},
{
"text": "OpenNMT provides an additional feature that allows a pre-compiled phrase table to be used when an out-of-vocabulary word is identified. This can be used in cross-lingual translation to provide idioms, loan words or unusual translations. In monolingual translation, we can use this feature to provide specific lexical replacements that will result in easier to understand text. This allows us to use a general language simplification model, with a domain-specific phrase table and effectively simplify complex vocabulary from the (clinical) domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Table Development",
"sec_num": "6"
},
{
"text": "We downloaded the entire SNOMED-CT clinical thesaurus (Donnelly, 2006) , which contains 2,513,952 clinical terms, each associated with a concept identifier. We chose this resource over the full UMLS Metathesaurus as SNOMED-CT contains terms specific to the clinical domain and we expected this would lead to fewer false positives. Where terms share an identifier, these are considered synonymous with each other, allowing us to create groups of semantically equivalent terms. We filtered out terms that were greater than 4 tokens long or contained punctuation, As these indicated sentential terms that were not appropriate for our purposes. We identified abbreviations and automatically removed any explanations that were associated with these. We used the Google Web1T frequencies to identify which terms were the most common in general language use. Although this is not a direct measure of how easy to understand each word will be, it has been shown previously that lexical frequency correlates well with ease of understanding (Paetzold and Specia, 2016) . Where there were multi-word expressions, we took the average frequency of all words in the multi-word expression, rather than taking the frequency of the N-gram. For each set of semantically equivalent terms, we took the most frequent term as the easiest to understand and added one entry to our phrase table for each of the other terms contained in the group. So, for a group of 3 terms, A, B and C, where B is the most frequent, we would add 2 pairs to our phrase table A-B, and C-B. This means that whenever A or C are seen in the original texts and they are considered to be out-of-vocabulary words, i.e., technical medical terms that were not present in the training texts, then the more frequent term B, will be substituted instead. We identified any instances where one word had more than one simplification (due to it being present in more than one synonym group). 
If the original word was an acronym, we removed all simplifications as an acronym may have multiple expansions and there is no way for the system to distinguish which is the correct expansion. If the original word with more than one simplification is not an acronym then we selected the most frequent simplification and discarded any others. This resulted in 110,415 pairs of words that were added to the phrase table.",
"cite_spans": [
{
"start": 54,
"end": 70,
"text": "(Donnelly, 2006)",
"ref_id": "BIBREF4"
},
{
"start": 1030,
"end": 1057,
"text": "(Paetzold and Specia, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Table Development",
"sec_num": "6"
},
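The selection logic described above (group SNOMED-CT terms by concept identifier, score each term by its general-language frequency, and pair every term with the most frequent member of its group) can be sketched as follows; the function name, the frequency dictionary and the conflict-resolution shortcut are illustrative assumptions, not the authors' code:

```python
def build_phrase_table(synonym_groups, frequency):
    """Map each term to the most frequent (assumed simplest) synonym in its group.

    synonym_groups: dict of concept_id -> list of synonymous terms
    frequency: dict of word -> general-language frequency (e.g. Web1T-style counts)
    """
    def score(term):
        # For multi-word expressions, average the per-word frequencies
        # rather than using the n-gram frequency, as described above.
        words = term.split()
        return sum(frequency.get(w, 0) for w in words) / len(words)

    table = {}
    for terms in synonym_groups.values():
        simplest = max(terms, key=score)
        for term in terms:
            if term != simplest:
                # If a term appears in more than one group, keep the most
                # frequent replacement (acronym filtering omitted here).
                if term not in table or score(table[term]) < score(simplest):
                    table[term] = simplest
    return table
```

Each resulting pair maps a complex out-of-vocabulary term to a more frequent synonym, matching the A-B / C-B construction in the text.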
{
"text": "In Table 4 , we have shown examples of the types of simplifications that were extracted using the methodology outlined above. Clearly these are the type of simplifications that would be helpful for patients. In some cases, it may be possible that the resulting simplified term would still be difficult to understand for an end user, for example 'hyperchlorhydria' is translated to 'increased gastric acidity', where the term 'gastric' may still be difficult for an end user. A human may have simplified this to 'increased stomach acidity', which would have been easier to understand. This phrase was not in the SNOMED-CT vocabulary and so was not available for the construction of our phrase ta-ble. Nonetheless, the type of simplifications that are produced through this methodology appear to improve the overall level of understanding of difficult medical terms.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Phrase Table Development",
"sec_num": "6"
},
{
"text": "The methodology we have outlined above is suitable for domains outside of medical terminology. The only domain-specific resource that is required is a thesaurus of terms that are likely to occur in the domain. By following the methodology we have outlined, it would be simple to create a phrase table for any domain, which could be applied to the NTS software that we have used in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Table Development",
"sec_num": "6"
},
{
"text": "In our final section of experiments, we wanted to determine the effect that our system had on the ease of understanding of sentences from the original texts. We evaluated this through the use of human judges. In order to thoroughly evaluate our system we compared the original texts from i2b2 and MIMIC to three methods of transformation as detailed below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "7"
},
{
"text": "Original Texts (ORIG): We used the original texts as they appeared after preprocessing. This ensured that they were equivalent to the transformed texts and that any effects would be from the system, not the preprocessing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "7"
},
{
"text": "We ran the sentences through the NTS system using the configuration described in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTS:",
"sec_num": null
},
{
"text": "We ran the sentences through the NTS system. We configured OpenNMT to use the phrase table that we described in Section 6. Note that the phrase table was only used by the system when OpenNMT identified a word as being out-of-vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTS + Phrase Table (NTS + PT):",
"sec_num": null
},
{
"text": "To demonstrate that the benefit of our system comes from using the phrase table in tandem with the NTS system, we also provided a baseline which applied the phrase table to any word that it was possible to replace in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Table Baseline (PTB):",
"sec_num": null
},
{
"text": "We collected the sentences for each of the methods as described above from both of our data sources and collated these so as we could compare the results. We analysed the data and removed any instances of errors that had resulted from the NTS system, according to our error analysis. The sentences that we selected correspond to Type 1, in our categorisation. Type 1 does not necessarily indicate a simplification, instead it implies that a transformation has been successfully completed, with the potential for simplification. Selecting against errors allows us to see the simplification potential of our system. We do not claim that NTS can produce error-free text, but instead we want to demonstrate that the error-free portion of the text is easier to understand when using our phrase table. We selected 50 4-tuples from each dataset (i2b2 and MIMIC) to give 100 4-tuples, where one 4-tuple contained parallel sentences from each of the methods described above. Sentences within a 4-tuple were identical, apart from the modifications that had been made by each system. No two sentences in a 4-tuple were the same. We have put an example 4-tuple in Table 5 , to indicate the type of text that was contained in each.",
"cite_spans": [],
"ref_spans": [
{
"start": 1152,
"end": 1159,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Phrase Table Baseline (PTB):",
"sec_num": null
},
{
"text": "We used crowd sourcing via the Figure Eight platform to annotate our data. As we had a relatively small dataset, we chose to ask for 10 annotations for each 4-tuple. We allowed each annotator to complete a maximum of 20 annotations to ensure that we had a wide variety of perspectives on our data. No annotator saw the same 4-tuple twice. We provided a set of test annotations, which we intended to use to filter out bad-actors, although we found that all annotators passed the test adequately. We selected for annotators with a higher than average rating on the Figure Eight platform (level 2 and above). In each annotation, we asked the annotator to rank the 4 sentences according to their ease of understanding, where the top ranked sentence (rank 1) was the easiest to understand and the bottom ranked sentence (rank 4) was the hardest to understand. We explicitly instructed annotators to rank all sentences, and to use each rank exactly once. If an annotator found 2 sentences to be of the same complexity, they were instructed to default to the order in which the sentences were displayed. We posed our task as 4 separate questions with the exact wording shown in the supplementary material, where we have reproduced the instructions we provided to our annotators. In our post analysis we identified that 20 out of the 1000 annotations that we collected (100 4-tuples, with 10 annotation per 4-tuple) did not use all 4 ranks (i.e., 2 or more sentences were at the same rank). There was no clear pattern of spamming and we",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Table Baseline (PTB):",
"sec_num": null
},
{
"text": "Simple Term ability to be ambulant ability to walk carcinoma of stomach cancer of stomach hyperchlorhydria increased gastric acidity hypertension high blood pressure lying supine lying on back osteophyte bony spur photophobia intolerance to light talipes congenital clubfoot AACTG aids clinical trial group BIPLEDS bilateral periodic epileptiform discharges BLADES bristol language development scale Patient has been suffering from sensitivity to light and asthmatic breath sounds. Table 5 : An example of the type of text produced by our system. The NTS system has performed a syntactic simplification, converting \"has been suffering\" to \"suffers\", the NTS + PT system has simplified \"photophobia\" to \"sensitivity to light\" and the baseline system (PTB) has further simplified \"wheezing\" to \"asthmatic breath sounds\".",
"cite_spans": [],
"ref_spans": [
{
"start": 482,
"end": 489,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Complex Term",
"sec_num": null
},
{
"text": "chose to ignore these 20 sentences in our further analysis, giving us 980 rankings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complex Term",
"sec_num": null
},
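The filtering step described above (discarding annotations that do not use each of the 4 ranks exactly once) can be sketched as follows; the annotation lists are hypothetical examples, not the collected data:

```python
def uses_all_ranks(ranking):
    """A valid annotation assigns each of the ranks 1-4 exactly once."""
    return sorted(ranking) == [1, 2, 3, 4]

# Hypothetical annotations: the third repeats rank 1, so it is discarded.
annotations = [[1, 2, 3, 4], [2, 1, 4, 3], [1, 1, 3, 4]]
kept = [a for a in annotations if uses_all_ranks(a)]
print(len(kept))  # -> 2
```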
{
"text": "In Table 6 , we have shown the raw results of our crowd sourcing annotations as well as the average rank of each system. We calculate average rank r s of a system s as",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Complex Term",
"sec_num": null
},
{
"text": "r s = 4 i=1 i \u00d7 f (s, i) 4 i=1 f (s, i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complex Term",
"sec_num": null
},
{
"text": "where i is a rank from 1 to 4 and f (s, i) is a function that maps the system and rank to the number of times that system was placed at that rank (as shown in Table 6 ). We can see that our system using NTS and the phrase table has the highest average rank, indicating that the text it produced was the easiest to understand more often than other systems. The NTS is ranked second highest indicating that in many cases this system still produces text which is easier to understand than the original. The original texts are ranked third most frequently, ahead of the baseline system which is most often ranked in last position. The baseline system overzealously applied simplifications from our phrase table and this led to long winded explanations and words being simplified that did not require it. Table 6 : The results of our crowdsourcing annotations. We have ordered the annotations by their average rank and highlighted the most common rank for each system. The first column in the table shows the system. Columns 2 through 5 show the number of times each system was ranked at rank 1, 2, 3 or 4 and column 6 shows the average rank calculated according to the formula in Section 7",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 166,
"text": "Table 6",
"ref_id": null
},
{
"start": 800,
"end": 807,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Complex Term",
"sec_num": null
},
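The average-rank formula above can be sketched directly in code; the rank counts below are hypothetical, not the paper's reported figures:

```python
def average_rank(counts):
    """counts[i-1] holds the number of times a system was placed at rank i (i = 1..4)."""
    total = sum(counts)
    weighted = sum(rank * c for rank, c in enumerate(counts, start=1))
    return weighted / total

# Hypothetical counts for ranks 1..4.
print(average_rank([40, 30, 20, 10]))  # -> 2.0
```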
{
"text": "In our work we have applied NTS software to clinical letters and adapted the software using a bespoke phrase table mined from SNOMED-CT. We have shown the types of errors that can occur when using NTS software and we have evaluated our improved algorithm against the state of the art, showing an improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "Our system improved over the original NTS software when adapted to use our phrase table. The NTS software was developed by using parallel sentences from Wikipedia and Simple Wikipedia and training OpenNMT to learn simplifications from these. OpenNMT learns an internal set of vocabulary substitutions, however these will have been targeted towards general language, rather than medical specific language. By using our phrase table, we are able to give specific simplifications for medical terms. The system only accesses the phrase table when it detects a word which is out-of-vocabulary, i.e., a word that was not seen sufficiently often in the training texts to be incorporated into the model that was produced. This works well at modelling a lay reader, where the vocabulary understood by the system is analogous to the vocabulary understood by a typical (i.e., non-specialised) reader of English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
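The out-of-vocabulary gating described above can be sketched as follows; the toy vocabulary, phrase table and tokenisation are hypothetical stand-ins for the NTS model's internals, not the actual implementation:

```python
# Toy general-language vocabulary standing in for words seen often in training.
VOCAB = {"patient", "suffers", "from", "and", "to", "light"}
# Toy phrase table standing in for entries mined from a resource such as SNOMED-CT.
PHRASE_TABLE = {"photophobia": "sensitivity to light"}

def simplify_oov(tokens, vocab, phrase_table):
    """Replace a token via the phrase table only when it is out-of-vocabulary."""
    out = []
    for tok in tokens:
        if tok not in vocab and tok in phrase_table:
            out.extend(phrase_table[tok].split())
        else:
            out.append(tok)
    return out

print(" ".join(simplify_oov("patient suffers from photophobia".split(), VOCAB, PHRASE_TABLE)))
# -> patient suffers from sensitivity to light
```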
{
"text": "In addition to the NTS system adapted to use our phrase table, we also tested a baseline which greedily applied the phrase table at all possible points in a sentence. However, this system was ranked as least understandable more often than any other system. The text it produced was generally much longer than the original text. The benefit of our work comes from using the phrase table together with the neural text simplification software, which is capable of applying the phrase table at the correct points in the text. This can be seen in Table 5 , where the NTS system has altered the language being used, but has not simplified a medical term, the NTS + PT system has simplified the medical term (photophobia), but left a term which would be generally understood (wheezing) and the baseline system has correctly simplified the difficult medical term, but has also changed the generally understood term. Our phrase table is additional to the NTS system and could be applied to other, improved neural models for text simplification as research in this field is progressed. We have shown that our phrase table adds value to the NTS system in the clinical setting.",
"cite_spans": [],
"ref_spans": [
{
"start": 542,
"end": 549,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
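The greedy baseline (PTB) described above can be sketched for contrast; as with the previous sketch, the phrase table and tokenisation here are hypothetical illustrations:

```python
# Toy phrase table; note the entry for a term most readers already understand.
PHRASE_TABLE = {"photophobia": "sensitivity to light",
                "wheezing": "asthmatic breath sounds"}

def greedy_simplify(tokens, phrase_table):
    """Apply every phrase-table entry unconditionally, as the PTB baseline does."""
    out = []
    for tok in tokens:
        out.extend(phrase_table.get(tok, tok).split())
    return out

print(" ".join(greedy_simplify("photophobia and wheezing".split(), PHRASE_TABLE)))
# -> sensitivity to light and asthmatic breath sounds
```

The lengthening effect is visible even in this toy example: three tokens become seven, which mirrors why annotators ranked the baseline's output hardest to understand.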
{
"text": "We have demonstrated in Section 5 that the type of text produced by NTS software and by our adapted NTS software will contain errors. This is true of any translation software which relies on learning patterns from data to estimate future translations of unseen texts. In cross-lingual translation, a small error rate may be acceptable as the text is transformed from something that is initially incomprehensible to text in the reader's own lan-guage which may be intelligible to some degree. With simplification, however, even a small error rate may lead to the resulting text becoming more difficult to understand by an end user, or the meaning of a text being changed. This is particularly the case in the clinical setting, where life changing information may be communicated. It is important then to consider how to use Neural Text Simplification in a clinical setting. We would propose that the clinician should always be kept in the loop when applying this type of simplification. The system could be applied within a word editor which suggests simplifications of sentences as and when they are discovered. The clinician could then choose whether or not to accept and integrate the simplified text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "We have presented our methodology in the context of the clinical domain, however it would be trivial to apply this elsewhere. Our methodology is particularly suitable when 3 conditions are met: (a) There is text being produced by experts that is read by lay readers. (b) that text contains specialised terminology that will be unintelligible to the intended audience and (c) a comprehensive thesaurus of domain specific terms exists, which can be used to generate a domain appropriate phrase table. Given these conditions are met, our work could be applied in the legal, financial, educational or any other domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "We have made significant use of licensed resources (i2b2, MIMIC and SNOMED-CT). These are available for research purposes from their providers, given the user has signed a licensing agreement. We are not at liberty to share these resources ourselves and this inhibits our ability to provide direct examples of the simplifications we produced in our paper. To overcome this, we have provided the following GitHub repository, which provides all of the code we used to process the data: https://github.com/ MMU-TDMLab/ClinicalNTS. Instructions for replication are available via the GitHub.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "Our work has explored the use of neural machine translation for text simplification in the clinical domain. Doctors and patients speak a different language and we hope that our work will help them communicate. We have shown that general language simplification needs to be augmented with domain specific simplifications and that doing so leads to an improvement in the understandability of the resulting text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion + Future Work",
"sec_num": "9"
},
{
"text": "One clear avenue of future work is to apply this system in a clinical setting and to test the results with actual patients. We will look to develop software that uses NTS to identify possible simplifications for a clinician when they are writing a letter for a patient. We could also look to use parallel simplified medical text to augment the general language parallel text used in the NTS system. Additionally, we could improve the measure of lexical complexity for single and multi word expressions. Currently, we are only using frequency as an indicator of lexical complexity, however other factors such as word length, etymology, etc. may be used. Finally, we will explore adaptations of our methodology for general (non-medical) domains, e.g., simplified search interfaces (Ananiadou et al., 2013) for semantically annotated news (Thompson et al., 2017) .",
"cite_spans": [
{
"start": 779,
"end": 803,
"text": "(Ananiadou et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 836,
"end": 859,
"text": "(Thompson et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion + Future Work",
"sec_num": "9"
},
{
"text": "http://opennmt.net/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/senisioi/ NeuralTextSimplification/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "using the implementations at: https://github. com/mmautner/readability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "note, we do not claim that all Type 1 sentences are simplifications, only that the system has made a change which is attempting to simplify the text. This may or may not result in the text being easier to understand by a reader.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Medical text simplification using synonym replacement: Adapting assessment of word difficulty to a compounding language",
"authors": [
{
"first": "Emil",
"middle": [],
"last": "Abrahamsson",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Forni",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Skeppstedt",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Kvist",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)",
"volume": "",
"issue": "",
"pages": "57--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emil Abrahamsson, Timothy Forni, Maria Skeppstedt, and Maria Kvist. 2014. Medical text simplification using synonym replacement: Adapting assessment of word difficulty to a compounding language. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Popu- lations (PITR), pages 57-65.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Please, write to me. Writing outpatient clinic letters to patients",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Academy of Medical Royal Colleges. 2018. Please, write to me. Writing outpatient clinic letters to pa- tients.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enhancing search: Events and their discourse context",
"authors": [
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Raheel",
"middle": [],
"last": "Nawaz",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "318--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sophia Ananiadou, Paul Thompson, and Raheel Nawaz. 2013. Enhancing search: Events and their discourse context. In International Conference on Intelligent Text Processing and Computational Lin- guistics, pages 318-334. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A computer readability formula designed for machine scoring",
"authors": [
{
"first": "Meri",
"middle": [],
"last": "Coleman",
"suffix": ""
},
{
"first": "Ta",
"middle": [
"Lin"
],
"last": "Liau",
"suffix": ""
}
],
"year": 1975,
"venue": "Journal of Applied Psychology",
"volume": "60",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meri Coleman and Ta Lin Liau. 1975. A computer readability formula designed for machine scoring. Journal of Applied Psychology, 60(2):283.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Snomed-ct: The advanced terminology and coding system for ehealth",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Donnelly",
"suffix": ""
}
],
"year": 2006,
"venue": "Studies in health technology and informatics",
"volume": "121",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Donnelly. 2006. Snomed-ct: The advanced ter- minology and coding system for ehealth. Studies in health technology and informatics, 121:279.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic diagnosis of understanding of medical words",
"authors": [
{
"first": "Natalia",
"middle": [],
"last": "Grabar",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Hamon",
"suffix": ""
},
{
"first": "Dany",
"middle": [],
"last": "Amiot",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natalia Grabar, Thierry Hamon, and Dany Amiot. 2014. Automatic diagnosis of understanding of medical words. In Proceedings of the 3rd Work- shop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 11-20.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The technique of clear writing",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Gunning",
"suffix": ""
}
],
"year": 1952,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Gunning. 1952. The technique of clear writing. McGraw-Hill, New York.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural clinical paraphrase generation with attention",
"authors": [
{
"first": "Oladimeji",
"middle": [],
"last": "Farri",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Clinical Natural Language Processing Workshop (ClinicalNLP)",
"volume": "",
"issue": "",
"pages": "42--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oladimeji Farri. 2016. Neural clinical paraphrase generation with attention. In Proceedings of the Clinical Natural Language Processing Workshop (ClinicalNLP), pages 42-53, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An automated grammar and style checker for writers of simplified english",
"authors": [
{
"first": "E",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Hoard",
"suffix": ""
},
{
"first": "Katherina",
"middle": [],
"last": "Wojcik",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Holzhauser",
"suffix": ""
}
],
"year": 1992,
"venue": "Computers and Writing",
"volume": "",
"issue": "",
"pages": "278--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James E Hoard, Richard Wojcik, and Katherina Holzhauser. 1992. An automated grammar and style checker for writers of simplified english. In Com- puters and Writing, pages 278-296. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An expert system for diabetes prediction using auto tuned multi-layer perceptron",
"authors": [
{
"first": "M",
"middle": [],
"last": "Jahangir",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Afzal",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Khurshid",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Nawaz",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 Intelligent Systems Conference (IntelliSys)",
"volume": "",
"issue": "",
"pages": "722--728",
"other_ids": {
"DOI": [
"10.1109/IntelliSys.2017.8324209"
]
},
"num": null,
"urls": [],
"raw_text": "M. Jahangir, H. Afzal, M. Ahmed, K. Khurshid, and R. Nawaz. 2017. An expert system for diabetes prediction using auto tuned multi-layer perceptron. In 2017 Intelligent Systems Conference (IntelliSys), pages 722-728.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mimic-iii, a freely accessible critical care database",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Alistair",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tom",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "H Lehman",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Mengling",
"middle": [],
"last": "Li-Wei",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"Anthony"
],
"last": "Szolovits",
"suffix": ""
},
{
"first": "Roger G",
"middle": [],
"last": "Celi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientific data, 3:160035.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel",
"authors": [
{
"first": "Robert P Fishburne",
"middle": [],
"last": "Peter Kincaid",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"L"
],
"last": "Jr",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"S"
],
"last": "Rogers",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chissom",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability in- dex, fog count and flesch reading ease formula) for navy enlisted personnel.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Opennmt: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstra- tions, pages 67-72.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving consumer understanding of medical text: Development and validation of a new subsimplify algorithm to automatically generate term explanations in english and spanish",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Kloehn",
"suffix": ""
},
{
"first": "Gondy",
"middle": [],
"last": "Leroy",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kauchak",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Sonia",
"middle": [],
"last": "Colina",
"suffix": ""
},
{
"first": "Nicole",
"middle": [
"P"
],
"last": "Yuan",
"suffix": ""
},
{
"first": "Debra",
"middle": [],
"last": "Revere",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of medical Internet research",
"volume": "",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Kloehn, Gondy Leroy, David Kauchak, Yang Gu, Sonia Colina, Nicole P Yuan, and Debra Revere. 2018. Improving consumer understanding of medi- cal text: Development and validation of a new sub- simplify algorithm to automatically generate term explanations in english and spanish. Journal of med- ical Internet research, 20(8).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Identification of manner in bio-events",
"authors": [
{
"first": "Raheel",
"middle": [],
"last": "Nawaz",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2012,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "3505--3510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raheel Nawaz, Paul Thompson, and Sophia Anani- adou. 2012. Identification of manner in bio-events. In LREC, pages 3505-3510.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Exploring neural text simplification models",
"authors": [
{
"first": "Sergiu",
"middle": [],
"last": "Nisioi",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Sanja\u0161tajner",
"suffix": ""
},
{
"first": "Liviu P",
"middle": [],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dinu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "85--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergiu Nisioi, Sanja\u0160tajner, Simone Paolo Ponzetto, and Liviu P Dinu. 2017. Exploring neural text sim- plification models. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 85-91.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semeval 2016 task 11: Complex word identification",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Paetzold",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "560--569",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gustavo Paetzold and Lucia Specia. 2016. Semeval 2016 task 11: Complex word identification. In Pro- ceedings of the 10th International Workshop on Se- mantic Evaluation (SemEval-2016), pages 560-569.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Out in the open: Finding and categorising errors in the lexical simplification pipeline",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Shardlow",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "1583--1590",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Shardlow. 2014. Out in the open: Finding and categorising errors in the lexical simplification pipeline. In LREC, pages 1583-1590.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Identification of research hypotheses and new knowledge from scientific literature",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Shardlow",
"suffix": ""
},
{
"first": "Riza",
"middle": [],
"last": "Batista-Navarro",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Raheel",
"middle": [],
"last": "Nawaz",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Mcnaught",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2018,
"venue": "BMC Medical Informatics and Decision Making",
"volume": "18",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/s12911-018-0639-1"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Shardlow, Riza Batista-Navarro, Paul Thomp- son, Raheel Nawaz, John McNaught, and Sophia Ananiadou. 2018. Identification of research hy- potheses and new knowledge from scientific litera- ture. BMC Medical Informatics and Decision Mak- ing, 18(1):46.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Simple and effective text simplification using semantic and neural methods",
"authors": [
{
"first": "Elior",
"middle": [],
"last": "Sulem",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "162--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport. 2018. Simple and effective text simplification using se- mantic and neural methods. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 162-173.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Enriching news events with meta-knowledge information. Language Resources and Evaluation",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Raheel",
"middle": [],
"last": "Nawaz",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Mcnaught",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "51",
"issue": "",
"pages": "409--438",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Thompson, Raheel Nawaz, John McNaught, and Sophia Ananiadou. 2017. Enriching news events with meta-knowledge information. Language Re- sources and Evaluation, 51(2):409-438.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Evaluating the state-of-the-art in automatic deidentification",
"authors": [
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of the American Medical Informatics Association",
"volume": "14",
"issue": "5",
"pages": "550--563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozlem Uzuner, Yuan Luo, and Peter Szolovits. 2007. Evaluating the state-of-the-art in automatic de- identification. Journal of the American Medical In- formatics Association, 14(5):550-563.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "When a good translation is wrong in context: Context-aware machinetranslation improves on deixis, ellipsis, and lexical cohesion",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Voita, Rico Sennrich, and Ivan Titov. 2019. When a good translation is wrong in con- text: Context-aware machinetranslation improves on deixis, ellipsis, and lexical cohesion. In Proceed- ings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), Florence, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Text simplification using neural machine translation",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Rochford",
"suffix": ""
},
{
"first": "Jipeng",
"middle": [],
"last": "Qiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong Wang, Ping Chen, John Rochford, and Jipeng Qiang. 2016. Text simplification using neural ma- chine translation. In Thirtieth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural ma- chine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sentence simplification by monolingual machine translation",
"authors": [
{
"first": "",
"middle": [],
"last": "Sander Wubben",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Bosch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "1015--1024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sander Wubben, Antal Van Den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 1015-1024. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Optimizing statistical machine translation for text simplification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Quanze",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "401--415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "For the sake of simplicity: Unsupervised extraction of lexical simplifications from wikipedia",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"volume": "",
"issue": "",
"pages": "365--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Yatskar, Bo Pang, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2010. For the sake of simplicity: Unsupervised extraction of lexical simplifications from wikipedia. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 365-368, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A framework to estimate the nutritional value of food in real time using deep learning techniques",
"authors": [
{
"first": "R",
"middle": [],
"last": "Yunus",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Arif",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Afzal",
"suffix": ""
},
{
"first": "M",
"middle": [
"F"
],
"last": "Amjad",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Abbas",
"suffix": ""
},
{
"first": "H",
"middle": [
"N"
],
"last": "Bokhari",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Haider",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Zafar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Nawaz",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "2643--2652",
"other_ids": {
"DOI": [
"10.1109/ACCESS.2018.2879117"
]
},
"num": null,
"urls": [],
"raw_text": "R. Yunus, O. Arif, H. Afzal, M. F. Amjad, H. Abbas, H. N. Bokhari, S. T. Haider, N. Zafar, and R. Nawaz. 2019. A framework to estimate the nutritional value of food in real time using deep learning techniques. IEEE Access, 7:2643-2652.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Coleman-Liau: The Coleman-Liau index(Coleman and Liau, 1975) estimates the US reading grade level of a text. It takes into account the average numbers of letters per word and sentences per word in a text.",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>System</td><td>Sentence</td></tr><tr><td>ORIG</td><td>Patient has been suffering from photophobia and wheezing.</td></tr><tr><td>NTS</td><td>Patient suffers from photophobia and wheezing.</td></tr><tr><td>NTS + PT</td><td>Patient suffers from sensitivity to light and wheezing.</td></tr></table>",
"html": null,
"text": "Term pairs that were created for our phrase table. PTB",
"type_str": "table"
}
}
}
}