{
"File Number": "1025",
"Title": "Efficient Diagnosis Assignment Using Unstructured Clinical Notes",
"5 Limitations": "HyDE has yet to be tested in a large-scale and multisite setting, which may offer more generalization challenges. Furthermore, an evaluation of notelevel classification performance was not conducted. Although we expect that HyDE would perform well under such an evaluation, this would require heuristics to aggregate multiple MCMs per note.",
"abstractText": "Electronic phenotyping entails using electronic health records (EHRs) to identify patients with specific health outcomes and determine when those outcomes occurred. Unstructured clinical notes, which contain a vast amount of information, are a valuable resource for electronic phenotyping. However, traditional methods, such as rule-based labeling functions or neural networks, require significant manual effort to tune and may not generalize well to multiple indications. To address these challenges, we propose HyDE (hybrid diagnosis extractor). HyDE is a simple framework for electronic phenotyping that integrates labeling functions and a diseaseagnostic neural network to assign diagnoses to patients. By training HyDE’s model to correct predictions made by labeling functions, we are able to disambiguate hypertension true positives and false positives with a supervised area under the precision-recall curve (AUPRC) of 0.85. We extend this hypertension-trained model to zero-shot evaluation of four other diseases, generating AUPRC values ranging from 0.82 0.95 and outperforming a labeling function baseline by 44 points in F1 score and a Word2Vec baseline by 24 points in F1 score on average. Furthermore, we demonstrate a speedup of > 4× by pruning the length of inputs into our language model to ∼ 2.3% of the full clinical notes, with negligible impact to the AUPRC. HyDE has the potential to improve the efficiency and efficacy of interpreting largescale unstructured clinical notes for accurate EHR phenotyping.",
"1 Introduction": "The widespread adoption of electronic health records (EHRs) by health systems has created vast clinical datastores. One of the essential steps in utilizing these data is identifying patients with specific clinical outcomes and the timing of these outcomes, through a process called electronic phenotyping (Banda et al., 2018). Electronic phenotyping\nis critical for using EHR data to support clinical care (Kaelber et al., 2012; LePendu et al., 2012), inform public health decision-making (Dubberke et al., 2012), and train predictive models (Chaves et al., 2021; Blankemeier et al., 2022; Steinberg et al., 2021, 2023; Lee et al., 2022).\nElectronic phenotyping is a complex task that involves combining structured data (e.g. lab results and codes) with unstructured data (e.g. clinical notes). Rule-based heuristics can be applied to structured data. However, the unstructured nature of information rich (Kern et al., 2006; Wei et al., 2012; Martin-Sanchez and Verspoor, 2014) clinical notes makes phenotyping based on these notes particularly challenging.\nSeveral solutions exist for electronic phenotyping using unstructured clinical notes (Peng et al., 2018; Fries et al., 2021; Zhang et al., 2021a,b), but lack convenience for generalizing to new conditions. For example, labeling functions that consist of rules authored by domain experts are interpretable and readily shared without compromising data privacy, but can be laborious to create. Neural networks (NNs) that are trained to identify specific diseases can eliminate the need for handcrafted labeling functions and often provide more accurate results. However, NNs require extensive manual labeling time and often generalize poorly to diseases not seen during training.\nTo address this, we introduce HyDE (hybrid diagnosis extractor). 
HyDE is a simple approach to electronic phenotyping that combines the strengths of labeling functions and neural networks and allows for generalization to new diseases with minimal overhead.\nOur key contributions are as follows:\n1. We demonstrate that our model effectively discriminates between true cases of hypertension and false positives generated by labeling functions, as evidenced by a supervised area under the precision-recall curve (AUPRC) of 0.85. This same model achieves AUPRCs of 0.90, 0.82, 0.84, and 0.95 in zero-shot evaluations for diabetes, osteoporosis, chronic kidney disease, and ischemic heart disease, respectively. HyDE outperforms a labeling function baseline by 44 points in F1 score and a Word2Vec baseline (Mikolov et al., 2013b,a) by 24 points in F1 score on average across seen and unseen diseases.\n2. HyDE requires minimal setup. The labeling functions used in HyDE can be simple, reducing the manual effort often required to design labeling functions with high precision and recall.\n3. HyDE is computationally efficient, as only small portions of a subset of clinical notes need to be passed through the neural network for processing, thus minimizing the computational resources required to run HyDE on large datasets. We show that pruning the length of the inputs by 4× to just 2.3% of the full clinical notes impacts performance by an average of only 0.017 AUPRC while providing a speedup of > 4×.",
"2 Methods": "Our proposed method, HyDE (hybrid diagnosis extractor), aims to accurately identify the earliest occurrence of specific diseases in clinical patient encounter notes. We accomplish this by using a combination of labeling functions and a fine-tuned biomedical language model. The labeling functions are designed to be simple and identify as many mentions of the disease as possible, including false positives. The neural network is then used to differentiate between the true positives and false positives by analyzing small segments of the clinical notes around the location identified by the labeling functions. This approach allows for identifying potential mentions of the disease, while also utilizing the neural network to improve precision. It is worth noting that the components of HyDE are modular, allowing for the substitution of other methods for identifying disease-specific mentions beyond the labeling functions used in this paper. For example, Trove (Fries et al., 2021), offers ontology-based labeling functions that eliminate the need for coding task-specific labeling rules.\nOur method (Fig. 1), involves the following steps: The user first develops a simple labeling\nfunction for the disease of interest. In the case of diabetes, this could be the regular expression diabetes | diabetic. This labeling function is then applied to the clinical notes to identify mentions of the disease. Additionally, the user identifies peripheral terms that frequently appear before or after mentions of the disease, such as insulin-dependent or mellitus in the case of diabetes. The text matching the labeling function and peripheral terms are then replaced with [MASK], and a context around the resulting mask is extracted, resulting in a masked contextual mention (MCM). These MCMs are used to fine-tune a biomedical language model to determine whether the context suggests that the patient actually has the condition in question. 
We hypothesize that this approach allows the language model to generalize to various conditions without additional training. Thus, for a zero-shot transfer to other diseases, only a simple disease-specific labeling function and peripheral terms are required. We adopt the term zero-shot in this context as each disease comes with distinct comorbidities, symptoms, and interventions.",
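As a concrete illustration of the pipeline just described (labeling-function match, peripheral-term absorption, masking, context extraction), here is a minimal Python sketch; the function name, window size, and regex composition are our own assumptions, not the paper's code:

```python
import re

def extract_mcms(note, disease_re, peripheral_terms, context_words=32):
    """Replace each disease mention (plus adjacent peripheral terms) with
    [MASK] and return the surrounding word-level context for each match."""
    if peripheral_terms:
        peripheral = "|".join(re.escape(t) for t in peripheral_terms)
        # Optionally absorb a peripheral term immediately before or after
        # the mention so the whole phrase collapses into one mask.
        pattern = re.compile(
            rf"(?:(?:{peripheral})\s+)?(?:{disease_re})(?:\s+(?:{peripheral}))?",
            re.IGNORECASE)
    else:
        pattern = re.compile(disease_re, re.IGNORECASE)
    mcms = []
    for match in pattern.finditer(note):
        masked = note[:match.start()] + "[MASK]" + note[match.end():]
        words = masked.split()
        idx = next(i for i, w in enumerate(words) if "[MASK]" in w)
        lo = max(0, idx - context_words // 2)
        mcms.append(" ".join(words[lo:idx + context_words // 2 + 1]))
    return mcms

# The diabetes example from the text: the regex plus the peripheral terms
# "insulin-dependent" and "mellitus" collapse into a single [MASK].
print(extract_mcms("Patient has insulin-dependent diabetes mellitus, stable.",
                   r"diabetes|diabetic", ["insulin-dependent", "mellitus"]))
# → ['Patient has [MASK], stable.']
```

The resulting MCM strings are what the fine-tuned language model classifies as true or false positives.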
"2.1 Dataset": "After obtaining approval from the institutional review board, we obtained ∼8.8 million clinical notes from 23,467 adult patients who had an encounter at our tertiary care center between 2012 and 2018.",
"2.2 Disease Phenotypes": "We apply our electronic phenotyping method to five chronic diseases: hypertension (HTN), diabetes mellitus (DM), osteoporosis (OST), chronic kidney disease (CKD), and ischemic heart disease (IHD). These diseases were selected due to their high prevalence (HTN, 2021; DM, 2022; CKD, 2021; IHD, 2022; Clynes et al., 2020), the costs they incur to the healthcare system, and the potential for positive intervention (Blankemeier et al., 2022). For initial model training, we used hypertension as it is the most prevalent of these diseases (affecting 116 million in the US) (HTN, 2021) and we hypothesize that it generates the most diverse MCMs. Table 6 shows the labeling functions that we used to extract these mentions for each disease.",
"2.3 Data Labeling": "Mask Contextual Mention Categories: We manually identified 6 categories of MCMs - (0) true positive; (1) false positive (otherwise unspecified); (2) referring to someone other than the patient; (3) referring to the patient but negated; (4) providing information / instructions / conditional statements (i.e. instructions for how to take a medication); (5) uncertain (i.e. differential diagnosis). Thus, category 0 is the true positive category and categories 1 - 5 are false positive categories. We formulate this problem as a binary classification where categories 1 - 5 are merged into class 1.\nAmplifying False Positive Examples: The prevalence of false positives from our labeling functions were relatively low (Table 3). We thus sought to increase the number of category 2 false positive examples in our training dataset beyond the baseline prevalence of the 250 random MCM samples that were initially labeled (RS in Table 1). We applied a family labeling function to randomly sampled MCMs. This labeling function is positive if an MCM contains any term listed in A.1 relating to familial mentions. We generated 200 such category 2 amplified examples for subsequent labeling. Based on the annotations, we found that only 1.5% of the examples selected by this labeling function were actually true positives examples.\nTo increase the number of category 3 false positive examples, we applied the Negex algorithm (Chapman et al., 2001) to a separate set of randomly sampled masked contextual mentions. For further details see A.2. Based on manual annotation of 200 such examples, we found that 22% of the examples selected by this labeling function were actually true positive examples.\nFiltering Masked Contextual Mentions: Applying the disease-specific labeling functions generated 827k, 555k, 87k, 199k, and 80k notes for HTN, DM, OST, CKD, and IHD respectively from roughly 8.1 million clinical notes (Table 4). 
Since clinical notes often contain duplicate information from multiple patient visits, we deduplicate the MCMs by comparing the 20 characters on either side of the masked mentions associated with a particular patient. If these characters are the same across multiple MCMs, we keep the MCM that was authored first and discard the others. Deduplication allows us to reduce the number of masked contextual mentions by 3.3×, 3.6×, 4.2×, 3.7×, and 3.3× for HTN, DM, OST, CKD, and IHD respectively (Table 4). This method can be applied at inference to increase the computational efficiency of HyDE. Additionally, the MCMs extracted per clinical note amount to an average of 9% of the full note text for a context length of 64 words, which can improve the efficiency of inference on large datasets.\nActive Learning: To further improve the performance of HyDE, we implement a human-in-the-loop uncertainty-based active learning strategy. This involves multiple training iterations: after each iteration, the 100 examples whose predicted probabilities are closest to 0.5 are manually labeled and added to the training dataset for the next iteration. Table 1 shows performance across the active learning iterations (A1-A4).",
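The deduplication heuristic above (matching the 20 characters on either side of the mask, per patient, and keeping the earliest-authored MCM) can be sketched as follows; the record fields and function name are illustrative assumptions, not the paper's code:

```python
def deduplicate_mcms(mcms):
    """Collapse MCMs from the same patient whose 20 characters on either
    side of [MASK] match, keeping the earliest-authored one.

    mcms: list of dicts with 'patient_id', 'text' (containing '[MASK]'),
    and 'authored_at' (any sortable timestamp)."""
    kept = {}
    for m in sorted(mcms, key=lambda m: m["authored_at"]):
        i = m["text"].index("[MASK]")
        j = i + len("[MASK]")
        key = (m["patient_id"], m["text"][max(0, i - 20):i], m["text"][j:j + 20])
        kept.setdefault(key, m)  # first occurrence is the earliest-authored
    return list(kept.values())

notes = [
    {"patient_id": 1, "text": "history significant for [MASK] managed with meds",
     "authored_at": "2015-03-01"},
    {"patient_id": 1, "text": "history significant for [MASK] managed with meds",
     "authored_at": "2016-07-09"},  # copy-forwarded duplicate
]
print([m["authored_at"] for m in deduplicate_mcms(notes)])  # → ['2015-03-01']
```

Because the key includes the patient identifier, identical contexts from different patients are kept separate.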
"2.4 Model Training": "We select PubMedBERT (Gu et al., 2021) (100 million parameters) as the model that we fine-tune due to its simple architecture and widespread validation. We use a train batch size of 8, an Adam optimizer with β1 = 0.9 and β2 = 0.999, and a learning rate of 3e-5. We train for 25 epochs and choose the model checkpoint with the best validation set performance. 1,150 HTN examples are used for training and 250 HTN examples are used for validation. For disease specific fine-tuning experiments,\nbetween 90 and 100 disease-specific examples are used for both validation and training. There was no overlap between the patients used for the hypertension training and validation sets and the patients used for test sets as well as disease-specific validation sets. Our test sets consisted of 442 - 500 labeled cases for each disease.",
"2.5 Evaluation": "While labeling functions can be evaluated at a note level, we evaluate at a MCM-level since a single clinical note can consist of multiple MCMs. Furthermore, disease assignment based on clinical notes can be combined with assignment based on structured EHR, increasing the number of patients that are identified. Thus, we want to ensure high precision in identifying patients using clinical notes. For each MCM, we measure the fine-tuned language model’s ability to correctly classify it as either true positive or false positive using area under the precision recall curve (AUPRC) and F1.\nFor our labeling function baseline (LF in Table 2), we use both the family labeling function described previously and Negex (Chapman et al., 2001). Although additional terms could be added to this labeling function, those same terms could also be added to HyDE, making this a fair comparison.\nWe also include a Word2Vec baseline in our comparison (Mikolov et al., 2013b,a). This technique leverages a pre-trained model which has been trained on a corpus of around 100 billion words from Google News. For each MCM, we aggregate word embeddings by calculating their mean and then train an XGBoost model (Chen and Guestrin, 2016) over the computed averages of the HTN training dataset MCM embeddings. To optimize the performance of our XGBoost model, we fine-tune its hyperparameters by conducting a grid search using our HTN validation dataset. It’s worth mentioning that this strategy does not retain the sequential order of words.\nTo demonstrate the generalizability of our method on external data, we apply it to the assertion classification task from the 2010 i2b2/VA Workshop on Natural Language Processing (Uzuner et al., 2011). This dataset consists of 871 progress reports annotated with medical problems that are further classified as present, absent, possible, conditional, hypothetical, or not associated with the patient. 
We mapped the present category to class 0 and collated all other categories under class 1.\nWe used regular expressions to extract mentions of HTN, DM, OST, CKD, and IHD. We filtered out diseases with fewer than 30 mentions. Consequently, our external validation was conducted on HTN, DM, and CKD.",
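The Word2Vec baseline's feature construction (mean-pooled word vectors per MCM, fed to an XGBoost classifier) can be sketched as below. The toy two-dimensional embedding table is an assumption standing in for the pretrained 300-dimensional Google News vectors, and the downstream XGBoost stage is omitted:

```python
import numpy as np

def mcm_embedding(mcm, vectors, dim):
    """Mean-pool the word vectors of an MCM; out-of-vocabulary words are
    skipped. Averaging discards word order, as the text above observes."""
    vecs = [vectors[w] for w in mcm.lower().split() if w in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Toy 2-d table standing in for the 300-d Google News vectors.
toy = {"elevated": np.array([1.0, 0.0]),
       "blood": np.array([0.0, 1.0]),
       "pressure": np.array([1.0, 1.0])}
print(mcm_embedding("elevated blood pressure", toy, dim=2))
# mean of the three toy vectors, ≈ [0.667, 0.667]
```

One fixed-length vector per MCM is exactly the shape a gradient-boosted tree model expects as input.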
"3 Results": "Supervised and Zero-Shot Model Performance: Table 1 depicts AUPRC performance of our Word2Vec (W2V) baseline compared to fine-tuned PubMedBERT models trained with various training dataset compositions (all rows except the first). We demonstrate supervised performance on HTN, as well as zero-shot generalization to DM, OST, CKD, and IHD. The performance of HyDE surpasses that of our labeling function baseline by 44 points in F1 score and our Word2Vec baseline by 24 points in F1 score on average (Table 2). We find that fine-tuning the best PubMedBERT model (RS+C+A4 training dataset) on ∼100 additional disease-specific examples does not significantly improve performance, with scores of 0.91, 0.84, 0.81, and 0.95 on DM, OST, CKD, and IHD, respectively. This supports the conclusion that our model generalizes well to other diseases, without requiring disease-specific fine-tuning. On the external i2b2/VA dataset we achieve the following AUPRC scores without any additional finetuning - 0.79 for HTN (336 patients), 0.99 for DM (213 patients), and 0.95 for CKD (45\npatients).\nContext Length Ablation: Fig. 2 shows that RS+C+A4 (RS: 250 random MCM samples; C: 400 category 2 and 3 amplified MCMs; A4: 400 samples from active learning) trained models saturate with increasing context lengths. Table 5 shows that reducing the context length from 64 words to 16 words speeds up the model by 4.5x while only lowering average AUPRC by 0.017. From Table 4 we observe that this represents an average of 2.3% of the full clinical notes among notes that contain at least one MCM.",
"4 Conclusion": "With its minimal setup, computational efficiency, and generalization capability, HyDE offers a promising tool for electronic phenotyping from unstructured clinical notes. By improving the ability to extract patient health status, we hope that HyDE will enable more informative large scale studies using EHR data, ultimately leading to public health insights and improved patient care.",
"6 Ethics Statement": "The authors have carefully considered the implications of their work, including potential positive and negative impacts. A potential risk associated with this approach would be the leakage of protected health information (PHI) following a release of the model. To mitigate this risk, we will conduct a thorough review of the training data and consult with experts before deciding to release the model. Additionally, the authors have reviewed the ACM Code of Ethics and Professional Conduct document and attest that this work adheres to the principles outlined in that document.",
"A Appendix": "A.1 Family Labeling Function The family labeling function is positive if any of the following terms match within a masked con-\ntextual mention: relative, relatives, family, father, mother, grandmother, grandfather, sister, brother, sibling, aunt, uncle, nephew, niece, son, daughter, cousin, parents.\nA.2 Negex Algorithm\nIn order to increase the recall of the Negex (Chapman et al., 2001) algorithm for manual labeling in order to amplify false positives for HyDE training, we modified it slightly to allow negative terms to match within 7 words of the mention, rather than 5. However, for the labeling function baseline we used Negex with a conventional window of 5 words, as opposed to the 7 word window used during HyDE training.\nWe modify the Negex keywords slightly based on manual examination of the MCMs. The original keywords were extracted from the negspaCy en_clinical termset. This function is positive if any of the following terms appear within the specified number of words before the disease mention: declined, denied, denies, denying, no sign of, no signs of, not, not demonstrate, symptoms atypical, doubt, negative for, no, versus, without, doesn't, doesnt, don't, dont, didn't, didnt, wasn't, wasnt, weren't, werent, isn't, isnt', aren't, arent, cannot, can't, cant, couldn't, couldnt', never, none, resolved, absence of or if any of the following terms appear within the specified number of words after the disease mention: declined, unlikely, was not, were not, wasn't, wasnt, weren't, werent, not, no, none.\nA.3 Qualitative Evaluation of Active Learning Examples\nQualitatively, the examples surfaced during active learning appear to be challenging cases. For example, some were examples that would have been counted as false positives by Negex but shouldn’t be. One such example is \"Insulin dependent diabetes mellitus ¬ø [MASK] No past medical history pertinent negatives\". Here, ¬ø denotes a deidentified date. Another challenging example is \"4. 
Screening for [MASK]\". Often when items are enumerated, they indicate a positive diagnosis. However, in this case, the patient was only screened for the condition.\nACL 2023 Responsible NLP Checklist",
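The windowed negation check from A.2 can be sketched as follows. The trigger set is abbreviated from the appendix list, multi-word triggers (e.g., 'no sign of') are omitted for simplicity, and the function name is our own:

```python
# Abbreviated from the A.2 pre-mention term list (single-word triggers only).
PRE_NEGATION = {"no", "not", "denies", "denied", "without", "declined", "none"}

def is_negated(mcm, window=7):
    """Flag an MCM as negated if a trigger term occurs within `window`
    words before the masked mention (7 during training amplification,
    5 for the conventional Negex baseline)."""
    words = [w.strip(".,:;") for w in mcm.lower().split()]
    idx = words.index("[mask]")
    return any(w in PRE_NEGATION for w in words[max(0, idx - window):idx])

print(is_negated("Patient denies chest pain and [MASK] today"))  # → True
print(is_negated("History significant for [MASK]"))              # → False
```

As the qualitative examples above show, such a rule can misfire on cases like an enumerated "Screening for [MASK]", which is why the neural classifier is still needed downstream.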
"3 A1. Did you describe the limitations of your work?": "Section 5\n7 A2. Did you discuss any potential risks of your work? Section 6",
"3 A3. Do the abstract and introduction summarize the paper’s main claims?": "End of the abstract and end of the introduction (section 1)",
"3 A4. Have you used AI writing assistants when working on this paper?": "We used ChatGPT to propose suggestions for improving the grammar and phrasing of author generated writing. We used this parts of each section of the paper, but did not always use the suggestions generated by ChatGPT and we always modified the suggestions.\nB 7 Did you use or create scientific artifacts? Left blank.",
"B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided": "that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response.",
"B4. Did you discuss the steps taken to check whether the data that was collected / used contains any": "information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response.",
"B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and": "linguistic phenomena, demographic groups represented, etc.? No response.\nB6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.",
"No response.": "C 3 Did you run computational experiments? Section 2 (methods) and section 3 (results)",
"3 C1. Did you report the number of parameters in the models used, the total computational budget": "(e.g., GPU hours), and computing infrastructure used? Section 2.5 (model training) and Table 5 in the appendix.\nThe Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.",
"3 C2. Did you discuss the experimental setup, including hyperparameter search and best-found": "hyperparameter values? Section 2.4",
"3 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary": "statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 3 (results)\nC4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank.\nD 3 Did you use human annotators (e.g., crowdworkers) or research with human participants? Section 2.3 (data labeling)",
"3 D1. Did you report the full text of instructions given to participants, including e.g., screenshots,": "disclaimers of any risks to participants or annotators, etc.? Section 2.3 (masked contextual mention categories)\n7 D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants’ demographic (e.g., country of residence)? Annotation was done by the authors of the paper.\n7 D3. Did you discuss whether and how consent was obtained from people whose data you’re using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Annotation was done by the authors of the paper.",
"3 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?": "Section 2.1 (dataset)\n7 D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Annotation was done by the authors of the paper."
}