{
"paper_id": "U16-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:10:42.151655Z"
},
"title": "Learning cascaded latent variable models for biomedical text classification",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {}
},
"email": "ming.m.liu@monash.edu"
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {}
},
"email": "gholamreza.haffari@monash.edu"
},
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {}
},
"email": "wray.buntine@monash.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we develop a weakly supervised version of logistic regression to help to improve biomedical text classification performance when there is limited annotated data. We learn cascaded latent variable models for the classification tasks. First, with a large number of unlabelled but limited amount of labelled biomedical text, we will bootstrap and semi-automate the annotation task with partially and weakly annotated data. Second, both coarse-grained (document) and fine-grained (sentence) levels of each individual biomedical report will be taken into consideration. Our experimental work shows this achieves higher classification results.",
"pdf_parse": {
"paper_id": "U16-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we develop a weakly supervised version of logistic regression to help to improve biomedical text classification performance when there is limited annotated data. We learn cascaded latent variable models for the classification tasks. First, with a large number of unlabelled but limited amount of labelled biomedical text, we will bootstrap and semi-automate the annotation task with partially and weakly annotated data. Second, both coarse-grained (document) and fine-grained (sentence) levels of each individual biomedical report will be taken into consideration. Our experimental work shows this achieves higher classification results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, large amounts of biomedical text have become available with the development of electronic medical record (EMR) systems. The type of biomedical text ranges from reports of CT scans to doctoral notes and discharge summaries. Based on these biomedical text, there are medical tasks such as disease identification, diagnostic surveillance and evaluation and other clinical support services. Manual extraction and classification for these medical tasks from biomedical text is a time-consuming and often costly effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Biomedical text classification systems which consider both manual effort (e.g. annotation) and predictive performance are more appropriate in the medical context than those which only consider classification predictive performance. Early biomedical classification methods are rule-based (Tinoco et al., 2011; Matheny et al., 2012) , which requires medical experts to develop logical rules to identify reports consistent with some diseases.",
"cite_spans": [
{
"start": 287,
"end": 308,
"text": "(Tinoco et al., 2011;",
"ref_id": "BIBREF9"
},
{
"start": 309,
"end": 330,
"text": "Matheny et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main advantages of such rule-based systems is that high precision can be achieved, but the weakness lies in the fact that the process is not easily transferable to similar tasks, because medical experts have to carefully develop specific types of rules and formulas for different kinds of diseases. In recent years, machine learning methods have been widely used in disease identification from biomedical text (Ehrentraut et al., 2012; Bejan et al., 2012; Martinez et al., 2015; Hassanpour and Langlotz, 2015) , which also ask medical experts to do some annotation work for building training data. Unlabelled free biomedical text in hospitals and other clinical organizations is abundant but manual annotation is very expensive.",
"cite_spans": [
{
"start": 414,
"end": 439,
"text": "(Ehrentraut et al., 2012;",
"ref_id": "BIBREF2"
},
{
"start": 440,
"end": 459,
"text": "Bejan et al., 2012;",
"ref_id": "BIBREF1"
},
{
"start": 460,
"end": 482,
"text": "Martinez et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 483,
"end": 513,
"text": "Hassanpour and Langlotz, 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Exploiting fine-grained sentence level properties for coarse-grained document level classification has attracted large amounts of attention. Pang (Pang and Lee, 2004) first explored subjectivity extraction methods based on a minimum cut formulation, in which they performed subjectivity detection on individual sentences and implemented document level polarity classification by leveraging those extracted subjective sentences.",
"cite_spans": [
{
"start": 146,
"end": 166,
"text": "(Pang and Lee, 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "McDonald (T\u00e4ckstr\u00f6m and McDonald, 2011) proposed a structured model for jointly classifying the sentiment of text at varying levels of granularity, they showed that this task can be reduced to sequential classification with constrained inference. Yessenalina (Yessenalina et al., 2010 ) described a joint two-level approach for document level sentiment classification that simultaneously extracts useful sentences, and Fang (Fang and Huang, 2012) extended it by incorporating aspect information to the structured model to aspect level sentiment analysis.",
"cite_spans": [
{
"start": 9,
"end": 39,
"text": "(T\u00e4ckstr\u00f6m and McDonald, 2011)",
"ref_id": "BIBREF8"
},
{
"start": 259,
"end": 284,
"text": "(Yessenalina et al., 2010",
"ref_id": "BIBREF12"
},
{
"start": 424,
"end": 446,
"text": "(Fang and Huang, 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a cascaded latent variable model for biomedical text classification that combines logistic regression and EM, which is trained with a large number of unlabelled but limited amount of labelled biomedical text. Exper-imental results show that the combined cascaded model is efficient in biomedical text classification tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we propose variants developed from a cascaded logistic regression model: the partially supervised model called as logistic regression with hard EM (LREM) and the weakly supervised model named as weak logistic regression with hard EM (WLREM). LREM is trained with part of the fully-annotated data and all of the partially-annotated data. WLREM is trained with the same part of the fully-annotated data and all of the weakly annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Let d be a document consisting of n sen- tences, X = (X i ) n i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2.1"
},
{
"text": ", with a document-sentencesequence pair denoted d = (d, X). Let y d denote the document level polarity and Z = (Z i ) n i=1 be the sequence of sentence level polarity. In what follows, we assume that there are three types of training sets: a small set of fully labeled instances D F which are annotated at both sentence and document levels, another small set of partially labeled instances D P which are annotated only at the document level, and a large set of weakly annotated instances D U (explained later). Besides, we assume that all Z i take values in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2.1"
},
{
"text": "{P OS(+1), N EG(\u22121), N EU (0)} while y d is in {P OS(+1), N EG(\u22121)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2.1"
},
{
"text": "The following three cascaded models are based on logistic regression, with the following standard parametrization",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2.1"
},
{
"text": "P \u03b8 y d |X = Z P \u03b1 (y d |Z)P \u03b2 (Z|X) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2.1"
},
{
"text": "where \u03b8 = {\u03b1, \u03b2}, and \u03b1 and \u03b2 are the parameters of document and sentence level classifiers respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2.1"
},
{
"text": "The partially supervised model (LREM) is trained from the sets of fully labeled data D F and partially labelled data D P . Since the sentence polarity is unknown in D P , a hard EM algorithm is used to iteratively estimate Z and maximize the cascaded goal function. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The partially supervised model",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1, \u03b2 = arg max \u03b1,\u03b2 N d=1 log P \u03b8 (y d |X)",
"eq_num": "(2)"
}
],
"section": "The partially supervised model",
"sec_num": "2.2"
},
{
"text": "where N = |D F \u222a D P |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The partially supervised model",
"sec_num": "2.2"
},
{
"text": "The weakly supervised model (WLREM) is trained from the sets of fully labeled data D F and weakly labelled data D U . In our case, the document polarity is unknown from D U , while U represents the patient level diagnostic result in the treating hospital. Generally, if a patient is diagnosed with positive infection in the hospital, the reports of this patient are more likely to be positive. We get this estimated probability from a confusion matrix of D F as shown in table 1. We notice that P (U = P OS|y = P OS) = 0.803, which is a trustful prior information for guessing y, thus we can extend the previous partially supervised model into a weakly one. Figure 2 shows WLREM. The parameters, \u03b1 and \u03b2, of this model can be estimated by maximizing the joint conditional likelihood function",
"cite_spans": [],
"ref_spans": [
{
"start": 658,
"end": 666,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The weakly supervised model",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P \u03b8 (U d |X) = y,Z P \u03b2 (Z|X)P \u03b1 (y d |Z)P (U d |y d ) \u03b1, \u03b2 = arg max \u03b1,\u03b2 M d=1 P \u03b8 (U d |X)",
"eq_num": "(3)"
}
],
"section": "The weakly supervised model",
"sec_num": "2.3"
},
{
"text": "where ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The weakly supervised model",
"sec_num": "2.3"
},
{
"text": "M = |D F \u222a D U |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The weakly supervised model",
"sec_num": "2.3"
},
{
"text": "The partially and weakly supervised models both have their merits. The former requires document level annotation, while the latter can be used directly with available documents except for an initial guess of the document level polarity. In order to achieve the best predictive performance, we propose to combine the merits of these two models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining partial and weak supervision",
"sec_num": "3"
},
{
"text": "Given in Algorithm 1, ComLREM is an integration of the above two models (LREM+WLREM), which can make full use of the partially and weakly annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A combined cascaded latent variable model",
"sec_num": "3.1"
},
{
"text": "Algorithm 1 ComLREM \u03b1, \u03b2 \u2190 update for data D F via Eqn (2) Z i \u2190 0 for all the sentences in D P \u222a D U y \u2190 1 for all the documents in D U while the convergence condition is meet do for every document d \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A combined cascaded latent variable model",
"sec_num": "3.1"
},
{
"text": "D P do n d \u2190number of sentences of d for k = 1 to n d in d do Z d k = arg max Z d k P \u03b8 y d |X from Equation (1) for every document d \u2208 D U do n d \u2190number of sentences in d for k = 1 to n d in d do Z d k , Y d = arg max Z d k ,Y d P \u03b2 (Z|X)P \u03b1 (y d |Z)P (U d |y d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A combined cascaded latent variable model",
"sec_num": "3.1"
},
{
"text": "\u03b1, \u03b2 \u2190 update for all data via Eqns (2), Dates, time and numbers are normalised into DATE, TIME, and NUM symbols. Reports are segmented into sentences using the JulieLab (Tomanek et al., 2007) automatic sentence segmentor. Stop words are terms and phrases which are regarded as not conveying any significant semantics to the sentences and reports, NLTK stop word list was chosen to do the filtering. The Genia Tagger (Tsuruoka et al., 2005 ) is used to do tokenization and lemmatization. The MetaMap concepts (Aronson, 2001 ) come from the mappings of biomedical knowledge representation. Table 1 illustrates the feature representation at the sentence and report levels.",
"cite_spans": [
{
"start": 170,
"end": 192,
"text": "(Tomanek et al., 2007)",
"ref_id": "BIBREF10"
},
{
"start": 417,
"end": 439,
"text": "(Tsuruoka et al., 2005",
"ref_id": "BIBREF11"
},
{
"start": 509,
"end": 523,
"text": "(Aronson, 2001",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 589,
"end": 596,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "A combined cascaded latent variable model",
"sec_num": "3.1"
},
{
"text": "As shown in (Martinez et al., 2015) , CT reports for fungal disease detection were collected from three hospitals. For each report, only the free text section were used, which contains the radiologist's understanding of the scan and the reason for the requested scan as written by clinicians. Every report was de-identified: any potentially identifying information such as name, address, age/birthday, gender were removed. Table 2 shows the number of distribution of reports over fully-annotated, partially-annotated and verified data sets.",
"cite_spans": [
{
"start": 12,
"end": 35,
"text": "(Martinez et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 423,
"end": 430,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "Receiver operating characteristic (ROC) curve and Precision recall (PR) curve are used for the model evaluation. Area under ROC curve and PR curve is an estimated measure of the test accuracy.The results presented here are 5-fold cross validation outcomes on the fully-annotated data. Fig. 3 and 4 show the ROC curves and PR curves of the four models: LR is the baseline algorithm, LREM is trained based on part of the fullyannotated data and all of the partially-annotated data, WLREM is trained based on part of the fullyannotated data and all of the unannotated data, and ComLREM is an integration of the above two models. We can see from Fig. 3 that WLREM obtained higher ROC score than LR, the area under LREM and WLREM ROC curve is 0.774 and 0.861, which shows that the involvement of weakly annotated data contributes higher than that of partially annotated data to the improvement of classification performance. It is noticed WLREM achieved greater improvement than LREM, because the D U contains big volume and trustful prior information. The highest ROC score (0.870) was achieved with a combination of the above two models, which is within our expectation. Fig. 4 shows the PR curves of the four models, there is a trade-off between precision and recall with recall as the most important metric. When the threshold is set to obtain a high recall (> 0.9), ComLREM obtained higher precision than other models. Overall, with true positive rate or recall as the first priority, the combined model ComLREM achieved the best classification performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 297,
"text": "Fig. 3 and 4",
"ref_id": "FIGREF2"
},
{
"start": 642,
"end": 648,
"text": "Fig. 3",
"ref_id": "FIGREF2"
},
{
"start": 1168,
"end": 1174,
"text": "Fig. 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "We also compared our model with Martinez's system (Martinez et al., 2015) , in which they applied conservative rules over sentenceclassification output. Their sentence-level classifier used SVMs with Bag-of-words and Bag-ofconcepts features. Since the conservative rules indicate that a report is labeled as positive if any sentence in it is labeled positive, the report-level prediction is not probabilistic and the PR curve can not be drawn accordingly. In order to make some comparison, we adjusted the threshold of our report-level logistic regression classifier to make our recall the same as theirs (0.930), and see whether the precision improves. Table 3 shows the compared results, we noticed that both WL-REM and ComLREM outperforms the Conservative SVM approach, which indicates that the estimation we made from the unlabelled data is trustful and can be used to improve classification performance. ",
"cite_spans": [
{
"start": 50,
"end": 73,
"text": "(Martinez et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 654,
"end": 661,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "Learning classification models in a fully supervised manner is expensive in the biomedical do-main. We therefore proposed a combined cascaded latent variable model, which effectively combines both partial and weak supervision for biomedical text classification. Sentence label is regarded as a latent variable in this model, and both fine-grained and coarse-grained features are considered in the learning process. In the future, we consider to develop active learning methods towards our cascaded latent variable model and further reduce manual annotation cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Ming Liu, Gholamreza Haffari and Wray Buntine. 2016. Learning cascaded latent variable models for biomedical text classification. In Proceedings of Australasian Language Technology Association Workshop, pages 128\u2212132.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Effective mapping of biomedical text to the umls metathesaurus: the MetaMap program",
"authors": [
{
"first": "",
"middle": [],
"last": "Alan R Aronson",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the AMIA Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan R Aronson. 2001. Effective mapping of biomed- ical text to the umls metathesaurus: the MetaMap program. In Proceedings of the AMIA Symposium, page 17. American Medical Informatics Associa- tion.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pneumonia identification using statistical feature selection",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Cosmin Adrian Bejan",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "Meliha",
"middle": [],
"last": "Wurfel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yetisgen-Yildiz",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of the American Medical Informatics Association",
"volume": "19",
"issue": "5",
"pages": "817--823",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cosmin Adrian Bejan, Fei Xia, Lucy Vanderwende, Mark M Wurfel, and Meliha Yetisgen-Yildiz. 2012. Pneumonia identification using statistical feature se- lection. Journal of the American Medical Informat- ics Association, 19(5):817-823.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Detection of hospital acquired infections in sparse and noisy Swedish patient records. A machine learning approach using Na\u00efve Bayes",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Ehrentraut",
"suffix": ""
},
{
"first": "Hideyuki",
"middle": [],
"last": "Tanushi",
"suffix": ""
},
{
"first": "Hercules",
"middle": [],
"last": "Dalianis",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Ehrentraut, Hideyuki Tanushi, Hercules Dalia- nis, and J\u00f6rg Tiedemann. 2012. Detection of hospi- tal acquired infections in sparse and noisy Swedish patient records. A machine learning approach using Na\u00efve Bayes, Support Vector Machines and C, 4.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Fine granular aspect analysis using latent structural models",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers",
"volume": "2",
"issue": "",
"pages": "333--337",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Fang and Minlie Huang. 2012. Fine granular as- pect analysis using latent structural models. In Pro- ceedings of the 50th Annual Meeting of the Associ- ation for Computational Linguistics: Short Papers- Volume 2, pages 333-337. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Information extraction from multi-institutional radiology reports. Artificial intelligence in medicine",
"authors": [
{
"first": "Saeed",
"middle": [],
"last": "Hassanpour",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Curtis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Langlotz",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saeed Hassanpour and Curtis P Langlotz. 2015. Infor- mation extraction from multi-institutional radiology reports. Artificial intelligence in medicine.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic detection of patients with invasive fungal disease from freetext computed tomography (CT) scans",
"authors": [
{
"first": "David",
"middle": [],
"last": "Martinez",
"suffix": ""
},
{
"first": "Michelle",
"middle": [
"R"
],
"last": "Ananda-Rajah",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Suominen",
"suffix": ""
},
{
"first": "Monica",
"middle": [
"A"
],
"last": "Slavin",
"suffix": ""
},
{
"first": "Karin",
"middle": [
"A"
],
"last": "Thursky",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Cavedon",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of biomedical informatics",
"volume": "53",
"issue": "",
"pages": "251--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Martinez, Michelle R Ananda-Rajah, Hanna Suominen, Monica A Slavin, Karin A Thursky, and Lawrence Cavedon. 2015. Automatic detection of patients with invasive fungal disease from free- text computed tomography (CT) scans. Journal of biomedical informatics, 53:251-260.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Detection of infectious symptoms from va emergency department and primary care clinical documentation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Fern",
"middle": [],
"last": "Matheny",
"suffix": ""
},
{
"first": "Theodore",
"middle": [],
"last": "Fitzhenry",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"K"
],
"last": "Speroff",
"suffix": ""
},
{
"first": "Michelle",
"middle": [
"L"
],
"last": "Green",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"E"
],
"last": "Griffith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vasilevskis",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Elliot",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fielstein",
"suffix": ""
},
{
"first": "Steven H",
"middle": [],
"last": "Peter L Elkin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 2012,
"venue": "International journal of medical informatics",
"volume": "81",
"issue": "3",
"pages": "143--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael E Matheny, Fern FitzHenry, Theodore Sper- off, Jennifer K Green, Michelle L Griffith, Eduard E Vasilevskis, Elliot M Fielstein, Peter L Elkin, and Steven H Brown. 2012. Detection of infectious symptoms from va emergency department and pri- mary care clinical documentation. International journal of medical informatics, 81(3):143-156.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd annual meeting on Association for Computational Linguistics, page 271. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2004. A sentimental educa- tion: Sentiment analysis using subjectivity summa- rization based on minimum cuts. In Proceedings of the 42nd annual meeting on Association for Compu- tational Linguistics, page 271. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semisupervised latent variable models for sentence-level sentiment analysis",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "569--574",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m and Ryan McDonald. 2011. Semi- supervised latent variable models for sentence-level sentiment analysis. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 569-574. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Comparison of computerized surveillance and manual chart review for adverse events",
"authors": [
{
"first": "Aldo",
"middle": [],
"last": "Tinoco",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Catherine",
"middle": [
"J"
],
"last": "Staes",
"suffix": ""
},
{
"first": "James",
"middle": [
"F"
],
"last": "Lloyd",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"M"
],
"last": "Rothschild",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Haug",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of the American Medical Informatics Association",
"volume": "18",
"issue": "4",
"pages": "491--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aldo Tinoco, R Scott Evans, Catherine J Staes, James F Lloyd, Jeffrey M Rothschild, and Peter J Haug. 2011. Comparison of computerized surveillance and manual chart review for adverse events. Journal of the American Medical Informatics Association, 18(4):491-497.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A reappraisal of sentence and token splitting for life sciences documents",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Tomanek",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Wermter",
"suffix": ""
},
{
"first": "Udo",
"middle": [],
"last": "Hahn",
"suffix": ""
}
],
"year": 2007,
"venue": "Medinfo 2007: Proceedings of the 12th World Congress on Health (Medical) Informatics; Building Sustainable Health Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Tomanek, Joachim Wermter, Udo Hahn, et al. 2007. A reappraisal of sentence and token split- ting for life sciences documents. In Medinfo 2007: Proceedings of the 12th World Congress on Health (Medical) Informatics; Building Sustainable Health Systems, page 524. IOS Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Developing a robust partof-speech tagger for biomedical text",
"authors": [
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Yuka",
"middle": [],
"last": "Tateishi",
"suffix": ""
},
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Mcnaught",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "Advances in informatics",
"volume": "",
"issue": "",
"pages": "382--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshimasa Tsuruoka, Yuka Tateishi, Jin-Dong Kim, Tomoko Ohta, John McNaught, Sophia Ananiadou, and Junichi Tsujii. 2005. Developing a robust part- of-speech tagger for biomedical text. Advances in informatics, pages 382-392.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multi-level structured models for documentlevel sentiment classification",
"authors": [
{
"first": "Ainur",
"middle": [],
"last": "Yessenalina",
"suffix": ""
},
{
"first": "Yisong",
"middle": [],
"last": "Yue",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1046--1056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ainur Yessenalina, Yisong Yue, and Claire Cardie. 2010. Multi-level structured models for document- level sentiment classification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1046-1056. Associa- tion for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Figure 1outlines LREM. The parameters, \u03b1 and \u03b2, of this model can be es-Figure 1: A partially supervised model. timated by maximizing the joint conditional likelihood function",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "A weakly supervised model.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "ROC curve of LR, LREM, WLREM and ComLREM.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"text": "PR curve of LR, LREM, WLREM and ComLREM.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"html": null,
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"3\">: Confusion matrix of fully-annotated</td></tr><tr><td>dataset</td><td/><td/></tr><tr><td>D F</td><td colspan=\"2\">y=POS y=NEG</td></tr><tr><td colspan=\"2\">U=POS 167</td><td>68</td></tr><tr><td>U=NEG</td><td>41</td><td>82</td></tr></table>"
},
"TABREF1": {
"html": null,
"text": "Feature representation",
"type_str": "table",
"num": null,
"content": "<table><tr><td>Feature level</td><td>Discription</td></tr><tr><td colspan=\"2\">Sentence-level Uni-gram tokens + MetaMap concepts</td></tr><tr><td/><td>Pos sentence exists or not</td></tr><tr><td/><td>Neg sentence exists or not</td></tr><tr><td/><td>No. of pos sentences</td></tr><tr><td/><td>No. of neg sentences</td></tr><tr><td/><td>No. of other sentences</td></tr><tr><td>Report-level</td><td>Polarity of the first sentence Polarity of the last sentence Percentage of pos sentences</td></tr><tr><td/><td>Percentage of neg sentences</td></tr><tr><td/><td>Pos sentence exists in the beginning</td></tr><tr><td/><td>Pos sentence exists in the end</td></tr><tr><td/><td>Neg sentence exists in the beginning</td></tr><tr><td/><td>Neg sentence exists in the end</td></tr><tr><td colspan=\"2\">3.2 Feature representation</td></tr><tr><td colspan=\"2\">Two main types of features are explored: Bag</td></tr><tr><td colspan=\"2\">and Structural. Bag features are applied to the</td></tr><tr><td colspan=\"2\">sentence-level classification, while structural fea-</td></tr><tr><td colspan=\"2\">tures are built on the results of sentence-level clas-</td></tr><tr><td>sification.</td><td/></tr></table>"
},
"TABREF2": {
"html": null,
"text": "Fully-annotated, partially-annotated and weakly annotated datasets",
"type_str": "table",
"num": null,
"content": "<table><tr><td>Datasets</td><td>D F</td><td>D P</td><td>D U</td></tr><tr><td colspan=\"2\">Pos fungal 150</td><td>51</td><td>431</td></tr><tr><td colspan=\"2\">Neg fungal 208</td><td>53</td><td>816</td></tr></table>"
},
"TABREF3": {
"html": null,
"text": "Comparison of the experimental results",
"type_str": "table",
"num": null,
"content": "<table><tr><td>Models</td><td colspan=\"3\">Recall Precision F score</td></tr><tr><td colspan=\"2\">Conservative SVM 0.930</td><td>0.694</td><td>0.795</td></tr><tr><td>LR</td><td>0.930</td><td>0.646</td><td>0.762</td></tr><tr><td>LREM</td><td>0.930</td><td>0.656</td><td>0.769</td></tr><tr><td>WLREM</td><td>0.930</td><td>0.703</td><td>0.801</td></tr><tr><td>ComLREM</td><td>0.930</td><td>0.707</td><td>0.802</td></tr></table>"
}
}
}
}