{
"paper_id": "U15-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:09:57.033795Z"
},
"title": "Analysis of Word Embeddings and Sequence Features for Clinical Information Extraction",
"authors": [
{
"first": "Lance",
"middle": [],
"last": "De Vine",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queensland University of Technology",
"location": {}
},
"email": "l.devine@qut.edu.au"
},
{
"first": "Mahnoosh",
"middle": [],
"last": "Kholghi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queensland University of Technology",
"location": {}
},
"email": "m1.kholghi@qut.edu.au"
},
{
"first": "Guido",
"middle": [],
"last": "Zuccon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queensland University of Technology",
"location": {}
},
"email": "g.zuccon@qut.edu.au"
},
{
"first": "Laurianne",
"middle": [],
"last": "Sitbon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queensland University of Technology",
"location": {}
},
"email": "laurianne.sitbon@qut.edu.au"
},
{
"first": "Anthony",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {},
"email": "anthony.nguyen@csiro.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This study investigates the use of unsupervised features derived from word embedding approaches and novel sequence representation approaches for improving clinical information extraction systems. Our results corroborate previous findings that indicate that the use of word embeddings significantly improves the effectiveness of concept extraction models; however, we further determine the influence of the corpora used to generate such features. We also demonstrate the promise of sequence-based unsupervised features for further improving concept extraction.",
"pdf_parse": {
"paper_id": "U15-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "This study investigates the use of unsupervised features derived from word embedding approaches and novel sequence representation approaches for improving clinical information extraction systems. Our results corroborate previous findings that indicate that the use of word embeddings significantly improves the effectiveness of concept extraction models; however, we further determine the influence of the corpora used to generate such features. We also demonstrate the promise of sequence-based unsupervised features for further improving concept extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Clinical concept extraction involves the identification of sequences of terms which express meaningful concepts in a clinical setting. The identification of such concepts is important for enabling secondary usage of reports of patient treatments and interventions, e.g., in the context of cancer monitoring and reporting (Koopman et al., 2015), and for further processing in downstream eHealth workflows (Demner-Fushman et al., 2009).",
"cite_spans": [
{
"start": 321,
"end": 343,
"text": "(Koopman et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 405,
"end": 434,
"text": "(Demner-Fushman et al., 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A significant challenge is the identification of concepts that are referred to in ways not captured within current lexical resources such as relevant domain terminologies like SNOMED CT. Furthermore, clinical language is sensitive to ambiguity, polysemy, synonymy (including acronyms) and word order variations. Finally, the information presented in clinical narratives is often unstructured, ungrammatical, and fragmented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "State-of-the-art approaches in concept extraction from free-text clinical narratives extensively apply supervised machine learning approaches. The effectiveness of such approaches generally depends on three main factors: (1) the availability of a considerable amount of high quality annotated data, (2) the selected learning algorithm, and (3) the quality of features generated from the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, clinical information extraction and retrieval challenges like i2b2 (Uzuner et al., 2011) and ShARe/CLEF (Suominen et al., 2013) have provided annotated data which can be used to apply and evaluate different machine learning approaches (e.g., supervised and semi-supervised). Conditional Random Fields (CRFs) (Lafferty et al., 2001) have been shown to be the state-of-the-art supervised machine learning approach for this clinical task. A wide range of features has been leveraged to improve the effectiveness of concept extraction systems, including hand-crafted grammatical, syntactic, lexical, morphological and orthographical features (de Bruijn et al., 2011; Tang et al., 2013), as well as advanced semantic features from external resources and domain knowledge (Kholghi et al., 2015).",
"cite_spans": [
{
"start": 84,
"end": 105,
"text": "(Uzuner et al., 2011)",
"ref_id": "BIBREF33"
},
{
"start": 121,
"end": 144,
"text": "(Suominen et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 325,
"end": 347,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF17"
},
{
"start": 650,
"end": 674,
"text": "(de Bruijn et al., 2011;",
"ref_id": "BIBREF5"
},
{
"start": 675,
"end": 693,
"text": "Tang et al., 2013)",
"ref_id": "BIBREF28"
},
{
"start": 779,
"end": 801,
"text": "(Kholghi et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While there has been some recent work in the application of unsupervised machine learning methods to clinical concept extraction (Jonnalagadda et al., 2012; Tang et al., 2013), the predominant class of features used is still hand-crafted features.",
"cite_spans": [
{
"start": 129,
"end": 156,
"text": "(Jonnalagadda et al., 2012;",
"ref_id": "BIBREF10"
},
{
"start": 157,
"end": 175,
"text": "Tang et al., 2013)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper discusses the application to clinical concept extraction of a specific unsupervised machine learning method, called the Skip-gram Neural Language Model, combined with a lexical string encoding approach and sequence features. Skip-gram word embeddings, where words are represented as vectors in a high dimensional vector space, have been used in prior work to create feature representations for classification and information extraction tasks, e.g., see Nikfarjam et al. (2015) and Qu et al. (2015). The following research questions will be addressed in this paper: RQ1: are word embeddings and sequence level representation features useful when using CRFs for clinical concept extraction? RQ2: to what extent do the corpora used to generate such unsupervised features influence effectiveness?",
"cite_spans": [
{
"start": 464,
"end": 487,
"text": "Nikfarjam et al. (2015)",
"ref_id": "BIBREF23"
},
{
"start": 492,
"end": 508,
"text": "Qu et al. (2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Question one has been partially addressed by prior work that has shown word embeddings improve the effectiveness of information extraction systems (Tang et al., 2015; Nikfarjam et al., 2015). However, we further explore this by considering the effectiveness of sequence level features, which, to the best of our knowledge, have not been investigated in clinical information extraction.",
"cite_spans": [
{
"start": 147,
"end": 166,
"text": "(Tang et al., 2015;",
"ref_id": "BIBREF30"
},
{
"start": 167,
"end": 190,
"text": "Nikfarjam et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The two primary areas that relate to this work include (a) methods for clinical concept extraction, and (b) general corpus based approaches for learning word representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The strong need for effective clinical information extraction methods has encouraged the development of shared datasets such as the i2b2 challenges (Uzuner et al., 2011) and the ShARe/CLEF eHealth Evaluation Lab (Suominen et al., 2013), which in turn have sparked the development of novel, more effective clinical information extraction methods. For example, de Bruijn et al. (2011) used token, context, sentence, section, document, and concept mapping features, along with the extraction of clustering-based word representation features using Brown clustering; they obtained the highest effectiveness in the i2b2/VA 2010 NLP challenge. In the same challenge, Jonnalagadda et al. (2012) leveraged distributional semantic features along with traditional features (dictionary/pattern matching, POS tags). They used random indexing to construct a vector-based similarity model and observed significant improvements. Tang et al. (2013) built a concept extraction system for ShARe/CLEF 2013 Task 1 that recognizes disorder mentions in clinical free text, achieving the highest effectiveness amongst systems in the challenge. They used word representations from Brown clustering and random indexing, in addition to a set of common features including token, POS tags, type of notes, section information, and the semantic categories of words based on UMLS, MetaMap, and cTAKES. Tang et al. (2014) extracted two different types of word representation features: (1) clustering-based representations using Brown clustering, and (2) distributional word representations using random indexing. Their findings suggest that these word representation features increase the effectiveness of clinical information extraction systems when combined with basic features, and that the two investigated distributional word representation features are complementary. Tang et al. (2014), Khabsa and Giles (2015) and Tang et al. (2015) investigated the effect of three different types of word representation features, including clustering-based, distributional and word embeddings, on biomedical named entity recognition tasks. All developed systems demonstrated the significant role of word representations in achieving high effectiveness.",
"cite_spans": [
{
"start": 148,
"end": 169,
"text": "(Uzuner et al., 2011)",
"ref_id": "BIBREF33"
},
{
"start": 212,
"end": 235,
"text": "(Suominen et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 911,
"end": 929,
"text": "Tang et al. (2013)",
"ref_id": "BIBREF28"
},
{
"start": 1368,
"end": 1386,
"text": "Tang et al. (2014)",
"ref_id": "BIBREF29"
},
{
"start": 1839,
"end": 1857,
"text": "Tang et al. (2014)",
"ref_id": "BIBREF29"
},
{
"start": 1860,
"end": 1883,
"text": "Khabsa and Giles (2015)",
"ref_id": "BIBREF13"
},
{
"start": 1888,
"end": 1906,
"text": "Tang et al. (2015)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clinical Information Extraction",
"sec_num": "2.1"
},
{
"text": "Brown clustering (Brown et al., 1992) has probably been the most widely used unsupervised method for feature generation for concept extraction. Both random indexing (Kanerva et al., 2000) and word embeddings from neural language models, e.g., Mikolov et al. (2013), have also been used recently, in part stimulated by renewed interest in representation learning and deep learning. Some of the more notable contributions to the use of word representations in NLP include the work of Turian et al. (2010) and Collobert et al. (2011). Since their inception, Skip-gram word embeddings (Mikolov et al., 2013) have been used in a wide range of settings, including for unsupervised feature generation (Tang et al., 2015). There have also been recent applications of convolutional neural nets to lexical representation. For example, Zhang and LeCun (2015) demonstrated that deep learning can be applied to text understanding from character-level inputs all the way up to abstract text concepts, using convolutional networks.",
"cite_spans": [
{
"start": 17,
"end": 37,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF1"
},
{
"start": 165,
"end": 187,
"text": "(Kanerva et al., 2000)",
"ref_id": "BIBREF12"
},
{
"start": 243,
"end": 264,
"text": "Mikolov et al. (2013)",
"ref_id": "BIBREF21"
},
{
"start": 483,
"end": 503,
"text": "Turian et al. (2010)",
"ref_id": "BIBREF32"
},
{
"start": 508,
"end": 531,
"text": "Collobert et al. (2011)",
"ref_id": "BIBREF4"
},
{
"start": 583,
"end": 605,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF21"
},
{
"start": 696,
"end": 715,
"text": "(Tang et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 828,
"end": 850,
"text": "Zhang and LeCun (2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Based Methods for Word Representations",
"sec_num": "2.2"
},
{
"text": "We start by examining a set of baseline features that have been derived from previous work in this area. We then turn our attention to unsupervised features to be used in this task, and we propose to examine features based on word embeddings, lexical vectors and sequence level vectors. These features will then be tested to inform a CRF learning algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3"
},
{
"text": "We construct a baseline system using the following feature groups, as described by Kholghi et al. (2015):",
"cite_spans": [
{
"start": 92,
"end": 113,
"text": "Kholghi et al. (2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Features",
"sec_num": "3.1"
},
{
"text": "A: Orthographical (regular expression patterns), lexical and morphological (suffixes/prefixes and character n-grams), and contextual (window of k words) features; B: Linguistic features (POS tags (Toutanova et al., 2003)); C: External resource features (UMLS and SNOMED CT semantic groups as described by Kholghi et al. (2015)).",
"cite_spans": [
{
"start": 174,
"end": 198,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF31"
},
{
"start": 283,
"end": 304,
"text": "Kholghi et al. (2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Features",
"sec_num": "3.1"
},
{
"text": "The approach we use for generating unsupervised features consists of the following two steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Features",
"sec_num": "3.2"
},
{
"text": "1. Construct real-valued vectors according to a variety of different methods, each described in Sections 3.2.1-3.2.3. 2. Transform the vectors into discrete classes via clustering, as described in Section 3.2.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Features",
"sec_num": "3.2"
},
{
"text": "While real-valued feature vectors can be used directly with some CRF software implementations, they are not supported by all. We have found that transforming our vectors into discrete classes via clustering is reasonably easy. In addition, our preliminary experiments did not show advantages in working with real-valued vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Features",
"sec_num": "3.2"
},
{
"text": "We use two types of vectors: semantic and lexical. We use the term \"semantic\" as an overarching term to refer to neural word embeddings as well as other distributional semantic representations, such as those derived from random indexing. The semantic vectors encode a combination of semantic and syntactic information, as distinct from lexical vectors, which encode information about the distribution of character patterns within tokens. We find that lexical vectors identify lexical classes within a corpus and are particularly useful for corpora with many diverse syntactic conventions, as is the case with clinical text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Features",
"sec_num": "3.2"
},
{
"text": "To construct semantic vectors we use the recently proposed Skip-gram word embeddings. The Skip-gram model (Mikolov et al., 2013) constructs term representations by optimising their ability to predict the representations of surrounding terms.",
"cite_spans": [
{
"start": 105,
"end": 127,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Vectors",
"sec_num": "3.2.1"
},
{
"text": "Given a sequence W = {w_1, . . . , w_t, . . . , w_n} of training words, the objective of the Skip-gram model is to maximise the average log probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Vectors",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\frac{1}{2r} \\sum_{i=1}^{2r} \\sum_{-r \\leq j \\leq r, j \\neq 0} \\log p(w_{t+j} \\mid w_t)",
"eq_num": "(1)"
}
],
"section": "Semantic Vectors",
"sec_num": "3.2.1"
},
{
"text": "where r is the context window radius. The context window determines which words are considered for the computation of the probability, which is computed according to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Vectors",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w_O \\mid w_I) = \\frac{\\exp(v_{w_O} \\cdot v_{w_I})}{\\sum_{w=1}^{W} \\exp(v_w \\cdot v_{w_I})}",
"eq_num": "(2)"
}
],
"section": "Semantic Vectors",
"sec_num": "3.2.1"
},
{
"text": "where v_{w_I} and v_{w_O} are the vector representations of the input and output (predicted) words. The value in (2) is a normalized probability because of the normalization factor \\sum_{w=1}^{W} \\exp(v_w \\cdot v_{w_I}). In practice, a hierarchical approximation to this probability is used to reduce computational complexity (Morin and Bengio, 2005; Mikolov et al., 2013).",
"cite_spans": [
{
"start": 302,
"end": 326,
"text": "(Morin and Bengio, 2005;",
"ref_id": "BIBREF22"
},
{
"start": 327,
"end": 348,
"text": "Mikolov et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Vectors",
"sec_num": "3.2.1"
},
{
"text": "At initialisation, the vector representations of the words are assigned random values; these vector representations are then optimised using gradient descent with decaying learning rate by iterating over sentences observed in the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Vectors",
"sec_num": "3.2.1"
},
{
"text": "Various approaches have been previously used to encode lexical information in a distributed vector representation. A common idea in these approaches is the hashing and accumulation of n-grams into a single vector. This is sometimes referred to as string encoding and is used in a variety of applications, including text analysis and bio-informatics (Buhler, 2001; Buckingham et al., 2014). The approach used here is most similar to the holographic word encoding approach of Hannagan et al. (2011) and Widdows and Cohen (2014).",
"cite_spans": [
{
"start": 348,
"end": 362,
"text": "(Buhler, 2001;",
"ref_id": "BIBREF3"
},
{
"start": 363,
"end": 387,
"text": "Buckingham et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 474,
"end": 496,
"text": "Hannagan et al. (2011)",
"ref_id": "BIBREF9"
},
{
"start": 501,
"end": 525,
"text": "Widdows and Cohen (2014)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Vectors",
"sec_num": "3.2.2"
},
{
"text": "To create lexical vectors, we first generate and associate a random vector with each distinct character n-gram found in the text. Then, for each token we accumulate the vectors for each n-gram contained within the token. We use uni-grams, bi-grams, tri-grams and tetra-grams, but we also include skip-grams such as the character sequence \"a_b\", where the underscore is a wild-card placeholder symbol. The n-gram vectors are added together and the resulting vector is normalized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Vectors",
"sec_num": "3.2.2"
},
{
"text": "Lexical feature representation is especially useful when no easily available semantic representation exists. Some corpora, such as clinical texts, contain an abundance of syntactic conventions, such as abbreviations, acronyms, times, dates and identifiers. These tokens may be represented using lexical vectors such that orthographically similar tokens have similar vectors. An advantage of these lexical vectors is that they are constructed in a completely unsupervised, corpus-independent fashion that does not rely on hand-crafted rules. This is useful in the application to unseen data, where there may be tokens or patterns that were not seen in the training set (which would in turn render most hand-crafted rules ineffective).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Vectors",
"sec_num": "3.2.2"
},
{
"text": "Many models of phrase and sentence representation have recently been proposed for tasks such as paraphrase identification, sentiment classification and question answering (Le and Mikolov, 2014; Kalchbrenner et al., 2014), to name a few. The simple approach adopted in this paper makes use of both semantic and lexical vectors.",
"cite_spans": [
{
"start": 179,
"end": 193,
"text": "Mikolov, 2014;",
"ref_id": "BIBREF18"
},
{
"start": 194,
"end": 220,
"text": "Kalchbrenner et al., 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Level Vectors",
"sec_num": "3.2.3"
},
{
"text": "To form sequence level vectors, we accumulate the word embeddings for each token in a phrase or sentence. A token is ignored if it does not have an associated word embedding. The lexical vectors for each token in a sequence are also accumulated. Both types of vectors, semantic and lexical, are normalized. We then concatenate the vectors and normalize again.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Level Vectors",
"sec_num": "3.2.3"
},
{
"text": "Occasionally, some of the tokens within short text sequences may not be associated with word embeddings. In such cases the sequence is represented entirely by its accumulated lexical vectors. In this paper we evaluate the effectiveness of sentence and bi-gram phrase vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Level Vectors",
"sec_num": "3.2.3"
},
{
"text": "In our approach, the real-valued vector representations obtained using the methods above are then transformed into discrete classes. To cluster these vectors, we use K-means++ (Arthur and Vassilvitskii, 2007) with Euclidean distance, using a range of different granularities, akin to how multiple levels of representation are generally used in Brown clustering.",
"cite_spans": [
{
"start": 180,
"end": 212,
"text": "(Arthur and Vassilvitskii, 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering Methodology",
"sec_num": "3.2.4"
},
{
"text": "Clustering of vectors is performed on a training dataset. When a model is applied to unseen data, the representation for an unseen item is projected into the nearest cluster obtained from the training data, and a feature value is assigned to the item. We experimented with different strategies for assigning feature identifiers to clusters, including (a) a simple enumeration of clusters, and (b) a reduced feature space in which only clusters containing a majority of members with the same configuration of concept labels (from training data) are given an incrementing feature number. Method (b) did not improve results, and so we only report the outcomes of method (a). Clustering was terminated after 120 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering Methodology",
"sec_num": "3.2.4"
},
{
"text": "To evaluate the feature groups studied in this paper, we use the annotated train and test sets of the i2b2/VA 2010 NLP challenge (Uzuner et al., 2011). We evaluate the effectiveness of concept extraction systems using Precision, Recall and F1-measure. Evaluation measures are computed on the i2b2 test data using MALLET's multi-segmentation evaluator (McCallum, 2002), as per the experimental setup of Kholghi et al. (2014).",
"cite_spans": [
{
"start": 129,
"end": 150,
"text": "(Uzuner et al., 2011)",
"ref_id": "BIBREF33"
},
{
"start": 351,
"end": 367,
"text": "(McCallum, 2002)",
"ref_id": "BIBREF20"
},
{
"start": 401,
"end": 423,
"text": "(Kholghi et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We compute statistical significance (p-value) using a 5*2 cross-validated t-test (Dietterich, 1998) in which we combine both train and test sets. [Table: example cluster members. C1: prediabetes, insulin-dependant, endocrine., early-onset, type-2; C2: flank/right, extremity/lower, mid-to-lower, extremity/right; C3: knife, scissors, scalpel, clamp, tourniquet; C4: instructed, attempted, allowed, refuses, urged; C5: psychosomatic, attention-deficit, delirium/dementia, depression/bipolar.] As the supervised machine learning algorithm for concept extraction, we used a linear-chain CRF model based on the MALLET CRF implementation, tuned following Kholghi et al. (2014). We use our own implementation of K-means++ for clustering. For creating the Skip-gram word embeddings we use the popular word2vec tool (Mikolov et al., 2013), with hierarchical softmax and 5 epochs on the C1 and C2 datasets and 1 epoch on the PM and WK datasets (see below) due to computational constraints.",
"cite_spans": [
{
"start": 606,
"end": 627,
"text": "Kholghi et al. (2014)",
"ref_id": "BIBREF14"
},
{
"start": 765,
"end": 787,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We use four different corpora to generate word embeddings: two clinical (C1 and C2) and two non-clinical (PM and WK); corpora details are reported below and in Table 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Corpora",
"sec_num": "4.1"
},
{
"text": "In addition to the feature groups A, B and C mentioned in Section 3.1, we consider the following feature groups: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Groups",
"sec_num": "4.2"
},
{
"text": "In this section, we first study the impact of different feature sets on the effectiveness of the learnt models. We then discuss how different training corpora affect the quality of word embeddings and sequence representations. Table 4 reports the effectiveness of CRF models built using only the word tokens appearing in the documents (Word), and this feature along with different combinations of baseline features (A, B, C). These results show that feature group A (orthographical, lexical, morphological, and contextual features) provides significantly higher effectiveness compared to other individual feature groups. Semantic features (group C) also achieve reasonably high effectiveness compared to the use of Word features alone. However, POS tags (group B) provide inferior effectiveness. Indeed, when feature group B is used in combination with either A or C, no significant differences are observed compared to using A or C alone: POS tags do not improve effectiveness when combined with another, single feature group. It is the combination of all baseline features (ABC), instead, that provides the highest effectiveness. ",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 234,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "We study the effect of word embeddings on concept extraction to answer our RQ1 (see Section 1). To do so, we select the best combination of baseline features (ABC) and measure the effectiveness of adding semantic and lexical vectors features (groups D, G, and H). Results are reported in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Word Embedding Features",
"sec_num": "5.2"
},
{
"text": "The effectiveness of the derived information extraction systems is influenced by the training corpus used to produce the embeddings. Thus, the results in Table 5 are reported with respect to the corpora; the effect training corpora have on effectiveness will be discussed in Section 5.4.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Word Embedding Features",
"sec_num": "5.2"
},
{
"text": "The effectiveness obtained when using the word embedding features alone (group D) is comparable to that observed when using the baseline semantic features (group C, Table 4). Group D includes 8 clustering features with window sizes 2 and 5. When using features of the three words preceding and following the target word with 1024 clusters (groups G and H), higher effectiveness is observed, irrespective of the corpus (apart from WK).",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Analysis of Word Embedding Features",
"sec_num": "5.2"
},
{
"text": "Further improvements are obtained when clustering features are used in conjunction with the baseline features. The improvements in effectiveness observed when adding both D and contextual word embedding clustering features (G and H) are statistically significant compared to feature groups ABC. These results confirm those found in previous work that explored the use of word embeddings to improve effectiveness in information extraction tasks, e.g., Tang et al. (2015).",
"cite_spans": [
{
"start": 451,
"end": 469,
"text": "Tang et al. (2015)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Word Embedding Features",
"sec_num": "5.2"
},
{
"text": "Note that we did study the effectiveness of using feature groups G and H with different numbers of clusters (i.e., 128, 256, 512 and 1024); the highest effectiveness was achieved with 1024 clusters. Similarly, we also experimented with different settings of the word embeddings' window size and dimensionality; the results of these experiments are not included in this paper for brevity. The outcome of these trials was that embeddings with window size 5 usually perform better than window size 2, though not significantly; however, the highest effectiveness is achieved when both sizes 2 and 5 are used. We also observed no significant differences between the effectiveness of learnt models using embeddings generated with 300 dimensions as opposed to 100. However, larger embeddings are computationally more costly than smaller ones, both in terms of computation time and memory. Therefore, in this paper, all results were produced using embeddings of dimension 100.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Word Embedding Features",
"sec_num": "5.2"
},
{
"text": "We also study the effect of sequence features on concept extraction to answer our RQ1. For this we select the best combination of baseline and word embedding features (ABCDGH) and measure the effectiveness of adding sequence features (groups J, K, L and M). Results are reported in Table 6.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 249,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis of Sequence Features",
"sec_num": "5.3"
},
{
"text": "The use of either feature groups J, K, L, M alone provide results that are comparable to the baseline semantic feature (C) or the embedding features (D), but are less effective than the use of the previous combination of features (ABCDGH) . Adding sentence features J and K separately to the remaining feature groups shows mixed results with no significant changes compared to ABCDGH. Specifically, feature group J provides small improvements across different corpora, while insignificant decrease is observed on C1 and PM with feature group K. Similar results are obtained with L and M (not reported).",
"cite_spans": [
{
"start": 230,
"end": 238,
"text": "(ABCDGH)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Sequence Features",
"sec_num": "5.3"
},
{
"text": "However, when we combine all sentence features together (ABCDGHJK) we observe small improvements across all corpora except WK. This suggests that the results are somewhat sensitive to variation in the corpora used to learn word embeddings and sequence representations -we explore this further in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Sequence Features",
"sec_num": "5.3"
},
{
"text": "When the phrase features are added to word embedding and sentence features, small improvements are observed both over word embeddings (ABCDGH) and word embeddings with sentence features (ABCDGHJK).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Sequence Features",
"sec_num": "5.3"
},
{
"text": "In summary, sequence features provide small, additional improvements over word embedding features in the task of clinical concept extraction (when clinical and biomedical corpora are used to learn sequence representations). Given the differences between word embeddings, sentence features and phrase features, the results suggest that perhaps phrase, rather than sentence level representations should be further explored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Sequence Features",
"sec_num": "5.3"
},
{
"text": "The results obtained when employing embedding features (D, G, H) and sequence features (J, K, L, M) are influenced by the corpora used to compute the embeddings (see Table 5 and 6). We therefore address our RQ2: how sensitive are the features to the training corpora?",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 173,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Training Corpora",
"sec_num": "5.4"
},
{
"text": "The empirical results suggest that using a small corpus such as i2b2 (C2) to build the representations does not provide the best effectiveness, despite the test set used for evaluation contains data that is highly comparable with that in C2 (this corpus contains only i2b2's train set). However, the highest effectiveness is achieved when augmenting C2 with data from clinical corpora like Medtrack and ShARe/CLEF (C1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Training Corpora",
"sec_num": "5.4"
},
{
"text": "The results when PubMed (PM) is used to derive the feature representations are generally lower but comparable to those obtained on the larger clinical corpus (C1) and always better than those obtained on the smaller clinical corpus (C2) and the Wikipedia data (WK). Learning word embedding and sequence features from Wikipedia, in combination with the baseline features (i.e., ABCDGH and ABCDGHJKLM), results in (small) losses of effectiveness compared to the use of baseline features only (ABC), despite Wikipedia being one of the largest corpora among those experimented with. We advance two hypotheses to explain this: (1) Wikipedia contains less of the tokens that appear in the i2b2 test set than any other corpora (poor coverage), (2) for the test tokens that do appear in Wikipedia, word embedding representations as good as those obtained from medical data cannot be constructed because of the sparsity of domain aligned data (sparse domain data). The first hypothesis is supported by Table 7 , where we report the number of target tokens contained in the i2b2 test dataset but not in each of the word embedding training corpora. The second hypothesis is supported by a manual analysis of the embeddings from WK and compared e.g. to those reported for C1 in Table 1 . Indeed, we observe that embeddings and clusters in C1 address words that are misspelled or abbreviated, a common finding in clinical text; while, the representations derived from WK miss this characteristic (see also Nothman et al. (2009) ). We also observe that the predominant word senses captured by many word vectors is different between medical corpora and Wikipedia, e.g., episodes: {bouts, emesis, recurrences, ...} in C1, while episodes: {sequels, airings, series, ...} in WK.",
"cite_spans": [
{
"start": 1493,
"end": 1514,
"text": "Nothman et al. (2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 993,
"end": 1000,
"text": "Table 7",
"ref_id": "TABREF6"
},
{
"start": 1266,
"end": 1273,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Analysis of Training Corpora",
"sec_num": "5.4"
},
{
"text": "These results can be summarised into the following observations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Training Corpora",
"sec_num": "5.4"
},
{
"text": "\u2022 C2 does not provide adequate coverage of the target test tokens because of the limited amount of data, despite its clinical nature;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Training Corpora",
"sec_num": "5.4"
},
{
"text": "\u2022 when using medical corpora, the amount of data, rather than its format or domain, is often more important for generating representa-tions conducive of competitive effectiveness; \u2022 data containing biomedical content rather than clinical content can be used in place of clinical data for producing the studied feature representations without experiencing considerable loss in effectiveness. This is particularly important because large clinical datasets are expensive to compile and are often a well guarded, sensitive data source; \u2022 if content, format and domain of the data used to derive these unsupervised features is too different from that of the target corpus requiring annotations, then the features are less likely to deliver effective concept extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Training Corpora",
"sec_num": "5.4"
},
{
"text": "This paper has investigated the use of unsupervised methods to generate semantic and lexical vectors, along with sequence features for improving clinical information extraction. Specifically, we studied the effectiveness of these features and their sensitivity to the corpus used to generate them. The empirical results have highlighted that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "1. word embeddings improve information extraction effectiveness over a wide set of baseline features; 2. sequence features improve results over both baseline features (significantly) and embeddings features (to a less remarkable extent); 3. the corpora used to generate the unsupervised features influence their effectiveness, and larger clinical or biomedical corpora are conducive of higher effectiveness than small clinical corpora or large generalist corpora. These observations may be of guidance to others. This study opens up a number of directions for future work. Other approaches to create lexical vectors exits, e.g., morpheme embeddings (Luong et al., 2013) , or convolutional neural nets applied at the character level (Zhang and LeCun, 2015) , and their effectiveness in this context is yet to be studied. Similarly, we only investigated an initial (but novel) approach to forming sequence representations for feature generation. Given the promise expressed by this approach, more analysis is required to reach firm conclusions about the effectiveness of sequence features (both sentence and phrase), including the investigation of alternative approaches for generating these feature groups.",
"cite_spans": [
{
"start": 649,
"end": 669,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF19"
},
{
"start": 732,
"end": 755,
"text": "(Zhang and LeCun, 2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Pre-processing involving lower-casing and substitution of matching regular expressions was performed.2 http://mbr.nlm.nih.gov/Download/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the following, when referring to using a feature group alone, we mean using that feature group, along with the target word string.4 But can be found as an online appendix at https:// github.com/ldevine/SeqLab.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "k-means++: The advantages of careful seeding",
"authors": [
{
"first": "David",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Sergei",
"middle": [],
"last": "Vassilvitskii",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms",
"volume": "",
"issue": "",
"pages": "1027--1035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Arthur and Sergei Vassilvitskii. 2007. k- means++: The advantages of careful seeding. In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, pages 1027- 1035. Society for Industrial and Applied Mathemat- ics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "",
"middle": [],
"last": "Peter F Brown",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Desouza",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Vincent J Della",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "Jenifer C",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467-479.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Locality-sensitive hashing for protein classification",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Buckingham",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Geva",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kelly",
"suffix": ""
}
],
"year": 2014,
"venue": "Conferences in Research and Practice in Information Technology",
"volume": "158",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Buckingham, James M Hogan, Shlomo Geva, and Wayne Kelly. 2014. Locality-sensitive hashing for protein classification. In Conferences in Research and Practice in Information Technology, volume 158. Australian Computer Society, Inc.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Efficient large-scale sequence comparison by locality-sensitive hashing",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Buhler",
"suffix": ""
}
],
"year": 2001,
"venue": "Bioinformatics",
"volume": "17",
"issue": "5",
"pages": "419--428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Buhler. 2001. Efficient large-scale sequence comparison by locality-sensitive hashing. Bioinfor- matics, 17(5):419-428.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Re- search, 12:2493-2537.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Machinelearned solutions for three stages of clinical information extraction: the state of the art at i2b2 2010",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Berry De Bruijn",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of the American Medical Informatics Association",
"volume": "18",
"issue": "5",
"pages": "557--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berry de Bruijn, Colin Cherry, Svetlana Kiritchenko, Joel Martin, and Xiaodan Zhu. 2011. Machine- learned solutions for three stages of clinical infor- mation extraction: the state of the art at i2b2 2010. Journal of the American Medical Informatics Asso- ciation, 18(5):557-562.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Overview of the inex 2010 xml mining track: Clustering and classification of xml documents",
"authors": [
{
"first": "",
"middle": [],
"last": "Christopher M De",
"suffix": ""
},
{
"first": "Richi",
"middle": [],
"last": "Vries",
"suffix": ""
},
{
"first": "Sangeetha",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Kutty",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Geva",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tagarelli",
"suffix": ""
}
],
"year": 2011,
"venue": "Comparative evaluation of focused retrieval",
"volume": "",
"issue": "",
"pages": "363--376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher M De Vries, Richi Nayak, Sangeetha Kutty, Shlomo Geva, and Andrea Tagarelli. 2011. Overview of the inex 2010 xml mining track: Clus- tering and classification of xml documents. In Com- parative evaluation of focused retrieval, pages 363- 376. Springer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "What can natural language processing do for clinical decision support",
"authors": [
{
"first": "Dina",
"middle": [],
"last": "Demner-Fushman",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"W"
],
"last": "Chapman",
"suffix": ""
},
{
"first": "Clement J",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of biomedical informatics",
"volume": "42",
"issue": "5",
"pages": "760--772",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dina Demner-Fushman, Wendy W Chapman, and Clement J McDonald. 2009. What can natural lan- guage processing do for clinical decision support? Journal of biomedical informatics, 42(5):760-772.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Approximate statistical tests for comparing supervised classification learning algorithms",
"authors": [
{
"first": "G",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dietterich",
"suffix": ""
}
],
"year": 1998,
"venue": "Neural computation",
"volume": "10",
"issue": "7",
"pages": "1895--1923",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas G Dietterich. 1998. Approximate statistical tests for comparing supervised classification learn- ing algorithms. Neural computation, 10(7):1895- 1923.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Holographic string encoding",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Hannagan",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Christophe",
"suffix": ""
}
],
"year": 2011,
"venue": "Cognitive Science",
"volume": "35",
"issue": "1",
"pages": "79--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Hannagan, Emmanuel Dupoux, and Anne Christophe. 2011. Holographic string encoding. Cognitive Science, 35(1):79-118.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Enhancing clinical concept extraction with distributional semantics",
"authors": [
{
"first": "Siddhartha",
"middle": [],
"last": "Jonnalagadda",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of biomedical informatics",
"volume": "45",
"issue": "1",
"pages": "129--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siddhartha Jonnalagadda, Trevor Cohen, Stephen Wu, and Graciela Gonzalez. 2012. Enhancing clini- cal concept extraction with distributional semantics. Journal of biomedical informatics, 45(1):129-140.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A convolutional neural network for modelling sentences",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1404.2188"
]
},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural net- work for modelling sentences. arXiv preprint arXiv:1404.2188.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Random indexing of text samples for latent semantic analysis",
"authors": [
{
"first": "Pentti",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Kristofersson",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Holst",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 22nd annual conference of the cognitive science society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pentti Kanerva, Jan Kristofersson, and Anders Holst. 2000. Random indexing of text samples for latent semantic analysis. In Proceedings of the 22nd an- nual conference of the cognitive science society, vol- ume 1036.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Chemical entity extraction using crf and an ensemble of extractors",
"authors": [
{
"first": "Madian",
"middle": [],
"last": "Khabsa",
"suffix": ""
},
{
"first": "C Lee",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 2015,
"venue": "J Cheminform",
"volume": "7",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Madian Khabsa and C Lee Giles. 2015. Chemical en- tity extraction using crf and an ensemble of extrac- tors. J Cheminform, 7(Suppl 1):S12.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Factors influencing robustness and effectiveness of conditional random fields in active learning frameworks",
"authors": [
{
"first": "Mahnoosh",
"middle": [],
"last": "Kholghi",
"suffix": ""
},
{
"first": "Laurianne",
"middle": [],
"last": "Sitbon",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Zuccon",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 12th Australasian Data Mining Conference, AusDM'14",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahnoosh Kholghi, Laurianne Sitbon, Guido Zuccon, and Anthony Nguyen. 2014. Factors influencing robustness and effectiveness of conditional random fields in active learning frameworks. In Proceedings of the 12th Australasian Data Mining Conference, AusDM'14. Australian Computer Society.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "External knowledge and query strategies in active learning: A study in clinical information extraction",
"authors": [
{
"first": "Mahnoosh",
"middle": [],
"last": "Kholghi",
"suffix": ""
},
{
"first": "Laurianne",
"middle": [],
"last": "Sitbon",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Zuccon",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24rd ACM International Conference on Conference on Information and Knowledge Management, CIKM '15",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahnoosh Kholghi, Laurianne Sitbon, Guido Zuccon, and Anthony Nguyen. 2015. External knowledge and query strategies in active learning: A study in clinical information extraction. In Proceedings of the 24rd ACM International Conference on Confer- ence on Information and Knowledge Management, CIKM '15, New York, NY, USA. ACM.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic ICD-10 classification of cancers from free-text death certificates",
"authors": [
{
"first": "Bevan",
"middle": [],
"last": "Koopman",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Zuccon",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bergheim",
"suffix": ""
},
{
"first": "Narelle",
"middle": [],
"last": "Grayson",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Medical Informatics",
"volume": "84",
"issue": "11",
"pages": "956--965",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bevan Koopman, Guido Zuccon, Anthony Nguyen, Anton Bergheim, and Narelle Grayson. 2015. Auto- matic ICD-10 classification of cancers from free-text death certificates. International Journal of Medical Informatics, 84(11):956 -965.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "D",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Fernando Cn",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth In- ternational Conference on Machine Learning, pages 282-289. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1405.4053"
]
},
"num": null,
"urls": [],
"raw_text": "Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Richard Socher, and Christo- pher D Manning. 2013. Better word representa- tions with recursive neural networks for morphol- ogy. CoNLL-2013, 104.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Mallet: A machine learning for language toolkit",
"authors": [
{
"first": "Andrew Kachites",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kachites McCallum. 2002. Mal- let: A machine learning for language toolkit. http://mallet.cs.umass.edu.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Hierarchical probabilistic neural network language model",
"authors": [
{
"first": "Frederic",
"middle": [],
"last": "Morin",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the international workshop on artificial intelligence and statistics",
"volume": "",
"issue": "",
"pages": "246--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederic Morin and Yoshua Bengio. 2005. Hierarchi- cal probabilistic neural network language model. In Proceedings of the international workshop on artifi- cial intelligence and statistics, pages 246-252.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features",
"authors": [
{
"first": "Azadeh",
"middle": [],
"last": "Nikfarjam",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Karen",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Ginn",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gonzalez",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Azadeh Nikfarjam, Abeed Sarker, Karen O'Connor, Rachel Ginn, and Graciela Gonzalez. 2015. Phar- macovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features. Journal of the",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Analysing wikipedia and gold-standard corpora for ner training",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Nothman",
"suffix": ""
},
{
"first": "Tara",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "James R",
"middle": [],
"last": "Curran",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "612--620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Nothman, Tara Murphy, and James R Curran. 2009. Analysing wikipedia and gold-standard cor- pora for ner training. In Proceedings of the 12th Conference of the European Chapter of the Associa- tion for Computational Linguistics, pages 612-620. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Big data small data, in domain out-of domain",
"authors": [
{
"first": "Lizhen",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Gabriela",
"middle": [],
"last": "Ferraro",
"suffix": ""
},
{
"first": "Liyuan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2015,
"venue": "The impact of word representation on sequence labelling tasks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1504.05319"
]
},
"num": null,
"urls": [],
"raw_text": "Lizhen Qu, Gabriela Ferraro, Liyuan Zhou, Wei- wei Hou, Nathan Schneider, and Timothy Baldwin. 2015. Big data small data, in domain out-of domain, known word unknown word: The impact of word representation on sequence labelling tasks. arXiv preprint arXiv:1504.05319.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Overview of the share/clef ehealth evaluation lab 2013",
"authors": [
{
"first": "Hanna",
"middle": [],
"last": "Suominen",
"suffix": ""
},
{
"first": "Sanna",
"middle": [],
"last": "Salanter\u00e4",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Velupillai",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"W"
],
"last": "Chapman",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
},
{
"first": "Noemie",
"middle": [],
"last": "Elhadad",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Brett",
"suffix": ""
},
{
"first": "Danielle",
"middle": [
"L"
],
"last": "South",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mowery",
"suffix": ""
},
{
"first": "J",
"middle": [
"F"
],
"last": "Gareth",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 2013,
"venue": "Information Access Evaluation. Multilinguality, Multimodality, and Visualization",
"volume": "",
"issue": "",
"pages": "212--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanna Suominen, Sanna Salanter\u00e4, Sumithra Velupil- lai, Wendy W Chapman, Guergana Savova, Noemie Elhadad, Sameer Pradhan, Brett R South, Danielle L Mowery, Gareth JF Jones, et al. 2013. Overview of the share/clef ehealth evaluation lab 2013. In Information Access Evaluation. Multilinguality, Multimodality, and Visualization, pages 212-231. Springer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Recognizing and encoding discorder concepts in clinical text using machine learning and vector space model",
"authors": [
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"C"
],
"last": "Denny",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2013,
"venue": "Workshop of ShARe/CLEF eHealth Evaluation Lab",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Buzhou Tang, Yonghui Wu, Min Jiang, Joshua C Denny, and Hua Xu. 2013. Recognizing and encod- ing discorder concepts in clinical text using machine learning and vector space model. In Workshop of ShARe/CLEF eHealth Evaluation Lab 2013.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Evaluating word representation features in biomedical named entity recognition tasks",
"authors": [
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Hongxin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Buzhou Tang, Hongxin Cao, Xiaolong Wang, Qingcai Chen, and Hua Xu. 2014. Evaluating word repre- sentation features in biomedical named entity recog- nition tasks. BioMed research international, 2014.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A comparison of conditional random fields and structured support vector machines for chemical entity recognition in biomedical literature",
"authors": [
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Yudong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yaoyun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Jingqi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of cheminformatics",
"volume": "7",
"issue": "Suppl 1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Buzhou Tang, Yudong Feng, Xiaolong Wang, Yonghui Wu, Yaoyun Zhang, Min Jiang, Jingqi Wang, and Hua Xu. 2015. A comparison of conditional ran- dom fields and structured support vector machines for chemical entity recognition in biomedical litera- ture. Journal of cheminformatics, 7(supplement 1).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Feature-rich part-ofspeech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D Man- ning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology- Volume 1, pages 173-180. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Word representations: a simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th annual meeting of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for compu- tational linguistics, pages 384-394. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "2010 i2b2/va challenge on concepts, assertions, and relations in clinical text",
"authors": [
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Brett",
"middle": [
"R"
],
"last": "South",
"suffix": ""
},
{
"first": "Shuying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Scott",
"middle": [
"L"
],
"last": "DuVall",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of the American Medical Informatics Association",
"volume": "18",
"issue": "5",
"pages": "552--556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Asso- ciation, 18(5):552-556.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Overview of the trec 2011 medical records track",
"authors": [
{
"first": "E",
"middle": [],
"last": "Voorhees",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tong",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of TREC",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E Voorhees and R Tong. 2011. Overview of the trec 2011 medical records track. In Proceedings of TREC, volume 4.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Reasoning with vectors: A continuous model for fast robust inference",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2014,
"venue": "Logic Journal of IGPL",
"volume": "",
"issue": "",
"pages": "141--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Widdows and Trevor Cohen. 2014. Reason- ing with vectors: A continuous model for fast robust inference. Logic Journal of IGPL, pages 141-173.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Text understanding from scratch",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1502.01710"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang and Yann LeCun. 2015. Text understand- ing from scratch. arXiv preprint arXiv:1502.01710.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Figure 1.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Feature generation process and their use in concept extraction.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "C1: (Clinical) composed of the concatenation of the i2b2 train set (Uzuner et al., 2011), MedTrack (Voorhees and Tong, 2011), and the CLEF 2013 train and test sets (Suominen et al., 2013). C2: (Clinical) the i2b2 train set (Uzuner et al., 2011). PM: (Biomedical) PubMed, as in the 2012 dump. WK: (Generalist) Wikipedia, as in the 2009 dump (De Vries et al., 2011)",
"uris": null
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>and 2 show</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"text": "Example of word embedding clusters.",
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"text": "Example of sentence clusters.",
"content": "<table><tr><td>C 1 Abs Eos , auto 0.1 X10E+09/L</td></tr><tr><td>ABS Lymphs 2.4 X10E+09 / L</td></tr><tr><td>ABS Monocytes 1.3 X10E+09 / L</td></tr><tr><td>Abs Eos , auto 0.2 X10E+09 / L</td></tr><tr><td>C 2 5. Dilaudid 4 mg Tablet Sig : ...</td></tr><tr><td>7. Clonidine 0.2 mg Tablet Sig : ...</td></tr><tr><td>9. Nifedipine 30 mg Tablet Sustained ...</td></tr><tr><td>10. Pantoprazole 40 mg Tablet ...</td></tr><tr><td>C 3 Right proximal humeral fracture status ...</td></tr><tr><td>Bilateral renal artery stenosis status ...</td></tr><tr><td>status post bilateral knee replacement ...</td></tr><tr><td>sets, sample 5 subsets of 30,000 sentences, split</td></tr><tr><td>each subset into train and test, and perform a</td></tr><tr><td>paired t-test for these 10 subsets.</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"text": "Training corpora for word embeddings.",
"content": "<table><tr><td>Corpus</td><td colspan=\"2\">Vocab Num. Tokens</td></tr><tr><td>C1</td><td>104,743</td><td>\u2248 29.5 M</td></tr><tr><td>C2</td><td>11,727</td><td>\u2248 221.1 K</td></tr><tr><td>PM</td><td>163,744</td><td>\u2248 1.8 B</td></tr><tr><td>WK</td><td>122,750</td><td>\u2248 415.7 M</td></tr><tr><td colspan=\"3\">D: Skip-gram clustering features with window</td></tr><tr><td colspan=\"3\">size 2 and 5 and 128, 256, 512, 1024 clusters</td></tr><tr><td colspan=\"3\">G: Window of 3 previous and next Skip-gram</td></tr><tr><td colspan=\"3\">clustering feature (window size 2) with 1024</td></tr><tr><td>clusters</td><td/><td/></tr><tr><td colspan=\"3\">H: Window of 3 previous and next Skip-gram</td></tr><tr><td colspan=\"3\">clustering feature (window size 5) with 1024</td></tr><tr><td>clusters</td><td/><td/></tr><tr><td colspan=\"3\">J: Sentence features with 1024 clusters</td></tr><tr><td colspan=\"3\">K: Sentence features with 256 clusters</td></tr><tr><td colspan=\"3\">L: Bi-gram phrase features with 512 clusters</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"text": "Results for baseline features. Statistically significant improvements (p<0.05) for F1 when compared with Word are indicated by *.",
"content": "<table><tr><td colspan=\"5\">Feature Set Precision Recall F1</td></tr><tr><td>Word</td><td/><td>0.6571</td><td colspan=\"2\">0.6011 0.6279</td></tr><tr><td>A</td><td/><td>0.8404</td><td colspan=\"2\">0.8031 0.8213</td></tr><tr><td>B</td><td/><td>0.6167</td><td colspan=\"2\">0.6006 0.6085</td></tr><tr><td>C</td><td/><td>0.7691</td><td colspan=\"2\">0.6726 0.7192</td></tr><tr><td>BC</td><td/><td>0.7269</td><td>0.712</td><td>0.7194</td></tr><tr><td>AB</td><td/><td>0.8368</td><td colspan=\"2\">0.8038 0.8200</td></tr><tr><td>AC</td><td/><td>0.8378</td><td colspan=\"2\">0.8059 0.8216</td></tr><tr><td>ABC</td><td/><td>0.8409</td><td colspan=\"2\">0.8066 0.8234*</td></tr><tr><td colspan=\"5\">Table 5: Results for word embedding features.</td></tr><tr><td colspan=\"5\">The highest effectiveness obtained by each feature</td></tr><tr><td colspan=\"5\">group is highlighted in bold. Statistically signif-</td></tr><tr><td colspan=\"5\">icant improvements (p&lt;0.05) for F1 when com-</td></tr><tr><td colspan=\"4\">pared with ABC are indicated by *.</td></tr><tr><td colspan=\"3\">Features Corp Prec.</td><td colspan=\"2\">Recall F1</td></tr><tr><td/><td>C1</td><td colspan=\"3\">0.7758 0.7392 0.7571</td></tr><tr><td>D</td><td colspan=\"4\">C2 PM 0.7776 0.7309 0.7535 0.7612 0.6926 0.7252</td></tr><tr><td/><td colspan=\"2\">WK 0.733</td><td colspan=\"2\">0.6534 0.6909</td></tr><tr><td/><td>C1</td><td colspan=\"3\">0.7868 0.7469 0.7663</td></tr><tr><td>GH</td><td colspan=\"4\">C2 PM 0.8005 0.7466 0.7726 0.7847 0.7001 0.7400</td></tr><tr><td/><td colspan=\"4\">WK 0.7106 0.6043 0.6532</td></tr><tr><td/><td>C1</td><td colspan=\"3\">0.8432 0.8123 0.8275</td></tr><tr><td>ABCD</td><td colspan=\"4\">C2 PM 0.8377 0.8126 0.8249 0.8435 0.8006 0.8215</td></tr><tr><td/><td colspan=\"4\">WK 0.8409 0.8108 0.8256</td></tr><tr><td/><td>C1</td><td colspan=\"3\">0.8509 0.8118 0.8309*</td></tr><tr><td>ABCD</td><td>C2</td><td colspan=\"3\">0.8386 0.8001 0.8189</td></tr><tr><td>GH</td><td 
colspan=\"4\">PM 0.8484 0.8088 0.8281</td></tr><tr><td/><td colspan=\"4\">WK 0.8397 0.8063 0.8226</td></tr></table>"
},
"TABREF5": {
"type_str": "table",
"html": null,
"num": null,
"text": "Results for sequence features. The highest effectiveness obtained by each feature group is highlighted in bold. Statistically significant improvements (p<0.05) for F1 when compared with ABC are indicated by *.",
"content": "<table><tr><td colspan=\"3\">Features Corp Prec.</td><td>Recall F1</td></tr><tr><td/><td>C1</td><td colspan=\"2\">0.6832 0.6693 0.6762</td></tr><tr><td>J</td><td colspan=\"3\">C2 PM 0.7408 0.6701 0.7037 0.5926 0.6036 0.7012</td></tr><tr><td/><td colspan=\"2\">WK 0.733</td><td>0.6534 0.6909</td></tr><tr><td/><td>C1</td><td colspan=\"2\">0.7646 0.6747 0.7169</td></tr><tr><td>K</td><td colspan=\"3\">C2 PM 0.735 0.7241 0.6639 0.6927 0.6641 0.6978</td></tr><tr><td/><td colspan=\"3\">WK 0.7237 0.6609 0.6909</td></tr><tr><td/><td>C1</td><td colspan=\"2\">0.8493 0.8136 0.8311</td></tr><tr><td>ABCD</td><td>C2</td><td colspan=\"2\">0.8463 0.7968 0.8208</td></tr><tr><td>GHJ</td><td colspan=\"3\">PM 0.8475 0.8134 0.8301</td></tr><tr><td/><td colspan=\"3\">WK 0.8388 0.8087 0.8235</td></tr><tr><td/><td>C1</td><td colspan=\"2\">0.8473 0.8066 0.8265</td></tr><tr><td>ABCD</td><td>C2</td><td colspan=\"2\">0.8494 0.7941 0.8208</td></tr><tr><td>GHK</td><td colspan=\"3\">PM 0.8423 0.8061 0.8238</td></tr><tr><td/><td colspan=\"3\">WK 0.8399 0.8103 0.8249</td></tr><tr><td/><td>C1</td><td colspan=\"2\">0.8488 0.8152 0.8316*</td></tr><tr><td>ABCD</td><td>C2</td><td colspan=\"2\">0.8491 0.7959 0.8216</td></tr><tr><td>GHJK</td><td colspan=\"3\">PM 0.8472 0.8151 0.8308</td></tr><tr><td/><td colspan=\"3\">WK 0.8364 0.8034 0.8195</td></tr><tr><td/><td>C1</td><td colspan=\"2\">0.7601 0.6763 0.7157</td></tr><tr><td>L</td><td colspan=\"3\">C2 PM 0.7624 0.6720 0.7144 0.7311 0.6014 0.6599</td></tr><tr><td/><td colspan=\"3\">WK 0.7619 0.6646 0.7099</td></tr><tr><td/><td>C1</td><td colspan=\"2\">0.7584 0.6761 0.7148</td></tr><tr><td>M</td><td colspan=\"3\">C2 PM 0.7602 0.6725 0.7137 0.6456 0.6521 0.6488</td></tr><tr><td/><td colspan=\"3\">WK 0.6588 0.6424 0.6505</td></tr><tr><td/><td>C1</td><td colspan=\"2\">0.8484 0.8103 0.8289</td></tr><tr><td>ABCD</td><td>C2</td><td colspan=\"2\">0.8460 0.7931 0.8187</td></tr><tr><td>GHJKL</td><td colspan=\"3\">PM 0.8444 0.8147 
0.8293*</td></tr><tr><td/><td colspan=\"3\">WK 0.8388 0.8024 0.8202</td></tr><tr><td/><td>C1</td><td colspan=\"2\">0.8505 0.8144 0.8320*</td></tr><tr><td>ABCD</td><td>C2</td><td colspan=\"2\">0.8457 0.7967 0.8205</td></tr><tr><td>GHJKM</td><td colspan=\"3\">PM 0.8468 0.8160 0.8311</td></tr><tr><td/><td colspan=\"3\">WK 0.8306 0.8060 0.8181</td></tr><tr><td/><td>C1</td><td colspan=\"2\">0.8504 0.8116 0.8305*</td></tr><tr><td>ABCD</td><td>C2</td><td colspan=\"2\">0.8465 0.7959 0.8204</td></tr><tr><td>GHJKLM</td><td colspan=\"3\">PM 0.8477 0.8152 0.8311*</td></tr><tr><td/><td colspan=\"3\">WK 0.8391 0.8028 0.8205</td></tr><tr><td colspan=\"4\">J, K (sentence) and L, M (phrase)). Results are re-</td></tr><tr><td>ported in</td><td/><td/></tr></table>"
},
"TABREF6": {
"type_str": "table",
"html": null,
"num": null,
"text": "Number of target tokens contained in the i2b2 test set but not in each of the word embedding training corpora.Corp # Miss. Tok. Corp # Miss. Tok.",
"content": "<table><tr><td>C1</td><td>196</td><td>PM</td><td>549</td></tr><tr><td>C2</td><td>890</td><td>WK</td><td>1152</td></tr></table>"
}
}
}
}