{
"paper_id": "U17-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:11:28.924368Z"
},
"title": "Leveraging linguistic resources for improving neural text classification",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {}
},
"email": "ming.m.liu@monash.edu"
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {}
},
"email": "gholamreza.haffari@monash.edu"
},
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {}
},
"email": "wray.buntine@monash.edu"
},
{
"first": "Michelle",
"middle": [
"R"
],
"last": "Ananda-Rajah",
"suffix": "",
"affiliation": {
"laboratory": "Alfred Health and Monash University",
"institution": "",
"location": {}
},
"email": "michelle.ananda-rajah@monash.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a deep linguistic attentional framework which incorporates word level concept information into neural classification models. While learning neural classification models often requires a large amount of labelled data, linguistic concept information can be obtained from external knowledge, such as pre-trained word embeddings, WordNet for common text and MetaMap for biomedical text. We explore two different ways of incorporating word level concept annotations, and show that leveraging concept annotations can boost the model performance and reduce the need for large amounts of labelled data. Experiments on various data sets validate the effectiveness of the proposed method.",
"pdf_parse": {
"paper_id": "U17-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a deep linguistic attentional framework which incorporates word level concept information into neural classification models. While learning neural classification models often requires a large amount of labelled data, linguistic concept information can be obtained from external knowledge, such as pre-trained word embeddings, WordNet for common text and MetaMap for biomedical text. We explore two different ways of incorporating word level concept annotations, and show that leveraging concept annotations can boost the model performance and reduce the need for large amounts of labelled data. Experiments on various data sets validate the effectiveness of the proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text classification is an important task in natural language processing, with applications such as sentiment analysis, information retrieval, web page ranking and document classification (Pang et al., 2008). Recently, deep neural models have been widely used in this area due to their flexible architectures and strong performance. While widely used, these models require large amounts of labelled data and long training times.",
"cite_spans": [
{
"start": 187,
"end": 206,
"text": "(Pang et al., 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The core idea of neural text classification models is that text signals are fed into composition and activation functions via deep neural networks, and then a softmax classifier generates the final label as a probability distribution. Unlike standard n-gram models, word representation (Mikolov et al., 2013) is distributed and manual features are not usually necessary in deep neural models.",
"cite_spans": [
{
"start": 286,
"end": 307,
"text": "(Mikolov et al., 2013",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Though promising, most current neural text classification models still lack the ability to model linguistic information of the language, especially in domains where annotations are time-consuming and expensive, such as biomedical text. In this work, we use prior knowledge from pre-trained word embeddings or knowledge bases, and explore different ways of incorporating this prior knowledge into existing deep neural classification models. Our model builds on a simple neural bag-of-words model, and works in 2 steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. create mappings from a sequence of word tokens into concept tokens (based on the given pre-trained word embeddings or knowledge bases),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. combine the embeddings of both word and concept tokens and pass the resulting embedding through a deep feed-forward classification model to make the final prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The motivation of our work is to incorporate extra knowledge from pre-trained word embeddings or knowledge bases, such as WordNet for common text and MetaMap for biomedical text. Our main contributions are: (1) creating linguistically-related concepts of words from external knowledge bases;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) incorporating the concept information through what we call either direct or gated mappings. We show that leveraging concept annotations can boost the model performance and reduce the need for large amounts of labelled data, and that the concept information is incorporated more effectively through gated mapping. The remainder of this paper is organized as follows. Section 2 reviews the related work. Section 3 introduces the architecture for incorporating concept information. Data sets and implementation details are described in Section 4. Section 5 demonstrates the effectiveness of our method with experiments. Finally, Section 6 offers concluding remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section describes some related work on deep neural models for text classification and several common knowledge bases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Composition functions play a key role in many deep neural models. Generally, composition functions fall into two categories: unordered and syntactic. Unordered functions regard input text as bags of word embeddings (Iyyer et al., 2015) , while syntactic models take word order and sentence structure into account (Mikolov et al., 2010; Socher et al., 2013b) . Previously published results have shown that syntactic models have outperformed unordered ones on many tasks. RecNN-based approaches (Socher et al., 2011, 2013a) rely on parsing trees to construct the semantic function, in which each leaf node in the tree corresponds to a word. Recursive neural models then compute parent vectors in a bottom up fashion using different types of compositionality functions. While parsing is the first step, RecNNs are restricted to modelling short text like sentences rather than documents. Recurrent neural networks (RNNs) (Mikolov et al., 2010) are another natural choice to model text due to their capability of processing arbitrary-length sequences. Unfortunately, a problem with RNNs is that the transition function inside can cause the gradient vector to grow or decay exponentially over long sequences.",
"cite_spans": [
{
"start": 215,
"end": 235,
"text": "(Iyyer et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 313,
"end": 335,
"text": "(Mikolov et al., 2010;",
"ref_id": "BIBREF14"
},
{
"start": 336,
"end": 357,
"text": "Socher et al., 2013b)",
"ref_id": "BIBREF22"
},
{
"start": 493,
"end": 513,
"text": "(Socher et al., 2011",
"ref_id": "BIBREF21"
},
{
"start": 513,
"end": 521,
"text": ", 2013a)",
"ref_id": "BIBREF20"
},
{
"start": 917,
"end": 939,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification with deep neural models",
"sec_num": "2.1"
},
{
"text": "The LSTM architecture (Hochreiter and Schmidhuber, 1997) addresses this problem by introducing a memory cell that is able to preserve state over a long period of time. Tree-LSTM (Tai et al., 2015) is an extension of standard LSTM in that Tree-LSTM computes its hidden state from the current input and the hidden states of arbitrarily many child units. Convolutional networks (Kalchbrenner et al., 2014) also model word order in local windows and have achieved performance comparable or better than that of RecNNs or RNNs on many tasks. While models that use syntactic functions need large training time and data, unordered functions allow a tradeoff between training time and model complexity. Unlike some of the previous syntactic approaches, paragraph vector (Le and Mikolov, 2014) is capable of constructing representations of input sequences of variable length. It does not require task-specific tuning of the word weighting function nor does it rely on the parse trees. A comparable unordered method is also used in DANs (Iyyer et al., 2015), which averages the embeddings for all of a document's tokens and feeds that average through multiple layers. They show that nonlinearly transforming the input is more important than tailoring a network to incorporate word order and syntax.",
"cite_spans": [
{
"start": 22,
"end": 56,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 178,
"end": 196,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF25"
},
{
"start": 375,
"end": 402,
"text": "(Kalchbrenner et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 761,
"end": 783,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF10"
},
{
"start": 1026,
"end": 1046,
"text": "(Iyyer et al., 2015)",
"ref_id": "BIBREF7"
],
"ref_spans": [],
"eq_spans": [],
"section": "Text classification with deep neural models",
"sec_num": "2.1"
},
{
"text": "Besides distributed word representation, there exist many large-scale knowledge bases (KBs) in general or specific domains that can be used as prior information for text classification models. WordNet (Miller, 1995) is the most widely used lexical reference system, which organizes nouns, verbs, adjectives and adverbs into synonym sets (synsets). Synsets are interlinked by a number of conceptual-semantic and lexical relations such as hypernymy, synonymy and meronymy. WordNet has already been used to reduce vector dimensionality in many text clustering tasks, and its lexical categories have been shown to be quite useful. It includes a core ontology and a lexicon. The latest version is WordNet 3.0, which consists of 155,287 lexical entries and 117,659 synsets.",
"cite_spans": [
{
"start": 201,
"end": 215,
"text": "(Miller, 1995)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting linguistic resources",
"sec_num": "2.2"
},
{
"text": "In the medical domain, some domain knowledge that may be useful to classifiers is also available in the form of existing knowledge sources (Baud et al., 1996) . The UMLS (Bodenreider, 2004) knowledge sources provide huge amounts of linguistic information readily available to the medical community. SNOMED (Spackman et al., 1997) is today the largest source of medical vocabulary (132,643 entries) organised in a systematic way. The GALEN (Rector, 1995) consortium has worked together since 1992 and has produced, using the GRAIL representation language, a general model of medicine with nearly 6,000 concepts. The MED (Medical Entities Dictionary) (Cimino, 2000) is a large repository of medical concepts that are drawn from a variety of sources either developed or used at the New York Presbyterian Hospital, including the UMLS, ICD9-CM and LOINC. Currently numbering over 100,000, these concepts correspond to coded terms used in systems and applications throughout both medical centers (Columbia-Presbyterian and New York-Cornell). MetaMap (Aronson, 2001) was developed to map biomedical free text to concepts in biomedical knowledge sources, where concepts are classified by semantic type and linked by both hierarchical and non-hierarchical relationships. In spite of the fact that KBs play an important role for biomedical NLP tasks, to the best of our knowledge, there is little work on integrating KBs with word embedding models for biomedical NLP tasks.",
"cite_spans": [
{
"start": 139,
"end": 158,
"text": "(Baud et al., 1996)",
"ref_id": "BIBREF1"
},
{
"start": 170,
"end": 189,
"text": "(Bodenreider, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 306,
"end": 329,
"text": "(Spackman et al., 1997)",
"ref_id": "BIBREF23"
},
{
"start": 439,
"end": 453,
"text": "(Rector, 1995)",
"ref_id": "BIBREF19"
},
{
"start": 1044,
"end": 1058,
"text": "(Aronson, 2001",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting linguistic resources",
"sec_num": "2.2"
},
{
"text": "In this paper, we propose models which incorporate concept information from external knowledge sources, such as word clusters derived from pre-trained word embeddings or different knowledge bases. This prior concept knowledge is leveraged and fed into a neural bag of words model through a weighted composition. We explore two different ways of incorporation and show that our model can achieve near state-of-the-art performance on different text classification tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting linguistic resources",
"sec_num": "2.2"
},
{
"text": "In this paper, we investigate the feasibility of incorporating prior knowledge from pre-trained word embeddings and various knowledge bases into a traditional neural classification model. As an initial task, we aim to find out what kinds of knowledge bases can be used for different domains and how the model can benefit from the additional common and domain-specific concept information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3"
},
{
"text": "Assume that we have L training examples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3"
},
{
"text": "{(X_d, y_d)}_{d=1}^{L}, where each X_d is composed of a word sequence {x_i^d}_{i=1}^{|X_d|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3"
},
{
"text": ". Suppose we have M knowledge bases C^(j), j \u2208 {1, 2, 3, ..., M}, and define a mapping V \u2192 C^(j) from a word into a specific concept or topic, i.e. C^(j) = {c^j_1, ..., c^j_K}, where x \u2208 V and c^j_k \u2208 C^(j). With each knowledge base C^(j), similar words are gathered in the same group with the same topic or concept. For instance, given the sentence \"Since the previous examination much of the ground-glass opacity identified has resolved\", we could have concept annotations based on different lexical resources. The question is how to incorporate the word level concept information into existing neural classification models. In the following, we first describe a simple and effective neural bag-of-words model, and explore two different ways of incorporating linguistic concept information into the model. We also describe the sources which can provide different concept annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3"
},
{
"text": "The Neural bag-of-words model (NBOW) differs from the traditional bag-of-words model in that each word in a sequence is represented by a distributed rather than a one-hot representation. With the above assumptions, the model maps an input document",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural bag of words model",
"sec_num": "3.1"
},
{
"text": "{x_i}_{i=1}^{|X_d|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural bag of words model",
"sec_num": "3.1"
},
{
"text": "into y with m labels. We first apply a composition function to average the sequence of word embeddings e(x i ) for x i 2 X. The output of this composition function is fed into a logistic regression function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural bag of words model",
"sec_num": "3.1"
},
{
"text": "To be specific, in an initial setting of NBOW, we can get an averaged word embedding z for any set of words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural bag of words model",
"sec_num": "3.1"
},
{
"text": "{x_i}_{i=1}^{|X|}: z = (1/|X|) \u2211_{i=1}^{|X|} e(x_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural bag of words model",
"sec_num": "3.1"
},
{
"text": ". Feeding z to a softmax layer gives the probability of each output label:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural bag of words model",
"sec_num": "3.1"
},
{
"text": "\u0177 = softmax(W_s \u00b7 z + b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural bag of words model",
"sec_num": "3.1"
},
{
"text": "Alternatively, more layers can be created on top of z to generate more abstract representations. The objective function is to minimize the cross entropy error, which for a single training example with true label y is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural bag of words model",
"sec_num": "3.1"
},
{
"text": "\u2113(\u0177) = \u2212\u2211_{p=1}^{m} y_p log(\u0177_p).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural bag of words model",
"sec_num": "3.1"
},
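The NBOW forward pass and loss described above can be sketched in a few lines; this is a minimal NumPy illustration, with toy sizes and the random initialization as assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

V, D, m = 100, 8, 3          # vocab size, embedding dim, number of labels (toy values)
E = rng.normal(size=(V, D))  # word embedding table e(x)
W_s = rng.normal(size=(m, D))
b = np.zeros(m)

def nbow_forward(word_ids):
    """Average the word embeddings, then apply a softmax layer."""
    z = E[word_ids].mean(axis=0)          # z = (1/|X|) * sum_i e(x_i)
    logits = W_s @ z + b
    p = np.exp(logits - logits.max())     # numerically stable softmax
    return p / p.sum()                    # y_hat = softmax(W_s . z + b)

def cross_entropy(y_hat, y_true):
    """l(y_hat) = -sum_p y_p * log(y_hat_p), for a one-hot true label."""
    return -np.log(y_hat[y_true])

y_hat = nbow_forward([3, 17, 42, 7])
loss = cross_entropy(y_hat, 1)
```

In a full model, more layers would sit between z and the softmax, and the loss would be minimized with a gradient-based optimizer.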
{
"text": "The following section will describe how we extend this NBOW model by integrating linguistic concept information into z.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural bag of words model",
"sec_num": "3.1"
},
{
"text": "Direct mapping:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
{
"text": "Given a document {x_i}_{i=1}^{|X|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
{
"text": ", we can get the corresponding annotations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
{
"text": "{c_i^j}_{i=1}^{|X|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
{
"text": "based on C^(j), j \u2208 {1, ..., M}, which means additional input is available for the classifier. The question is how we can effectively make use of these annotations based on the various C^(j), j \u2208 {1, ..., M}. In order to represent this concept information, we design two model variants: the first uses direct mapping, and the second uses gated mapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
{
"text": "With direct mapping, the embeddings for a specific token x_i and its concept annotation c_i are initialized separately. Therefore, the input for the following composition function is the concatenation of e(x_i) and e(c_i^j), j \u2208 {1, ..., M}. In this case, the new hidden representation for x_i is h_i:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
{
"text": "h_i = e(x_i) \u2295 e(c_i^1) \u2295 ... \u2295 e(c_i^M)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
{
"text": ". Gated mapping: Gated mapping produces the concept representation by sharing weights with the word representation; the mapping is conducted through a non-linear transformation g(x) instead of direct initialization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
{
"text": "g_{C^(j)}(x_i) = tanh(W_{C^(j)} \u00b7 e(x_i) + b_{C^(j)}), where W_{C^(j)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
{
"text": "is a weight matrix drawn from a three-dimensional weight tensor indexed by the knowledge bases, and b_{C^(j)} is the bias vector. Hence, the new hidden representation is h_i:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
{
"text": "h_i = e(x_i) \u2295 g_{C^(1)}(x_i) \u2295 ... \u2295 g_{C^(M)}(x_i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
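The two mappings can be contrasted in a few lines of NumPy; this is an illustrative sketch for a single knowledge base (M = 1), with toy dimensions and random initialization as assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

D, Dc = 8, 4                         # word / concept embedding dims (toy values)
E_word = rng.normal(size=(50, D))    # e(x): word embedding table
E_conc = rng.normal(size=(10, Dc))   # e(c): separately initialized concept table (direct mapping)
W_C = rng.normal(size=(Dc, D))       # one slice of the gated-mapping weight tensor
b_C = np.zeros(Dc)

def direct_mapping(x_id, c_id):
    """h_i = e(x_i) (+) e(c_i): concatenate separately initialized embeddings."""
    return np.concatenate([E_word[x_id], E_conc[c_id]])

def gated_mapping(x_id):
    """h_i = e(x_i) (+) tanh(W_C . e(x_i) + b_C): the concept part is computed
    from the word embedding itself, sharing weights across words."""
    g = np.tanh(W_C @ E_word[x_id] + b_C)
    return np.concatenate([E_word[x_id], g])

h_direct = direct_mapping(7, 3)   # word 7 annotated with concept 3
h_gated = gated_mapping(7)
```

Both variants produce a word-concept vector of the same size; the difference is whether the concept part is looked up or derived from the word embedding.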
{
"text": "The resulting gated representation thus computes concept embeddings by transforming the original word embeddings from a word semantic space into a concept semantic space based on the given concept annotations. Figure 1 shows the difference between these two methods. The steps for feeding the newly concatenated word-concept vector h_i into the following layers are the same. However, not all words contribute equally to the representation of the document meaning, so we further introduce an attention mechanism to extract words that are important to the meaning of the document and aggregate the representations of those informative words into a single hidden vector. Specifically, we introduce a context vector q. Figure 2 gives the framework of our model; the two variants of the model are neural bag of words with either direct or gated mapping.",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 218,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 716,
"end": 724,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
{
"text": "u_i = tanh(W_q \u00b7 h_i + b_q), \u03b1_i = exp(u_i^T q) / \u2211_{i=1}^{|X|} exp(u_i^T q), z = \u2211_{i=1}^{|X|} \u03b1_i h_i. With z, the model then proceeds as in NBOW.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Linguistic Concept Information",
"sec_num": "3.2"
},
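The attention computation above can be sketched as follows; this is a minimal NumPy illustration, where all sizes and the random initialization are assumed toy values rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

n, Dh = 5, 12                 # sequence length, hidden (word+concept) dim (toy)
H = rng.normal(size=(n, Dh))  # rows are the h_i vectors from direct or gated mapping
W_q = rng.normal(size=(Dh, Dh))
b_q = np.zeros(Dh)
q = rng.normal(size=Dh)       # learned context vector q

u = np.tanh(H @ W_q.T + b_q)        # u_i = tanh(W_q . h_i + b_q), row-wise
scores = u @ q                      # u_i^T q
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                # alpha_i: softmax over positions i
z = alpha @ H                       # z = sum_i alpha_i h_i
```

The weights alpha sum to one, so z is a convex combination of the h_i, emphasizing the informative words.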
{
"text": "We collect concept annotation from three sources: the word clusters returned by GloVe word embeddings, lexical categories from WordNet, and biomedical concepts from MetaMap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of concept information",
"sec_num": "3.3"
},
{
"text": "Word clusters from GloVe word embeddings Global K-means clustering algorithm (Likas et al., 2003) is used to create K word clusters from pre-trained GloVe word embeddings (Pennington et al., 2014). The algorithm is conducted in an incremental approach: to create K word clusters, all intermediate problems with 1, 2, ..., K\u22121 clusters are sequentially solved. The core idea of this method is that an optimal solution for a clustering problem with K clusters can be obtained by using a series of local optimal searches. We tested different values of K, varying from 50 to 200.",
"cite_spans": [
{
"start": 77,
"end": 97,
"text": "(Likas et al., 2003)",
"ref_id": "BIBREF11"
},
{
"start": 171,
"end": 196,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of concept information",
"sec_num": "3.3"
},
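The incremental clustering idea can be illustrated with a simplified sketch in NumPy; note this is a stand-in that seeds each new center at the farthest point rather than performing the full global k-means search over all candidate insertions, and the "word vectors" here are random toy data, not GloVe:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for pre-trained word vectors: 200 "words" in 10 dimensions.
vectors = rng.normal(size=(200, 10))

def kmeans(X, centers, iters=20):
    """Standard Lloyd iterations from the given initial centers."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(len(centers)):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return centers, labels

def incremental_kmeans(X, K):
    """Grow from 1 to K clusters, seeding each new center at the point
    farthest from the current centers (simplified global-k-means flavour)."""
    centers = X.mean(axis=0, keepdims=True).copy()
    for _ in range(K - 1):
        d = ((X[:, None] - centers[None]) ** 2).sum(-1).min(axis=1)
        centers = np.vstack([centers, X[d.argmax()]])
        centers, labels = kmeans(X, centers)
    return kmeans(X, centers)

centers, concept_ids = incremental_kmeans(vectors, K=5)
```

Each word's cluster index then serves as its concept annotation.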
{
"text": "WordNet lexical categories By using WordNet lexical categories we have mapped each word remaining after preprocessing to lexical categories. WordNet 3.0 (Miller, 1995) offers categorization of 155,287 words into 44 WordNet lexical categories. Since many words may have different categories, a word sense disambiguation technique is required so as not to add noise to the later concept mapping. We use disambiguation by context (Hotho et al., 2003). This technique returns the concept which maximizes a function depending on the conceptual vicinity.",
"cite_spans": [
{
"start": 427,
"end": 447,
"text": "(Hotho et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of concept information",
"sec_num": "3.3"
},
{
"text": "MetaMap concepts MetaMap (Aronson, 2001) provides 133 specific concepts for biomedical words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of concept information",
"sec_num": "3.3"
},
{
"text": "In this section, we introduce our experimental datasets and some implementation details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and implementation details",
"sec_num": "4"
},
{
"text": "We select 3 datasets of different sizes, corresponding to varying classification tasks. Some statistics about these datasets are summarized in Table 1. 20 Newsgroups This is a news categorization dataset (Lang, 1995). It has a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. Some of the newsgroups are very closely related to each other, while others are highly unrelated. Each news item belongs to one of 20 labels.",
"cite_spans": [
{
"start": 204,
"end": 216,
"text": "(Lang, 1995)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 143,
"end": 151,
"text": "Table 1.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "IMDB This core dataset (Maas et al., 2011) contains 50,000 reviews which are divided evenly into 25k train and 25k test sets. The overall distribution of labels is balanced (25k positive and 25k negative).",
"cite_spans": [
{
"start": 23,
"end": 42,
"text": "(Maas et al., 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "CT reports Additionally, we use 1000 CT scan reports (Martinez et al., 2015) with either positive or negative labels for fungal disease. These reports have technical medical content and highly specialized conventions, which are arguably the most distant genre from the above two datasets.",
"cite_spans": [
{
"start": 53,
"end": 76,
"text": "(Martinez et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Preprocessing The same preprocessing steps were used for all the datasets. We lower-cased all the tokens, removed stop words and replaced low-frequency tokens with a UNK token. All the numbers were replaced with a NUM symbol. Specifically, since all the CT reports were obtained from local hospitals, any potentially identifying information such as name, address, age, birthday and gender was removed. For each CT report, we used the free-text section, which contains the radiologist's interpretation of the scan and the reason for the requested scan as written by clinicians.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "4.2"
},
{
"text": "Word embeddings For the first 2 datasets, we initialized word embeddings with the GloVe word vectors, which have a 400-thousand-word vocabulary and are trained on 6 billion tokens. For out-of-vocabulary words, we initialized the word embeddings randomly. For the CT reports, we have another 6000 CT documents which are unannotated by doctors. Therefore, a specific biomedical word embedding was randomly initialized and trained on both unlabelled and labelled data alongside the other model parameters. The embedding dimension is set to 100 for biomedical text and 300 for news and review text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "4.2"
},
{
"text": "Learning and hyperparameters To avoid overfitting, a dropout rate of 0.3 is used on the word embedding layer (Srivastava et al., 2014). Mini-batch size is 32, the update method is AdaGrad (Duchi et al., 2011), and the initial learning rate is 0.01. During training, we conduct experiments to see whether updating word embeddings during training has an effect on the model performance. For all experiments, we iterate over the training set 10 times, and pick the model which has the least training loss as the final model, all the ",
"cite_spans": [
{
"start": 109,
"end": 134,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 188,
"end": 208,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "4.2"
},
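The training setup above (inverted dropout on the embedding layer and per-parameter AdaGrad updates) can be sketched as follows; this is an illustrative NumPy sketch, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

def dropout(x, rate=0.3):
    """Inverted dropout on an embedding layer (applied at training time only)."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

class AdaGrad:
    """AdaGrad update: theta -= lr * g / (sqrt(G) + eps),
    where G accumulates squared gradients per parameter."""
    def __init__(self, lr=0.01, eps=1e-8):
        self.lr, self.eps, self.G = lr, eps, None

    def update(self, theta, grad):
        if self.G is None:
            self.G = np.zeros_like(theta)
        self.G += grad ** 2
        return theta - self.lr * grad / (np.sqrt(self.G) + self.eps)

opt = AdaGrad(lr=0.01)              # initial learning rate from the paper
theta = np.ones(4)
for _ in range(3):
    theta = opt.update(theta, np.full(4, 0.5))
```

Frequently updated parameters accumulate larger G and so receive smaller effective learning rates.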
{
"text": "We evaluate the two variants of our model with 5 types of concept information incorporation: word clusters returned by applying K-means to GloVe word vectors, lexical categories returned from WordNet, biomedical concepts from MetaMap, clusters from both GloVe and WordNet, and all the concepts from the three knowledge sources. We first derive concept annotations from GloVe word clusters and find the best K for clustering GloVe words, then see whether concept information from different knowledge bases helps, comparing in each case against several strong baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We assumed the number of concept clusters (K) would have an impact on the model performance, so we test K for each dataset accordingly. For 20News and IMDB, we used 10% of the training set as development data. For CT reports, we used 10-fold cross validation. In the following, we use NBOW-DM and NBOW-GM to represent our two model variants, the direct mapping and gated mapping variants, respectively. Figure 3 and 4 show the test accuracy of our two model variants with K varying from 50 to 200; we find that the best results are obtained when K is 120 and 150 for 20News and IMDB, respectively. This matches our expectation that larger datasets tend to contain more groups of concepts. For CT reports, we notice a large fluctuation, partly because the GloVe embeddings we used are trained on Wikipedia text, which is not specific to the biomedical terms in CT reports. In the following comparison experiments, the most appropriate K (120, 150, 90) is set accordingly for these 3 datasets. First, we conduct several experiments in which the pre-trained word embeddings are fixed during training. We hope to answer two questions via these experiments: 1) whether the concept incorporation from different lexical resources provides additional information; 2) which incorporation method is better, direct or gated mapping. As shown in Table 2, concept information from GloVe clusters, WordNet and MetaMap helps propagate the general topic expression to classifiers. Also, gated mapping brings more benefits than direct mapping.",
"cite_spans": [],
"ref_spans": [
{
"start": 403,
"end": 417,
"text": "Figure 3 and 4",
"ref_id": "FIGREF3"
},
{
"start": 1334,
"end": 1341,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Choosing the number of GloVe embedding clusters",
"sec_num": "5.1"
},
{
"text": "As we wondered whether updating word embeddings during training would enhance the model performance, we re-ran the experiments with all the same settings except that the original word vectors could be updated. Most current neural models for text classification are variants of either recurrent or convolutional networks. Besides NBOW, we use another two strong baselines: the first one is DCNN (Kalchbrenner et al., 2014) which extends traditional CNN with dynamic k-max pooling, the second one is SVM with unigram features as well as additional concept annotations from the same five different sources. We also test our two model variants without the attention layer, in which the attention computation is replaced by simple averaging.",
"cite_spans": [
{
"start": 394,
"end": 421,
"text": "(Kalchbrenner et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with the state-of-art with updated word embeddings",
"sec_num": "5.3"
},
{
"text": "As shown in Table 3 , on 20 Newsgroup, our first model variant NBOW-DM-Attention achieves slightly better result on 20 Newsgroup with the incorporation of GloVe clusters. It is also noticed that the incorporation of WordNet categories hurt the model in some degree, we analyze that it is caused by the limited vocabulary size compared to that of GloVe, as well as the interme- diate disambiguation step during concept annotation. Our second model variant NBOW-GM-Attention with GloVe amd WordNet concept embeddings achieves best results on 20 Newsgroup, compared with the baselines and the first model variant NBOW-DM-Attention. While on IMDB, NBOW-GM-Attention with concept incorporation from GloVe and WordNet achieves the best, even if NBOW-DM-Attention with the same setting does not beat DCNN. On CT Reports, both our two model variants achieve better accuracy with all the group information from GloVe, WordNet and MetaMap. Besides, it is noticed that the variants with attentions generally perform better than those with no attentions. Overall, the results show that NBOW-GM-Attention generally performs better than NBOW-DM-Attention, which indicates that the concept incorporation by gated mapping is more reliable than that of a direct con-cept embedding, and the incorporation of appropriate concept information with our second model variant makes a contribution to the classification tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Comparison with the state-of-art with updated word embeddings",
"sec_num": "5.3"
},
{
"text": "CT reports, which have technical content and highly specialized conventions, are arguably the most distant genre from news and movie reviews among those we consider. Therefore, we manually check the false predictions returned by our best model above. It turns out the classifier cannot capture two kinds of patterns: In the first, there is some context information provided in the report which contains comparison with a previous patient record, e.g. in the sentence \"hypodense liver lesion in segment has significantly decreased in size from 12mm to 7mm\", the diagnosis of whether the patient is infected or not relies on the magnitude of \"decrease\", which is highly professional. Second, human label noise occurs in some cases when doctors will not make immediate decisions, for instance \"suspicious for infection\" and \"likely to be infected\" happen in both positive and negative reports.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis and improvement",
"sec_num": "5.4"
},
{
"text": "In order to see whether modeling context information can help or not, we conduct two transformation for h i to get a newh i , one is convolutionbased (CNN-GM):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis and improvement",
"sec_num": "5.4"
},
{
"text": "h i = tanh(W c \u2022 (h i 1 h i h i+1 ) + b r ), the other is recurrence-based (RNN- GM):h i = tanh(W h \u2022 h i + W r \u2022h i 1 + b r ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis and improvement",
"sec_num": "5.4"
},
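The two context transformations can be sketched in NumPy as below. The hidden-state shapes, the random weights, and the zero-padding at the sequence boundaries are our assumptions, not taken from the paper.

```python
import numpy as np

# h has shape (n, d): one d-dimensional hidden vector per word (assumed).
rng = np.random.default_rng(0)
n, d = 5, 4
h = rng.normal(size=(n, d))
W_c = rng.normal(size=(d, 3 * d))  # CNN-GM weights over a 3-word window
W_h = rng.normal(size=(d, d))      # RNN-GM input weights
W_r = rng.normal(size=(d, d))      # RNN-GM recurrent weights
b = np.zeros(d)

def cnn_gm(h):
    # h~_i = tanh(W_c . [h_{i-1}; h_i; h_{i+1}] + b), zero-padded at the ends
    pad = np.vstack([np.zeros(d), h, np.zeros(d)])
    windows = np.hstack([pad[:-2], pad[1:-1], pad[2:]])  # shape (n, 3d)
    return np.tanh(windows @ W_c.T + b)

def rnn_gm(h):
    # h~_i = tanh(W_h . h_i + W_r . h~_{i-1} + b), run left to right
    out, prev = [], np.zeros(d)
    for h_i in h:
        prev = np.tanh(W_h @ h_i + W_r @ prev + b)
        out.append(prev)
    return np.stack(out)
```

Both produce a contextualised sequence of the same shape as h, so either can replace h_i in the gated mapping without other changes.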
{
"text": "Thus, in the above NBOW-GM settings,h i = h i . We use the three corresponding gated mapping variant with the best settings, and compare the number of parameters and the average running time per epoch. Table 4 shows that RNN-GM generally performs best at the cost of more parameters and training time per epoch. In contrast, CNN-GM is a trade-off between model complexity and performance. All timing experiments are specific for CT reports and performed on a single core of an Intel I5 processor with 8GB of RAM. 6 Conclusions and future work",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 209,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Error analysis and improvement",
"sec_num": "5.4"
},
{
"text": "In this paper, we propose two different methods for incorporating concept information from external knowledge bases into a neural bag of words model: the neural bag of words with either direct mapping (NBOW-DM) or gated mapping (NBOW-GM), which leverages both the word and concept representation through multiple hidden layers before classification. The model with gated mapping does better than direct mapping, and performs competitively with more complicated neural models as well as a traditional statistic model on different text classification tasks, and achieves good results on a practical biomedical text classification task. Moreover, our two model variants are also time efficient. They generally require less training time than their counterparts, which allow them to be used for datasets where few annotation is available or manual annotation is expensive. For future work, we will consider using some global semantic information such as Rhetorical Structure Theory (RST), which is a theory of discourse that has enjoyed popularity in NLP. RST posits that a document can be represented by a tree whose leaves are elementary discourse units. We seek to develop approaches to combine local linguistic and global semantic knowledge into our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis and improvement",
"sec_num": "5.4"
},
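As a rough illustration of the gated combination of word and concept representations, one plausible form is an elementwise sigmoid gate. The exact gating equation is defined in the paper's model section, so the weight shapes and the mixture form below are assumptions made for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical gated word/concept mixture in the spirit of NBOW-GM.
rng = np.random.default_rng(0)
d = 4
w = rng.normal(size=d)           # word embedding
c = rng.normal(size=d)           # embedding of the word's concept cluster
W_g = rng.normal(size=(d, 2 * d))  # gate parameters (assumed shape)

g = sigmoid(W_g @ np.concatenate([w, c]))  # elementwise gate in (0, 1)
h = g * w + (1.0 - g) * c                  # gated word/concept combination
```

The gate lets the model decide, per dimension, how much to trust the word itself versus its concept cluster; a direct mapping would instead combine the two representations with fixed (ungated) weights.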
{
"text": "On the other hand, our proposed method takes the information from outsourced knowledge bases into account and ignores the information of unlabelled data. We will considering using deep reinforcement learning to learn how to select the query unlabelled data points in a sequential manner, formulated as a Markov decision process. With more labels as well as information from some prior knowledge bases, our model can be developed for large scale text processing and analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis and improvement",
"sec_num": "5.4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Effective mapping of biomedical text to the UMLS metathesaurus: the MetaMap program",
"authors": [
{
"first": "",
"middle": [],
"last": "Alan R Aronson",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the AMIA Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan R Aronson. 2001. Effective mapping of biomed- ical text to the UMLS metathesaurus: the MetaMap program. In Proceedings of the AMIA Sympo- sium. American Medical Informatics Association, page 17.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Knowledge sources for natural language processing",
"authors": [
{
"first": "H",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Anne-Marie",
"middle": [],
"last": "Baud",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Rassinoux",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Lovis",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "Pierre-Andre",
"middle": [],
"last": "Griesser",
"suffix": ""
},
{
"first": "Jean-Raoul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Scherrer",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the AMIA Annual Fall Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert H Baud, Anne-Marie Rassinoux, Christian Lo- vis, Judith Wagner, Vincent Griesser, Pierre-Andre Michel, and Jean-Raoul Scherrer. 1996. Knowl- edge sources for natural language processing. In Proceedings of the AMIA Annual Fall Sympo- sium. American Medical Informatics Association, page 70.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The unified medical language system (UMLS): integrating biomedical terminology",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2004,
"venue": "Nucleic acids research",
"volume": "32",
"issue": "1",
"pages": "267--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Bodenreider. 2004. The unified med- ical language system (UMLS): integrating biomedical terminology. Nucleic acids research 32(suppl 1):D267-D270.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "From data to knowledge through concept-oriented terminologies: experience with the medical entities dictionary",
"authors": [
{
"first": "J",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cimino",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of the American Medical Informatics Association",
"volume": "7",
"issue": "3",
"pages": "288--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James J Cimino. 2000. From data to knowledge through concept-oriented terminologies: experi- ence with the medical entities dictionary. Journal of the American Medical Informatics Association 7(3):288-297.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(Jul):2121-2159.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ontologies improve text document clustering",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Hotho",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Staab",
"suffix": ""
},
{
"first": "Gerd",
"middle": [],
"last": "Stumme",
"suffix": ""
}
],
"year": 2003,
"venue": "Data Mining, 2003. ICDM 2003. Third IEEE International Conference on. IEEE",
"volume": "",
"issue": "",
"pages": "541--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Hotho, Steffen Staab, and Gerd Stumme. 2003. Ontologies improve text document clustering. In Data Mining, 2003. ICDM 2003. Third IEEE In- ternational Conference on. IEEE, pages 541-544.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep unordered composition rivals syntactic methods for text classification",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Manjunatha",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Jordan",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL (1)",
"volume": "",
"issue": "",
"pages": "1681--1691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan L Boyd- Graber, and Hal Daum\u00e9 III. 2015. Deep unordered composition rivals syntactic methods for text classi- fication. In ACL (1). pages 1681-1691.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A convolutional neural network for modelling sentences",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1404.2188"
]
},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural net- work for modelling sentences. arXiv preprint arXiv:1404.2188 .",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Newsweeder: Learning to filter netnews",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 12th international conference on machine learning",
"volume": "10",
"issue": "",
"pages": "331--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ken Lang. 1995. Newsweeder: Learning to filter net- news. In Proceedings of the 12th international con- ference on machine learning. volume 10, pages 331- 339.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International Conference on Machine Learning (ICML-14)",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Proceed- ings of the 31st International Conference on Ma- chine Learning (ICML-14). pages 1188-1196.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The global k-means clustering algorithm",
"authors": [
{
"first": "Aristidis",
"middle": [],
"last": "Likas",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Vlassis",
"suffix": ""
},
{
"first": "Jakob",
"middle": [
"J"
],
"last": "Verbeek",
"suffix": ""
}
],
"year": 2003,
"venue": "Pattern Recognition",
"volume": "36",
"issue": "2",
"pages": "451--461",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aristidis Likas, Nikos Vlassis, and Jakob J Verbeek. 2003. The global k-means clustering algorithm. Pattern Recognition 36(2):451-461.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning word vectors for sentiment analysis",
"authors": [
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"E"
],
"last": "Daly",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"T"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computa- tional Linguistics, Portland, Oregon, USA, pages 142-150. http://www.aclweb.org/anthology/P11-",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic detection of patients with invasive fungal disease from freetext computed tomography (CT) scans",
"authors": [
{
"first": "David",
"middle": [],
"last": "Martinez",
"suffix": ""
},
{
"first": "Michelle",
"middle": [
"R"
],
"last": "Ananda-Rajah",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Suominen",
"suffix": ""
},
{
"first": "Monica",
"middle": [
"A"
],
"last": "Slavin",
"suffix": ""
},
{
"first": "Karin",
"middle": [
"A"
],
"last": "Thursky",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Cavedon",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of biomedical informatics",
"volume": "53",
"issue": "",
"pages": "251--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Martinez, Michelle R Ananda-Rajah, Hanna Suominen, Monica A Slavin, Karin A Thursky, and Lawrence Cavedon. 2015. Automatic detection of patients with invasive fungal disease from free- text computed tomography (CT) scans. Journal of biomedical informatics 53:251-260.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
}
],
"year": 2010,
"venue": "Interspeech",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recur- rent neural network based language model. In Inter- speech. volume 2, page 3.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems. pages 3111-3119.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "WordNet: a lexical database for English",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. WordNet: a lexical database for English. Communications of the ACM 38(11):39-41.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Opinion mining and sentiment analysis",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "Foundations and Trends R in Information Retrieval",
"volume": "2",
"issue": "1-2",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends R in In- formation Retrieval 2(1-2):1-135.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the conference on empirical methods in natural language processing(EMNLP)",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the conference on empirical methods in natural language process- ing(EMNLP). volume 14, pages 1532-1543.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Coordinating taxonomies: Key to re-usable concept representations",
"authors": [
{
"first": "Alan L",
"middle": [],
"last": "Rector",
"suffix": ""
}
],
"year": 1995,
"venue": "Conference on Artificial Intelligence in Medicine in Europe",
"volume": "",
"issue": "",
"pages": "15--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan L Rector. 1995. Coordinating taxonomies: Key to re-usable concept representations. In Confer- ence on Artificial Intelligence in Medicine in Eu- rope. Springer, pages 15-28.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Parsing with compositional vector grammars",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew Y",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL (1)",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013a. Parsing with composi- tional vector grammars. In ACL (1). pages 455-465.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semi-supervised recursive autoencoders for predicting sentiment distributions",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Eric",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "151--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predict- ing sentiment distributions. In Proceedings of the conference on empirical methods in natural lan- guage processing. Association for Computational Linguistics, pages 151-161.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the conference on empirical methods in natural language processing",
"volume": "1631",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, Christopher Potts, et al. 2013b. Recursive deep models for semantic compositionality over a senti- ment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP). volume 1631, page 1642.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "SNOMED RT: a reference terminology for health care",
"authors": [
{
"first": "Keith",
"middle": [
"E"
],
"last": "Kent A Spackman",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"A"
],
"last": "Campbell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "C\u00f4t\u00e9",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the AMIA annual fall symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kent A Spackman, Keith E Campbell, and Roger A C\u00f4t\u00e9. 1997. SNOMED RT: a reference terminology for health care. In Proceedings of the AMIA annual fall symposium. American Medical Informatics As- sociation, page 640.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search 15(1):1929-1958.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improved semantic representations from tree-structured long short-term memory networks",
"authors": [
{
"first": "Kai Sheng",
"middle": [],
"last": "Tai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.00075"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. arXiv preprint arXiv:1503.00075 .",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(a) Direct mapping. (b) Gated mapping.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Direct and gated mapping.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "the final prediction is made with a softmax layer:\u0177 = softmax(W s \u2022 z + b).",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Accuracy of NBOW-DM-GloVe clusters Figure 4: Accuracy of NBOW-GM-GloVe clusters 5.2 Model effectiveness with fixed word embeddings",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF3": {
"num": null,
"text": "Evaluation with fixed word embeddings during training.",
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF5": {
"num": null,
"text": "Evaluation with updated word embeddings during training.",
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF7": {
"num": null,
"text": "Evaluation of gated mapping with convolution or recurrence transformation.",
"type_str": "table",
"content": "<table/>",
"html": null
}
}
}
}